[zfs-discuss] query on ZFS
Can anyone help explain what 'out-of-order issue' means in the following segment?

"ZFS has a pipelined I/O engine, similar in concept to CPU pipelines. The pipeline operates on I/O dependency graphs and provides scoreboarding, priority, deadline scheduling, out-of-order issue and I/O aggregation. I/O loads that bring other filesystems to their knees http://blogs.sun.com/roller/page/bill?entry=zfs_vs_the_benchmark are handled with ease by the ZFS I/O pipeline."

Thanks, Annie

___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] query on ZFS
Annie Li writes:
> Can anyone help explain what 'out-of-order issue' means in the following segment? ZFS has a pipelined I/O engine, similar in concept to CPU pipelines. The pipeline operates on I/O dependency graphs and provides scoreboarding, priority, deadline scheduling, out-of-order issue and I/O aggregation.

As an example: even if a read was issued by an application 'after' ZFS had started to work on a group of write I/Os, the read could actually issue ahead of some of the writes.

-r
[zfs-discuss] Re: ZFS improvements
> > 6322646 ZFS should gracefully handle all devices failing (when writing)
> Which is being worked on. Using a redundant configuration prevents this from happening.

What do you mean by redundant? All our servers have 2 or 4 HBAs, 2 or 4 FC switches and storage arrays with redundant controllers. We used only RAID10 zpools but we still had them corrupted.

> http://www.opensolaris.org/os/community/zfs/faq/#whynofsck
> Writing such a tool is effectively impossible. For the one known corruption bug we've encountered (and since fixed), we provided the 'zfs_recover' /etc/system switch, but it only works for this particular bug. Without understanding the underlying pathology it's impossible to fix a ZFS pool.

I think this is a very important drawback of ZFS.
[zfs-discuss] Re: ZFS improvements
> > 1) ZFS must stop forcing kernel panics! As you know, ZFS panics the kernel when a corrupted zpool is found, or if it's unable to reach a device, and so on... We need it to just fail with an error message; please stop crashing the kernel.
> This is: 6322646 ZFS should gracefully handle all devices failing (when writing)

With S10U3 we are still getting kernel panics when trying to import a corrupted zpool (RAID10).

gino
Re: [zfs-discuss] ZFS vs. rmvolmgr
Hi, sorry, I needed to be more clear. Here's what I did:

1. Connect USB storage device (a disk) to machine
2. Find USB device through rmformat
3. Try zpool create on that device. It fails with: can't open /dev/rdsk/cNt0d0p0, device busy
4. svcadm disable rmvolmgr
5. Now zpool create works with that device and the pool gets created.
6. svcadm enable rmvolmgr
7. After that, everything works as expected; the device stays under control of the pool.

> > can't open /dev/rdsk/cNt0d0p0, device busy
> Do you remember exactly what command/operation resulted in this error?

See above, it comes right after trying to create a zpool on that device.

> It is something that tries to open the device exclusively.

So after ZFS opens the device exclusively, hald and rmvolmgr will ignore it? What happens at boot time: is ZFS then quicker in grabbing the device than hald and rmvolmgr are?

> > So far, I've just said svcadm disable -t rmvolmgr, did my thing, then said svcadm enable rmvolmgr.
> This can't possibly be true, because rmvolmgr does not open devices.

Hmm. I really do remember having done the above. Actually, I'd been pulling some hairs out trying to create zpools on external devices until I got the idea of disabling rmvolmgr, then it worked.

> You'd need to also disable the 'hal' service. Run fuser on your device and you'll see it's one of the hal addons that keeps it open.

Perhaps something depended on rmvolmgr which released the device after I disabled the service?

> > For instance, I'm now running several USB disks with ZFS pools on them, and even after restarting rmvolmgr or rebooting, ZFS, the disks and rmvolmgr get along with each other just fine.
> I'm confused here. In the beginning you said that something got in the way, but now you're saying they get along just fine. Could you clarify?

After creating the pool, the device belongs to ZFS, and ZFS seems to be able to grab the device before anybody else.
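To make the busy-device diagnosis above concrete, here is a sketch of the session (the device name c2t0d0 is a placeholder; this is a transcript requiring root on a live system, not a script):

```shell
# fuser /dev/rdsk/c2t0d0p0          # shows the PID of the hal addon holding the device open
# svcadm disable -t hal rmvolmgr    # -t: temporary, does not persist across reboot
# zpool create extpool c2t0d0       # now succeeds; ZFS opens the device exclusively
# svcadm enable hal rmvolmgr        # safe to re-enable: the device now belongs to ZFS
```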
One possible workaround would be to match against the USB disk's serial number and tell hal to ignore it using an fdi(4) file. For instance, find your USB disk in lshal(1M) output; it will look like this:

udi = '/org/freedesktop/Hal/devices/pci_0_0/pci1028_12c_1d_7/storage_5_0'
  usb_device.serial = 'DEF1061F7B62' (string)
  usb_device.product_id = 26672 (0x6830) (int)
  usb_device.vendor_id = 1204 (0x4b4) (int)
  usb_device.vendor = 'Cypress Semiconductor' (string)
  usb_device.product = 'USB2.0 Storage Device' (string)
  info.bus = 'usb_device' (string)
  info.solaris.driver = 'scsa2usb' (string)
  solaris.devfs_path = '/pci@0,0/pci1028,12c@1d,7/storage@5' (string)

You want to match an object with this usb_device.serial property and set the info.ignore property to true. The fdi(4) would look like this:

thanks, this sounds just like what I was looking for. So the correct way of having a zpool out of external USB drives is to:

1. Attach the drives
2. Find their USB serial numbers with lshal
3. Set up an fdi file that matches the disks and tells hal to ignore them

The naming of the file /etc/hal/fdi/preprobe/30user/10-ignore-usb.fdi sounds like init.d-style directory and file naming, is this correct?

Best regards, Constantin

-- Constantin Gonzalez, Sun Microsystems GmbH, Germany Platform Technology Group, Global Systems Engineering http://www.sun.de/ Tel.: +49 89/4 60 08-25 91 http://blogs.sun.com/constantin/ Sitz d. Ges.: Sun Microsystems GmbH, Sonnenallee 1, 85551 Kirchheim-Heimstetten Amtsgericht Muenchen: HRB 161028 Geschaeftsfuehrer: Marcel Schneider, Wolfgang Engels, Dr. Roland Boemer Vorsitzender des Aufsichtsrates: Martin Haering
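The fdi(4) file itself did not survive in this archive. A reconstruction of what it presumably looked like, based on hal's fdi match/merge syntax and the serial number from the lshal output quoted above (substitute your own disk's serial), saved under the path mentioned above (/etc/hal/fdi/preprobe/30user/10-ignore-usb.fdi):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<deviceinfo version="0.2">
  <device>
    <!-- match the disk by its USB serial number and hide it from hal -->
    <match key="usb_device.serial" string="DEF1061F7B62">
      <merge key="info.ignore" type="bool">true</merge>
    </match>
  </device>
</deviceinfo>
```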
[zfs-discuss] Re: Re: ZFS improvements
> On Tue, Apr 10, 2007 at 09:43:39PM -0700, Anton B. Rang wrote:
> > That's only one cause of panics. At least two of gino's panics appear due to corrupted space maps, for instance. I think there may also still be a case where a failure to read metadata during a transaction commit leads to a panic, too. Maybe that one's been fixed, or maybe it will be handled by the above bug.
> The space map bugs should have been fixed as part of:
> 6458218 assertion failed: ss == NULL
> which went into Nevada build 60. There are several different pathologies that can result from this bug, and I don't know if the panics are from before or after this fix.

If it can help you: we are able to corrupt a zpool on snv_60 by doing the following a few times:

- create a RAID10 zpool (dual-path LUNs)
- generate a heavy write load on that zpool
- disable the FC ports on both FC switches

Each time we get a kernel panic, probably because of 6322646, and sometimes we get a corrupted zpool.

gino
Re: [zfs-discuss] Re: ZFS improvements
Gino writes:
> > 6322646 ZFS should gracefully handle all devices failing (when writing)
> > Which is being worked on. Using a redundant configuration prevents this from happening.
> What do you mean by redundant? All our servers have 2 or 4 HBAs, 2 or 4 FC switches and storage arrays with redundant controllers. We used only RAID10 zpools but we still had them corrupted.

Redundant from the viewpoint of ZFS, so either a ZFS mirror or RAID-Z. The point of the bug is to better handle failures on devices in non-redundant pools. For redundant pools, ZFS is able to self-heal problems as they arise. If you maintain redundancy at the storage level, then it's harder for ZFS to deal with problems. We should still behave better than we do now, thus 6322646.

Can you post your zpool status output?

-r
Re: [zfs-discuss] Poor man's backup by attaching/detaching mirror drives on a _striped_ pool?
Hi Mark,

Mark J Musante wrote:
> On Tue, 10 Apr 2007, Constantin Gonzalez wrote:
> > Has anybody tried it yet with a striped mirror? What if the pool is composed out of two mirrors? Can I attach devices to both mirrors, let them resilver, then detach them and import the pool from those?
> You'd want to export them, not detach them. Detaching will overwrite the vdev labels and make it un-importable.

thank you for the export/import idea, it does sound cleaner from a ZFS perspective, but it comes at the expense of temporarily unmounting the filesystems.

So, instead of detaching, would unplugging, then detaching work? I'm thinking something like this:

- zpool create tank mirror dev1 dev2 dev3
- {physically move dev3 to new box}
- zpool detach tank dev3

On the new box:

- zpool import tank
- zpool detach tank dev1
- zpool detach tank dev2

This should work for one disk, and I assume this would also work for multiple disks?

Thinking along similar lines, would it be a useful RFE to allow asymmetric mirroring like this:

- dev1, dev2 are both 250GB, dev3 is 500GB
- zpool create tank mirror dev1,dev2 dev3

This means that half of dev3 would mirror dev1, the other half would mirror dev2, and dev1,dev2 is a regular stripe. The utility of this would be for cases where customers have set up mirrors, then need to replace disks or upgrade the mirror after a long time, when bigger disks are easier to get than smaller ones, while reusing older disks.

Best regards, Constantin
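For the two-mirror (striped mirror) case asked about earlier in the thread, the single-mirror recipe would presumably extend like this (untested sketch; all device names are placeholders, and whether the moved disks remain importable after detach is exactly the open question here, since detach overwrites the vdev labels):

```shell
# zpool create tank mirror dev1 dev2 mirror dev3 dev4
# zpool attach tank dev1 dev5       # add a third side to the first mirror
# zpool attach tank dev3 dev6       # ... and to the second
  (wait for both resilvers to complete, physically unplug dev5 and dev6)
# zpool detach tank dev5
# zpool detach tank dev6
  (on the new box)
# zpool import tank
```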
Re: [zfs-discuss] Gzip compression for ZFS
Erblichs wrote:
> My two cents, ... Secondly, if I can add an additional item, would anyone want to be able to encrypt the data vs compress, or to be able to combine encryption with compression?

Yes, I might want to encrypt all of my laptop's hard drive contents, and I might also want compression applied prior to encryption to maximise the utility I get from the relatively limited space.

Darren
Re: [zfs-discuss] Re: Something like spare sectors...
Mark Maybee wrote:
> Anton B. Rang wrote:
> > This sounds a lot like:
> > 6417779 ZFS: I/O failure (write on ...) -- need to reallocate writes
> > which would allow us to retry write failures on alternate vdevs. Of course, if there's only one vdev, the write should be retried to a different block on the original vdev ... right?
> Yes, although it depends on the nature of the write failure. If the write failed because the device is no longer available, ZFS will not continue to try different blocks.

So if I unplug a USB hard drive at an inopportune time, ZFS will panic Solaris? Or to put it differently, if power goes away from the disk before the operating system, things will go badly?

Darren
Re: [zfs-discuss] Re: Poor man's backup by attaching/detaching mirror
Hi,

> > How would you access the data on that device? Presumably, zpool import.
> yes.

> > This is basically what everyone does today with mirrors, isn't it? :-)
> sure.

This may not be pretty, but it's what customers are doing all the time with regular mirrors, 'cause it's quick, easy and reliable.

Cheers, Constantin
[zfs-discuss] 120473-05
Hello zfs-discuss,

In order to get IDR126199-01 I need to install 120473-05 first. I can get 120473-07, but everything newer than -05 is marked as incompatible with IDR126199-01, so I do not want to force it. Local Sun support also has problems getting 120473-05, so I'm stuck for now, and I would really like to get that IDR running.

Can someone help?

-- Best regards, Robert mailto:[EMAIL PROTECTED] http://milek.blogspot.com
Re: [zfs-discuss] RAID-Z resilver broken
On Sat, Apr 07, 2007 at 05:05:18PM -0500, in a galaxy far far away, Chris Csanady said:
> In a recent message, I detailed the excessive checksum errors that occurred after replacing a disk. It seems that after a resilver completes, it leaves a large number of blocks in the pool which fail to checksum properly. Afterward, it is necessary to scrub the pool in order to correct these errors. After some testing, it seems that this only occurs with RAID-Z. The same behavior can be observed on both snv_59 and snv_60, though I do not have any other installs to test at the moment.

A colleague at work and I have followed the same steps, including running a digest on /test/file, on an SXCE build 61 system today and can confirm the exact same, and disturbing, result. My colleague mentioned to me he has witnessed the same resilver behavior on builds 57 and 60. The box these steps were performed on was 'luupgraded' from SXCE build 60 to 61 using the SUNWlu* packages from 61!

# cat /etc/release
                    Solaris Nevada snv_61 X86
         Copyright 2007 Sun Microsystems, Inc.  All Rights Reserved.
                  Use is subject to license terms.
                     Assembled 26 March 2007
# mkdir /tmp/test
# mkfile 64m /tmp/test/0 /tmp/test/1
# zpool create test raidz /tmp/test/0 /tmp/test/1
# mkfile 16m /test/file
# digest -v -a sha1 /test/file
sha1 (/test/file) = 3b4417fc421cee30a9ad0fd9319220a8dae32da2
# zpool export test
# rm /tmp/test/0
# zpool import -d /tmp/test test
# mkfile 64m /tmp/test/0
# zpool replace test /tmp/test/0
# digest -v -a sha1 /test/file
sha1 (/test/file) = 3b4417fc421cee30a9ad0fd9319220a8dae32da2
# zpool status test
  pool: test
 state: ONLINE
 scrub: resilver completed with 0 errors on Wed Apr 11 15:19:15 2007
config:

        NAME             STATE     READ WRITE CKSUM
        test             ONLINE       0     0     0
          raidz1         ONLINE       0     0     0
            /tmp/test/0  ONLINE       0     0     0
            /tmp/test/1  ONLINE       0     0     0

errors: No known data errors
# zpool scrub test
# zpool status test
  pool: test
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.
        An attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: scrub completed with 0 errors on Wed Apr 11 15:22:30 2007
config:

        NAME             STATE     READ WRITE CKSUM
        test             ONLINE       0     0     0
          raidz1         ONLINE       0     0     0
            /tmp/test/0  ONLINE       0     0    17
            /tmp/test/1  ONLINE       0     0     0

errors: No known data errors

I don't think these checksum errors are a good sign. The sha1 digest of the file *does* show to be the same, so the question arises: is the resilver process truly broken (even though in this test case the test file does appear to be unchanged, based on the sha1 digest)?

Marco
--
# make mistake
make: don't know how to make mistake.  Stop
Re: [zfs-discuss] 120473-05
Robert Milkowski wrote:
> Hello zfs-discuss, In order to get IDR126199-01 I need to install 120473-05 first. [...] Can someone help?

Hi,

This patch will be on SunSolve possibly later today, tomorrow at the latest I suspect, as it has only just been pushed out from testing. I have sent the patch in another mail for now.

Enda
Re: [zfs-discuss] Re: ZFS improvements
Anton B. Rang wrote:
> This might be impractical for a large file system, of course. It might be easier to have a 'zscavenge' that would recover data, where possible, from a corrupted file system. But there should be at least one of these. Losing a whole pool due to the corruption of a couple of blocks of metadata is a Bad Thing.

That could be handy for any number of data-transport, borked-system-recovery and even some forensic-like tasks:

zscavenge badpool | zfs recv

So, suppose a user has a few hundred GB of data that they'd like copied directly onto our fileserver. They bring me a ZFS-formatted external USB drive. Instead of mounting it and messing with their data, I zscavenge /dev/usbdevice, and then write that into a pool that I'm comfortable messing with. It works just as well for someone trying to recover a truly borked system. One could recover the data without making any changes whatsoever to the drive, so that I can put everything back the way I found it -- in case I can't fix it, the next person who tries to fix it has a clean slate.

-Luke
Re: [zfs-discuss] Poor man's backup by attaching/detaching mirror drives on a _striped_ pool?
On April 11, 2007 11:54:38 AM +0200 Constantin Gonzalez Schmitz [EMAIL PROTECTED] wrote:
> So, instead of detaching, would unplugging, then detaching work? I'm thinking something like this:
> - zpool create tank mirror dev1 dev2 dev3
> - {physically move dev3 to new box}
> - zpool detach tank dev3

If we're talking about a 3rd device, added in order to migrate the data, why not just zfs send | zfs recv?

-frank
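A sketch of Frank's suggestion (the hostname newbox and the dataset names are placeholders; in builds of this vintage, send/recv operates on one filesystem snapshot at a time):

```shell
# zfs snapshot tank/data@migrate
# zfs send tank/data@migrate | ssh newbox zfs recv newtank/data
```

This avoids the attach/resilver/detach dance entirely, at the cost of copying the data over the wire.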
Re: [zfs-discuss] ZFS vs. rmvolmgr
> 1. Connect USB storage device (a disk) to machine
> 2. Find USB device through rmformat
> 3. Try zpool create on that device. It fails with: can't open /dev/rdsk/cNt0d0p0, device busy

If your disk was originally formatted with pcfs or ufs, it would be automounted when connected. If you didn't unmount it prior to zpool create, that might be the problem. If that's the case, you might not actually need all the fdi hackery; simply unmount the disk, e.g.:

$ rmmount -l
/dev/dsk/c2t0d0p0:1   rmdisk,rmdisk0,CRUZER,/media/CRUZER
$ rmumount rmdisk0
rmdisk0 /dev/dsk/c2t0d0p0:1 unmounted

-Artem.
Re: [zfs-discuss] Re: Poor man's backup by attaching/detaching mirror
> > How would you access the data on that device? Presumably, zpool import.
> This is basically what everyone does today with mirrors, isn't it? :-)

But that's not possible here, because we can't deport (or import) a subset of a pool, correct? So I could detach a disk, but that disk is no longer part of a pool and can't be imported, unless I'm missing something.

-- Darren Dunham [EMAIL PROTECTED] Senior Technical Consultant TAOS http://www.taos.com/ Got some Dr Pepper? San Francisco, CA bay area This line left intentionally blank to confuse you.
Re: [zfs-discuss] RAID-Z resilver broken
ugh, thanks for exploring this and isolating the problem. We will look into what is going on (wrong) here. I have filed bug:

6545015 RAID-Z resilver broken

to track this problem.

-Mark

Marco van Lienen wrote:
> A colleague at work and I have followed the same steps, including running a digest on /test/file, on an SXCE build 61 system today and can confirm the exact same, and disturbing, result. My colleague mentioned to me he has witnessed the same resilver behavior on builds 57 and 60. [...] I don't think these checksum errors are a good sign. The sha1 digest of the file *does* show to be the same, so the question arises: is the resilver process truly broken (even though in this test case the test file does appear to be unchanged, based on the sha1 digest)?
Re: [zfs-discuss] RAID-Z resilver broken
On 4/11/07, Marco van Lienen [EMAIL PROTECTED] wrote:
> A colleague at work and I have followed the same steps, including running a digest on /test/file, on an SXCE build 61 system today and can confirm the exact same, and disturbing, result. My colleague mentioned to me he has witnessed the same resilver behavior on builds 57 and 60.

Thank you for taking the time to confirm this. Just as long as people are aware of it, it shouldn't really cause much trouble. Still, it gave me quite a scare after replacing a bad disk.

> I don't think these checksum errors are a good sign. The sha1 digest of the file *does* show to be the same, so the question arises: is the resilver process truly broken (even though in this test case the test file does appear to be unchanged, based on the sha1 digest)?

ZFS still has good data, so this is not unexpected. It is interesting though that it managed to read all of the data without finding any bad blocks. I just tried this with a more complex directory structure, and other variations, with the same result. It is bizarre, but ZFS only manages to use the good data in normal operation.

To see exactly what is damaged, try the following instead. After the resilver completes, zpool offline a known good device of the RAID-Z. Then, do a scrub or try to read the data. Afterward, zpool status -v will display a list of the damaged files, which is very nice.

Chris
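Chris's verification procedure, as a transcript against the /tmp/test pool from the reproduction earlier in the thread (here /tmp/test/1 is assumed to be the known good device, i.e. the one that was not replaced):

```shell
# zpool offline test /tmp/test/1    # force reads to come from the resilvered device
# zpool scrub test
  (wait for the scrub to complete)
# zpool status -v test              # -v lists the damaged files, if any
```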
Re[2]: [zfs-discuss] Re: Poor man's backup by attaching/detaching mirror
Hello Darren,

Wednesday, April 11, 2007, 7:39:18 PM, you wrote:
> > > How would you access the data on that device? Presumably, zpool import.
> > This is basically what everyone does today with mirrors, isn't it? :-)
> But that's not possible here because we can't deport (or import) a subset of a pool, correct? So I could detach a disk, but that disk is no longer part of a pool and can't be imported unless I'm missing something.

You're correct about the current state. However, it should be possible to atomically detach all submirrors at the same time without destroying their labels (actually, while relabelling them) and import them as a separate pool. User data would be consistent in the same way as if you had detached an SVM submirror. IIRC there's an RFE for it.

-- Robert
[zfs-discuss] Re: Re: zfs destroy snapshot takes hours
Rebooting fixed it. Before rebooting, I ran the zdb script suggested above - it created a 114MB file.
Re: [zfs-discuss] # devices in raidz.
Mike,

This RFE is still being worked and I have no ETA on completion...

cs

Mike Seda wrote:
> I noticed that there is still an open bug regarding removing devices from a zpool: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=4852783 Does anyone know if or when this feature will be implemented?

Cindy Swearingen wrote:
> Hi Mike, Yes, outside of the hot-spares feature, you can detach, offline, and replace existing devices in a pool, but you can't remove devices, yet. This feature work is being tracked under this RFE: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=4852783

Mike Seda wrote:
> Hi All, From reading the docs, it seems that you can add devices (non-spares) to a zpool, but you cannot take them away, right? Best, Mike

Victor Latushkin wrote:
> Maybe something like the slow parameter of VxVM?
>
> slow[=iodelay]  Reduces the system performance impact of copy operations. Such operations are usually performed on small regions of the volume (normally from 16 kilobytes to 128 kilobytes). This option inserts a delay between the recovery of each such region. A specific delay can be specified with iodelay as a number of milliseconds; otherwise, a default is chosen (normally 250 milliseconds).

> For modern machines, which *should* be the design point, the channel bandwidth is underutilized, so why not use it? NB: at 4 128-kilobyte iops per second, it would take 11 days and 8 hours to resilver a single 500 GByte drive -- feeling lucky? In the bad old days, when disks were small and the systems were slow, this made some sense. The better approach is for the file system to do what it needs to do as efficiently as possible, which is the current state of ZFS.

Well, we are trying to balance the impact of resilvering on running applications with the speed of resilvering. I think that having an option to tell the filesystem to postpone full-throttle resilvering till some quieter period of time may help. This may be combined with some throttling mechanism, so that during a quieter period resilvering is done at full speed, while during a busy period it may continue at reduced speed. Such an arrangement may be useful for customers with e.g. well-defined SLAs.

Wbr, Victor
Re[2]: [zfs-discuss] 120473-05
Hello Enda,

Wednesday, April 11, 2007, 4:21:35 PM, you wrote:
> Hi
> This patch will be on SunSolve possibly later today, tomorrow at the latest I suspect, as it has only just been pushed out from testing. I have sent the patch in another mail for now.

Thank you for the patch - it worked (installed) along with the IDR properly. However, it seems the problem is not solved by the IDR :(

-- Robert
[zfs-discuss] zfs sharing
I'm not sure if this is an nfs/autofs problem or a zfs problem... but I'll try here first...

On our server, I've got a zfs directory called cube/builds/izick/. In this directory I have a number of mountpoints to other zfs file systems. The problem happens when we clone a new zfs file system, say cube/builds/izick/foo: any client system that already has cube/builds/izick mounted can see the new directory foo, but cannot see the contents. It looks like a blank directory on the client systems, but on the server it is fully populated with data. All the zfs file systems are shared. Restarting autofs and nfs/client does nothing. The only way to fix this is to unmount the directory on the client, which can be invasive to a desktop machine.

Could there be a problem because the zfs file systems are nested? Is there a known issue with zfs-nfs interactions where zfs doesn't tell nfs properly that there has been an update, other than just the mountpoint?

thanks... Tony
Re: [zfs-discuss] zfs sharing
Anthony J. Scarpino wrote: I'm not sure if this is an nfs/autofs problem or a zfs problem... But I'll try here first... On our server, I've got a zfs directory called cube/builds/izick/. In this directory I have a number of mountpoints to other zfs file systems.. The problem happens when we clone a new zfs file system, say cube/builds/izick/foo: any client system that already has cube/builds/izick mounted can see the new directory foo, but cannot see its contents. It looks like a blank directory on the client systems, but on the server it is fully populated with data.. All the zfs file systems are shared.. Restarting autofs and nfs/client does nothing.. The only way to fix this is to unmount the directory on the client, which can be invasive to a desktop machine.. Could there be a problem because the zfs file systems are nested? Is there a known issue with zfs-nfs interactions where zfs doesn't tell nfs properly that there has been an update, other than just the mountpoint? thanks... This is a known limitation - you would need to add entries to your automounter maps to let the client know to do mounts for those 'nested' entries. We're working on it - since the client can see the new directories and detect that they're different filesystems, we could do what we call 'mirror mounts' to make them available. See http://opensolaris.org/os/project/nfs-namespace/ for more on this and other work. Rob T
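Until mirror mounts arrive, the workaround Rob describes can be sketched as a hierarchical automounter map entry. The map name, server name, and dataset names below are hypothetical and would need to match the real exports; treat this as an illustration of the shape of the workaround, not a tested recipe:

```
# /etc/auto_master on each client (hypothetical map name):
/builds  auto_builds

# /etc/auto_builds -- a hierarchical entry that also mounts the
# nested ZFS filesystems; a newly cloned filesystem then only
# needs a new line in the map rather than a client unmount:
izick \
    /      server:/cube/builds/izick \
    /foo   server:/cube/builds/izick/foo
```

After updating the maps, running automount(1M) (or restarting the autofs service) on the client should pick up the new entries.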
Re: [zfs-discuss] ZFS and Linux
Hello Ignatich, Thursday, April 12, 2007, 12:32:13 AM, you wrote: I Hello, I I believe that ZFS and its concepts are truly revolutionary, to the I point that I no longer see any OS as modern if it does not have I comparable storage functionality. Therefore I think that a file I system/disk manager with similar qualities should be written for Linux. I Does Sun have plans to dual license ZFS as GPL so it can be ported to I native Linux? I If not, is it legal to write a ZFS clone from scratch while I maintaining binary compatibility with the original? I Jeff mentioned in his blog that Sun filed 56 patents on ZFS related I technologies. Can anybody from the company provide me with more I information about this? I If porting ZFS to the Linux kernel is not an option and I were to implement I a different file system with ZFS ideas in mind, how can I be safe and not I break any Sun patents? 1. there's a project to port ZFS to Linux within FUSE 2. I hope zfs won't be dual licensed - I'm not a lawyer but dual licensing seems to me to provide more problems Now the idea to have ZFS on different platforms is appealing (providing 100% compatibility), but on the other hand real competition is good for the market, including the open source one. Technologies like ZFS or DTrace make Solaris more different in a good way, and just for competitiveness it could actually be good if Linux can't just copy it. I'm looking closely at GPLv3, but maybe Linux should change its license to actually provide more freedom and the problem would disappear then. See ZFS being ported to FreeBSD. I really see no point wasting Open Solaris resources to keep ZFS dual licensed. And frankly, it would be mostly Sun's resources. I would rather spend those resources on something more important. -- Best regards, Robert mailto:[EMAIL PROTECTED] http://milek.blogspot.com
[zfs-discuss] raidz2 another resilver problem
Hello zfs-discuss, One of the disks started to behave strangely:

Apr 11 16:07:42 thumper-9.srv sata: [ID 801593 kern.notice] NOTICE: /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]:
Apr 11 16:07:42 thumper-9.srv port 6: device reset
Apr 11 16:07:42 thumper-9.srv scsi: [ID 107833 kern.warning] WARNING: /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0 (sd27):
Apr 11 16:07:42 thumper-9.srv Error for Command: read(10) Error Level: Retryable
Apr 11 16:07:42 thumper-9.srv scsi: [ID 107833 kern.notice] Requested Block: 41122898 Error Block: 41122898
Apr 11 16:07:42 thumper-9.srv scsi: [ID 107833 kern.notice] Vendor: ATA Serial Number:
Apr 11 16:07:42 thumper-9.srv scsi: [ID 107833 kern.notice] Sense Key: No Additional Sense
Apr 11 16:07:42 thumper-9.srv scsi: [ID 107833 kern.notice] ASC: 0x0 (no additional sense info), ASCQ: 0x0, FRU: 0x0
Apr 11 16:07:42 thumper-9.srv sata: [ID 801593 kern.notice] NOTICE: /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]:
Apr 11 16:07:42 thumper-9.srv port 6: device reset
Apr 11 16:07:43 thumper-9.srv sata: [ID 801593 kern.notice] NOTICE: /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]:
Apr 11 16:07:43 thumper-9.srv port 6: link lost
Apr 11 16:07:43 thumper-9.srv sata: [ID 801593 kern.notice] NOTICE: /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]:
Apr 11 16:07:43 thumper-9.srv port 6: link established
Apr 11 16:07:43 thumper-9.srv marvell88sx: [ID 812950 kern.warning] WARNING: marvell88sx2: error on port 6:
Apr 11 16:07:43 thumper-9.srv marvell88sx: [ID 517869 kern.info] device disconnected
Apr 11 16:07:43 thumper-9.srv marvell88sx: [ID 517869 kern.info] device connected
Apr 11 16:07:43 thumper-9.srv marvell88sx: [ID 517869 kern.info] SError interrupt
Apr 11 16:08:42 thumper-9.srv sata: [ID 801593 kern.notice] NOTICE: /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]:
Apr 11 16:08:42 thumper-9.srv port 6: device reset
Apr 11 16:08:43 thumper-9.srv sata: [ID 801593 kern.notice] NOTICE: /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]:
Apr 11 16:08:43 thumper-9.srv port 6: device reset
Apr 11 16:08:43 thumper-9.srv sata: [ID 801593 kern.notice] NOTICE: /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]:
Apr 11 16:08:43 thumper-9.srv port 6: link lost
Apr 11 16:08:43 thumper-9.srv sata: [ID 801593 kern.notice] NOTICE: /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]:
Apr 11 16:08:43 thumper-9.srv port 6: link established
Apr 11 16:08:43 thumper-9.srv marvell88sx: [ID 812950 kern.warning] WARNING: marvell88sx2: error on port 6:
Apr 11 16:08:43 thumper-9.srv marvell88sx: [ID 517869 kern.info] device disconnected
Apr 11 16:08:43 thumper-9.srv marvell88sx: [ID 517869 kern.info] device connected
Apr 11 16:08:43 thumper-9.srv marvell88sx: [ID 517869 kern.info] SError interrupt
Apr 11 16:09:44 thumper-9.srv sata: [ID 801593 kern.notice] NOTICE: /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]:
Apr 11 16:09:44 thumper-9.srv port 6: device reset
Apr 11 16:09:44 thumper-9.srv sata: [ID 801593 kern.notice] NOTICE: /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]:
Apr 11 16:09:44 thumper-9.srv port 6: device reset
Apr 11 16:09:45 thumper-9.srv sata: [ID 801593 kern.notice] NOTICE: /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]:
Apr 11 16:09:45 thumper-9.srv port 6: link lost
Apr 11 16:09:45 thumper-9.srv sata: [ID 801593 kern.notice] NOTICE: /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]:
Apr 11 16:09:45 thumper-9.srv port 6: link established
Apr 11 16:09:45 thumper-9.srv marvell88sx: [ID 812950 kern.warning] WARNING: marvell88sx2: error on port 6:
Apr 11 16:09:45 thumper-9.srv marvell88sx: [ID 517869 kern.info] device disconnected
Apr 11 16:09:45 thumper-9.srv marvell88sx: [ID 517869 kern.info] device connected
Apr 11 16:09:45 thumper-9.srv marvell88sx: [ID 517869 kern.info] SError interrupt
Apr 11 16:09:45 thumper-9.srv scsi: [ID 107833 kern.warning] WARNING: /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0 (sd27):
Apr 11 16:09:45 thumper-9.srv Error for Command: read(10) Error Level: Retryable
Apr 11 16:09:45 thumper-9.srv scsi: [ID 107833 kern.notice] Requested Block: 41122898 Error Block: 41122898
Apr 11 16:09:45 thumper-9.srv scsi: [ID 107833 kern.notice] Vendor: ATA Serial Number:
Apr 11 16:09:45 thumper-9.srv scsi: [ID 107833 kern.notice] Sense Key: No Additional Sense
Apr 11 16:09:45 thumper-9.srv scsi: [ID 107833 kern.notice] ASC: 0x0 (no
Re: [zfs-discuss] ZFS and Linux
On 4/11/07, Robert Milkowski [EMAIL PROTECTED] wrote: I'm looking closely at GPLv3, but maybe Linux should change its license to actually provide more freedom and the problem would disappear then. See ZFS being ported to FreeBSD. Agreed. Why does everyone need to be compatible with Linux?? Why not Linux changes its license and be compatible with *BSD and Solaris?? Rayson I really see no point wasting Open Solaris resources to keep ZFS dual licensed. And frankly, it would be mostly Sun's resources. I would rather spend those resources on something more important. -- Best regards, Robert mailto:[EMAIL PROTECTED] http://milek.blogspot.com
Re: [zfs-discuss] Poor man's backup by attaching/detaching mirror drives on a _striped_ pool?
Frank Cusack wrote: On April 11, 2007 11:54:38 AM +0200 Constantin Gonzalez Schmitz [EMAIL PROTECTED] wrote: Hi Mark, Mark J Musante wrote: On Tue, 10 Apr 2007, Constantin Gonzalez wrote: Has anybody tried it yet with a striped mirror? What if the pool is composed of two mirrors? Can I attach devices to both mirrors, let them resilver, then detach them and import the pool from those? You'd want to export them, not detach them. Detaching will overwrite the vdev labels and make it un-importable. thank you for the export/import idea, it does sound cleaner from a ZFS perspective, but comes at the expense of temporarily unmounting the filesystems. So, instead of detaching, would unplugging, then detaching work? I'm thinking something like this: - zpool create tank mirror dev1 dev2 dev3 - {physically move dev3 to new box} - zpool detach tank dev3 If we're talking about a 3rd device, added in order to migrate the data, why not just zfs send | zfs recv? Time? The reason people go the split-mirror route, at least in block land, is because once you split the volume you can export it someplace else and start using it. The same goes for constant replication, where you suspend the replication, take a copy, go start working on it, then restart the replication. (Lots of ways people do that one.) I think the requirement could be voiced as: I want an independent copy of my data on a secondary system, in a quick fashion, and I want to avoid using resources from the primary system. The fun part is that people will think in terms of current technologies, so you'll see split mirror, or volume copy, or TrueCopy mixed in for flavor.
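As a concrete sketch of the zfs send | zfs recv alternative Frank mentions: the pool, dataset, snapshot, and host names below are hypothetical, and the commands assume a live ZFS system on both ends, so treat this as an illustration of the workflow rather than a tested recipe.

```shell
# Full copy of a dataset to a secondary box (all names made up):
zfs snapshot tank/data@backup-1
zfs send tank/data@backup-1 | ssh backuphost zfs recv -F tank2/data

# Later, ship only the delta since the previous snapshot:
zfs snapshot tank/data@backup-2
zfs send -i tank/data@backup-1 tank/data@backup-2 | \
    ssh backuphost zfs recv tank2/data
```

Unlike a split mirror, the copy on the secondary host is immediately usable and fully independent of the primary pool; the trade-off is that the data is read through the primary host rather than resilvered at the block level, so it does consume primary-side I/O and CPU.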
Re: [zfs-discuss] ZFS and Linux
On Wed, 11 Apr 2007, Rayson Ho wrote: Why does everyone need to be compatible with Linux?? Why not Linux changes its license and be compatible with *BSD and Solaris?? I agree with this sentiment, but the reality is that changing the Linux kernel's license would require the consent of every copyright holder, many of whom may not be able to be tracked down or give their consent. So in practical terms, the license for Linux CAN'T be changed: they're stuck with it (it being GPLv2). -- Rich Teer, SCSA, SCNA, SCSECA, OGB member CEO, My Online Home Inventory Voice: +1 (250) 979-1638 URLs: http://www.rite-group.com/rich http://www.myonlinehomeinventory.com
Re: [zfs-discuss] ZFS and Linux
Rich Teer writes: On Wed, 11 Apr 2007, Rayson Ho wrote: Why does everyone need to be compatible with Linux?? Why not Linux changes its license and be compatible with *BSD and Solaris?? I agree with this sentiment, but the reality is that changing the Linux kernel's license would require the consent of every copyright holder, many of whom may not be able to be tracked down or give their consent. So in practical terms, the license for Linux CAN'T be changed: they're stuck with it (it being GPLv2). Exactly! And nobody can force Sun to dual license if they do not want to. But enterprises that use Linux, and the Linux community in general, still need a proper storage system, right? And they might still have perfectly valid reasons not to switch to Solaris. If ZFS can't be ported, and writing a binary-compatible storage system is impossible or impractical, then a ZFS alternative must and will be designed and implemented sooner or later. Sincerely yours, Max V. Yudin
Re: [zfs-discuss] ZFS and Linux
Robert Milkowski writes: I'm looking closely at GPLv3, but maybe Linux should change its license to actually provide more freedom and the problem would disappear then. See ZFS being ported to FreeBSD. Will GPLv3 be CDDL compatible? I don't think so, but I'm no lawyer. Perhaps somebody with more knowledge in these matters can clarify? Sincerely yours, Max V. Yudin
Re: [zfs-discuss] ZFS and Linux
On Thu, 12 Apr 2007, Ignatich wrote: Does Sun have plans to dual license ZFS as GPL so it can be ported to native Linux? I don't work for Sun so I can't speak for them. The last I heard was that Sun was looking at GPLv3, and considering its use for one or more projects, either dual licensed with the CDDL, or on its own. My own personal opinion is that dual licensing should be avoided because it makes things needlessly complicated, and there is also the real danger of license-based forks being introduced. If not, is it legal to write a ZFS clone from scratch while maintaining binary compatibility with the original? The short answer is: seek professional legal counsel. The longer answer, bearing in mind that IANAL, is that yes, a clean-room implementation would be legal. -- Rich Teer, SCSA, SCNA, SCSECA, OGB member CEO, My Online Home Inventory Voice: +1 (250) 979-1638 URLs: http://www.rite-group.com/rich http://www.myonlinehomeinventory.com
Re: [zfs-discuss] zfs sharing
Anthony Scarpino wrote: [EMAIL PROTECTED] wrote: Anthony J. Scarpino wrote: I'm not sure if this is an nfs/autofs problem or a zfs problem... But I'll try here first... On our server, I've got a zfs directory called cube/builds/izick/. In this directory I have a number of mountpoints to other zfs file systems.. The problem happens when we clone a new zfs file system, say cube/builds/izick/foo: any client system that already has cube/builds/izick mounted can see the new directory foo, but cannot see its contents. It looks like a blank directory on the client systems, but on the server it is fully populated with data.. All the zfs file systems are shared.. Restarting autofs and nfs/client does nothing.. The only way to fix this is to unmount the directory on the client, which can be invasive to a desktop machine.. Could there be a problem because the zfs file systems are nested? Is there a known issue with zfs-nfs interactions where zfs doesn't tell nfs properly that there has been an update, other than just the mountpoint? thanks... This is a known limitation - you would need to add entries to your automounter maps to let the client know to do mounts for those 'nested' entries. We're working on it - since the client can see the new directories and detect that they're different filesystems, we could do what we call 'mirror mounts' to make them available. See http://opensolaris.org/os/project/nfs-namespace/ for more on this and other work. Rob T Ok... thanks for the link.. I'm happy this is known and being worked on.. Any targets yet for when this would be integrated? Early summer, we hope :-) Rob T
[zfs-discuss] Re: zfs sharing
Robert Thurlow wrote: Anthony J. Scarpino wrote: I'm not sure if this is an nfs/autofs problem or a zfs problem... But I'll try here first... On our server, I've got a zfs directory called cube/builds/izick/. In this directory I have a number of mountpoints to other zfs file systems.. The problem happens when we clone a new zfs file system, say cube/builds/izick/foo: any client system that already has cube/builds/izick mounted can see the new directory foo, but cannot see its contents. It looks like a blank directory on the client systems, but on the server it is fully populated with data.. All the zfs file systems are shared.. Restarting autofs and nfs/client does nothing.. The only way to fix this is to unmount the directory on the client, which can be invasive to a desktop machine.. Could there be a problem because the zfs file systems are nested? Is there a known issue with zfs-nfs interactions where zfs doesn't tell nfs properly that there has been an update, other than just the mountpoint? thanks... This is a known limitation - you would need to add entries to your automounter maps to let the client know to do mounts for those 'nested' entries. We're working on it - since the client can see the new directories and detect that they're different filesystems, we could do what we call 'mirror mounts' to make them available. See http://opensolaris.org/os/project/nfs-namespace/ for more on this and other work. Rob T Ok... thanks for the link.. I'm happy this is known and being worked on.. Any targets yet for when this would be integrated? thanks.. Tony
[zfs-discuss] Re: ZFS remote mirroring
How have your snapshotting experiments worked out for fault tolerance? One of the things I was hoping was that a solution could be easily constructed, similar to what we see from some higher-end IP SAN solutions like LeftHand Networks SAN/iQ and proprietary SANs like EqualLogic, using just ZFS and iSCSI. Out of curiosity, you wouldn't be experimenting with this setup and VMware ESX 3.0.1, would you?
Re: [zfs-discuss] ZFS and Linux
On 11-Apr-07, at 8:25 PM, Ignatich wrote: Rich Teer writes: On Wed, 11 Apr 2007, Rayson Ho wrote: Why does everyone need to be compatible with Linux?? Why not Linux changes its license and be compatible with *BSD and Solaris?? I agree with this sentiment, but the reality is that changing the Linux kernel's license would require the consent of every copyright holder, many of whom may not be able to be tracked down or give their consent. So in practical terms, the license for Linux CAN'T be changed: they're stuck with it (it being GPLv2). Exactly! And nobody can force Sun to dual license if they do not want to. I hope this isn't turning into a license flame war. But why do Linux contributors not deserve the right to retain their choice of license as equally as Sun, or any other copyright holder, does? The anti-GPL kneejerk just witnessed on this list is astonishing. The BSD license, for instance, is fundamentally undesirable to many GPL licensors (myself included). It seems Sun is internally divided on the GPL. Your CEO and Java division seem quite happy with it. But enterprises that use Linux, and the Linux community in general, still need a proper storage system, right? And they might still have perfectly valid reasons not to switch to Solaris. If ZFS can't be ported, and writing a binary-compatible storage system is impossible or impractical, then a ZFS alternative must and will be designed and implemented sooner or later. ZFS has value in and of itself as a differentiator in Solaris, which will drive adoption and satisfaction. Solaris may be the only credible competitor Linux has left, which will keep it honest. :) --T Sincerely yours, Max V. Yudin
Re: [zfs-discuss] ZFS and Linux
On 4/11/07, Toby Thain [EMAIL PROTECTED] wrote: I hope this isn't turning into a License flame war. But why do Linux contributors not deserve the right to retain their choice of license as equally as Sun, or any other copyright holder, does? Hey, then just don't *keep on* asking to relicense ZFS (and anything else) to GPL. I don't think a lot of Solaris users ask on the Linux kernel mailing list to relicense Linux kernel components to CDDL so that they can use the features on Solaris. Rayson The anti-GPL kneejerk just witnessed on this list is astonishing. The BSD license, for instance, is fundamentally undesirable to many GPL licensors (myself included). It seems Sun is internally divided on the GPL. Your CEO and Java division seem quite happy with it. But enterprises that use Linux and Linux community in general still need proper storage system, right? And they might still have perfectly valid reasons not to switch to Solaris. If ZFS can't be ported and writing binary compatible storage system is impossible or impractical then ZFS alternative must and will be designed and implemented sooner or later. ZFS has value in and of itself as a differentiator in Solaris, which will drive adoption and satisfaction. Solaris may be the only credible competitor Linux has left, which will keep it honest. :) --T Sincerely yours, Max V. Yudin
Re: [zfs-discuss] ZFS and Linux
From: Toby Thain [EMAIL PROTECTED] On 11-Apr-07, at 8:25 PM, Ignatich wrote: Rich Teer writes: On Wed, 11 Apr 2007, Rayson Ho wrote: Why does everyone need to be compatible with Linux?? Why not Linux changes its license and be compatible with *BSD and Solaris?? I agree with this sentiment, but the reality is that changing the Linux kernel's license would require the consent of every copyright holder, many of whom may not be able to be tracked down or give their consent. So in practical terms, the license for Linux CAN'T be changed: they're stuck with it (it being GPLv2). Exactly! And nobody can force Sun to dual license if they do not want to. I hope this isn't turning into a License flame war. But why do Linux contributors not deserve the right to retain their choice of license as equally as Sun, or any other copyright holder, does? The anti-GPL kneejerk just witnessed on this list is astonishing. The BSD license, for instance, is fundamentally undesirable to many GPL licensors (myself included). GPL for Linux is a double edged sword. Linux has this interface called netfilter which I provided input on many years ago. A primary goal of that was to enable other open source projects (such as IPFilter) to work with Linux. If, however, the mere act of compiling IPFilter for Linux forces it to be GPL then it's not something I ever want to happen and in turn I would withdraw IPFilter's Linux support and the point of having the API would be somewhat diminished. The problem here is around what the term derived work means and what exactly is one. While we all have opinions, to my knowledge it is untested in court. If interoperability with Linux means you have no choice in your licence then the only option seems to be excluding Linux. Darren
Re: [zfs-discuss] ZFS and Linux
On Wed, 11 Apr 2007, Toby Thain wrote: I hope this isn't turning into a License flame war. But why do Linux contributors not deserve the right to retain their choice of license as equally as Sun, or any other copyright holder, does? Read what I wrote again, more slowly. Individually, Linux contributors have every right to retain their choice of license for software they produce. But given the viral nature of the GPL, every piece of software that goes into the Linux kernel must be GPLed, hence if the Linux community wants to change the license the Linux kernel uses, ALL contributors must agree to the change. If 90% of the contributors want to change their code to use the Toby Thain License (TTL) but the remaining 10% want to stick with the GPL, then that 90% must either live without the code owned by the 10% or drop the idea of changing license. The GPL, which the Linux community has embraced, forces them into this position. What I or anyone else thinks of the GPL is moot: it is the license you yourselves have chosen that limits your choices, not us. Which is kind of ironic, because the all-GPL or nothing nature of the GPL is routinely touted as an advantage by the more vociferous GPL advocates! The anti-GPL kneejerk just witnessed on this list is astonishing. The BSD license, for instance, is fundamentally undesirable to many GPL licensors (myself included). And power to you. As for anti-GPL knee jerks; I know two wrongs don't make a right, but how do you think we feel when Sun and the CDDL are slagged off on Slashdot et al? It seems Sun is internally divided on the GPL. Your CEO and Java division seem quite happy with it. As others have said, some licenses are better than others for certain projects; there is no One True License. drive adoption and satisfaction. Solaris may be the only credible competitor Linux has left, which will keep it honest. :) I prefer to think of it the other way round. 
:-) -- Rich Teer, SCSA, SCNA, SCSECA, OGB member CEO, My Online Home Inventory Voice: +1 (250) 979-1638 URLs: http://www.rite-group.com/rich http://www.myonlinehomeinventory.com
Re: [zfs-discuss] ZFS and Linux
On Wed, 11 Apr 2007, Rayson Ho wrote: Hey, then just don't *keep on* asking to relicense ZFS (and anything else) to GPL. Amen to that! I don't think a lot of Solaris users ask on the Linux kernel mailing list to relicense Linux kernel components to CDDL so that they can use the features on Solaris. Indeed. -- Rich Teer, SCSA, SCNA, SCSECA, OGB member CEO, My Online Home Inventory Voice: +1 (250) 979-1638 URLs: http://www.rite-group.com/rich http://www.myonlinehomeinventory.com
Re: [zfs-discuss] ZFS and Linux
On 11/04/07, Toby Thain [EMAIL PROTECTED] wrote: On 11-Apr-07, at 8:25 PM, Ignatich wrote: Rich Teer writes: On Wed, 11 Apr 2007, Rayson Ho wrote: Why does everyone need to be compatible with Linux?? Why not Linux changes its license and be compatible with *BSD and Solaris?? I agree with this sentiment, but the reality is that changing the Linux kernel's license would require the consent of every copyright holder, many of whom may not be able to be tracked down or give their consent. So in practical terms, the license for Linux CAN'T be changed: they're stuck with it (it being GPLv2). Exactly! And nobody can force Sun to dual license if they do not want to. I hope this isn't turning into a License flame war. But why do Linux contributors not deserve the right to retain their choice of license as equally as Sun, or any other copyright holder, does? Indeed, why? So why do people show up at the community doorstep asking for a license change instead of respecting the right to keep theirs? The anti-GPL kneejerk just witnessed on this list is astonishing. The BSD license, for instance, is fundamentally undesirable to many GPL licensors (myself included). Which is funny considering how many GPL projects *love* the fact that BSD-licensed code is easily integrable with their project, yet don't want to give others the same benefit. It seems Sun is internally divided on the GPL. Your CEO and Java division seem quite happy with it. Just like any community, there are differences in opinion. Communities are made of individuals. -- Less is only more where more is no good. --Frank Lloyd Wright Shawn Walker, Software and Systems Analyst [EMAIL PROTECTED] - http://binarycrusader.blogspot.com/
[zfs-discuss] Asus L1N64 - a good candidate for a ZFS based home server?
This Asus board looks promising, assuming the parts (nForce 680a chipset) are Solaris friendly: http://www.asus.com.tw/products4.aspx?l1=3&l2=136&l3=486&model=1530&modelmenu=2 The board boasts 12 (!) SATA2 ports. The FX-7x series CPUs appear to be a very cost-effective ($799) 4-core solution. Ian