Re: [zfs-discuss] planning for upgrades
Matt Harrison <iwasinnamuknow at genestate.com> writes:
> Aah, excellent, just did an export/import and it's now showing the
> expected capacity increase. Thanks for that, I should've at least
> tried a reboot :)

More recent OpenSolaris builds don't even need the export/import anymore
when expanding a raidz this way (I tested with build 82).

-marc
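For anyone repeating this later, the whole procedure looks roughly like
the sketch below (device names are hypothetical, and the final
export/import is only needed on older builds):

    # replace each disk in the raidz with a larger one, letting each
    # resilver finish before starting the next
    zpool replace datatank c5d0 c8d0
    zpool status datatank          # wait for "resilver completed"
    zpool replace datatank c7d0 c9d0
    zpool status datatank
    # older builds only: re-open the pool so it notices the new size
    zpool export datatank
    zpool import datatank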
Re: [zfs-discuss] ZFS deduplication
> Raw storage space is cheap. Managing the data is what is expensive.

Not for my customer. Internal accounting means that the storage team gets
paid for each allocated GB on a monthly basis. They have stacks of IO
bandwidth and CPU cycles to spare outside of their daily busy period. I
can't think of a better use of their time than a scheduled dedup.

> Perhaps deduplication is a response to an issue which should be solved
> elsewhere?

I don't think you can make this generalisation. For most people, yes, but
not everyone.

cheers,
--justin
Re: [zfs-discuss] zfs-discuss Digest, Vol 33, Issue 19
Hi Gilberto,

I bought a Micro Memory card too, so I'm very likely going to end up in
the same boat. I saw Neil Perrin's blog about the MM-5425 card, found that
Vmetro don't seem to want to sell them, but then last week spotted five of
those cards on eBay so snapped them up.

I'm still waiting for the hardware for this server, but regarding the
drivers, if these cards don't work out of the box I was planning to pester
Neil Perrin and see if he still has some drivers for them :)

The cards were only £20 each, so I figured buying them was a bit of a
gamble, but hopefully one that will pay off.

Ross
Re: [zfs-discuss] ZFS deduplication
> Does anyone know a tool that can look over a dataset and give
> duplication statistics? I'm not looking for something incredibly
> efficient but I'd like to know how much it would actually benefit our

Check out the following blog:

http://blogs.sun.com/erickustarz/entry/how_dedupalicious_is_your_pool
Re: [zfs-discuss] confusion and frustration with zpool
James,

May I ask what kind of USB enclosures and hubs you are using? I've had
some very bad experiences over the past month with not-so-cheap
enclosures.

W.r.t. eSATA, I found the following chipsets on the SHCL. Any others you
can recommend?

    Silicon Image 3112A
    Intel S5400
    Intel S5100
    Silicon Image Sil3114

Thanks
justin
Re: [zfs-discuss] ZFS deduplication
Justin Stringfellow wrote:
>> Raw storage space is cheap. Managing the data is what is expensive.
>
> Not for my customer. Internal accounting means that the storage team
> gets paid for each allocated GB on a monthly basis. They have stacks of
> IO bandwidth and CPU cycles to spare outside of their daily busy period.
> I can't think of a better use of their time than a scheduled dedup.
>
>> Perhaps deduplication is a response to an issue which should be solved
>> elsewhere?
>
> I don't think you can make this generalisation. For most people, yes,
> but not everyone.

Frankly, while I tend to agree with Bob that backend dedup is something
that ever-cheaper disks and client-side misuse make unnecessary, I would
_very_ much like us to have some sort of 'pay-per-feature' system, so
people who disagree with me can still get what they want. <grin>

By that I mean something along the lines of a 'bounty' system where folks
pony up cash for features. I'd love to have many more outside (from Sun)
contributors to the OpenSolaris base, ZFS in particular. Right now,
virtually all the development work is being driven by internal-to-Sun
priorities, which, given that Sun pays the developers, is OK. However, I
would really like to have some direct method where outsiders can show
management that there is direct cash for certain improvements.

For Justin, it sounds like ponying up several thousand (minimum) for a
desired feature would be no problem. And for the rest of us, I can imagine
a couple of hundred of us putting up $100 each to get RAIDZ expansion
moved to the front of the TODO list. <wink> Plus, we might be able to
attract some more interest from the hobbyist folks that way. :-)

Buying a service contract and then bugging your service rep doesn't say
the same thing as "I'm willing to pony up $10k right now for feature X."
Big customers have weight to throw around, but we need some mechanism
where a mid/small guy can make a real statement, and back it up.

--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)
Re: [zfs-discuss] Status of ZFS on Solaris 10
Enda O'Connor wrote:
> Hi
> S10_u5 has version 4, latest in opensolaris is version 10
> see http://opensolaris.org/os/community/zfs/version/10/

Actually, as of yesterday 11 is the latest in the source tree. All going
well, that will be snv_94.

http://opensolaris.org/os/community/zfs/version/11 doesn't exist yet
though.

--
Darren J Moffat
[zfs-discuss] Remove old boot environment?
Hello All,

Is there a way I can remove my old boot environments? Is it as simple as
performing a 'zfs destroy' on the older entries, followed by removing the
entry from menu.lst? I have been searching, but have not found anything...
Any help would be much appreciated!!

Here is what my /rpool/boot/grub/menu.lst looks like:

# cat menu.lst
splashimage /boot/grub/splash.xpm.gz
timeout 30
default 3
#-- ADDED BY BOOTADM - DO NOT EDIT --
title OpenSolaris 2008.05 snv_86_rc3 X86
bootfs rpool/ROOT/opensolaris
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive
#-END BOOTADM
title opensolaris-1
bootfs rpool/ROOT/opensolaris-1
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive
# End of LIBBE entry =
title opensolaris-2
bootfs rpool/ROOT/opensolaris-2
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive
# End of LIBBE entry =
#-- ADDED BY BOOTADM - DO NOT EDIT --
title Solaris 2008.11 snv_91 X86
findroot (pool_rpool,0,a)
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive
#-END BOOTADM
title opensolaris-3
bootfs rpool/ROOT/opensolaris-3
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive
# End of LIBBE entry =
title opensolaris-4
bootfs rpool/ROOT/opensolaris-4
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive
# End of LIBBE entry =

And here is the zfs list:

# zfs list
NAME                                                USED  AVAIL  REFER  MOUNTPOINT
rpool                                              6.66G  27.8G  59.5K  /rpool
[EMAIL PROTECTED]                                   19.5K      -    55K  -
rpool/ROOT                                         6.56G  27.8G    18K  /rpool/ROOT
rpool/[EMAIL PROTECTED]                              15K      -    18K  -
rpool/ROOT/opensolaris                              757M  27.8G  2.51G  legacy
rpool/ROOT/[EMAIL PROTECTED]:-:2008-07-03-00:43:32  7.09M     -  2.41G  -
rpool/ROOT/opensolaris-1                           7.33M  27.8G  3.21G  legacy
rpool/ROOT/opensolaris-1/opt                        101K  27.8G  1.02G  /opt
rpool/ROOT/opensolaris-2                           1.37M  27.8G  2.41G  /tmp/tmp8Petug
rpool/ROOT/opensolaris-2/opt                           0  27.8G   989M  /tmp/tmp8Petug/opt
rpool/ROOT/opensolaris-3                           3.17M  27.8G  3.21G  /tmp/tmpPK0x35
rpool/ROOT/opensolaris-3/opt                           0  27.8G  1.02G  /tmp/tmpPK0x35/opt
rpool/ROOT/opensolaris-4                           5.81G  27.8G  3.29G  legacy
rpool/ROOT/[EMAIL PROTECTED]                       64.3M      -  2.22G  -
rpool/ROOT/[EMAIL PROTECTED]:-:2008-05-16-01:39:46  76.3M     -  2.38G  -
rpool/ROOT/[EMAIL PROTECTED]:-:2008-07-08-03:53:06  12.9M     -  3.21G  -
rpool/ROOT/[EMAIL PROTECTED]:-:2008-07-08-04:26:54  7.41M     -  3.21G  -
rpool/ROOT/opensolaris-4/opt                       1.02G  27.8G  1.02G  /opt
rpool/ROOT/opensolaris-4/[EMAIL PROTECTED]          111K      -  3.60M  -
rpool/ROOT/opensolaris-4/[EMAIL PROTECTED]:-:2008-05-16-01:39:46  1.22M  -   989M  -
rpool/ROOT/opensolaris-4/[EMAIL PROTECTED]:-:2008-07-08-03:53:06   101K  -  1.02G  -
rpool/ROOT/opensolaris-4/[EMAIL PROTECTED]:-:2008-07-08-04:26:54      0  -  1.02G  -
rpool/ROOT/opensolaris/opt                          510K  27.8G   989M  /opt
rpool/ROOT/opensolaris/[EMAIL PROTECTED]:-:2008-07-03-00:43:32  48K  -   989M  -
rpool/export                                       98.5M  27.8G    19K  /export
rpool/[EMAIL PROTECTED]                              15K      -    19K  -
rpool/export/home                                  98.5M  27.8G  98.4M  /export/home
rpool/export/[EMAIL PROTECTED]                       18K      -    21K  -

Cheers,
Ted
Re: [zfs-discuss] Remove old boot environment?
Ted Carr wrote:
> Is there a way I can remove my old boot environments? Is it as simple as
> performing a 'zfs destroy' on the older entries, followed by removing
> the entry from menu.lst? I have been searching, but have not found
> anything... Any help would be much appreciated!!

The beadm command is probably the tool of choice here.

Cheers,
Chris
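A minimal sketch of the beadm route (the BE names match the menu.lst
above; double-check with 'beadm list' before destroying anything):

    beadm list                    # show existing BEs and which is active
    beadm destroy opensolaris-1   # removes the BE's datasets and its
                                  # GRUB menu entry, after confirmation

This avoids hand-editing menu.lst and hunting down the right datasets and
snapshots yourself.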
Re: [zfs-discuss] ZFS deduplication
Just going to make a quick comment here. It's a good point about wanting
backup software to support this; we're a much smaller company, but it's
already more difficult to manage the storage needed for backups than our
live storage.

However, we're actively planning that over the next 12 months ZFS will
actually *be* our backup system, so for us just ZFS and send/receive
supporting de-duplication would be great :) In fact, I can see that being
useful in a number of places. ZFS send/receive is already a good way to
stream incremental changes and keep filesystems in sync. Having
de-duplication built into that can only be a good thing.

PS. Yes, we'll still have off-site tape backups just in case, but the vast
majority of our backup/restore functionality (including two off-site
backups) will be just ZFS.
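For reference, the incremental replication being described works today;
a rough sketch (pool, dataset and host names here are made up):

    # one-time full copy to the backup box
    zfs snapshot tank/data@mon
    zfs send tank/data@mon | ssh backuphost zfs receive backup/data
    # nightly incrementals thereafter: only changed blocks cross the wire
    zfs snapshot tank/data@tue
    zfs send -i tank/data@mon tank/data@tue | ssh backuphost zfs receive backup/data

Dedup-aware send would shrink that stream further when the changed blocks
themselves repeat.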
Re: [zfs-discuss] confusion and frustration with zpool
I'm curious which enclosures you've had problems with? Mine are both
Maxtor One Touch; the 750 is slightly different in that it has a FireWire
port as well as USB.
Re: [zfs-discuss] confusion and frustration with zpool
Pete Hartman wrote:
> I'm curious which enclosures you've had problems with? Mine are both
> Maxtor One Touch; the 750 is slightly different in that it has a
> FireWire port as well as USB.

I've had VERY bad experiences with the Maxtor One Touch and ZFS. To the
point that we gave up trying to use them. We last tried on snv_79 though.

--
Darren J Moffat
Re: [zfs-discuss] ZFS deduplication
> Even better would be using the ZFS block checksums (assuming we are only
> summing the data, not its position or time :)... Then we could have two
> files that have 90% the same blocks, and still get some dedup value... ;)

Yes, but you will need to add some sort of highly collision-resistant
checksum (sha+md5 maybe) and code to (a) bit-level compare blocks on
collision (100% bit verification) and (b) handle linked or cascaded
collision tables (2+ blocks with the same hash but differing bits).

I actually coded some of this and was playing with it. My testbed relied
on another internal data store to track hash maps, collisions (dedup
lists) and collision cascades (kind of like what perl does with hash key
collisions). It turned out to be a real pain when taking into account
snaps and clones. I decided to wait until the resilver/grow/remove code
was in place, as this seems to be part of the puzzle.

-Wade
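To make the "hash, then verify" idea concrete, here is a toy user-level
sketch of the matching step — not the ZFS code, just the logic: hash
fixed-size blocks, then bit-compare any blocks whose hashes collide. The
path and the 128K block size are arbitrary; digest(1) is the Solaris tool
(use sha256sum elsewhere):

    # carve a file into ZFS-recordsize-like chunks
    mkdir /tmp/chunks && cd /tmp/chunks
    split -b 131072 /path/to/datafile chunk.
    # group chunks by SHA-256; a repeated hash marks a dedup candidate
    for f in chunk.*; do echo "`digest -a sha256 $f` $f"; done |
        sort | nawk '$1 == prev { print "candidate:", pfile, $2 }
                     { prev = $1; pfile = $2 }'
    # 100% bit-level verification of a candidate pair
    cmp chunk.aa chunk.ab && echo "genuinely identical"

The in-kernel version additionally has to keep that hash-to-block map
consistent across snapshots and clones, which is where it gets painful.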
Re: [zfs-discuss] ZFS deduplication
[EMAIL PROTECTED] wrote on 07/08/2008 03:08:26 AM:
>> Does anyone know a tool that can look over a dataset and give
>> duplication statistics? I'm not looking for something incredibly
>> efficient but I'd like to know how much it would actually benefit our
>
> Check out the following blog:
> http://blogs.sun.com/erickustarz/entry/how_dedupalicious_is_your_pool

Just want to add: while this is OK to give you a ballpark dedup number,
fletcher2 is notoriously collision-prone on real data sets. It is meant to
be fast at the expense of collisions. This issue can make much more dedup
appear possible than really exists on large datasets.
Re: [zfs-discuss] ZFS deduplication
[EMAIL PROTECTED] wrote:
> Just want to add: while this is OK to give you a ballpark dedup number,
> fletcher2 is notoriously collision-prone on real data sets. It is meant
> to be fast at the expense of collisions. This issue can make much more
> dedup appear possible than really exists on large datasets.

Doing this using sha256 as the checksum algorithm would be much more
interesting. I'm going to try that now and see how it compares with
fletcher2 for a small contrived test.

--
Darren J Moffat
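For anyone who wants to run the same experiment, it might look something
like the sketch below. This is an assumption-laden outline: the exact zdb
output format varies by build, so the field matching will likely need
adjusting, and the pool/dataset names are made up.

    zfs set checksum=sha256 tank/testfs     # new writes get sha256 sums
    # ...rewrite or copy the data so it is re-checksummed...
    zdb -ddddd tank/testfs |
        nawk '{ for (i = 1; i <= NF; i++) if ($i ~ /^cksum=/) print $i }' |
        sort | uniq -c | sort -rn | head    # counts > 1 suggest dedupable blocks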
Re: [zfs-discuss] ZFS deduplication
Justin Stringfellow wrote:
>> Raw storage space is cheap. Managing the data is what is expensive.
>
> Not for my customer. Internal accounting means that the storage team
> gets paid for each allocated GB on a monthly basis. They have stacks of
> IO bandwidth and CPU cycles to spare outside of their daily busy period.
> I can't think of a better use of their time than a scheduled dedup.

[donning my managerial accounting hat]

It is not a good idea to design systems based upon someone's managerial
accounting whims. These are subject to change in illogical ways at
unpredictable intervals. This is why managerial accounting can be so much
fun for people who want to hide costs.

For example, some bright manager decided that they should charge
$100/month/port for ethernet drops. So now, instead of having a
centralized, managed network with well-defined port mappings, every cube
has an el-cheapo ethernet switch. Saving money? Not really, but this can
be hidden by the accounting.

In the interim, I think you will find that if the goal is to reduce the
number of bits stored on some expensive storage, there is more than one
way to accomplish that goal.

-- richard
Re: [zfs-discuss] ZFS deduplication
On Jul 8, 2008, at 11:00 AM, Richard Elling wrote:
> ...much fun for people who want to hide costs. For example, some bright
> manager decided that they should charge $100/month/port for ethernet
> drops. So now, instead of having a centralized, managed network with
> well-defined port mappings, every cube has an el-cheapo ethernet switch.
> Saving money? Not really, but this can be hidden by the accounting.

Indeed, it actively hurts performance (mixing sunray, mobile, and fixed
units on the same subnets rather than segregation by type).

--
Keith H. Bierman [EMAIL PROTECTED] | AIM kbiermank
5430 Nassau Circle East | Cherry Hills Village, CO 80113 | 303-997-2749
*speaking for myself* Copyright 2008
Re: [zfs-discuss] ZFS deduplication
On Tue, 8 Jul 2008, Richard Elling wrote:
> [donning my managerial accounting hat]
> It is not a good idea to design systems based upon someone's managerial
> accounting whims. These are subject to change in illogical ways at
> unpredictable intervals. This is why managerial accounting can be so

Managerial accounting whims can be put to good use. If there is desire to
reduce the amount of disk space consumed, then the accounting whims should
make sure that those who consume the disk space get to pay for it.
Apparently this is not currently the case, or else there would not be so
much blatant waste. On the flip side, the approach which results in so
much blatant waste may be extremely profitable, so the waste does not
really matter.

Imagine if university students were allowed to use as much space as they
wanted but had to pay a per-megabyte charge every two weeks or their
account is terminated? This would surely result in a huge reduction in
disk space consumption.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Re: [zfs-discuss] ZFS deduplication
Something else came to mind which is a negative regarding deduplication.

When zfs writes new sequential files, it should try to allocate blocks in
a way which minimizes fragmentation (disk seeks). Disk seeks are the bane
of existing storage systems since they come out of the available IOPS
budget, which is only a couple hundred ops/second per drive. The
deduplication algorithm will surely result in increasing effective
fragmentation (decreasing sequential performance) since duplicated blocks
will result in a seek to the master copy of the block followed by a seek
to the next block.

Disk seeks will remain an issue until rotating media goes away, which (in
spite of popular opinion) is likely quite a while from now.

Someone has to play devil's advocate here. :-)

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
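To put rough numbers on that IOPS budget (back-of-envelope arithmetic,
assuming ~200 random IOPS per drive and 128K ZFS records):

    # 200 seeks/s * 128 KB/seek, expressed in MB/s
    echo 'scale=1; 200 * 128 / 1024' | bc    # => 25.0 MB/s seek-bound

versus something like 80 MB/s for the same drive streaming sequentially,
so every block that dedup scatters costs a meaningful slice of throughput.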
Re: [zfs-discuss] ZFS deduplication
Hmmn, you might want to look at Andrew Tridgell's thesis (yes, Andrew of
Samba fame), as he had to solve this very question to be able to select an
algorithm to use inside rsync.

--dave

Darren J Moffat wrote:
> Doing this using sha256 as the checksum algorithm would be much more
> interesting. I'm going to try that now and see how it compares with
> fletcher2 for a small contrived test.

--
David Collier-Brown | Always do right. This will gratify
Sun Microsystems, Toronto | some people and astonish the rest
[EMAIL PROTECTED] | -- Mark Twain
(905) 943-1983, cell: (647) 833-9377, (800) 555-9786 x56583
bridge: (877) 385-4099 code: 506 9191#
Re: [zfs-discuss] ZFS deduplication
[EMAIL PROTECTED] wrote on 07/08/2008 01:26:15 PM:
> Something else came to mind which is a negative regarding deduplication.
> When zfs writes new sequential files, it should try to allocate blocks
> in a way which minimizes fragmentation (disk seeks). Disk seeks are the
> bane of existing storage systems since they come out of the available
> IOPS budget, which is only a couple hundred ops/second per drive. The
> deduplication algorithm will surely result in increasing effective
> fragmentation (decreasing sequential performance) since duplicated
> blocks will result in a seek to the master copy of the block followed
> by a seek to the next block. Disk seeks will remain an issue until
> rotating media goes away, which (in spite of popular opinion) is likely
> quite a while from now.

Yes, I think it should be close to common sense to realize that you are
trading speed for space (but it should be well documented if dedup/squash
ever makes it into the codebase). You find these types of trade-offs in
just about every area of disk administration, from the type of RAID you
select, inode numbers, block size, to the number of spindles and size of
disks you use. The key here is that it would be a choice, just as
compression is per-fs -- let the administrator choose her path. In some
situations it would make sense, in others not.

-Wade

> Someone has to play devil's advocate here. :-)

Debate is welcome; it is the only way to flesh out the issues.
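The per-filesystem precedent Wade mentions already exists for compression;
a dedup knob would presumably follow the same model. Note the dedup
property in the sketch below is purely speculative — it does not exist in
any current build:

    zfs set compression=on tank/home     # existing per-dataset trade-off
    zfs get compression tank/home
    # zfs set dedup=on tank/home         # hypothetical equivalent knob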
Re: [zfs-discuss] ZFS deduplication
Bob Friesenhahn wrote:
> Something else came to mind which is a negative regarding deduplication.
> When zfs writes new sequential files, it should try to allocate blocks
> in a way which minimizes fragmentation (disk seeks).

It should, but because of its copy-on-write nature, fragmentation is a
significant part of the ZFS data lifecycle. There was a discussion of this
on this list at the beginning of the year:
http://mail.opensolaris.org/pipermail/zfs-discuss/2007-November/044077.html

> Disk seeks are the bane of existing storage systems since they come out
> of the available IOPS budget, which is only a couple hundred ops/second
> per drive. The deduplication algorithm will surely result in increasing
> effective fragmentation (decreasing sequential performance) since
> duplicated blocks will result in a seek to the master copy of the block
> followed by a seek to the next block. Disk seeks will remain an issue
> until rotating media goes away, which (in spite of popular opinion) is
> likely quite a while from now.

On ZFS, sequential files are rarely sequential anyway. The SPA tries to
keep blocks nearby, but when dealing with snapshotted sequential files
being rewritten, there is no way to keep everything in order. But if you
read through the thread referenced above, you'll see that there's no clear
data about just how that impacts performance. (I still owe Mr. Elling a
filebench run on one of my spare servers.)

--Joe
[zfs-discuss] ZFS problem mirror
Hi everyone,

I did a nice install of OpenSolaris and I pulled 2x500 GB SATA disks into
a zpool mirror. Everything went well and I got it so that my mirror called
datatank got shared using CIFS. I can access it from my MacBook and PC. So
with this nice setup I started to put my files on, but now I notice this:

  pool: datatank
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: scrub in progress for 0h1m, 38.50% done, 0h3m to go
config:

        NAME        STATE     READ WRITE CKSUM
        datatank    DEGRADED     0     0     0
          mirror    DEGRADED     0     0     0
            c5d0    DEGRADED     0     0     0  too many errors
            c7d0    DEGRADED     0     0     0  too many errors

errors: Permanent errors have been detected in the following files:

        datatank:<0x3df2>

It seems that files are corrupted, and when I delete them my pool stays
degraded. I did the clear command and then it went OK until, after a
while, I copied files over from my MacBook and again some were corrupted.
I am afraid to put my production files on this server; it doesn't seem
reliable. What can I do? Anybody any clues?

I also notice that I get an error on my boot disk (the boot disk is a 20
GB ATA) saying:

gzip: kernel/misc/qlc/qlc_fw_2400: I/O error

Thanks in advance!!

best regards,
Y
Re: [zfs-discuss] ZFS problem mirror
On Tue, Jul 8, 2008 at 2:56 PM, BG <[EMAIL PROTECTED]> wrote:
> [full message quoted above, snipped]

Might want to provide some basics: What build of OpenSolaris are you
running? What version of ZFS?

--Tim
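For example, the answers come straight from a couple of standard commands:

    cat /etc/release    # shows the OpenSolaris/Nevada build
    zpool upgrade       # reports the ZFS pool version the system runs
    zpool upgrade -v    # lists what each pool version adds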
Re: [zfs-discuss] ZFS deduplication
On Tue, 8 Jul 2008, Moore, Joe wrote:
> On ZFS, sequential files are rarely sequential anyway. The SPA tries to
> keep blocks nearby, but when dealing with snapshotted sequential files
> being rewritten, there is no way to keep everything in order.

I think that rewriting files (updating existing blocks) is pretty rare.
Only limited types of applications do such things. That is a good thing,
since zfs is not so good at rewriting files. The most common situation is
that a new file is written, even if selecting "save" for an existing file
in an application. Even if the user thinks that the file is being
re-written, usually the application writes to a new temporary file and
moves it into place once it is known to be written correctly.

The majority of files will be written sequentially, and most files will be
small enough that zfs will see all the data before it outputs to disk.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Re: [zfs-discuss] ZFS deduplication
On Tue, Jul 8, 2008 at 12:25 PM, Bob Friesenhahn
<[EMAIL PROTECTED]> wrote:
> Managerial accounting whims can be put to good use. If there is desire
> to reduce the amount of disk space consumed, then the accounting whims
> should make sure that those who consume the disk space get to pay for
> it. Apparently this is not currently the case, or else there would not
> be so much blatant waste. On the flip side, the approach which results
> in so much blatant waste may be extremely profitable, so the waste does
> not really matter.

The existence of the waste paves the way for new products to come in and
offer competitive advantage over in-place solutions. When companies aren't
buying anything due to budget constraints, the only way to make sales is
to show businesses that by buying something they will save money - and
quickly.

> Imagine if university students were allowed to use as much space as they
> wanted but had to pay a per-megabyte charge every two weeks or their
> account is terminated? This would surely result in a huge reduction in
> disk space consumption.

If you can offer the perception of more storage because efficiencies of
the storage devices make it the same cost as less storage, then perhaps
allocating more per student is feasible. Or maybe tuition could drop by a
few bucks.

--
Mike Gerdts
http://mgerdts.blogspot.com/
Re: [zfs-discuss] ZFS deduplication
On Tue, Jul 8, 2008 at 1:26 PM, Bob Friesenhahn
<[EMAIL PROTECTED]> wrote:
> Something else came to mind which is a negative regarding deduplication.
> [...] The deduplication algorithm will surely result in increasing
> effective fragmentation (decreasing sequential performance) since
> duplicated blocks will result in a seek to the master copy of the block
> followed by a seek to the next block. [...]
> Someone has to play devil's advocate here. :-)

With L2ARC on SSD, seeks are free and IOPs are quite cheap (compared to
spinning rust). Cold reads may be a problem, but there is a reasonable
chance that L2ARC sizing can be helpful here.

Also, the blocks that are likely to be duplicate are going to be the same
file, just with a different offset. That is, this file is going to be the
same in every one of my LDom disk images:

# du -h /usr/jdk/instances/jdk1.5.0/jre/lib/rt.jar
 38M   /usr/jdk/instances/jdk1.5.0/jre/lib/rt.jar

There is a pretty good chance that the first copy will be sequential, and
as a result all of the deduped copies would be sequential as well. What's
more, it is quite likely to be in the ARC or L2ARC.

--
Mike Gerdts
http://mgerdts.blogspot.com/
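For completeness, attaching an SSD as L2ARC is a one-liner on builds with
cache-device support (the device name below is hypothetical):

    zpool add tank cache c4t0d0    # reads spill over from ARC to the SSD
    zpool iostat -v tank           # the cache device shows up here

The cache vdev holds no pool data of record, so losing it is harmless; it
only absorbs the random-read seeks being discussed.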
Re: [zfs-discuss] ZFS problem mirror
I removed the files that were corrupted, scrubbed the datatank mirror, and
then did 'zpool status -v datatank' and got this:

  pool: datatank
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are
        unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: scrub completed after 0h4m with 0 errors on Tue Jul 8 22:17:26 2008
config:

        NAME        STATE     READ WRITE CKSUM
        datatank    DEGRADED     0     0     4
          mirror    DEGRADED     0     0     4
            c5d0    DEGRADED     0     0     8  too many errors
            c7d0    DEGRADED     0     0     8  too many errors

errors: No known data errors

Then I used the clear command, 'zpool clear datatank', and this is the
output:

  pool: datatank
 state: ONLINE
 scrub: scrub completed after 0h4m with 0 errors on Tue Jul 8 22:17:26 2008
config:

        NAME        STATE     READ WRITE CKSUM
        datatank    ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c5d0    ONLINE       0     0     0
            c7d0    ONLINE       0     0     0

errors: No known data errors

But is this really OK? I can't do these steps every time I get a file
corruption. By the way, the files aren't corrupt on my MacBook, so how
come they get corrupted during the transport or on the mirror?

And is the error I mentioned on my boot disk of some value? I searched on
the net and found that the package is related to iSCSI, but I have only
SATA and ATA.
Re: [zfs-discuss] ZFS deduplication
Tim Spriggs wrote:
>> Does anyone know a tool that can look over a dataset and give
>> duplication statistics? I'm not looking for something incredibly
>> efficient but I'd like to know how much it would actually benefit our
>> dataset:
>
> HiRISE has a large set of spacecraft data (images) that could
> potentially have large amounts of redundancy, or not. Also, other up and
> coming missions have a large data volume with a lot of duplicate image
> info and a small budget; with d11p in OpenSolaris there is a good
> business case to invest in Sun/OpenSolaris rather than buy the cheaper
> storage (+ linux?) that can simply hold everything as is.
>
> If someone feels like coding up a tool that basically makes a file of
> checksums and counts how many times a particular checksum gets hit over
> a dataset, I would be willing to run it and provide feedback. :)
>
> -Tim

Me too. Our data profile is just like Tim's: terabytes of satellite data.
I'm going to guess that the d11p ratio won't be fantastic for us. I sure
would like to measure it though.

Jon

--
Jonathan Loran
IT Manager
Space Sciences Laboratory, UC Berkeley
(510) 643-5146 [EMAIL PROTECTED]
AST:7731^29u18e3
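A quick-and-dirty file-level version of that tool follows. Block-level
statistics need zdb, but this gives a first-order estimate; it uses
Solaris digest(1) (swap in sha256sum elsewhere) and, being a sketch, it
will trip over filenames containing spaces:

    #!/bin/sh
    # count how often each whole-file checksum repeats under a tree
    find "$1" -type f | while read f; do
        digest -a sha256 "$f"
    done | sort | uniq -c | sort -rn > /tmp/dup-report
    head /tmp/dup-report    # counts > 1 are whole-file duplicates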
Re: [zfs-discuss] ZFS deduplication
Justin Stringfellow wrote:
>> Does anyone know a tool that can look over a dataset and give
>> duplication statistics? I'm not looking for something incredibly
>> efficient but I'd like to know how much it would actually benefit our
>
> Check out the following blog:
> http://blogs.sun.com/erickustarz/entry/how_dedupalicious_is_your_pool

Unfortunately we are on Solaris 10 :( Can I get a zdb for zfs V4 that will
dump those checksums?

Jon

--
Jonathan Loran
IT Manager
Space Sciences Laboratory, UC Berkeley
(510) 643-5146 [EMAIL PROTECTED]
AST:7731^29u18e3
Re: [zfs-discuss] ZFS deduplication
Moore, Joe wrote:
> On ZFS, sequential files are rarely sequential anyway. The SPA tries to
> keep blocks nearby, but when dealing with snapshotted sequential files
> being rewritten, there is no way to keep everything in order.

In some cases, a d11p system could actually speed up data reads and
writes. If you are repeatedly accessing duplicate data, then you will more
likely hit your ARC and not have to go to disk. With your data d11p'd, the
ARC can hold a significantly higher percentage of your data set, just like
the disks. For a d11p ARC, I would expire based upon block reference
count: if a block has few references, it should expire first, and vice
versa, blocks with many references should be the last out.

With all the savings on disks, think how much RAM you could buy ;)

Jon

--
Jonathan Loran
IT Manager
Space Sciences Laboratory, UC Berkeley
(510) 643-5146 [EMAIL PROTECTED]
AST:7731^29u18e3
[zfs-discuss] Disk errors not shown by zpool?
Ok, this is not an OpenSolaris question, but it is a Solaris and ZFS
question.

I have a pool with three mirrored vdevs. I just got an error message from
FMD that a read failed on one of the disks (c1t6d0), with instructions on
how to handle the problem and replace the devices. So far everything is
good. But the zpool still thinks everything is fine. Shouldn't zpool also
show errors in this state? This was run on S10U4 with 127127-11.

# zpool status -x
all pools are healthy

# zpool status
  pool: storage
 state: ONLINE
 scrub: scrub completed with 0 errors on Sun Jun 29 23:16:34 2008
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0

errors: No known data errors

# fmdump -v
TIME                 UUID                                 SUNW-MSG-ID
Jul 08 20:14:42.6951 3780a675-96ea-6fa4-bd55-cb078a539f08 ZFS-8000-D3
  100%  fault.fs.zfs.device
        Problem in: zfs://pool=storage/vdev=83de319aad25c131
           Affects: zfs://pool=storage/vdev=83de319aad25c131
               FRU: -
          Location: -

From my messages log:

Jul 8 20:11:53 fortress scsi: [ID 107833 kern.warning] WARNING: /[EMAIL PROTECTED],0/SUNW,[EMAIL PROTECTED],880/[EMAIL PROTECTED],0 (sd19):
Jul 8 20:11:53 fortress    SCSI transport failed: reason 'incomplete': retrying command
Jul 8 20:12:56 fortress scsi: [ID 365881 kern.info] fas: 6.0: cdb=[ 0xa 0x0 0x1 0xda 0x2 0x0 ]
Jul 8 20:12:56 fortress scsi: [ID 365881 kern.info] fas: 6.0: cdb=[ 0xa 0x0 0x3 0xda 0x2 0x0 ]
Jul 8 20:12:56 fortress scsi: [ID 107833 kern.warning] WARNING: /[EMAIL PROTECTED],0/SUNW,[EMAIL PROTECTED],880 (fas1):
Jul 8 20:12:56 fortress    Disconnected tagged cmd(s) (2) timeout for Target 6.0
Jul 8 20:12:56 fortress scsi: [ID 107833 kern.warning] WARNING: /[EMAIL PROTECTED],0/SUNW,[EMAIL PROTECTED],880/[EMAIL PROTECTED],0 (sd19):
Jul 8 20:12:56 fortress    SCSI transport failed: reason 'timeout': retrying command
Jul 8 20:12:56 fortress scsi: [ID 107833 kern.warning] WARNING: /[EMAIL PROTECTED],0/SUNW,[EMAIL PROTECTED],880/[EMAIL PROTECTED],0 (sd19):
Jul 8 20:12:56 fortress    SCSI transport failed: reason 'reset': retrying command
Jul 8 20:12:59 fortress scsi: [ID 107833 kern.warning] WARNING: /[EMAIL PROTECTED],0/SUNW,[EMAIL PROTECTED],880/[EMAIL PROTECTED],0 (sd19):
Jul 8 20:12:59 fortress    Error for Command: write(10)    Error Level: Retryable
Jul 8 20:12:59 fortress scsi: [ID 107833 kern.notice]    Requested Block: 17672154    Error Block: 17672154
Jul 8 20:12:59 fortress scsi: [ID 107833 kern.notice]    Vendor: SEAGATE    Serial Number: 9946626576
Jul 8 20:12:59 fortress scsi: [ID 107833 kern.notice]    Sense Key: Unit Attention
Jul 8 20:12:59 fortress scsi: [ID 107833 kern.notice]    ASC: 0x29 (power on occurred), ASCQ: 0x1, FRU: 0x1
Jul 8 20:12:59 fortress scsi: [ID 107833 kern.warning] WARNING: /[EMAIL PROTECTED],0/SUNW,[EMAIL PROTECTED],880/[EMAIL PROTECTED],0 (sd19):
Jul 8 20:12:59 fortress    Error for Command: write(10)    Error Level: Retryable
Jul 8 20:12:59 fortress scsi: [ID 107833 kern.notice]    Requested Block: 17672154    Error Block: 17672154
Jul 8 20:12:59 fortress scsi: [ID 107833 kern.notice]    Vendor: SEAGATE    Serial Number: 9946626576
Jul 8 20:12:59 fortress scsi: [ID 107833 kern.notice]    Sense Key: Not Ready
Jul 8 20:12:59 fortress scsi: [ID 107833 kern.notice]    ASC: 0x4 (LUN is becoming ready), ASCQ: 0x1, FRU: 0x2
Jul 8 20:13:04 fortress scsi: [ID 107833 kern.warning] WARNING: /[EMAIL PROTECTED],0/SUNW,[EMAIL PROTECTED],880/[EMAIL PROTECTED],0 (sd19):
Jul 8 20:13:04 fortress    Error for Command: write(10)    Error Level: Retryable
Jul 8 20:13:04 fortress scsi: [ID 107833 kern.notice]    Requested Block: 17672154    Error Block: 17672154
Jul 8 20:13:04 fortress scsi: [ID 107833 kern.notice]    Vendor: SEAGATE    Serial Number: 9946626576
Jul 8 20:13:04 fortress scsi: [ID 107833 kern.notice]    Sense Key: Not Ready
Jul 8 20:13:04 fortress scsi: [ID 107833 kern.notice]    ASC: 0x4 (LUN is becoming ready), ASCQ: 0x1, FRU: 0x2
Jul 8 20:13:09 fortress scsi: [ID 107833 kern.warning] WARNING: /[EMAIL PROTECTED],0/SUNW,[EMAIL PROTECTED],880/[EMAIL PROTECTED],0 (sd19):
Jul 8 20:13:09 fortress    Error for Command: write(10)    Error Level:
Re: [zfs-discuss] ZFS deduplication
Mike Gerdts wrote:
>> Imagine if university students were allowed to use as much space as
>> they wanted but had to pay a per-megabyte charge every two weeks or
>> their account is terminated? This would surely result in a huge
>> reduction in disk space consumption.
>
> If you can offer the perception of more storage because efficiencies of
> the storage devices make it the same cost as less storage, then perhaps
> allocating more per student is feasible. Or maybe tuition could drop by
> a few bucks.

[I agree with the comments in this thread, but... I think we're still
being old fashioned...]

hmm... well, having spent the past two years at the University, I can
provide the observation that:

0. Tuition never drops.
1. Everybody (yes everybody) had a laptop. I would say the average hard
   disk size per laptop was 100 GBytes.
2. Everybody (yes everybody) had USB flash drives. In part because the
   school uses them for recruitment tools (give-aways), but they are
   inexpensive, too.
3. Everybody (yes everybody) had an MP3 player of some magnitude. Many
   were disk-based, but there were many iPod Nanos, too.
4. 50% had smart phones -- crackberries, iPhones, etc.
5. The school actually provides some storage space, but I don't know
   anyone who took advantage of the service. E-mail and document sharing
   was outsourced to google -- no perceptible shortage of space there.

Even Microsoft charges only $3/user/month for exchange and sharepoint
services. I think many businesses would be hard-pressed to match that sort
of efficiency.

Unlike my undergraduate days, where we had to make trade-offs between beer
and floppy disks, there does not seem to be a shortage of storage space
amongst university students today -- in spite of the rise of beer prices
recently (hops shortage, they claim ;-O

Is the era of centralized home directories for students over?

I think that the normal enterprise backup scenarios are more likely to
gain from de-dup, in part because they tend to make full backups of
systems and end up with zillions of copies of (static) OS files. Actual
work files tend to be smaller, for many businesses. De-dup on my desktop
seems to be a non-issue.

Has anyone done a full value chain or data path analysis for de-dup? Will
de-dup grow beyond the backup function? Will the performance penalty of
SHA-256 and bit comparison kill all interactive performance? Should I set
aside a few acres at the ranch to grow hops? So many good questions, so
little time...

-- richard
Re: [zfs-discuss] zfs-discuss Digest, Vol 33, Issue 19
Ross wrote:
> Hi Gilberto, I bought a Micro Memory card too, so I'm very likely going
> to end up in the same boat. [...] if these cards don't work out of the
> box I was planning to pester Neil Perrin and see if he still has some
> drivers for them :)

Unfortunately, there are a couple of problems:

1. It's been a while since I used that board and driver. I recently tried
   pkgadd-ing it on the latest Nevada build and it hung. I'm not sure if
   the latest Nevada is somehow incompatible. I didn't have time to track
   down the cause.

2. I received the board and driver from another group within Sun. It
   would be better to contact Micro Memory (or whoever took them over)
   directly, as it's not my place to give out 3rd party drivers or
   provide support for them.

Sorry for the bad news: Neil.
Re: [zfs-discuss] Why can't ZFS find a plugged in disk
Turns out 'zfs mount -a' will pick up the file system. The fun question is
why the OS can't mount the disk by itself.

gnome-mount is what puts up the "Can't access the disk" dialog and whines
to stdout (/dev/null in this case) about:

** (gnome-mount:1050): WARNING **: Mount failed for /org/freedesktop/Hal/devices/pci_0_0/pci1179_1_1d_7/storage_7_if0_0/scsi_host0/disk1/sd1/p0
org.freedesktop.Hal.Device.Volume.UnknownFailure : cannot open '/dev/dsk/c2t0d0p0': invalid dataset name

gnome-mount never attempts to open, access or mount the disk. It comes to
the above conclusion after an exchange of messages with hald. Further
questions will be directed in that direction.

Jim

--- James Litchfield wrote:
> Indeed, after rebooting we see the following. You'll have to trust me
> that /ehome and /ehome/v1 are the relevant ZFS filesystems. If it makes
> any difference, this file system had been previously mounted. My memory
> is suggesting that zpool import works in this situation whenever the FS
> hasn't been previously mounted.
>
> Jim
>
> bash-3.2$ rmformat
> ld.so.1: rmformat: warning: libumem.so.1: open failed: No such file in secure directories
> Looking for devices...
>   1. Logical Node: /dev/rdsk/c1t0d0p0
>      Physical Node: /[EMAIL PROTECTED],0/[EMAIL PROTECTED],2/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
>      Connected Device: MATSHITA DVD-RAM UJ-841S 1.40
>      Device Type: CD Reader
>      Bus: IDE
>      Size: 2.8 GB
>      Label: None
>      Access permissions: Medium is not write protected.
>   2. Logical Node: /dev/rdsk/c2t0d0p0
>      Physical Node: /[EMAIL PROTECTED],0/pci1179,[EMAIL PROTECTED],7/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
>      Connected Device: WDC WD16 00BEAE-11UWT0
>      Device Type: Removable
>      Bus: USB
>      Size: 152.6 GB
>      Label: None
>      Access permissions: Medium is not write protected.
>
> bash-3.2$ /usr/sbin/mount | egrep ehome
> /ehome on ehome read/write/setuid/devices/nonbmand/exec/xattr/noatime/dev=2d90006 on Sun Jul 6 17:08:12 2008
> /ehome/v1 on ehome/v1 read/write/setuid/devices/nonbmand/exec/xattr/noatime/dev=2d90007 on Sun Jul 6 17:08:12 2008
>
> James Litchfield wrote:
>> Currently on SNV92 + some BFUs, but this has been going on for quite a
>> while. If I boot my system without a USB drive plugged in and then plug
>> it in, rmformat sees it but ZFS seems not to. If I reboot the system,
>> ZFS will have no problem with using the disk.
>>
>> # zpool import
>> # rmformat
>> (same two devices listed as above: the USB disk shows up in rmformat)
>>
>> Perhaps because I didn't label the disk before giving it to ZFS? If so,
>> bad ZFS for either not complaining or else not asking me for permission
>> to label the disk.
>>
>> Jim
Re: [zfs-discuss] slog device
Hi,

Anyway, are there other devices out there that you would recommend to use
as a slog device, other than this NVRAM card, that would present similar
performance gains?

Thanks
Gilberto

On 7/8/08 9:40 PM, [EMAIL PROTECTED] wrote:
> Unfortunately, there are a couple of problems:
>
> 1. It's been a while since I used that board and driver. I recently
>    tried pkgadd-ing it on the latest Nevada build and it hung. I'm not
>    sure if the latest Nevada is somehow incompatible. I didn't have time
>    to track down the cause.
>
> 2. I received the board and driver from another group within Sun. It
>    would be better to contact Micro Memory (or whoever took them over)
>    directly, as it's not my place to give out 3rd party drivers or
>    provide support for them.
>
> Sorry for the bad news: Neil.
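Whatever device you end up with, wiring it in as a slog is straightforward
(a sketch; pool and device names are made up, and note that on current
builds a log device cannot be removed from the pool again, so test on a
scratch pool first):

    zpool add tank log c3t0d0                 # dedicated ZIL device
    # or mirrored, since losing an unmirrored slog can be painful:
    # zpool add tank log mirror c3t0d0 c4t0d0
    zpool status tank                         # shows a "logs" section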