Re: [zfs-discuss] Raidz1 p

2009-01-20 Thread zfs user
I would get a new 1.5 TB and make sure it has the new firmware and replace c6t3d0 right away - even if someone here comes up with a magic solution, you don't want to wait for another drive to fail. http://hardware.slashdot.org/article.pl?sid=09/01/17/0115207

Re: [zfs-discuss] [caiman-discuss] Can not delete swap on AI sparc

2009-01-20 Thread jan damborsky
Hi Jeffrey, jeffrey huang wrote: Hi, Jan, After successfully installing AI on SPARC (zpool/zfs created), without reboot, I want to try an installation again, so I want to destroy the rpool. # dumpadm -d swap -- ok # zfs destroy rpool/dump -- ok # swap -l # swap -d /dev/zvol/dsk/rpool/swap --
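The command sequence quoted in that message, laid out as a transcript; the last two steps are only the implied continuation of "I want to destroy the rpool" and are an assumption, not part of the quoted output:

```shell
# Point the dump device away from the zvol, then remove the dump zvol
dumpadm -d swap
zfs destroy rpool/dump

# List active swap devices, then take the swap zvol out of use
swap -l
swap -d /dev/zvol/dsk/rpool/swap

# Presumable next steps (not shown in the quoted message)
zfs destroy rpool/swap
zpool destroy rpool
```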

Re: [zfs-discuss] ZFS vs ZFS + HW raid? Which is best?

2009-01-20 Thread Orvar Korvar
What does this mean? Does that mean that ZFS + HW raid with raid-5 is not able to heal corrupted blocks? Then this is evidence against ZFS + HW raid, and you should only use ZFS? http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide ZFS works well with storage based

Re: [zfs-discuss] Raidz1 p

2009-01-20 Thread Blake Irvin
I would in this case also immediately export the pool (to prevent any write attempts) and see about a firmware update for the failed drive (this probably needs Windows). Sent from my iPhone On Jan 20, 2009, at 3:22 AM, zfs user zf...@itsbeen.sent.com wrote: I would get a new 1.5 TB and

[zfs-discuss] How do you re-attach a 3 disk RAIDZ array to a new OS installation?

2009-01-20 Thread Luke Scammell
Hi, I'm completely new to Solaris, but have managed to bumble through installing it to a single disk, creating an additional 3 disk RAIDZ array and then copying over data from a separate NTFS formatted disk onto the array using NTFS-3G. However, the single disk that was used for the OS

Re: [zfs-discuss] replace same sized disk fails with too small error

2009-01-20 Thread Tim
On Mon, Jan 19, 2009 at 5:39 PM, Adam Leventhal a...@eng.sun.com wrote: And again, I say take a look at the market today, figure out a percentage, and call it done. I don't think you'll find a lot of users crying foul over losing 1% of their drive space when they don't already cry foul

Re: [zfs-discuss] How do you re-attach a 3 disk RAIDZ array to a new OS installation?

2009-01-20 Thread Craig Morgan
Luke, You're looking for a `zpool list`, followed by a `zpool import poolname` after Solaris has correctly recognised the attachment of the three original disks (i.e. they appear in `format` and/or `cfgadm -al`). Complete docs here, now you know what you are looking for ...
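Craig's steps, sketched as a session; the pool name `tank` is a placeholder, use whatever name `zpool import` actually reports:

```shell
# 1. Confirm Solaris sees the three original disks
format        # the disks should be listed here
cfgadm -al    # and/or show as connected here

# 2. Scan attached disks for importable pools
zpool import

# 3. Import the pool by the name reported above (placeholder name)
zpool import tank
zpool status tank
```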

Re: [zfs-discuss] How do you re-attach a 3 disk RAIDZ array to a new OS installation?

2009-01-20 Thread Darren J Moffat
Luke Scammell wrote: Hi, I'm completely new to Solaris, but have managed to bumble through installing it to a single disk, creating an additional 3 disk RAIDZ array and then copying over data from a separate NTFS formatted disk onto the array using NTFS-3G. However, the single disk

Re: [zfs-discuss] ZFS vs ZFS + HW raid? Which is best?

2009-01-20 Thread Blake
I think maybe it means that if ZFS can't 'see' the block (the controller does that in HW RAID), it can't checksum said block. cheers, Blake On Tue, Jan 20, 2009 at 6:34 AM, Orvar Korvar knatte_fnatte_tja...@yahoo.com wrote: What does this mean? Does that mean that ZFS + HW raid with raid-5 is

[zfs-discuss] Performance issue with zfs send of a zvol (Again)

2009-01-20 Thread Brian H. Nelson
Nobody can comment on this? -Brian Brian H. Nelson wrote: I noticed this issue yesterday when I first started playing around with zfs send/recv. This is on Solaris 10U6. It seems that a zfs send of a zvol issues 'volblocksize' reads to the physical devices. This doesn't make any sense to

Re: [zfs-discuss] ZFS over NFS, poor performance with many small files

2009-01-20 Thread Richard Elling
Good observations, Eric, more below... Eric D. Mudama wrote: On Mon, Jan 19 at 23:14, Greg Mason wrote: So, what we're looking for is a way to improve performance, without disabling the ZIL, as it's my understanding that disabling the ZIL isn't exactly a safe thing to do. We're looking

Re: [zfs-discuss] replace same sized disk fails with too small error

2009-01-20 Thread Moore, Joe
Ross wrote: The problem is they might publish these numbers, but we really have no way of controlling what number manufacturers will choose to use in the future. If for some reason future 500GB drives all turn out to be slightly smaller than the current ones you're going to

Re: [zfs-discuss] Performance issue with zfs send of a zvol (Again)

2009-01-20 Thread Richard Elling
Brian H. Nelson wrote: Nobody can comment on this? -Brian Brian H. Nelson wrote: I noticed this issue yesterday when I first started playing around with zfs send/recv. This is on Solaris 10U6. It seems that a zfs send of a zvol issues 'volblocksize' reads to the physical devices.
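For anyone trying to reproduce the observation: volblocksize is fixed when a zvol is created, so it can only be influenced up front. A sketch with illustrative names and sizes, not taken from the thread:

```shell
# volblocksize can only be set at zvol creation time
zfs create -V 10G -o volblocksize=128k tank/testvol
zfs get volblocksize tank/testvol
```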

[zfs-discuss] Updated on disk specification??

2009-01-20 Thread Jesus Cea
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 I see http://opensolaris.org/os/community/zfs/docs/ondiskformat0822.pdf as a pretty outdated (3 years old) document. Is there any plan to update it? Maybe somebody could update it every time a new ZFS pool version is available? - -- Jesus Cea

Re: [zfs-discuss] ZFS vs ZFS + HW raid? Which is best?

2009-01-20 Thread Richard Elling
Orvar Korvar wrote: What does this mean? Does that mean that ZFS + HW raid with raid-5 is not able to heal corrupted blocks? Then this is evidence against ZFS + HW raid, and you should only use ZFS? http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide ZFS works well

Re: [zfs-discuss] ZFS over NFS, poor performance with many small files

2009-01-20 Thread Doug
Any recommendations for an SSD to work with an X4500 server? Will the SSDs used in the 7000 series servers work with X4500s or X4540s? -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org

Re: [zfs-discuss] ZFS vs ZFS + HW raid? Which is best?

2009-01-20 Thread Torrey McMahon
On 1/20/2009 1:14 PM, Richard Elling wrote: Orvar Korvar wrote: What does this mean? Does that mean that ZFS + HW raid with raid-5 is not able to heal corrupted blocks? Then this is evidence against ZFS + HW raid, and you should only use ZFS?

Re: [zfs-discuss] replace same sized disk fails with too small error

2009-01-20 Thread Miles Nordin
mj == Moore, Joe joe.mo...@siemens.com writes: mj For a ZFS pool, (until block pointer rewrite capability) this mj would have to be a pool-create-time parameter. naw. You can just make ZFS do it all the time, like the other storage vendors do. no parameters. You can invent

Re: [zfs-discuss] ZFS encryption?? - [Fwd: [osol-announce] SXCE Build 105 available]

2009-01-20 Thread Jesus Cea
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 Nicolas Williams wrote: I'd recommend waiting for ZFS crypto rather than using lofi with ZFS. Wait... for how long? Any schedule? I am very interested in ZFS Crypto, although I have lost hope of seeing it in Solaris 10. - -- Jesus Cea Avion

Re: [zfs-discuss] ZFS vs ZFS + HW raid? Which is best?

2009-01-20 Thread Orvar Korvar
So ZFS is not hindered at all if you use it in conjunction with HW raid? ZFS can utilize all functionality and heal corrupted blocks without problems - with HW raid?

Re: [zfs-discuss] Disks in each RAIDZ group

2009-01-20 Thread Doug
Probably Richard Elling's blog: http://blogs.sun.com/relling/entry/zfs_raid_recommendations_space_performance

Re: [zfs-discuss] replace same sized disk fails with too small error

2009-01-20 Thread Moore, Joe
Miles Nordin wrote: mj == Moore, Joe joe.mo...@siemens.com writes: mj For a ZFS pool, (until block pointer rewrite capability) this mj would have to be a pool-create-time parameter. naw. You can just make ZFS do it all the time, like the other storage vendors do. no

Re: [zfs-discuss] ZFS over NFS, poor performance with many small files

2009-01-20 Thread Marion Hakanson
d...@yahoo.com said: Any recommendations for an SSD to work with an X4500 server? Will the SSDs used in the 7000 series servers work with X4500s or X4540s? The Sun System Handbook (sunsolve.sun.com) for the 7210 appliance (an X4540-based system) lists the logzilla device with this fine print:

Re: [zfs-discuss] replace same sized disk fails with too small error

2009-01-20 Thread Richard Elling
[I hate to keep dragging this thread forward, but...] Moore, Joe wrote: And there is no way to change this after the pool has been created, since after that time, the disk size can't be changed. So whatever policy is used by default, it is very important to get it right. Today, vdev size can

Re: [zfs-discuss] Comparison between the S-TEC Zeus and the Intel X25-E

2009-01-20 Thread kristof
I have been testing the 32 GB X25-E last week. When I connect it to one of the onboard (Tyan 2925) SATA ports, it's not detected by OpenSolaris 2008.11. When I connect it to a PCIe LSI 3081, the disk is found, but I'm getting trouble when I run performance tests via filebench. Filebench

Re: [zfs-discuss] replace same sized disk fails with too small error

2009-01-20 Thread Miles Nordin
jm == Moore, Joe joe.mo...@siemens.com writes: jm Sysadmins should not be required to RTFS. I never said they were. The comparison was between hardware RAID and ZFS, not between two ZFS alternatives. The point: other systems' behavior is entirely secret. Therefore, secret opaque

[zfs-discuss] hot spare not so hot ??

2009-01-20 Thread Scot Ballard
I have configured a test system with a mirrored rpool and one hot spare. I powered the systems off, pulled one of the disks from rpool to simulate a hardware failure. The hot spare is not activating automatically. Is there something more I should have done to make this work? pool: rpool

Re: [zfs-discuss] ZFS vs ZFS + HW raid? Which is best?

2009-01-20 Thread Kees Nuyt
On Tue, 20 Jan 2009 12:13:00 PST, Orvar Korvar knatte_fnatte_tja...@yahoo.com wrote: So ZFS is not hindered at all, if you use it in conjunction with HW raid? ZFS can utilize all functionality and heal corrupted blocks without problems - with HW raid? Only if you build the zpool from a mirror
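A minimal sketch of the point Kees is making: for ZFS to heal a corrupted block it needs ZFS-level redundancy, e.g. a mirror built from two hardware RAID LUNs (device names illustrative):

```shell
# ZFS-level mirror across two hardware RAID LUNs: a block failing
# its checksum on one side can be repaired from the other
zpool create tank mirror c2t0d0 c3t0d0
```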

Re: [zfs-discuss] hot spare not so hot ??

2009-01-20 Thread Nathan Kroenert
An interesting interpretation of using hot spares. Could it be that the hot-spare code only fires if the disk goes down whilst the pool is active? hm. Nathan. Scot Ballard wrote: I have configured a test system with a mirrored rpool and one hot spare. I powered the systems off, pulled one

Re: [zfs-discuss] hot spare not so hot ??

2009-01-20 Thread Eric Schrock
What software are you running? There was a bug where offline device failure did not trigger hot spares, but that should be fixed now (at least in OpenSolaris, not sure about s10u6). - Eric On Wed, Jan 21, 2009 at 09:57:42AM +1100, Nathan Kroenert wrote: An interesting interpretation of using
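Until a build with the fix is running, the spare can be pulled in by hand; a hedged sketch with illustrative device names, not taken from the thread:

```shell
# See whether the spare activated on its own
zpool status rpool

# Manually replace the failed disk with the configured spare
zpool replace rpool c1t1d0 c1t2d0
```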

Re: [zfs-discuss] replace same sized disk fails with too small error

2009-01-20 Thread Tim
On Tue, Jan 20, 2009 at 2:26 PM, Moore, Joe joe.mo...@siemens.com wrote: Other storage vendors have specific compatibility requirements for the disks you are allowed to install in their chassis. And again, the reason for those requirements is 99% about making money, not a technical one. If

Re: [zfs-discuss] replace same sized disk fails with too small error

2009-01-20 Thread Anton B. Rang
The user DEFINITELY isn't expecting 5 bytes, or what you meant to say 5000 bytes, they're expecting 500GB. You know, 536,870,912,000 bytes. But even if the drive mfg's calculated it correctly, they wouldn't even be getting that due to filesystem overhead. I doubt there are
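The disputed figures are just two definitions of a gigabyte; a quick check of the arithmetic (Python used only for illustration):

```python
# "500 GB" under the two common definitions of a gigabyte.
DECIMAL_GB = 10**9   # drive vendors count decimal bytes
BINARY_GB = 2**30    # many OS tools report binary "GB" (GiB)

vendor_bytes = 500 * DECIMAL_GB   # what the label promises
binary_bytes = 500 * BINARY_GB    # the 536,870,912,000 figure above

# The roughly 7% gap is where the "missing" space goes,
# before any filesystem overhead is even counted.
gap = binary_bytes - vendor_bytes
print(vendor_bytes, binary_bytes, gap)
```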

Re: [zfs-discuss] ZFS over NFS, poor performance with many small files

2009-01-20 Thread Eric D. Mudama
On Tue, Jan 20 at 9:04, Richard Elling wrote: Yes. And I think there are many more use cases which are not yet characterized. What we do know is that using an SSD for the separate ZIL log works very well for a large number of cases. It is not clear to me that the efforts to characterize a

Re: [zfs-discuss] zfs null pointer deref,

2009-01-20 Thread Anton B. Rang
Sigh. Richard points out in private email that automatic savecore functionality is disabled in OpenSolaris; you need to manually set up a dump device and save core files if you want them. However, the stack may be sufficient to ID the bug.

Re: [zfs-discuss] ZFS over NFS, poor performance with many small files

2009-01-20 Thread Eric D. Mudama
On Tue, Jan 20 at 21:35, Eric D. Mudama wrote: On Tue, Jan 20 at 9:04, Richard Elling wrote: Yes. And I think there are many more use cases which are not yet characterized. What we do know is that using an SSD for the separate ZIL log works very well for a large number of cases. It is not

Re: [zfs-discuss] replace same sized disk fails with too small error

2009-01-20 Thread Antonius
So you're suggesting I buy 750s to replace the 500s, then if a 750 fails buy another bigger drive again? The drives are RMA replacements for the other disks that faulted in the array before. They are the same brand, model and model number; apparently not so under the label though, but no way I