Re: [zfs-discuss] SSD best practices

2010-04-23 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of thomas Someone on this list threw out the idea a year or so ago to just set up 2 ramdisk servers, export a ramdisk from each and create a mirror slog from them. Isn't the whole point of a

Re: [zfs-discuss] SSD best practices

2010-04-23 Thread Darren J Moffat
On 23/04/2010 12:24, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of thomas Someone on this list threw out the idea a year or so ago to just set up 2 ramdisk servers, export a ramdisk from each and create a mirror slog

Re: [zfs-discuss] SSD best practices

2010-04-22 Thread thomas
Someone on this list threw out the idea a year or so ago to just set up 2 ramdisk servers, export a ramdisk from each and create a mirror slog from them. Assuming newer version zpools, this sounds like it could be even safer since there is (supposedly) less of a chance of catastrophic failure if
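A minimal sketch of that idea, assuming the two exported ramdisks already show up on the head as iSCSI LUNs c9t0d0 and c10t0d0 (all names here are hypothetical, and the COMSTAR target/view plumbing is omitted for brevity):

  (on each ramdisk server)
  # ramdiskadm -a slog0 4g                 (create a 4 GB ramdisk)
  # sbdadm create-lu /dev/ramdisk/slog0    (expose it as a SCSI LU for iSCSI export)

  (on the ZFS head, once both LUNs are visible)
  # zpool add tank log mirror c9t0d0 c10t0d0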

Re: [zfs-discuss] SSD best practices

2010-04-22 Thread Daniel Carosone
On Thu, Apr 22, 2010 at 09:58:12PM -0700, thomas wrote: Assuming newer version zpools, this sounds like it could be even safer since there is (supposedly) less of a chance of catastrophic failure if your ramdisk setup fails. Use just one remote ramdisk or two with battery backup.. whatever

Re: [zfs-discuss] SSD best practices

2010-04-21 Thread Frank Middleton
On 04/20/10 11:06 AM, Don wrote: Who else, besides STEC, is making write optimized drives and what kind of IOP performance can be expected? Just got a distributor email about Texas Memory Systems' RamSan-630, one of a range of huge non-volatile SAN products they make. Other than that this

Re: [zfs-discuss] SSD best practices

2010-04-21 Thread Richard Elling
On Apr 21, 2010, at 7:24 AM, Frank Middleton wrote: On 04/20/10 11:06 AM, Don wrote: Who else, besides STEC, is making write optimized drives and what kind of IOP performance can be expected? Just got a distributor email about Texas Memory Systems' RamSan-630, one of a range of huge

Re: [zfs-discuss] SSD best practices

2010-04-21 Thread Brandon High
On Wed, Apr 21, 2010 at 7:24 AM, Frank Middleton f.middle...@apogeect.com wrote: On 04/20/10 11:06 AM, Don wrote: Just got a distributor email about Texas Memory Systems' RamSan-630, one of a range of huge non-volatile SAN products they make. Other than that this has a capacity of 4-10TB,

Re: [zfs-discuss] SSD best practices

2010-04-20 Thread Casper.Dik
On Mon, 19 Apr 2010, Edward Ned Harvey wrote: Improbability assessment aside, suppose you use something like the DDRDrive X1 ... Which might be more like 4G instead of 32G ... Is it even physically possible to write 4G to any device in less than 10 seconds? Remember, to achieve worst case,

Re: [zfs-discuss] SSD best practices

2010-04-20 Thread Edward Ned Harvey
From: cas...@holland.sun.com [mailto:cas...@holland.sun.com] On Behalf Of casper@sun.com On Mon, 19 Apr 2010, Edward Ned Harvey wrote: Improbability assessment aside, suppose you use something like the DDRDrive X1 ... Which might be more like 4G instead of 32G ... Is it even

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Michael DeMan
By the way, I would like to chip in about how informative this thread has been, at least for me, despite (and actually because of) the strong opinions on some of the posts about the issues involved. From what I gather, there is still an interesting failure possibility with ZFS, although

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Edward Ned Harvey
From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us] Sent: Sunday, April 18, 2010 11:34 PM To: Edward Ned Harvey Cc: Christopher George; zfs-discuss@opensolaris.org Subject: RE: [zfs-discuss] SSD best practices On Sun, 18 Apr 2010, Edward Ned Harvey wrote: This seems

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Don
Yes yes- /etc/zfs/zpool.cache - we all hate typos :)

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Don
I must note that you haven't answered my question... If the zpool.cache file differs between the two heads for some reason- how do I ensure that the second head has an accurate copy without importing the ZFS pool?

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Don
I'm not certain if I'm misunderstanding you- or if you didn't read my post carefully. Why would the zpool.cache file be current on the _second_ node? The first node is where I've added my zpools and so on. The second node isn't going to have an updated cache file until I export the zpool from

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread David Magda
On Mon, April 19, 2010 07:32, Edward Ned Harvey wrote: I'm saying that even a single pair of disks (maybe 4 disks if you're using cheap slow disks) will outperform a 1Gb Ethernet. So if your bottleneck is the 1Gb Ethernet, you won't gain anything (significant) by accelerating the stuff that

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread David Magda
On Mon, April 19, 2010 06:26, Michael DeMan wrote: B. The current implementation stores that cache file on the zil device, so if for some reason, that device is totally lost (along with said .cache file), it is nigh impossible to recover the entire pool it correlates with. Given that ZFS is

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Bob Friesenhahn
On Mon, 19 Apr 2010, Don wrote: If the zpool.cache file differs between the two heads for some reason- how do I ensure that the second head has an accurate copy without importing the ZFS pool? The zpool.cache file can only be valid for one system at a time. If the pool is imported to a

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Don
Ok- I think perhaps I'm failing to explain myself. I want to know if there is a way for a second node- connected to a set of shared disks- to keep its zpool.cache up to date _without_ actually importing the ZFS pool. As I understand it- keeping the zpool.cache up to date on the second node would

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Darren J Moffat
On 19/04/2010 16:46, Don wrote: I want to know if there is a way for a second node- connected to a set of shared disks- to keep its zpool.cache up to date _without_ actually importing the ZFS pool. See zpool(1M): cachefile=path | none Controls the location of where the pool
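A minimal sketch of how that property keeps a shared pool out of the boot-time auto-import (the pool name tank and the cluster cachefile path are assumed, not prescribed):

  # zpool set cachefile=/etc/cluster/zpool.cache tank   (track the pool in a non-default file)
  # zpool set cachefile=none tank                       (or: don't cache it anywhere)
  # zpool get cachefile tank                            (verify the setting)

Only pools recorded in the default /etc/zfs/zpool.cache are imported automatically at boot, so either setting leaves the cluster software in control of tank.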

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Bob Friesenhahn
On Mon, 19 Apr 2010, Edward Ned Harvey wrote: There's no point trying to accelerate your disks if you're only going to use a single client over gigabit. This is a really strange statement. It does not make any sense. I'm saying that even a single pair of disks (maybe 4 disks if you're

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Don
That section of the man page is actually helpful- as I wasn't sure what I was going to do to ensure the nodes didn't try to bring up the zpool on their own- outside of clustering software or my own intervention. That said- it still doesn't explain how I would keep the secondary nodes

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Darren J Moffat
On 19/04/2010 17:13, Don wrote: That section of the man page is actually helpful- as I wasn't sure what I was going to do to ensure the nodes didn't try to bring up the zpool on their own- outside of clustering software or my own intervention. That said- it still doesn't explain how I would

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Don
Now I'm simply confused. Do you mean one cachefile shared between the two nodes for this zpool? How, may I ask, would this work? The rpool should be in /etc/zfs/zpool.cache. The shared pool should be in /etc/cluster/zpool.cache (or wherever you prefer to put it) so it won't come up on system

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Darren J Moffat
On 19/04/2010 17:50, Don wrote: Now I'm simply confused. Do you mean one cachefile shared between the two nodes for this zpool? How, may I ask, would this work? Either that or a way for the nodes to update each other's copy very quickly. Such as a parallel filesystem. It is the job of the

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Richard Elling
On Apr 19, 2010, at 9:50 AM, Don wrote: Now I'm simply confused. In one sentence, the cachefile keeps track of what is currently imported. Do you mean one cachefile shared between the two nodes for this zpool? How, may I ask, would this work? Each OS instance has a default cachefile. The
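For example, a failover import on the surviving head could look like this (pool name and cachefile path are assumptions, not a prescription):

  # zpool import -c /etc/cluster/zpool.cache tank   (locate the pool via the alternate cachefile)
  # zpool import -f tank                            (or scan the devices and force the import)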

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Don
I apologize- I didn't mean to come across as rude- I'm just not sure if I'm asking the right question. I'm not ready to use the ha-cluster software yet as I haven't finished testing it. For now I'm manually failing over from the primary to the backup node. That will change- but I'm not ready

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Ross Walker
On Apr 19, 2010, at 12:50 PM, Don d...@blacksun.org wrote: Now I'm simply confused. Do you mean one cachefile shared between the two nodes for this zpool? How, may I ask, would this work? The rpool should be in /etc/zfs/zpool.cache. The shared pool should be in /etc/cluster/zpool.cache

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Carson Gaspar
Edward Ned Harvey wrote: I'm saying that even a single pair of disks (maybe 4 disks if you're using cheap slow disks) will outperform a 1Gb Ethernet. So if your bottleneck is the 1Gb Ethernet, you won't gain anything (significant) by accelerating the stuff that isn't the bottleneck. And you

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Don
I understand that the important bit about having the cachefile is the GUIDs (although the disk record is, I believe, helpful in improving import speeds) so we can recover in certain oddball cases. As such- I'm still confused why you say it's unimportant. Is it enough to simply copy the

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Christopher George
To clarify, the DDRdrive X1 is not an option for OpenSolaris today, irrespective of specific features, because the driver is not yet available. When our OpenSolaris device driver is released, later this quarter, the X1 will have updated firmware to automatically provide backup/restore based on

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Miles Nordin
dm == David Magda dma...@ee.ryerson.ca writes: dm Given that ZFS is always consistent on-disk, why would you dm lose a pool if you lose the ZIL and/or cache file? because of lazy assertions inside 'zpool import'. you are right there is no fundamental reason for it---it's just code that

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Brandon High
I think the DDR drive has a battery and can dump to a cf card. -B Sent from my Nexus One. On Apr 19, 2010 10:41 AM, Carson Gaspar car...@taltos.org wrote: Edward Ned Harvey wrote: I'm saying that even a single pair of disks (maybe 4 disks if you're usi... And you are confusing throughput

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Don
Continuing on the best practices theme- how big should the ZIL slog disk be? The ZFS evil tuning guide suggests enough space for 10 seconds of my synchronous write load- even assuming I could cram 20 gigabits/sec into the host (2 x 10 GigE NICs), that only comes out to 200 gigabits, which = 25
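Worked through, the tuning guide's rule of thumb gives (using the poster's own numbers):

  slog size ~= max sync write rate x txg commit interval
            =  20 Gbit/s x 10 s = 200 Gbit = 25 GB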

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Christopher George
I think the DDR drive has a battery and can dump to a cf card. The DDRdrive X1's automatic backup/restore feature utilizes on-board SLC NAND (high quality Flash) and is completely self-contained. Neither the backup nor restore feature involves data transfer over the PCIe bus or to/from

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Don Continuing on the best practices theme- how big should the ZIL slog disk be? The ZFS evil tuning guide suggests enough space for 10 seconds of my synchronous write load- even assuming

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Don
I think the size of the ZIL log is basically irrelevant That was the understanding I got from reading the various blog posts and tuning guide. only a single SSD, just due to the fact that you've probably got dozens of disks attached, and you'll probably use multiple log devices striped just

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Bob Friesenhahn
On Mon, 19 Apr 2010, Don wrote: Continuing on the best practices theme- how big should the ZIL slog disk be? The ZFS evil tuning guide suggests enough space for 10 seconds of my synchronous write load- even assuming I could cram 20 gigabits/sec into the host (2 10 gigE NICs) That only comes

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Bob Friesenhahn
On Mon, 19 Apr 2010, Edward Ned Harvey wrote: Improbability assessment aside, suppose you use something like the DDRDrive X1 ... Which might be more like 4G instead of 32G ... Is it even physically possible to write 4G to any device in less than 10 seconds? Remember, to achieve worst case,

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Don
I always try to plan for the worst case- I just wasn't sure how to arrive at the worst case. Thanks for providing the information- and I will definitely checkout the dtrace zilstat script. Considering the smallest SSD I can buy from a manufacturer that I trust seems to be 32GB- that's probably
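For reference, zilstat is Richard Elling's DTrace-based script; assuming it is saved locally as zilstat.ksh, a typical run is:

  # ./zilstat.ksh 1 10    (ten 1-second samples of bytes pushed through the ZIL)

Watching the peak per-interval byte counts under real load is what tells you how big (and how fast) the slog actually needs to be.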

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Bob Friesenhahn
On Mon, 19 Apr 2010, Don wrote: I'm curious if anyone knows how ZIL slog performance scales. For example- how much benefit would you expect from 2 SSD slogs over 1? Would there be a significant benefit to 3 over 2 or does it begin to taper off? I'm sure a lot of this is dependent on the
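As a concrete illustration of the scaling question (device names hypothetical): multiple plain log vdevs are striped, while a mirror counts as a single vdev, so

  # zpool add tank log c5t0d0 c5t1d0          (two log devices, writes striped across both)
  # zpool add tank log mirror c5t0d0 c5t1d0   (one mirrored log device; redundancy, no striping)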

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Don
A STEC Zeus IOPS SSD (45K IOPS) will behave quite differently than an Intel X-25E (~3.3K IOPS). Where can you even get the Zeus drives? I thought they were only in the OEM market and last time I checked they were ludicrously expensive. I'm looking for between 5k and 10k IOPS using up to 4

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Richard Elling
On Apr 19, 2010, at 7:02 PM, Bob Friesenhahn wrote: On Mon, 19 Apr 2010, Don wrote: Continuing on the best practices theme- how big should the ZIL slog disk be? The ZFS evil tuning guide suggests enough space for 10 seconds of my synchronous write load- even assuming I could cram 20

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Richard Elling
On Apr 19, 2010, at 7:11 PM, Bob Friesenhahn wrote: On Mon, 19 Apr 2010, Edward Ned Harvey wrote: Improbability assessment aside, suppose you use something like the DDRDrive X1 ... Which might be more like 4G instead of 32G ... Is it even physically possible to write 4G to any device in less

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Richard Elling
On Apr 19, 2010, at 12:44 PM, Miles Nordin wrote: dm == David Magda dma...@ee.ryerson.ca writes: dm Given that ZFS is always consistent on-disk, why would you dm lose a pool if you lose the ZIL and/or cache file? because of lazy assertions inside 'zpool import'. you are right there

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Ragnar Sundblad
On 18 apr 2010, at 06.43, Richard Elling wrote: On Apr 17, 2010, at 11:51 AM, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Dave Vrona 1) Mirroring. Leaving cost out of it, should ZIL and/or L2ARC SSDs be

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Dave Vrona
On 18 apr 2010, at 00.52, Dave Vrona wrote: Ok, so originally I presented the X-25E as a reasonable approach. After reading the follow-ups, I'm second guessing my statement. Any decent alternatives at a reasonable price? How much is reasonable? :-) How about $1000 per device?

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Dave Vrona
The Acard device mentioned in this thread looks interesting: http://opensolaris.org/jive/thread.jspa?messageID=401719#401719

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Edward Ned Harvey
From: Richard Elling [mailto:richard.ell...@gmail.com] On Apr 17, 2010, at 11:51 AM, Edward Ned Harvey wrote: For zpool versions below 19, which includes all present releases of Solaris 10 and Opensolaris 2009.06, it is critical to mirror your ZIL log device. A failed unmirrored log device would

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Dave Vrona
Or, DDRDrive X1? Would the X1 need to be mirrored?

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Christopher George
IMHO, whether a dedicated log device needs redundancy (mirrored) should be determined by the dynamics of each end-user environment (zpool version, goals/priorities, and budget). If mirroring is deemed important, a key benefit of the DDRdrive X1 is the HBA / storage device integration. For

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Miles Nordin
re == Richard Elling richard.ell...@gmail.com writes: A failed unmirrored log device would be the permanent death of the pool. re It has also been shown that such pools are recoverable, albeit re with tedious, manual procedures required. for the 100th time, No, they're not,

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Dave Vrona
IMHO, whether a dedicated log device needs redundancy (mirrored) should be determined by the dynamics of each end-user environment (zpool version, goals/priorities, and budget). Well, I populate a chassis with dual HBAs because my _perception_ is they tend to fail more than other cards.

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Christopher George
There is no definitive answer (yes or no) on whether to mirror a dedicated log device, as reliability is one of many variables. This leads me to the frequently given but never satisfying "it depends". In a time when too many good questions go unanswered, let me take advantage of our less rigid

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Bob Friesenhahn
On Sun, 18 Apr 2010, Christopher George wrote: In summary, the DDRdrive X1 is designed, built and tested with immense pride and an overwhelming attention to detail. Sounds great. What performance does DDRdrive X1 provide for this simple NFS write test from a single client over gigabit

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Miles Nordin
re == Richard Elling richard.ell...@gmail.com writes: re a well managed system will not lose zpool.cache or any other re file. I would complain this was circular reasoning if it weren't such obvious chest-puffing bullshit. It's normal even to the extent of being a best practice to have

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Don
So if the Intel X25E is a bad device- can anyone recommend an SLC device with good firmware? (Or an MLC drive that performs as well?) I've got 80 spindles in 5 16-bay drive shelves (76 15k RPM SAS drives in 19 4-disk raidz sets, 2 hot spares, and 2 bays set aside for a mirrored ZIL) connected

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Bob Friesenhahn On Sun, 18 Apr 2010, Christopher George wrote: In summary, the DDRdrive X1 is designed, built and tested with immense pride and an overwhelming attention to detail.

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Don I've got 80 spindles in 5 16-bay drive shelves (76 15k RPM SAS drives in 19 4-disk raidz sets, 2 hot spares, and 2 bays set aside for a mirrored ZIL) connected to two servers (so if one

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Don
If you have a pair of heads talking to shared disks with ZFS- what can you do to ensure the second head always has a current copy of the zpool.cache file? I'd prefer not to lose the ZIL, fail over, and then suddenly find out I can't import the pool on my second head.

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Don
But if the X25E doesn't honor cache flushes then it really doesn't matter if they are mirrored- they both may cache the data, not write it out, and leave me screwed. I'm running 2009.06 and not one of the newer developer candidates that handle ZIL losses gracefully (or at all- at least as far

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Daniel Carosone
On Sun, Apr 18, 2010 at 07:02:38PM -0700, Don wrote: If you have a pair of heads talking to shared disks with ZFS- what can you do to ensure the second head always has a current copy of the zpool.cache file? I'd prefer not to lose the ZIL, fail over, and then suddenly find out I can't

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Don
I'm not sure to what you are referring when you say "my running BE". I haven't looked at the zpool.cache file too closely but if the devices don't match between the two systems for some reason- isn't that going to cause a problem? I was really asking if there is a way to build the cache file

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Bob Friesenhahn
On Sun, 18 Apr 2010, Edward Ned Harvey wrote: This seems to be the test of the day. time tar jxf gcc-4.4.3.tar.bz2 I get 22 seconds locally and about 6-1/2 minutes from an NFS client. There's no point trying to accelerate your disks if you're only going to use a single client over

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Daniel Carosone
On Sun, Apr 18, 2010 at 10:33:36PM -0500, Bob Friesenhahn wrote: Probably the DDRDrive is able to go faster since it should have lower latency than a FLASH SSD drive. However, it may have some bandwidth limits on its interface. It clearly has some. They're just as clearly well in excess

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Richard Elling
On Apr 18, 2010, at 7:02 PM, Don wrote: If you have a pair of heads talking to shared disks with ZFS- what can you do to ensure the second head always has a current copy of the zpool.cache file? By definition, the zpool.cache file is always up to date. I'd prefer not to lose the ZIL, fail

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Daniel Carosone
On Sun, Apr 18, 2010 at 07:37:10PM -0700, Don wrote: I'm not sure to what you are referring when you say my running BE Running boot environment - the filesystem holding /etc/zpool.cache -- Dan.

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Daniel Carosone
On Mon, Apr 19, 2010 at 03:37:43PM +1000, Daniel Carosone wrote: the filesystem holding /etc/zpool.cache or, indeed, /etc/zfs/zpool.cache :-) -- Dan.

Re: [zfs-discuss] SSD best practices

2010-04-17 Thread Bob Friesenhahn
On Sat, 17 Apr 2010, Dave Vrona wrote: 1) Mirroring. Leaving cost out of it, should ZIL and/or L2ARC SSDs be mirrored? Mirroring the intent log is a good idea, particularly for ZFS versions which don't support removing the intent log device. 2) ZIL write cache. It appears some have
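A quick check of where a given pool stands (tank is a stand-in name):

  # zpool get version tank    (log device removal arrived in pool version 19)
  # zpool upgrade -v          (lists what each pool version adds)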

Re: [zfs-discuss] SSD best practices

2010-04-17 Thread Bill Sommerfeld
On 04/17/10 07:59, Dave Vrona wrote: 1) Mirroring. Leaving cost out of it, should ZIL and/or L2ARC SSDs be mirrored? L2ARC cannot be mirrored -- and doesn't need to be. The contents are checksummed; if the checksum doesn't match, it's treated as a cache miss and the block is re-read from
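Accordingly, cache devices are simply added unmirrored (hypothetical device names); ZFS spreads reads across them, and a failure only costs cache hits:

  # zpool add tank cache c6t0d0 c6t1d0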

Re: [zfs-discuss] SSD best practices

2010-04-17 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Dave Vrona 1) Mirroring. Leaving cost out of it, should ZIL and/or L2ARC SSDs be mirrored? IMHO, the best answer to this question is the one from the ZFS Best Practices guide. (I wrote

Re: [zfs-discuss] SSD best practices

2010-04-17 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Edward Ned Harvey From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Dave Vrona 2) ZIL write cache. It appears some have disabled

Re: [zfs-discuss] SSD best practices

2010-04-17 Thread Ragnar Sundblad
On 17 apr 2010, at 20.51, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Dave Vrona 1) Mirroring. Leaving cost out of it, should ZIL and/or L2ARC SSDs be mirrored? ... Personally, I recommend the latest build

Re: [zfs-discuss] SSD best practices

2010-04-17 Thread Dave Vrona
Ok, so originally I presented the X-25E as a reasonable approach. After reading the follow-ups, I'm second guessing my statement. Any decent alternatives at a reasonable price?

Re: [zfs-discuss] SSD best practices

2010-04-17 Thread Ragnar Sundblad
On 18 apr 2010, at 00.52, Dave Vrona wrote: Ok, so originally I presented the X-25E as a reasonable approach. After reading the follow-ups, I'm second guessing my statement. Any decent alternatives at a reasonable price? How much is reasonable? :-) I guess there are STEC drives that

Re: [zfs-discuss] SSD best practices

2010-04-17 Thread Richard Elling
On Apr 17, 2010, at 11:51 AM, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Dave Vrona 1) Mirroring. Leaving cost out of it, should ZIL and/or L2ARC SSDs be mirrored? IMHO, the best answer to this question