[zfs-discuss] iscsi share on different subnet?

2009-10-20 Thread Kent Watsen
I have a ZFS/Xen server for my home network. The box itself has two physical NICs. I want Dom0 to be on my management network and the guest domains to be on the dmz and private networks. The private network is where all my home computers are, and I'd like to export iSCSI volumes directly
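A minimal sketch of exporting a zvol over iSCSI on an OpenSolaris box of that era, assuming the legacy shareiscsi property rather than COMSTAR (pool and volume names are hypothetical):

    # create a 50 GB zvol and share it as an iSCSI target
    zfs create -V 50G tank/vol0
    zfs set shareiscsi=on tank/vol0
    iscsitadm list target    # verify the target was created

Binding the target to the private-network interface would be a separate iscsitadm portal-group (tpgt) configuration step.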

[zfs-discuss] how to destroy a pool by id?

2009-06-20 Thread Kent Watsen
Over the course of multiple OpenSolaris installs, I first created a pool called "tank" and then, later, reusing some of the same drives, I created another pool called "tank". I can `zpool export tank`, but when I `zpool import tank`, I get: bash-3.2# zpool import tank cannot import
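A sketch of the usual approach: `zpool import` with no arguments lists every importable pool along with its numeric id, so the stale pool can be imported by id under a new name and then destroyed (the id below is made up):

    zpool import                        # lists both pools named tank, with ids
    zpool import 6789012345 oldtank     # import the stale one by id, renamed
    zpool destroy oldtank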

Re: [zfs-discuss] ATA UDMA data parity error

2008-01-22 Thread Kent Watsen
For the archive, I swapped the mobo and all is good now... (I copied 100GB into the pool without a crash) One problem I had was that Solaris would hang whenever booting - even when all the aoc-sat2-mv8 cards were pulled out. Turns out that switching the BIOS field USB 2.0 Controller Mode

Re: [zfs-discuss] ATA UDMA data parity error

2008-01-18 Thread Kent Watsen
Thanks for the note Anton. I let memtest86 run overnight and it found no issues. I've also now moved the cards around and have confirmed that slot #3 on the mobo is bad (all my aoc-sat2-mv8 cards, cables, and backplanes are OK). However, I think it's more than just slot #3 that has a fault

[zfs-discuss] ATA UDMA data parity error

2008-01-17 Thread Kent Watsen
Hey all, I'm not sure if this is a ZFS bug or a hardware issue I'm having - any pointers would be great! The contents below include: high-level info about my system, my first thoughts on debugging this, a stack trace, format output, zpool status output, and dmesg output
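For reference, a sketch of the commands that gather that kind of diagnostic output on Solaris (the pool name is an example):

    format < /dev/null      # non-interactive disk listing
    zpool status -v tank    # pool state and any per-device errors
    dmesg | tail -100       # recent kernel messages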

Re: [zfs-discuss] ATA UDMA data parity error

2008-01-17 Thread Kent Watsen
On a lark, I decided to create a new pool not including any devices connected to card #3 (i.e. c5). It crashes again, but this time with a slightly different dump (see below) - actually, there are two dumps below; the first is using the xVM kernel and the second is not. Any ideas? Kent

Re: [zfs-discuss] ATA UDMA data parity error

2008-01-17 Thread Kent Watsen
Below I create zpools isolating one card at a time - when just card #1 - it works - when just card #2 - it fails - when just card #3 - it works. And then again using the two cards that seem to work: - when cards #1 and #3 - it fails. So, at first I thought I narrowed it down to a card, but
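A sketch of that isolation procedure, assuming c3/c4/c5 correspond to cards #1/#2/#3 (only the c5 = card #3 mapping is stated in these posts; device names are examples):

    # one throwaway pool per controller, destroyed between runs
    zpool create -f test raidz c3t0d0 c3t1d0 c3t2d0   # card #1 only
    zpool destroy test
    zpool create -f test raidz c4t0d0 c4t1d0 c4t2d0   # card #2 only
    zpool destroy test
    zpool create -f test raidz c5t0d0 c5t1d0 c5t2d0   # card #3 only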

Re: [zfs-discuss] how to create whole disk links?

2007-12-28 Thread Kent Watsen
Eric Schrock wrote: Or just let ZFS work its magic ;-) Oh, I didn't realize that `zpool create` could be fed vdevs that didn't exist in /dev/dsk/ - and, as a bonus, it also creates the /dev/dsk/ links! # zpool create -f tank raidz2 c3t0d0 c3t4d0 c4t0d0 c4t4d0 c5t0d0 c5t4d0 # ls -l

[zfs-discuss] aoc-sat2-mv8 (was: LSI SAS3081E = unstable drive numbers?)

2007-12-18 Thread Kent Watsen
Kent Watsen wrote: So, I picked up an AOC-SAT2-MV8 off eBay for not too much and then I got a 4xSATA to one SFF-8087 cable to connect it to one of my six backplanes. But, as fortune would have it, the cable I bought has SATA connectors that are physically too big to plug into the AOC

Re: [zfs-discuss] LSI SAS3081E = unstable drive numbers?

2007-12-16 Thread Kent Watsen
Paul Jochum wrote: What the lsiutil does for me is clear the persistent mapping for all of the drives on a card. Since James confirms that I'm doomed to ad hoc methods for tracking device-ids to bays, I'm interested in knowing if somehow your ability to clear the persistent mapping for

Re: [zfs-discuss] LSI SAS3081E = unstable drive numbers?

2007-12-15 Thread Kent Watsen
Kent Watsen wrote: Given that manually tracking shifting ids doesn't sound appealing to me, would using a SATA controller like the AOC-SAT2-MV8 resolve the issue? Given that I currently only have one LSI HBA - I'd need to get 2 more for all 24 drives ---or--- I could get 3 of these SATA

Re: [zfs-discuss] LSI SAS3081E = unstable drive numbers?

2007-12-15 Thread Kent Watsen
Eric Schrock wrote: For x86 systems, you can use ipmitool to manipulate the led state (ipmitool sunoem led ...). On older galaxy systems, you can only set the fail LED ('io.hdd0.led'), as the ok2rm LED is not physically connected to anything. On newer systems, you can set both the 'fail'
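A hedged sketch of the ipmitool invocations being described; the exact subcommand syntax and LED names vary by ipmitool version and platform, and 'io.hdd0.led' is the name given in the post:

    ipmitool sunoem led get io.hdd0.led      # query the fail LED for bay 0
    ipmitool sunoem led set io.hdd0.led on   # light it to locate the drive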

[zfs-discuss] LSI SAS3081E = unstable drive numbers?

2007-12-12 Thread Kent Watsen
Based on recommendations from this list, I asked the company that built my box to use an LSI SAS3081E controller. The first problem I noticed was that the drive-numbers were ordered incorrectly. That is, given that my system has 24 bays (6 rows, 4 bays/row), the drive numbers from

Re: [zfs-discuss] LSI SAS3081E = unstable drive numbers?

2007-12-12 Thread Kent Watsen
:-) Kent Watsen wrote: Based on recommendations from this list, I asked the company that built my box to use an LSI SAS3081E controller. The first problem I noticed was that the drive-numbers were ordered incorrectly. That is, given that my system has 24 bays (6 rows, 4 bays/row

Re: [zfs-discuss] LSI SAS3081E = unstable drive numbers?

2007-12-12 Thread Kent Watsen
Hi Paul, Already in my LSI Configuration Utility I have an option to clear the persistent mapping for drives not present, but then the card resumes its normal persistent-mapping logic. What I really want is to disable the persistent-mapping logic completely - is the `lsiutil` doing that for

Re: [zfs-discuss] ZFS Solaris 10u5 Proposed Changes

2007-09-18 Thread Kent Watsen
How does one access the PSARC database to lookup the description of these features? Sorry if this has been asked before! - I tried google before posting this :-[ Kent George Wilson wrote: ZFS Fans, Here's a list of features that we are proposing for Solaris 10u5. Keep in mind that

Re: [zfs-discuss] hardware sizing for a zfs-based system?

2007-09-17 Thread Kent Watsen
Probably not, my box has 10 drives and two very thirsty FX74 processors and it draws 450W max. At 1500W, I'd be more concerned about power bills and cooling than the UPS! Yeah - good point, but I need my TV! - or so I tell my wife so I can play with all this gear :-X Cheers, Kent

Re: [zfs-discuss] [xen-discuss] hardware sizing for a zfs-based system?

2007-09-16 Thread Kent Watsen
David Edmondson wrote: One option I'm still holding on to is to also use the ZFS system as a Xen server - that is, OpenSolaris would be running in Dom0... Given that the Xen hypervisor has a pretty small cpu/memory footprint, do you think it could share 2 cores + 4Gb with ZFS or should I

Re: [zfs-discuss] hardware sizing for a zfs-based system?

2007-09-16 Thread Kent Watsen
I know what you are saying, but I wonder if it would be noticeable? Well, noticeable again comes back to your workflow. As you point out to Richard, it's (theoretically) a 2x IOPS difference, which can be very significant for some people. Yeah, but my point is if it would be noticeable

Re: [zfs-discuss] hardware sizing for a zfs-based system?

2007-09-15 Thread Kent Watsen
Hey Adam, My first posting contained my use-cases, but I'd say that video recording/serving will dominate the disk utilization - that's why I'm pushing for 4 striped sets of RAIDZ2 - I think it would be all-around goodness. It sounds good, that way, but (in theory), you'll see random

Re: [zfs-discuss] hardware sizing for a zfs-based system?

2007-09-15 Thread Kent Watsen
Nit: small, random read I/O may suffer. Large random read or any random write workloads should be OK. Given that video-serving is all sequential reads, is it correct that raidz2, specifically 4*(4+2), would be just fine? For 24 data disks there are enough combinations that it is not
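Rough back-of-envelope numbers for the 4*(4+2) layout being discussed:

    24 drives = 4 raidz2 sets of (4 data + 2 parity)
    usable capacity    ~ 16 drives' worth (4 sets x 4 data disks)
    small random reads ~ 4 x single-disk IOPS (one I/O per top-level vdev)
    sequential reads stream from all data disks, so video serving is the
    favorable case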

Re: [zfs-discuss] hardware sizing for a zfs-based system?

2007-09-15 Thread Kent Watsen
Sorry, but looking again at the RMP page, I see that the chassis I recommended is actually different than the one we have. I can't find this chassis online, but here's what we bought: http://www.siliconmechanics.com/i10561/intel-storage-server.php?cat=625 That is such a cool looking

Re: [zfs-discuss] hardware sizing for a zfs-based system?

2007-09-15 Thread Kent Watsen
[CC-ing xen-discuss regarding question below] Probably a 64-bit dual core with 4GB of (ECC) RAM would be a good starting point. Agreed. So I was completely out of the ballpark - I hope the ZFS Wiki can be updated to contain some sensible hardware-sizing information... One option I'm

Re: [zfs-discuss] hardware sizing for a zfs-based system?

2007-09-14 Thread Kent Watsen
I will only comment on the chassis, as this is made by AIC (short for American Industrial Computer), and I have three of these in service at my work. These chassis are quite well made, but I have experienced the following two problems: snip Oh my, thanks for the heads-up! Charlie at

Re: [zfs-discuss] hardware sizing for a zfs-based system?

2007-09-14 Thread Kent Watsen
Fun exercise! :) Indeed! - though my wife and kids don't seem to appreciate it so much ;) I'm thinking about using this 26-disk case: [FYI: 2-disk RAID1 for the OS; 4*(4+2) RAIDZ2 for SAN] What are you *most* interested in for this server? Reliability? Capacity? High Performance?

[zfs-discuss] hardware sizing for a zfs-based system?

2007-09-13 Thread Kent Watsen
Hi all, I'm putting together an OpenSolaris ZFS-based system and need help picking hardware. I'm thinking about using this 26-disk case: [FYI: 2-disk RAID1 for the OS; 4*(4+2) RAIDZ2 for SAN] http://rackmountpro.com/productpage.php?prodid=2418 Regarding the mobo, cpus, and memory - I

[zfs-discuss] pool analysis

2007-07-11 Thread Kent Watsen
Richard's blog analyzes MTTDL as a function of N+P+S: http://blogs.sun.com/relling/entry/raid_recommendations_space_vs_mttdl But to understand how to best utilize an array with a fixed number of drives, I add the following constraints: - N+P should follow ZFS best-practice rule of

Re: [zfs-discuss] pool analysis

2007-07-11 Thread Kent Watsen
Resent as HTML to avoid line-wrapping: Richard's blog analyzes MTTDL as a function of N+P+S: http://blogs.sun.com/relling/entry/raid_recommendations_space_vs_mttdl But to understand how to best utilize an array with a fixed number of drives, I add the following constraints: - N+P should

Re: [zfs-discuss] pool analysis

2007-07-11 Thread Kent Watsen
But to understand how to best utilize an array with a fixed number of drives, I add the following constraints: - N+P should follow ZFS best-practice rule of N={2,4,8} and P={1,2} - all sets in an array should be configured similarly - the MTTDL for S sets is equal to (MTTDL for one
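For reference, a sketch of the standard MTTDL model those constraints plug into (Richard's blog is the authoritative source; this assumes per-drive MTTF and MTTR, N disks per set including parity, and S sets):

    MTTDL_{raidz}  = \frac{MTTF^2}{N(N-1)\,MTTR}
    MTTDL_{raidz2} = \frac{MTTF^3}{N(N-1)(N-2)\,MTTR^2}
    MTTDL_{array}  = MTTDL_{set} / S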

[zfs-discuss] 8+2 or 8+1+spare?

2007-07-09 Thread Kent Watsen
Hi all, I'm new here and to ZFS but I've been lurking for quite some time... My question is simple: which is better, 8+2 or 8+1+spare? Both follow the (N+P) N={2,4,8} P={1,2} rule, but 8+2 results in a total of 10 disks, which is one disk more than the 3 <= num-disks <= 9 rule. But 8+2 has much

Re: [zfs-discuss] 8+2 or 8+1+spare?

2007-07-09 Thread Kent Watsen
I think that the 3 <= num-disks <= 9 rule only applies to RAIDZ and it was changed to 4 <= num-disks <= 10 for RAIDZ2, but I might be remembering wrong. Can anybody confirm that the 3 <= num-disks <= 9 rule only applies to RAIDZ and that 4 <= num-disks <= 10 applies to RAIDZ2? Thanks, Kent

Re: [zfs-discuss] 8+2 or 8+1+spare?

2007-07-09 Thread Kent Watsen
Don't confuse vdevs with pools. If you add two 4+1 vdevs to a single pool it still appears to be one place to put things. ;) Newbie oversight - thanks! Kent
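A minimal sketch of the point being made, with hypothetical device names: two 4+1 raidz vdevs in a single pool still present as one place to put things:

    zpool create tank \
        raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 \
        raidz c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0
    zpool list tank    # one pool; ZFS stripes writes across both vdevs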

Re: [zfs-discuss] 8+2 or 8+1+spare?

2007-07-09 Thread Kent Watsen
Another reason to recommend spares is when you have multiple top-level vdevs and want to amortize the spare cost over multiple sets. For example, if you have 19 disks then 2x 8+1 raidz + spare amortizes the cost of the spare across two raidz sets. -- richard Interesting - I hadn't
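A sketch of that 19-drive layout (device names hypothetical): two 8+1 raidz sets sharing a single hot spare:

    zpool create tank \
        raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0 c3t8d0 \
        raidz c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 c4t6d0 c4t7d0 c4t8d0 \
        spare c5t0d0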

Re: [zfs-discuss] 8+2 or 8+1+spare?

2007-07-09 Thread Kent Watsen
Rob Logan wrote: which is better 8+2 or 8+1+spare? 8+2 is safer for the same speed. 8+2 requires a little more math, so it's slower in theory (unlikely to be seen). (4+1)*2 is 2x faster, and in theory is less likely to have wasted space in a transaction group (unlikely to be seen). I keep reading

Re: [zfs-discuss] 8+2 or 8+1+spare?

2007-07-09 Thread Kent Watsen
John-Paul Drawneek wrote: Your data gets striped across the two sets, so what you get is a raidz stripe, giving you the 2x faster speed.
    tank
      raidz
        devices
      raidz
        devices
(sorry for the diagram) So you've got your zpool tank with a raidz stripe. Thanks - I think you all have