[zfs-discuss] ZFS boot for xVM builds?

2007-10-22 Thread Adam Lindsay
Hi all. I've been running the ZFS boot netinstall setup on SXCE's snv_69 and snv_70 very happily. I'm anticipating the release of xVM with build 75, and wondering if the same ZFS install procedure is likely to work, or if I'll be left waiting for further changes. I understand that this

[zfs-discuss] Norco's new storage appliance

2007-10-09 Thread Adam Lindsay
Hey all, Has anyone else noticed Norco's recently-announced DS-520 and thought ZFS-ish thoughts? It's a five-SATA, Celeron-based desktop NAS that ships without an OS. http://www.norcotek.com/item_detail.php?categoryid=8&modelno=ds-520 What practical impact is a 32-bit processor going to

Re: [zfs-discuss] Norco's new storage appliance

2007-10-09 Thread Adam Lindsay
Hello, Robert, Robert Milkowski wrote: Because it offers up to 1GB of memory, 32-bit shouldn't be an issue. Sorry, could someone expand on this? The only received opinion I've seen on 32-bit is from the ZFS best practice wiki, which simply says "Run ZFS on a system that runs a 64-bit kernel." I

Re: [zfs-discuss] Norco's new storage appliance

2007-10-09 Thread Adam Lindsay
[EMAIL PROTECTED] wrote: If you don't have a 64-bit CPU, add more RAM(tm). Actually, no; if you have a 32-bit CPU, you must not add too much RAM or the kernel will run out of space to put things. Hrm. Do you have a working definition of too much? adam
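A rough way to put numbers on "too much" is sketched below; the 1 GB kernel slice is an assumed, illustrative figure rather than a documented Solaris limit. The point is that the ZFS ARC is capped by kernel virtual address space on a 32-bit kernel, not by installed RAM:

    # Back-of-envelope sketch of why "more RAM" does not help a 32-bit kernel.
    # The 1 GB kernel-VA figure is an illustrative assumption, not a measured
    # Solaris value.

    TOTAL_VA_32BIT_GB = 4.0      # 2**32 bytes of virtual address space
    ASSUMED_KERNEL_VA_GB = 1.0   # hypothetical slice reserved for the kernel
    INSTALLED_RAM_GB = 4.0       # e.g. a maxed-out small NAS box

    # The ZFS ARC lives in kernel virtual memory, so it can never exceed the
    # kernel's own address space, no matter how much physical RAM is present.
    arc_ceiling_gb = min(INSTALLED_RAM_GB, ASSUMED_KERNEL_VA_GB)
    print(f"ARC ceiling ~{arc_ceiling_gb} GB of {INSTALLED_RAM_GB} GB installed")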

Re: [zfs-discuss] Norco's new storage appliance

2007-10-09 Thread Adam Lindsay
Gary Gendel wrote: Norco usually uses Silicon Image based SATA controllers. Ah, yes, I remember hearing SI SATA multiplexer horror stories when I was researching storage possibilities. However, I just heard back from Norco: Thank you for interest in Norco products. Most of part uses by DS

Re: [zfs-discuss] hardware sizing for a zfs-based system?

2007-09-16 Thread Adam Lindsay
Heya Kent, Kent Watsen wrote: It sounds good, that way, but (in theory), you'll see random I/O suffer a bit when using RAID-Z2: the extra parity will drag performance down a bit. I know what you are saying, but I wonder if it would be noticeable? Well, noticeable again comes back to
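The usual rule of thumb for RAID-Z random reads is that each vdev behaves roughly like a single disk, so random IOPS scale with vdev count rather than disk count. A minimal sketch, assuming 80 IOPS for a 7200 rpm SATA drive:

    # Hedged sketch of the common RAID-Z random-read rule of thumb:
    # each RAID-Z/Z2 vdev delivers roughly one disk's worth of random IOPS,
    # so IOPS scale with the number of vdevs, not the number of disks.

    IOPS_PER_DISK = 80           # assumption: typical 7200 rpm SATA drive

    def raidz_random_iops(vdevs: int) -> int:
        """Approximate small random-read IOPS for a pool of RAID-Z(2) vdevs."""
        return vdevs * IOPS_PER_DISK

    print(raidz_random_iops(4))  # 4x (4+2) RAID-Z2 -> ~320 IOPS
    print(raidz_random_iops(8))  # 8x (2+1) RAID-Z  -> ~640 IOPS from the same 24 disks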

[zfs-discuss] Bottleneck characterization -- followup

2007-09-14 Thread Adam Lindsay
Back in April, I pinged this list[1] for help in specifying a ZFS server that would handle high-capacity reads and writes. That server was finally built and delivered, and I've blogged the results[2] as part of a larger series[3] about that server. [1]

Re: [zfs-discuss] hardware sizing for a zfs-based system?

2007-09-14 Thread Adam Lindsay
Kent Watsen wrote: I'm putting together an OpenSolaris ZFS-based system and need help picking hardware. Fun exercise! :) I'm thinking about using this 26-disk case: [FYI: 2-disk RAID1 for the OS, 4*(4+2) RAIDZ2 for SAN] What are you *most* interested in for this server? Reliability?
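Rough capacity arithmetic for that layout, assuming (purely for illustration) 500 GB drives:

    # Quick arithmetic on the proposed layout. Drive size is an assumption.

    DRIVE_GB = 500               # assumed drive size, for illustration only
    os_mirror_disks = 2          # 2-disk RAID1 for the OS
    vdevs, data_per_vdev, parity_per_vdev = 4, 4, 2   # 4 x (4+2) RAID-Z2

    data_disks = vdevs * data_per_vdev
    total_disks = os_mirror_disks + vdevs * (data_per_vdev + parity_per_vdev)
    usable_gb = data_disks * DRIVE_GB

    print(f"{total_disks} disks total, {usable_gb} GB usable in the data pool")
    print(f"each vdev survives {parity_per_vdev} disk failures")
    # -> 26 disks total, 8000 GB usable, any 2 failures per vdev tolerated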

Re: [zfs-discuss] hardware sizing for a zfs-based system?

2007-09-14 Thread Adam Lindsay
Kent Watsen wrote: What are you *most* interested in for this server? Reliability? Capacity? High Performance? Reading or writing? Large contiguous reads or small seeks? One thing that I did that got good feedback from this list was picking apart the requirements of the most demanding

Re: [zfs-discuss] Bottlenecks in building a system

2007-04-20 Thread Adam Lindsay
[EMAIL PROTECTED] wrote: I suspect that if you have a bottleneck in your system, it would be due to the available bandwidth on the PCI bus. Mm. yeah, it's what I was worried about, too (mostly through ignorance of the issues), which is why I was hoping HyperTransport and PCIe were going to

Re: [zfs-discuss] Bottlenecks in building a system

2007-04-19 Thread Adam Lindsay
[EMAIL PROTECTED] wrote: Adam: Does anyone have a clue as to where the bottlenecks are going to be with this: 16x hot swap SATAII hard drives (plus an internal boot drive) Tyan S2895 (K8WE) motherboard Dual GigE (integral nVidia ports) 2x Areca 8-port PCIe (8-lane) RAID controllers 2x AMD
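A back-of-envelope ceiling comparison for those parts, using assumed round numbers (per-disk streaming rate, PCIe 1.x payload, GigE payload) rather than measurements of this particular build:

    # Rough ceiling comparison. All figures are assumed round numbers.

    DISK_MB_S = 70               # assumed sustained rate of one SATA drive
    disks = 16
    pcie_x8_mb_s = 8 * 250       # PCIe 1.x: ~250 MB/s usable per lane
    gige_ports = 2
    gige_mb_s = gige_ports * 110 # ~110 MB/s usable per GigE port

    aggregate_disk = disks * DISK_MB_S          # ~1120 MB/s across all drives
    per_card_disks = 8 * DISK_MB_S              # ~560 MB/s behind each x8 card

    print(f"disks: ~{aggregate_disk} MB/s, per Areca card: ~{per_card_disks} MB/s "
          f"(x8 slot ~{pcie_x8_mb_s} MB/s), network: ~{gige_mb_s} MB/s")
    # Under these assumptions the PCIe slots are not the choke point;
    # the dual GigE uplink is, at least for remote clients.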

Re: [zfs-discuss] Bottlenecks in building a system

2007-04-19 Thread Adam Lindsay
Nicholas Lee wrote: On 4/19/07, Adam Lindsay [EMAIL PROTECTED] wrote: 16x hot swap SATAII hard drives (plus an internal boot drive) Tyan S2895 (K8WE) motherboard Dual GigE (integral nVidia ports) 2x Areca 8-port PCIe (8-lane) RAID controllers 2x AMD

[zfs-discuss] ZFS performance model for sustained, contiguous writes?

2007-04-18 Thread Adam Lindsay
Hi folks. I'm looking at putting together a 16-disk ZFS array as a server, and after reading Richard Elling's writings on the matter, I'm now left wondering if it'll have the performance we expect of such a server. Looking at his figures, 5x 3-disk RAIDZ sets seems like it *might* be made to do
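The usual back-of-envelope model for RAID-Z sequential throughput is roughly (disks minus parity) times the per-disk streaming rate for each vdev, summed over vdevs. A hedged sketch, assuming 70 MB/s per disk; real pools will land below these ceilings:

    # Hedged sketch of the common sequential-throughput estimate for RAID-Z.
    # The 70 MB/s per-disk figure is an assumption.

    DISK_MB_S = 70               # assumed per-disk sequential rate

    def raidz_stream_mb_s(vdevs: int, disks_per_vdev: int, parity: int) -> int:
        return vdevs * (disks_per_vdev - parity) * DISK_MB_S

    print(raidz_stream_mb_s(5, 3, 1))   # 5 x 3-disk RAID-Z -> ~700 MB/s
    print(raidz_stream_mb_s(2, 8, 1))   # 2 x 8-disk RAID-Z -> ~980 MB/s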

Re: [zfs-discuss] ZFS performance model for sustained, contiguous writes?

2007-04-18 Thread Adam Lindsay
Bart Smaalders wrote: Adam Lindsay wrote: Okay, the way you say it, it sounds like a good thing. I misunderstood the performance ramifications of COW and ZFS's opportunistic write locations, and came up with a much more pessimistic guess that it would approach random writes. As it is, I have

[zfs-discuss] Bottlenecks in building a system

2007-04-18 Thread Adam Lindsay
In asking about ZFS performance in streaming IO situations, discussion quite quickly turned to potential bottlenecks. By coincidence, I was wondering about the same thing. Richard Elling said: We know that channels, controllers, memory, network, and CPU bottlenecks can and will impact actual