Re: [zfs-discuss] non-ECC Systems and ZFS for home users (was: Please warn a home user against OpenSolaris under VirtualBox under WinXP ; ))

2010-09-26 Thread devsk
 
 
 On 9/23/2010 at 12:38 PM Erik Trimble wrote:

 | [snip]
 | If you don't really care about ultra-low-power, then there's absolutely
 | no excuse not to buy a USED server-class machine which is 1- or 2-
 | generations back.  They're dirt cheap, readily available,
 | [snip]
 
 
Anyone have a link or two to a place where I can buy some dirt-cheap,
readily available last gen servers?

I would love some links as well. I have heard a lot about dirt cheap last gen 
servers but nobody ever provides a link.
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] non-ECC Systems and ZFS for home users (was: Please warn a home user against OpenSolaris under VirtualBox under WinXP ; ))

2010-09-26 Thread Erik Trimble

 On 9/26/2010 8:06 AM, devsk wrote:


On 9/23/2010 at 12:38 PM Erik Trimble wrote:

| [snip]
| If you don't really care about ultra-low-power, then there's absolutely
| no excuse not to buy a USED server-class machine which is 1- or 2-
| generations back.  They're dirt cheap, readily available,
| [snip]


Anyone have a link or two to a place where I can buy some dirt-cheap,
readily available last gen servers?

I would love some links as well. I have heard a lot about dirt cheap last gen 
servers but nobody ever provides a link.


http://www.serversupply.com/products/part_search/pid_lookup.asp?pid=105676
http://www.canvassystems.com/products/c-16-ibm-servers.aspx?_vsrefdom=PPCIBMgclid=CLTF7NHNpaQCFRpbiAodoSxK5Q;
http://www.glcomp.com/Products/IBM-SystemX-x3500-Server.aspx



Lots and lots of stuff from eBay - use them to see which companies are 
in the recycling business, then deal with them directly, rather than 
through eBay.


http://computers.shop.ebay.com/i.html?_nkw=ibm+x3500_sacat=58058_sop=12_dmd=1_odkw=ibm+x3500_osacat=0_trksid=p3286.c0.m270.l1313


Companies specializing in used (often off-lease) business computers:

http://compucycle.net/
http://www.andovercg.com/
http://www.recurrent.com/
http://www.lapkosoft.com/
http://www.weirdstuff.com/
http://synergy.ships2day.com/
http://www.useddell.net/
http://www.vibrant.com/


There's hordes more.  I've dealt with all of the above, and have no 
problems recommending them.




The thing here is that you need to educate yourself *before* going out 
and looking.  You need to spend a non-trivial amount of time reading the 
Support pages for Sun, IBM, HP, and Dell, and be able to either *ask* 
for specific part/model numbers, or be able to interpret what is 
advertised.  The key thing here is that many places will advertise/sell 
you some server, and all the info they have is the model number off the 
front.  If you can understand what this means in terms of hardware, then 
you can get a bang-up deal.


I've bought computers from recycling places for 25% or less of the 
value I could get by *immediately* turning around and selling the system 
somewhere else. All because I could understand the part numbers well enough 
to know what I was getting, and the original seller couldn't (or, in 
most cases, didn't have the time to bother).  In particular, the best way 
to get a deal is usually to look for a machine which has (a) very little 
info about it in the advertisement, other than model number, and (b) a 
price noticeably higher than what you've seen a stripped version of that 
model go for.  Some of those will be stripped systems whose sellers don't 
understand the going rate, but many are actually better-equipped versions 
where the additional money is more than made up for by the significantly 
better system.


Here's an example using the IBM x3500:

Model 7977-72y seems to be the most commonly available one right now - 
and the config it's generally sold in is (2) x dual-core E5140 2.33GHz 
Xeons, plus 2GB of RAM. No disk, one power supply.  Tends to go for 
$400-500.


I just got a model 7977-F2x, which was advertised as a 2.0GHz model, 
with nothing else in the ad except the word "loaded".  I paid $750 for 
it, and got a system with (2) quad-core E5335 Xeons, 8GB of RAM, and 
8x73GB SAS drives, plus the redundant power supply.  The extra $300 more 
than covers the cost I would pay for the additional RAM, power supply, 
and drives, and I get twice the CPU core-count for free.



Be an educated buyer, and the recycled marketplace can be your oyster.  
I've actually made enough doing arbitrage to cover my costs of buying 
a nice SOHO machine each year.  That is, I can buy and sell 10-12 
systems per year, and make $2000-3000 in profit for not much effort. I'd 
estimate you can sustain a 20-30% profit margin by being a smart 
buyer/seller.  At least on a small scale.



--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA



[zfs-discuss] non-ECC Systems and ZFS for home users (was: Please warn a home user against OpenSolaris under VirtualBox under WinXP ; ))

2010-09-23 Thread Alexander Skwar
Hi.

2010/9/19 R.G. Keen k...@geofex.com

 and last-generation hardware is very, very cheap.

Yes, of course, it is. But, actually, is that a true statement? I've read
that it's *NOT* advisable to run ZFS on systems which do NOT have ECC
RAM. And those cheapo last-gen hardware boxes quite often don't have
ECC, do they?

So, I wonder - what's the recommendation, or rather, experience as far
as home users are concerned? Is it safe enough now to use ZFS on
non-ECC-RAM systems (if backups are around)?

Regards,

Alexander
--
↯    Lifestream (Twitter, Blog, …) ↣ http://alexs77.soup.io/     ↯
↯ Chat (Jabber/Google Talk) ↣ a.sk...@gmail.com , AIM: alexws77  ↯


Re: [zfs-discuss] non-ECC Systems and ZFS for home users (was: Please warn a home user against OpenSolaris under VirtualBox under WinXP ; ))

2010-09-23 Thread Ian Collins

On 09/23/10 06:33 PM, Alexander Skwar wrote:

Hi.

2010/9/19 R.G. Keen k...@geofex.com

   

and last-generation hardware is very, very cheap.
 

Yes, of course, it is. But, actually, is that a true statement? I've read
that it's *NOT* advisable to run ZFS on systems which do NOT have ECC
RAM. And those cheapo last-gen hardware boxes quite often don't have
ECC, do they?

So, I wonder - what's the recommendation, or rather, experience as far
as home users are concerned? Is it safe enough now do use ZFS on
non-ECC-RAM systems (if backups are around)?

   

It's as safe as running any other OS.

The big difference is ZFS will tell you when there's a corruption.  Most 
users of other systems are blissfully unaware of data corruption!
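
For reference, that reporting shows up after a scrub - a minimal sketch, 
assuming a pool named "tank":

  # read every block in the pool and verify its checksums
  zpool scrub tank
  # when the scrub finishes, the CKSUM column and the "errors:" line
  # show any corruption ZFS detected
  zpool status -v tank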


All my desktops use ZFS, none have ECC.

--
Ian.



Re: [zfs-discuss] non-ECC Systems and ZFS for home users (was: Please warn a home user against OpenSolaris under VirtualBox under WinXP ; ))

2010-09-23 Thread Casper . Dik


I'm using ZFS on a system w/o ECC; it works (it's an Atom 230).

Note that this is not different from using another OS; the difference is 
that ZFS will complain when memory errors lead to disk corruption; without ZFS 
you will still have the memory corruption, but you wouldn't know about it.

Is it helpful not knowing that you have memory corruption?  I don't think 
so.

I'd love to have a small (40W) system with ECC but it is difficult to 
find one.

Casper



Re: [zfs-discuss] non-ECC Systems and ZFS for home users (was: Please warn a home user against OpenSolaris under VirtualBox under WinXP ; ))

2010-09-23 Thread Dick Hoogendijk

 On 23-9-2010 10:25, casper@sun.com wrote:

I'm using ZFS on a system w/o ECC; it works (it's an Atom 230).


I've been using ZFS on a non-ECC machine for years now without any issues. 
Never had errors. Plus, like others said, other OSes have the same 
problems and also run quite well - and when they don't, you simply don't 
know it. With ZFS you will know.

I would say - just go for it. You will never want to go back.


Re: [zfs-discuss] non-ECC Systems and ZFS for home users (was: Please warn a home user against OpenSolaris under VirtualBox under WinXP ; ))

2010-09-23 Thread Casper . Dik

  On 23-9-2010 10:25, casper@sun.com wrote:
 I'm using ZFS on a system w/o ECC; it works (it's an Atom 230).

I'm using ZFS on a non-ECC machine for years now without any issues. 
Never had errors. Plus, like others said, other OS'ses have the same 
problems and also run quite well. If not, you don't know it. With ZFS 
you will know.
I would say - just go for it. You will never want to go back.


Indeed.  While I mirror stuff on the same system, I'm now also making
backups using a USB connected disk (eSATA would be better but the box
only has USB).

My backup consists of:



for pool in $pools
do
        # recursive snapshot of the pool and all of its datasets
        zfs snapshot -r $pool@$newsnapshot
        # send an incremental replication stream since the last backup and
        # receive it into the backup pool without mounting the datasets
        zfs send -R -I $pool@$lastsnapshot $pool@$newsnapshot |
            zfs receive -v -u -d -F portable/$pool
done

then I export and store the portable pool somewhere else.
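
A minimal sketch of that last step, assuming the backup pool really is
named "portable":

  # cleanly detach the USB pool so the disk can be unplugged and stored
  zpool export portable
  # later, on this or another machine, bring it back online
  zpool import portable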

I do run a once per two weeks scrub for all the pools, just in case.
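
Scheduled with cron, that could look roughly like this - the pool names and
times are only an example:

  # root crontab: scrub the pools at 03:00 on the 1st and 15th of each month
  0 3 1,15 * * /usr/sbin/zpool scrub rpool
  0 3 1,15 * * /usr/sbin/zpool scrub portable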

Casper



Re: [zfs-discuss] non-ECC Systems and ZFS for home users (was: Please warn a home user against OpenSolaris under VirtualBox under WinXP ; ))

2010-09-23 Thread David Dyer-Bennet

On Thu, September 23, 2010 01:33, Alexander Skwar wrote:
 Hi.

 2010/9/19 R.G. Keen k...@geofex.com

 and last-generation hardware is very, very cheap.

 Yes, of course, it is. But, actually, is that a true statement? I've read
 that it's *NOT* advisable to run ZFS on systems which do NOT have ECC
 RAM. And those cheapo last-gen hardware boxes quite often don't have
 ECC, do they?

Last-generation server hardware supports ECC, and was usually populated
with ECC.  Last-generation desktop hardware rarely supports ECC, and was
even more rarely populated with ECC.

The thing is, last-generation server hardware is, um, marvelously adequate
for most home setups (the problem *I* see with it, for many home setups,
is that it's *noisy*).  So, if you can get it cheap in a sound-level that
fits your needs, that's not at all a bad choice.

I'm running a box I bought new as a home server, but it's NOW at least
last-generation hardware (2006), and it's still running fine; in
particular the CPU load remains trivial compared to what the box supports
(not doing compression or dedup on the main data pool, though I do
compress the backup pools on external USB disks).  (It does have ECC; even
before some of the cases leading to that recommendation were explained on
that list, I just didn't see the percentage in not protecting the memory.)

-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info



Re: [zfs-discuss] non-ECC Systems and ZFS for home users (was: Please warn a home user against OpenSolaris under VirtualBox under WinXP ; ))

2010-09-23 Thread R.G. Keen
I should clarify. I was addressing just the issue of 
virtualizing, not what the complete set of things to
do to prevent data loss is. 

 2010/9/19 R.G. Keen k...@geofex.com
  and last-generation hardware is very, very cheap.
 Yes, of course, it is. But, actually, is that a true
 statement? 
Yes, it is. Last-generation hardware is, in general, 
very cheap. But there is no implication either way 
about ECC in that. And in fact, there is a buyer's
market for last-generation *servers* with ECC that 
is very cheap too. I can get a single-unit rackmount
server setup for under $100 here in Austin that includes
ECC memory. 

That may not be the best of all possible things to do
on a number of levels. But for me, the likelihood of 
making a setup or operating mistake in a virtual machine 
server setup far outweighs the hardware cost to put
another physical machine on the ground. 

I've read that it's *NOT* advisable to run ZFS on systems 
which do NOT have ECC  RAM. And those cheapo last-gen 
hardware boxes quite often don't have ECC, do they?
Most of them, the ex-desktop boxes, do not. However, 
as I noted above, removed-from-service servers are also
quite cheap. They *do* have ECC. I say this just to 
illustrate the point that a statement about last generation
hardware says nothing about ECC, either positive or negative.

In fact, the issue goes further. Processor chipsets from both
Intel and AMD used to support ECC on an ad-hoc basis. It may
have been there, but may or may not have been supported
by the motherboard. Intel's recent chipsets emphatically do 
not support ECC. AMD's do, in general. However, the motherboard
must still support the ECC reporting in hardware and BIOS for
ECC to actually work, and you have to buy the ECC memory. 
The newer the Intel motherboard, the less likely and more
expensive ECC is. Older Intel motherboards sometimes
did support ECC, as a side note. 

There's about sixteen more pages of typing to cover the issue 
even modestly correctly. The bottom line is this: for 
current-generation hardware, buy an AMD AM3 socket CPU,
ASUS motherboard, and ECC memory. DDR2 and DDR3 ECC
memory is only moderately more expensive than non-ECC.

I have this year built two Opensolaris servers from scratch.
They use the Athlon II processors, 4GB of ECC memory and
ASUS motherboards. This setup runs ECC, and supports ECC 
reporting and scrubbing. The cost of this is about $65 for
the CPU, $110 for memory, and $70-$120 for the motherboard. 
$300 more or less gets you new hardware that runs a 64-bit
OS, ECC, and zfs, and does not give you worries about the 
hardware wearing out. I also bought new, high-quality
power supplies for $40-$60 per machine because the power
supply is a single point of failure, and it wears out - a 
fact that many people ignore until the machine doesn't come
up one day.

 So, I wonder - what's the recommendation, or rather,
 experience as far as home users are concerned? Is it safe 
enough now do use ZFS on non-ECC-RAM systems (if backups 
are around)?
That's more a question about how much you trust your backups
than a question about ECC. 

ZFS is a layer of checking and recovery on disk writes. If your
memory/CPU tell it to carefully save and recover corrupted
data, it will. Memory corruption is something zfs does not 
address in any way, positive or negative. 

*The correct question is this: given how much value you put on
not losing your data to hardware or software errors, how much
time and money are you willing to spend to make sure you don't 
lose your data?*
ZFS prevents or mitigates many of the issues involved with disk
errors and bit rot. ECC prevents or mitigates many of the issues
involved with memory corruption. 

My recommendation is this: if you are playing around, fine, use
virtual machines for your data backup. If you want some amount
of real data backup security, address the issues of data corruption
on as many levels as you can. "Safe enough" is something only 
you can answer. My answer, for me and my data, is a separate
machine which does only data backup, runs both ECC and
zfs on new (and burnt-in) hardware, and runs only the data
management tasks, to keep the software interactions simple.
That backup is two levels deep on different hardware setups, 
finally flushing out to offline DVDs which are themselves protected
by ECC (look up DVDisaster) and periodically scanned for errors
and recopied. 
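
For the DVD step, the command-line side of DVDisaster looks roughly like
this - the file names are placeholders, and the exact flags should be
checked against the dvdisaster man page:

  # create a separate Reed-Solomon error-correction file for the ISO image
  dvdisaster -i backup.iso -e backup.ecc -c
  # later: verify the stored image, and repair it from the .ecc data if needed
  dvdisaster -i backup.iso -e backup.ecc -t
  dvdisaster -i backup.iso -e backup.ecc -f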

That probably seems excessive. But I've been burned with subtle
data loss before. It only takes one or two flipped bits in the wrong
places to make a really ugly scenario. Losing an entire file is in 
many ways easier to live with than a quiet error that gets 
propagated silently into your backup stream. When that happens, 
you can't trust **any** file until you have manually checked it, if
that is even possible. Want a really paranoia-inducing situation?
Think about what happens if you find a silent bit corruption in 
a file system that includes encrypted files.

Re: [zfs-discuss] non-ECC Systems and ZFS for home users (was: Please warn a home user against OpenSolaris under VirtualBox under WinXP ; ))

2010-09-23 Thread Erik Trimble
 [I'm deleting the whole thread, since this is a rehash of several 
discussions on this list previously - check out the archives, and search 
for ECC RAM]



These days, for a home server, you really have only one choice to make:

How much power do I care that this thing uses?



If you are sensitive to the power (and possibly cooling) budget that 
your home server might use, then there are a myriad of compromises 
you're going to have to make - and lack of ECC support is almost 
certainly going to be the first one.  Very, very, very few low-power 
(i.e. under 25W) CPUs support ECC.  A couple of the very-low-voltage EE 
Opterons, and some of the laptop-series Core2 chips, are about the best 
hope you have of getting a CPU which is both low-power and supports ECC.



If you don't really care about ultra-low-power, then there's absolutely 
no excuse not to buy a USED server-class machine which is 1- or 2- 
generations back.  They're dirt cheap, readily available, and support 
all those nice features you'll have problems replicating in trying to do 
a build-it-yourself current-gen box.



For instance, an IBM x3500 tower machine, with dual-core Xeon 
5100-series CPUs, on-board ILOM/BMC, redundant power supply, ECC RAM 
support, and 8 hot-swap SAS/SATA 3.5" bays (and the nice SAS/SATA 
controller supported by Solaris) is about $500, minus the drives.  The 
Sun Ultra 40 is similar.  The ultra-cheapo Dell 400SC works fine, too.



And, frankly, buying a used brand-name server machine will almost 
certainly give you a big advantage over building-it-yourself in one 
crucial (and generally overlooked) area:  the power supply.   These 
machines have significantly better power supplies (and, most have 
redundant ones) than what you can buy for a PC.  Indeed, figure you need 
to spend at least $100 on the PS alone if you build it yourself.



--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA



Re: [zfs-discuss] non-ECC Systems and ZFS for home users (was: Please warn a home user against OpenSolaris under VirtualBox under WinXP ; ))

2010-09-23 Thread Mike.


On 9/23/2010 at 12:38 PM Erik Trimble wrote:

| [snip]
| If you don't really care about ultra-low-power, then there's absolutely
| no excuse not to buy a USED server-class machine which is 1- or 2-
| generations back.  They're dirt cheap, readily available,
| [snip]



Anyone have a link or two to a place where I can buy some dirt-cheap,
readily available last gen servers?





Re: [zfs-discuss] non-ECC Systems and ZFS for home users (was: Please warn a home user against OpenSolaris under VirtualBox under WinXP ; ))

2010-09-23 Thread Peter Jeremy
On 2010-Sep-24 00:58:47 +0800, R.G. Keen k...@geofex.com wrote:
That may not be the best of all possible things to do
on a number of levels. But for me, the likelihood of 
making a setup or operating mistake in a virtual machine 
setup server is far outweighs the hardware cost to put
another physical machine on the ground. 

The downsides are generally that it'll be slower and less power-
efficient than a current generation server, and the I/O interfaces will
also be last generation (so you are more likely to be stuck with
parallel SCSI and PCI or PCI-X rather than SAS/SATA and PCIe).  And
when something fails (fan, PSU, ...), it's more likely to be customised
in some way that makes it more difficult/expensive to repair/replace.

In fact, the issue goes further. Processor chipsets from both
Intel and AMD used to support ECC on an ad-hoc basis. It may
have been there, but may or may not have been supported
by the motherboard. Intels recent chipsets emphatically do 
not support ECC.

Not quite.  When Intel moved the memory controllers from the
northbridge into the CPU, they made a conscious decision to separate
server and desktop CPUs and chipsets.  The desktop CPUs do not support
ECC whereas the server ones do - this way they can continue to charge
a premium for server-grade parts and prevent the server
manufacturers from using lower-margin desktop parts.  This means that
if you want an Intel-based solution, you need to look at a Xeon CPU.
That said, the low-end Xeons aren't outrageously expensive, and you
generally wind up with support for registered RAM; registered ECC
RAM is often easier to find than unregistered ECC RAM.

 AMDs do, in general.

AMD chose to leave ECC support in almost all their higher-end memory
controllers, rather than use it as a market differentiator.  AFAIK,
all non-mobile Athlon, Phenom and Opteron CPUs support ECC, whereas
the lower-end Sempron, Neo, Turion and Geode CPUs don't.  Note that
Athlon and Phenom CPUs normally need unbuffered RAM whereas Opteron
CPUs normally want buffered/registered RAM.

 However, the motherboard
must still support the ECC reporting in hardware and BIOS for
ECC to actually work, and you have to buy the ECC memory. 

In the case of AMD motherboards, it's really just laziness on the
manufacturer's part to not bother routing the additional tracks.

The newer the intel motherboard, the less likely and more
expensive ECC is. Older intel motherboards sometimes
did support ECC, as a side note. 

On older Intel motherboards, it was a chipset issue rather than a
CPU issue (and even if the chipset supported ECC, the motherboard
manufacturer might have decided to not bother running the ECC tracks).

There's about sixteen more pages of typing to cover the issue 
even modestly correctly. The bottom line is this: for 
current-generation hardware, buy an AMD AM3 socket CPU,
ASUS motherboard, and ECC memory. DDR2 and DDR3 ECC
memory is only moderately more expensive than non-ECC.

Asus appears to have made a conscious decision to support ECC on
all AMD motherboards whereas other vendors support it sporadically
and determining whether a particular motherboard supports ECC can
be quite difficult since it's never one of the options in their
motherboard selection tools.

And when picking the RAM, make sure it's compatible with your
motherboard - motherboards are virtually never compatible with
both unbuffered and buffered RAM.

hardware going into wearout. I also bought new, high quality
power supplies for $40-$60 per machine because the power
supply is a single point of failure, and wears out - that's a 
fact that many people ignore until the machine doesn't come
up one day.

"Doesn't come up one day" is at least a clear failure.  With a
cheap (or under-dimensioned) PSU, things are more likely to go
out of tolerance under heavy load, so you wind up with unrepeatable
strange glitches.

Think about what happens if you find a silent bit corruption in 
a file system that includes encrypted files. 

Or compressed files.

-- 
Peter Jeremy




Re: [zfs-discuss] non-ECC Systems and ZFS for home users (was: Please warn a home user against OpenSolaris under VirtualBox under WinXP ; ))

2010-09-23 Thread R.G. Keen
 On 2010-Sep-24 00:58:47 +0800, R.G. Keen
 k...@geofex.com wrote:
  But for me, the likelihood of
 making a setup or operating mistake in a virtual machine 
 setup server is far outweighs the hardware cost to put
 another physical machine on the ground. 
 
 The downsides are generally that it'll be slower and less power-
 efficient that a current generation server 
My comment was about a physical machine versus virtual machine, 
and my likelihood of futzing up the setup, not new machine versus 
old machine. There are many upsides and downsides on the new 
versus old questions too. 

and the I/O interfaces will
 be also be last generation (so you are more likely to be stuck with
 parallel SCSI and PCI or PCIx rather than SAS/SATA and PCIe).  And
 when something fails (fan, PSU, ...), it's more likely to be customised
 in some way that makes it more difficult/expensive to repair/replace.
Presuming what you did was buy a last generation server after you 
decided to go for a physical machine. That's not what I did, as I mentioned
later in the posting. Server hardware in general is more expensive than
desktop, and even last generation server hardware will cost more
to repair than desktop. To a hardware manufacturer, "server" is 
synonymous with "these guys can be convinced to pay more if we make
something a little different." And there is a cottage industry of people 
who sell repair parts for older servers at exorbitant prices because there
is some non-techie businessman who will pay.

 Not quite.  When Intel moved the memory controllers from the 
 northbridge into the CPU, they made a conscious  decision to separate
 server and desktop CPUs and chipsets.  The desktop CPUs do not support
 ECC whereas the server ones do 
So, from the lower-cost new hardware view, newer Intel chipsets emphatically
do not support ECC. The (expensive) server-class hardware/chipsets, etc., do. 
A lower-end home class server is unlikely to be built from these much more
expensive - by plan/design - parts. 

 That said, the low-end Xeons aren't outrageously expensive 
They aren't. I considered using  Xeons in my servers. It was about another
$200 in the end. I bought disks with the $200. 

and you
 generally wind up with support for registered RAM and registered ECC
 RAM is often easier to find than unregistered ECC RAM.
I had no difficulty at all finding unregistered ECC RAM. 
Newegg has steady stock of DDR2 and DDR3 unregistered and registered
ECC. For instance: 
2GB 240-Pin DDR3 ECC Registered KVR1333D3D8R9S/2GHT is $59.99.
2GB 240-Pin DDR3 1333 ECC Unbuffered Server Memory Model KVR1333D3E9S/2G
is $43.99 plus $2.00 shipping. 
2GB 240-pin DDR3 NON-ECC is available for $35 per stick and up. The Kingston
brand I used in the ECC examples is $40.
These are representative, and there are multiple choices, in stock, for all 
three categories. Intel certified costs more if you get registered.

  AMDs do, in general.
 AMD chose to leave ECC support in almost all their higher-end memory
 controllers, rather than use it as a market differentiator. AFAIK,
 all non-mobile Athlon, Phenom and Opteron CPUs support ECC, whereas
 the lower-end Sempron, Neo, Turion and Geode CPUs don't.  
I guess I should have looked at the lower end CPUs - and chipsets before
I made my "in general" claim. I didn't, and every chipset I saw had ECC
support. My lowest end CPU was the Athlon II X2 240e, and every chipset
for that and above that I found supports ECC. 

 In the case of AMD motherboards, it's really just laziness on the
 manufacturer's part to not bother routing the additional tracks.
And doing the support in the BIOS. I did research these issues a fair amount.
For the same chipset, ASUS MBs seem to have BIOS settings for ECC and
Gigabyte boards, for instance, do not. I determined this by downloading the 
user manuals for the mobos and reading them. I didn't find a brand other
than ASUS that had clear support for ECC in the BIOS. 

But my search was not exhaustive. 

 On older Intel motherboards, it was a chipset issue rather than a
 CPU issue (and even if the chipset supported ECC, the motherboard
 manufacturer might have decided to not bother running
 the ECC tracks).
I think that's generically true.

 Asus appears to have made a conscious decision to support ECC on
 all AMD motherboards whereas other vendors support it sporadically
 and determining whether a particular motherboard supports ECC can
 be quite difficult since it's never one of the options in their
 motherboard selection tools.
Yep. I resorted to selecting mobos that I'd otherwise want, then 
downloading the user manuals and reading the BIOS configuration 
pages. If a manual didn't specifically say how to configure ECC, then it 
MIGHT be supported somehow, but also might not. Gigabyte boards,
for instance, were reputed to support ECC in an undocumented way, 
but I figured if they didn't want to say how to configure it, they sure
wouldn't have worried about testing whether it worked. 


 And when picking the RAM,