Re: [zfs-discuss] non-ECC Systems and ZFS for home users (was: Please warn a home user against OpenSolaris under VirtualBox under WinXP ; ))

2010-09-26 Thread Erik Trimble

 On 9/26/2010 8:06 AM, devsk wrote:


On 9/23/2010 at 12:38 PM Erik Trimble wrote:

| [snip]
|If you don't really care about ultra-low-power, then there's absolutely
|no excuse not to buy a USED server-class machine which is 1- or 2-
|generations back.  They're dirt cheap, readily available,
| [snip]
 =


Anyone have a link or two to a place where I can buy some dirt-cheap,
readily available last gen servers?

I would love some links as well. I have heard a lot about "dirt cheap last gen 
servers" but nobody ever provides a link.


http://www.serversupply.com/products/part_search/pid_lookup.asp?pid=105676
http://www.canvassystems.com/products/c-16-ibm-servers.aspx?_vsrefdom=PPCIBM&gclid=CLTF7NHNpaQCFRpbiAodoSxK5Q&;
http://www.glcomp.com/Products/IBM-SystemX-x3500-Server.aspx



Lots, and Lots of stuff from eBay - use them to see which companies are 
in the recycling business, then deal with them directly, rather than 
through eBay.


http://computers.shop.ebay.com/i.html?_nkw=ibm+x3500&_sacat=58058&_sop=12&_dmd=1&_odkw=ibm+x3500&_osacat=0&_trksid=p3286.c0.m270.l1313


Companies specializing in used (often off-lease) business computers:

http://compucycle.net/
http://www.andovercg.com/
http://www.recurrent.com/
http://www.lapkosoft.com/
http://www.weirdstuff.com/
http://synergy.ships2day.com/
http://www.useddell.net/
http://www.vibrant.com/


There's hordes more.  I've dealt with all of the above, and have no 
problems recommending them.




The thing here is that you need to educate yourself *before* going out 
and looking.  You need to spend a non-trivial amount of time reading the 
Support pages for Sun, IBM, HP, and Dell, and be able to either *ask* 
for specific part/model numbers, or be able to interpret what is 
advertised.  The key thing is that many places will advertise/sell 
you some server, and all the info they have is the model number off the 
front.  If you can understand what that means in terms of hardware, then 
you can get a bang-up deal.


I've bought computers from recycling places for 25% or less of the 
value I could get by *immediately* turning around and selling the system 
somewhere else. All because I could understand the part numbers well enough 
to know what I was getting, and the original seller couldn't (or, in 
most cases, didn't have the time to bother).  In particular, the best 
way to get a deal is usually to look for a machine which has 
(a) very little info about it in the advertisement, other than model 
number, and (b) seems to be noticeably higher in price than what you've 
seen a "stripped" version of that model go for.   Some of those will be 
stripped systems whose sellers don't understand the going rate, but 
many are actually better-equipped versions where the additional money is 
more than made up for by the significantly better system.


Here's an example using the IBM x3500:

Model 7977-72y seems to be the most commonly available one right now - 
and the config it's generally sold in is (2) x dual-core E5140 2.33GHz 
Xeons, plus 2GB of RAM. No disk, one power supply.   Tends to go for 
$400-500.


I just got a model 7977-F2x, which was advertised as a 2.0GHz model, 
with nothing else in the ad except the word "loaded".  I paid $750 for 
it, and got a system with (2) quad-core E5335 Xeons, 8GB of RAM, and 
8x73GB SAS drives, plus the redundant power supply.  The extra $300 more 
than covers the cost I would pay for the additional RAM, power supply, 
and drives, and I get twice the CPU core-count for "free".



Be an educated buyer, and the recycled marketplace can be your oyster.  
I've actually made enough doing "arbitrage" to cover my costs of buying 
a nice SOHO machine each year.  That is, I can buy and sell 10-12 
systems per year, and make $2000-3000 in profit for not much effort. I'd 
estimate you can sustain a 20-30% profit margin by being a smart 
buyer/seller.  At least on a small scale.



--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] non-ECC Systems and ZFS for home users (was: Please warn a home user against OpenSolaris under VirtualBox under WinXP ; ))

2010-09-26 Thread devsk
> 
> 
> On 9/23/2010 at 12:38 PM Erik Trimble wrote:
> 
> | [snip]
> |If you don't really care about ultra-low-power, then there's absolutely 
> |no excuse not to buy a USED server-class machine which is 1- or 2- 
> |generations back.  They're dirt cheap, readily available, 
> | [snip]
>  =
> 
> 
> Anyone have a link or two to a place where I can buy some dirt-cheap,
> readily available last gen servers?

I would love some links as well. I have heard a lot about "dirt cheap last gen 
servers" but nobody ever provides a link.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] non-ECC Systems and ZFS for home users

2010-09-26 Thread Alex Blewitt
On 25 Sep 2010, at 19:56, Giovanni Tirloni  wrote:

> We have correctable memory errors on ECC systems on a monthly basis. It's not 
> if they'll happen but how often.

"DRAM Errors in the wild: a large-scale field study" is worth a read if you 
have time. 

http://www.cs.toronto.edu/~bianca/papers/sigmetrics09.pdf

Alex
(@alblue on Twitter)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] non-ECC Systems and ZFS for home users

2010-09-25 Thread R.G. Keen
> Erik Trimble sez:
> Honestly, I've said it before, and I'll say it (yet) again:  unless you 
> have very stringent power requirement (or some other unusual 
> requirement, like very, very low noise),  used (or even new-in-box, 
> previous generation excess inventory) OEM stuff is far superior to any 
> build-it-yourself rig you can come up with. 
It's horses for courses, I guess. I've had to live with server fan noise and
power requirements and it's not pleasant. I very much like the reliability
characteristics of older servers, but they eat a lot of power and are noisy
as you say.

On the other hand, I did the calculation of the difference in cost of 
electricity at my local rates (central Texas), and it's easy to 
save a few hundred dollars over two years of 24/7 operation with low-power 
systems. I am NOT necessarily saying that my system is something to emulate, 
nor that my choices are right for everyone, particularly hardware-building 
amateurs. My past includes a lot of hardware design and build, so putting 
together a Frankenserver is not daunting to me. I also have a 
history which includes making educated guesses about failure rates and the 
cost of losing data. So I made choices based on my experience and skills.

For **me**, putting together a server out of commercial parts is a far better 
bet than running a server in a virtual machine on desktop parts of 
any vintage, which was the original question - whether a virtual server on top
of Windows running on some hardware was advisable for a beginner. For me, 
it's not. The relative merits will vary from user to user according to their 
skills and experience level. I was willing to learn Solaris to get zfs. Given what's 
happened with Oracle since I started that, that may have been a bad bet, but
my server and data do now live and breathe for better or worse. 

But I have no fears of breathing life into new hardware and copying 
the old data over. Nor is it a trial to me to fire up a last-generation server, 
install a new OS and copy the data over. To me, that's all a cost/benefit 
calculation.

>So much so, in fact, that we   should really consider the reference 
> recommendation for a ZFS fileserver 
> to be certain configs of brand-name hardware, and NOT
> try to recommend other things to folks.
I personally would have loved to have that when I started the zfs/OpenSolaris 
trek a year ago. It was not available, and I paid my dues learning the OS and
zfs. I'm not sure, given where Oracle is taking Solaris, that there is any need to 
recommend any particular hardware to folks in general. I think the number of 
people following the path I took, using OpenSolaris to get zfs, and buying/building 
a home machine to do it, is going to nosedive dramatically, by Oracle's design.

To me the data stability issues dictated zfs, and OpenSolaris was where I
got that. I put up with the labyrinthine mess of figuring out what would and 
would not run OpenSolaris to get zfs, and it worked OK. To me, data integrity 
was what I was after.

I had sub-issues. It's silly (in my estimation) to worry about data integrity on 
disks and not in memory. That made ECC an issue. Hence my burrowing through 
the most cost-efficient way to get ECC. Oh, yeah, cost. I wanted it to be as 
cheap as possible, given the other constraints. Then hardware reliability. I 
actually bought an off-duty server locally because of the cost advantages and 
the perceived hardware reliability. I can't get OpenSolaris to work on it - yet, at 
least. I'm sure that's down to my being an OpenSolaris neophyte. But it sure is noisy.

What **my** compromise was: 
- new hardware to stay inside the shallow end of the failure-rate bathtub
- burn-in to get past the infant-mortality issues
- ECC as cheaply as possible, given that I actually wanted it to work
- modern SATA controllers for the storage, which dragged in PCIe and 
OpenSolaris-compatible controllers
- as low a power draw as possible, as that can save about $100 a year *for me*
- as low a noise factor as possible, because I've spent too much of my life 
listening to machines desperately trying to stay cool.

What I could trade for this was not caring whether the hardware was particularly 
fast; it was a layer of data backup, not a mission-critical server. And it had to 
run zfs, which is why I started this mess. Also, I don't have huge data storage 
needs. I keep the live backup data under 4TB. Yep, that's tiny by 
comparison. I have small problems. 8-)

Result: a new-components server that runs zfs, works on my house network, uses
under 100W as measured at the wall socket, and stores 4TB. I got what I set out
to get, so I'm happy with it. 

This is not the system for everybody, but it works for me. Writing down what 
you're trying to do is a great tool. People used to get really mad at me for 
saying, 
in Very Serious Business Meetings "If we were completely successful, what would 
that look like?"  It almost always 

Re: [zfs-discuss] non-ECC Systems and ZFS for home users

2010-09-25 Thread Ian Collins

On 09/26/10 07:25 AM, Erik Trimble wrote:

 On 9/25/2010 1:57 AM, Ian Collins wrote:

On 09/25/10 02:54 AM, Erik Trimble wrote:


Honestly, I've said it before, and I'll say it (yet) again:  unless 
you have very stringent power requirement (or some other unusual 
requirement, like very, very low noise),  used (or even new-in-box, 
previous generation excess inventory) OEM stuff is far superior to 
any build-it-yourself rig you can come up with. So much so, in fact, 
that we should really consider the reference recommendation for a 
ZFS fileserver to be certain configs of brand-name hardware, and NOT 
try to recommend other things to folks.



Unless you live somewhere with a very small used server market that is!



But, I hear there's this newfangled thingy, called some darned fool 
thing like "the interanets" or some such, that lets you, you know, 
*order* things from far away places using one of those funny PeeCee 
dood-ads, and they like, *deliver* to your door.


Have you ever had to pay international shipping to the other side of the 
world on a second hand server?!  Not all sellers will ship internationally.


I do bring in a lot of system components, but chassis aren't worth the cost.

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] non-ECC Systems and ZFS for home users

2010-09-25 Thread Giovanni Tirloni
On Thu, Sep 23, 2010 at 1:08 PM, Dick Hoogendijk  wrote:

>  And about what SUN systems are you thinking for 'home use' ?
> The likeliness of memory failures might be much higher than becoming a
> millionair, but in the years past I have never had one. And my home sytems
> are rather cheap. Mind you, not the cheapest, but rather cheap. I do buy
> good memory though. So, to me, with a good backup I feel rather safe using
> ZFS. I also had it running for quite some time on a 32bits machine and that
> also worked out fine.
>

We have correctable memory errors on ECC systems on a monthly basis. It's
not if they'll happen but how often.
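
For Solaris-based systems, those correctable errors end up in the FMA logs;
assuming the standard fault-management tools are available, a quick way to see
whether any have been recorded is something like:

        fmdump -e    # list error reports, including correctable memory errors
        fmdump       # list diagnosed faults, e.g. a DIMM flagged for replacement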

-- 
Giovanni Tirloni
gtirl...@sysdroid.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] non-ECC Systems and ZFS for home users

2010-09-25 Thread Erik Trimble

 On 9/25/2010 1:57 AM, Ian Collins wrote:

On 09/25/10 02:54 AM, Erik Trimble wrote:


Honestly, I've said it before, and I'll say it (yet) again:  unless 
you have very stringent power requirement (or some other unusual 
requirement, like very, very low noise),  used (or even new-in-box, 
previous generation excess inventory) OEM stuff is far superior to 
any build-it-yourself rig you can come up with. So much so, in fact, 
that we should really consider the reference recommendation for a ZFS 
fileserver to be certain configs of brand-name hardware, and NOT try 
to recommend other things to folks.



Unless you live somewhere with a very small used server market that is!



But, I hear there's this newfangled thingy, called some darned fool 
thing like "the interanets" or some such, that lets you, you know, 
*order* things from far away places using one of those funny PeeCee 
dood-ads, and they like, *deliver* to your door.


And here I was, just getting used to all those nice things from the 
Sears catalog.  Gonna have to learn me a whole new thing again.


Damn kids.





Of course, living in certain countries it's hard to get the hardware 
through customs...


--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] non-ECC Systems and ZFS for home users

2010-09-25 Thread Ian Collins

On 09/25/10 02:54 AM, Erik Trimble wrote:


Honestly, I've said it before, and I'll say it (yet) again:  unless 
you have very stringent power requirement (or some other unusual 
requirement, like very, very low noise),  used (or even new-in-box, 
previous generation excess inventory) OEM stuff is far superior to any 
build-it-yourself rig you can come up with. So much so, in fact, that 
we should really consider the reference recommendation for a ZFS 
fileserver to be certain configs of brand-name hardware, and NOT try 
to recommend other things to folks.



Unless you live somewhere with a very small used server market that is!

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] non-ECC Systems and ZFS for home users

2010-09-24 Thread Erik Trimble

 On 9/24/2010 6:27 AM, Frank Middleton wrote:

On 09/23/10 19:08, Peter Jeremy wrote:


The downsides are generally that it'll be slower and less power-
efficient that a current generation server and the I/O interfaces will
be also be last generation (so you are more likely to be stuck with
parallel SCSI and PCI or PCIx rather than SAS/SATA and PCIe).  And
when something fails (fan, PSU, ...), it's more likely to be customised
in some way that makes it more difficult/expensive to repair/replace.


Sometimes the bargains on E-Bay are such that you can afford to get
2 or even a 3rd machine for spares, and a PCI-X SAS card has more
than adequate performance for SOHO use. But, I agree, repair is
probably impossible unless you can simply swap in a spare part from
another box. However server class machines are pretty tough. My used
Sun hardware has yet to drop a beat and they've been running 24*7
for years - well, I cycle the spares since they were never needed for
parts, so it's less than that. But they are noisy...

Surely the issue about repairs extends to current generation hardware.
It gets obsolete so quickly that finding certain parts (especially mobos)
may be next to impossible. So what's the difference other than lots of 
$$$?


Cheers -- Frank


Most certainly, but remember that even 2-generations-old hardware at 
this point means either a 2200-series Opteron or a 5100-series Xeon 
system, which come with a minimum of PCI-E 1.0 slots, DDR2 RAM, and 
usually SAS controllers.   I've a dual-socket Barcelona (Opteron 2354) 
system here under my desk, and it's so overkill for a SOHO server it's 
not even funny.


Also, if you get OEM (name-brand) equipment from a used seller, that 
means you get the ability to search eBay (or your favorite local 
recycler) for the FRU part that goes bad.  There's a *ton* of spare 
parts floating around the used market for anything less than 5 years 
old, and even the 5-8 year-old parts are commonplace.   HP, Sun, IBM, 
and Dell all have the FRU/Option part label *on the part itself*, so if 
something dies, it's idiot simple to figure out what to get to replace 
it.  And, the used parts prices are, well, *nice*.



Honestly, I've said it before, and I'll say it (yet) again:  unless you 
have very stringent power requirement (or some other unusual 
requirement, like very, very low noise),  used (or even new-in-box, 
previous generation excess inventory) OEM stuff is far superior to any 
build-it-yourself rig you can come up with. So much so, in fact, that we 
should really consider the reference recommendation for a ZFS fileserver 
to be certain configs of brand-name hardware, and NOT try to recommend 
other things to folks.



--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] non-ECC Systems and ZFS for home users

2010-09-24 Thread Frank Middleton

On 09/23/10 19:08, Peter Jeremy wrote:


The downsides are generally that it'll be slower and less power-
efficient that a current generation server and the I/O interfaces will
be also be last generation (so you are more likely to be stuck with
parallel SCSI and PCI or PCIx rather than SAS/SATA and PCIe).  And
when something fails (fan, PSU, ...), it's more likely to be customised
in some way that makes it more difficult/expensive to repair/replace.


Sometimes the bargains on eBay are such that you can afford to get
a 2nd or even a 3rd machine for spares, and a PCI-X SAS card has more
than adequate performance for SOHO use. But, I agree, repair is
probably impossible unless you can simply swap in a spare part from
another box. However, server-class machines are pretty tough. My used
Sun hardware has yet to miss a beat and they've been running 24x7
for years - well, I cycle the spares since they were never needed for
parts, so it's less than that. But they are noisy...

Surely the issue about repairs extends to current generation hardware.
It gets obsolete so quickly that finding certain parts (especially mobos)
may be next to impossible. So what's the difference other than lots of $$$?

Cheers -- Frank

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] non-ECC Systems and ZFS for home users (was: Please warn a home user against OpenSolaris under VirtualBox under WinXP ; ))

2010-09-23 Thread R.G. Keen
> On 2010-Sep-24 00:58:47 +0800, "R.G. Keen"
>  wrote:
> > But for me, the likelihood of
> >making a setup or operating mistake in a virtual machine 
> >setup server is far outweighs the hardware cost to put
> >another physical machine on the ground. 
> 
> The downsides are generally that it'll be slower and less power-
> efficient that a current generation server 
My comment was about a physical machine versus virtual machine, 
and my likelihood of futzing up the setup, not new machine versus 
old machine. There are many upsides and downsides on the new 
versus old questions too. 

>and the I/O interfaces will
> be also be last generation (so you are more likely to be stuck with
> parallel SCSI and PCI or PCIx rather than SAS/SATA and PCIe).  And
> when something fails (fan, PSU, ...), it's more likely to be customised
> in some way that makes it more difficult/expensive to repair/replace.
Presuming what you did was buy a last generation server after you 
decided to go for a physical machine. That's not what I did, as I mentioned
later in the posting. Server hardware in general is more expensive than
desktop, and even last generation server hardware will cost more
to repair than desktop. To a hardware manufacturer, "server" is 
synonymous with "these guys can be convinced to pay more if we make
something a little different". And there is a cottage industry of people 
who sell repair parts for older servers at exorbitant prices because there
is some non-techie businessman who will pay.

> Not quite.  When Intel moved the memory controllers from the 
> northbridge into the CPU, they made a conscious  decision to separate
> server and desktop CPUs and chipsets.  The desktop CPUs do not support
> ECC whereas the server ones do 
So, from the lower-cost new hardware view, newer Intel chipsets emphatically
do not support ECC. The (expensive) server-class hardware/chipsets, etc., do. 
A lower-end home class server is unlikely to be built from these much more
expensive - by plan/design - parts. 

> That said, the low-end Xeons aren't outrageously expensive 
They aren't. I considered using  Xeons in my servers. It was about another
$200 in the end. I bought disks with the $200. 

>and you
> generally wind up with support for registered RAM and registered ECC
> RAM is often easier to find than unregistered ECC RAM.
I had no difficulty at all finding unregistered ECC RAM. 
Newegg has steady stock of DDR2 and DDR3 unregistered and registered
ECC. For instance: 
2GB 240-Pin DDR3 ECC Registered KVR1333D3D8R9S/2GHT is $59.99.
2GB 240-Pin DDR3 1333 ECC Unbuffered Server Memory Model KVR1333D3E9S/2G
is $43.99 plus $2.00 shipping. 
2GB 240-pin DDR3 NON-ECC is available for $35 per stick and up. The Kingston
brand I used in the ECC examples is $40.
These are representative, and there are multiple choices, in stock, for all three
categories. "Intel certified" costs more if you get registered.

> > AMDs do, in general.
> AMD chose to leave ECC support in almost all their higher-end memory
> controllers, rather than use it as a market differentiator. AFAIK,
> all non-mobile Athlon, Phenom and Opteron CPUs support ECC, whereas
> the lower-end Sempron, Neo, Turion and Geode CPUs don't.  
I guess I should have looked at the lower end CPUs - and chipsets before
I took my "in general" count. I didn't, and every chipset I saw had ECC
support. My lowest end CPU was the Athlon II X2 240e, and every chipset
for that and above that I found supports ECC. 

> In the case of AMD motherboards, it's really just laziness on the
> manufacturer's part to not bother routing the additional tracks.
And doing the support in the BIOS. I did research these issues a fair amount.
For the same chipset, ASUS MBs seem to have BIOS settings for ECC and
Gigabyte boards, for instance, do not. I determined this by downloading the 
user manuals for the mobos and reading them. I didn't find a brand other
than ASUS that had clear support for ECC in the BIOS. 

But my search was not exhaustive. 

> On older Intel motherboards, it was a chipset issue rather than a
> CPU issue (and even if the chipset supported ECC, the motherboard
> manufacturer might have decided to not bother running
> the ECC tracks).
I think that's generically true.

> Asus appears to have made a conscious decision to support ECC on
> all AMD motherboards whereas other vendors support it sporadically
> and determining whether a particular motherboard supports ECC can
> be quite difficult since it's never one of the options in their
> motherboard selection tools.
Yep. I resorted to selecting mobos that I'd otherwise want, then 
downloaded the user manuals and read the BIOS configuration 
pages. If it didn't specifically say how to configure ECC, then it 
MIGHT be supported somehow, but also might not. Gigabyte boards,
for instance, were reputed to support ECC in an undocumented way, 
but I figured if they didn't want to say how to configure it, they sure
wouldn't have worried about testing whether it worked.

Re: [zfs-discuss] non-ECC Systems and ZFS for home users (was: Please warn a home user against OpenSolaris under VirtualBox under WinXP ; ))

2010-09-23 Thread Peter Jeremy
On 2010-Sep-24 00:58:47 +0800, "R.G. Keen"  wrote:
>That may not be the best of all possible things to do
>on a number of levels. But for me, the likelihood of 
>making a setup or operating mistake in a virtual machine 
>setup server is far outweighs the hardware cost to put
>another physical machine on the ground. 

The downsides are generally that it'll be slower and less power-
efficient than a current generation server, and the I/O interfaces will
also be last generation (so you are more likely to be stuck with
parallel SCSI and PCI or PCI-X rather than SAS/SATA and PCIe).  And
when something fails (fan, PSU, ...), it's more likely to be customised
in some way that makes it more difficult/expensive to repair/replace.

>In fact, the issue goes further. Processor chipsets from both
>Intel and AMD used to support ECC on an ad-hoc basis. It may
>have been there, but may or may not have been supported
>by the motherboard. Intels recent chipsets emphatically do 
>not support ECC.

Not quite.  When Intel moved the memory controllers from the
northbridge into the CPU, they made a conscious decision to separate
server and desktop CPUs and chipsets.  The desktop CPUs do not support
ECC whereas the server ones do - this way they can continue to charge
a premium for "server-grade" parts and prevent the server
manufacturers from using lower-margin desktop parts.  This means that
if you want an Intel-based solution, you need to look at a Xeon CPU.
That said, the low-end Xeons aren't outrageously expensive, and you
generally wind up with support for registered RAM; registered ECC
RAM is often easier to find than unregistered ECC RAM.

> AMDs do, in general.

AMD chose to leave ECC support in almost all their higher-end memory
controllers, rather than use it as a market differentiator.  AFAIK,
all non-mobile Athlon, Phenom and Opteron CPUs support ECC, whereas
the lower-end Sempron, Neo, Turion and Geode CPUs don't.  Note that
Athlon and Phenom CPUs normally need unbuffered RAM whereas Opteron
CPUs normally want buffered/registered RAM.

> However, the motherboard
>must still support the ECC reporting in hardware and BIOS for
>ECC to actually work, and you have to buy the ECC memory. 

In the case of AMD motherboards, it's really just laziness on the
manufacturer's part to not bother routing the additional tracks.

>The newer the intel motherboard, the less likely and more
>expensive ECC is. Older intel motherboards sometimes
>did support ECC, as a side note. 

On older Intel motherboards, it was a chipset issue rather than a
CPU issue (and even if the chipset supported ECC, the motherboard
manufacturer might have decided to not bother running the ECC tracks).

>There's about sixteen more pages of typing to cover the issue 
>even modestly correctly. The bottom line is this: for 
>current-generation hardware, buy an AMD AM3 socket CPU,
>ASUS motherboard, and ECC memory. DDR2 and DDR3 ECC
>memory is only moderately more expensive than non-ECC.

Asus appears to have made a conscious decision to support ECC on
all AMD motherboards whereas other vendors support it sporadically
and determining whether a particular motherboard supports ECC can
be quite difficult since it's never one of the options in their
motherboard selection tools.

And when picking the RAM, make sure it's compatible with your
motherboard - motherboards are virtually never compatible with
both unbuffered and buffered RAM.

>hardware going into wearout. I also bought new, high quality
>power supplies for $40-$60 per machine because the power
>supply is a single point of failure, and wears out - that's a 
>fact that many people ignore until the machine doesn't come
>up one day.

"Doesn't come up one day" is at least a clear failure.  With a
cheap (or under-dimensioned) PSU, things are more likely to go
out of tolerance under heavy load so you wind up with unrepeatable
strange glitches.

>Think about what happens if you find a silent bit corruption in 
>a file system that includes encrypted files. 

Or compressed files.

-- 
Peter Jeremy


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] non-ECC Systems and ZFS for home users (was: Please warn a home user against OpenSolaris under VirtualBox under WinXP ; ))

2010-09-23 Thread Mike.


On 9/23/2010 at 12:38 PM Erik Trimble wrote:

| [snip]
|If you don't really care about ultra-low-power, then there's absolutely 
|no excuse not to buy a USED server-class machine which is 1- or 2- 
|generations back.  They're dirt cheap, readily available, 
| [snip]
 =



Anyone have a link or two to a place where I can buy some dirt-cheap,
readily available last gen servers?



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] non-ECC Systems and ZFS for home users (was: Please warn a home user against OpenSolaris under VirtualBox under WinXP ; ))

2010-09-23 Thread Erik Trimble
 [I'm deleting the whole thread, since this is a rehash of several 
discussions on this list previously - check out the archives, and search 
for "ECC RAM"]



These days, for a "home" server, you really have only one choice to make:

"How much power do I care that this thing uses?"



If you are sensitive to the power (and possibly cooling) budget that 
your home server might use, then there are a myriad of compromises 
you're going to have to make - and lack of ECC support is almost 
certainly going to be the first one.  Very, very, very few low-power 
(i.e. under 25W) CPUs support ECC.  A couple of the very-low-voltage EE 
Opterons, and some of the laptop-series Core2 chips, are about the best 
hope you have of getting a CPU which is both low-power and supports ECC.



If you don't really care about ultra-low-power, then there's absolutely 
no excuse not to buy a USED server-class machine which is 1- or 2- 
generations back.  They're dirt cheap, readily available, and support 
all those nice features you'll have problems replicating in trying to do 
a build-it-yourself current-gen box.



For instance, an IBM x3500 tower machine, with dual-core Xeon 
5100-series CPUs, on-board ILOM/BMC, redundant power supply, ECC RAM 
support, and 8 hot-swap SAS/SATA 3.5" bays (and the nice SAS/SATA 
controller supported by Solaris) is about $500, minus the drives.  The 
Sun Ultra 40 is similar.  The ultra-cheapo Dell 400SC works fine, too.



And, frankly, buying a used brand-name server machine will almost 
certainly give you a big advantage over building-it-yourself in one 
crucial (and generally overlooked) area:  the power supply.   These 
machines have significantly better power supplies (and, most have 
redundant ones) than what you can buy for a PC.  Indeed, figure you need 
to spend at least $100 on the PS alone if you build it yourself.



--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] non-ECC Systems and ZFS for home users (was: Please warn a home user against OpenSolaris under VirtualBox under WinXP ; ))

2010-09-23 Thread R.G. Keen
I should clarify. I was addressing just the issue of 
virtualizing, not what the complete set of things to
do to prevent data loss is. 

> 2010/9/19 R.G. Keen 
> > and last-generation hardware is very, very cheap.
> Yes, of course, it is. But, actually, is that a true
> statement? 
Yes, it is. Last-generation hardware is, in general, 
very cheap. But there is no implication either way 
about ECC in that. And in fact, there is a buyer's
market for last-generation *servers* with ECC that 
is very cheap too. I can get a single-unit rackmount
server setup for under $100 here in Austin that includes
ECC memory. 

That may not be the best of all possible things to do
on a number of levels. But for me, the likelihood of 
making a setup or operating mistake in a virtual machine 
server setup far outweighs the hardware cost to put
another physical machine on the ground. 

>I've read that it's *NOT* advisable to run ZFS on systems 
>which do NOT have ECC  RAM. And those cheapo last-gen 
>hardware boxes quite often don't have ECC, do they?
Most of them, the ex-desktop boxes, do not. However, 
as I noted above, removed-from-service servers are also
quite cheap. They *do* have ECC. I say this just to 
illustrate the point that a statement about last generation
hardware says nothing about ECC, either positive or negative.

In fact, the issue goes further. Processor chipsets from both
Intel and AMD used to support ECC on an ad-hoc basis. It may
have been there, but may or may not have been supported
by the motherboard. Intel's recent chipsets emphatically do 
not support ECC. AMD's do, in general. However, the motherboard
must still support the ECC reporting in hardware and BIOS for
ECC to actually work, and you have to buy the ECC memory. 
The newer the Intel motherboard, the less likely and more
expensive ECC is. Older Intel motherboards sometimes
did support ECC, as a side note. 

There's about sixteen more pages of typing to cover the issue 
even modestly correctly. The bottom line is this: for 
current-generation hardware, buy an AMD AM3 socket CPU,
ASUS motherboard, and ECC memory. DDR2 and DDR3 ECC
memory is only moderately more expensive than non-ECC.

I have this year built two OpenSolaris servers from scratch.
They use the Athlon II processors, 4GB of ECC memory and
ASUS motherboards. This setup runs ECC, and supports ECC 
reporting and scrubbing. The cost of this is about $65 for
the CPU, $110 for memory, and $70-$120 for the motherboard. 
$300 more or less gets you new hardware that runs a 64-bit
OS, ECC, and zfs, and does not give you worries about the 
hardware wearing out. I also bought new, high quality
power supplies for $40-$60 per machine, because the power
supply is a single point of failure and wears out - that's a 
fact that many people ignore until the machine doesn't come
up one day.

> So, I wonder - what's the recommendation, or rather,
> experience as far as home users are concerned? Is it "safe 
>enough" now do use ZFS on non-ECC-RAM systems (if backups 
>are around)?
That's more a question about how much you trust your backups
than a question about ECC. 

ZFS is a layer of checking and recovery on disk writes. If your
memory/CPU tell it to carefully save and recover corrupted
data, it will. Memory corruption is something zfs does not 
address in any way, positive or negative. 

*The correct question is this: given how much value you put on
not losing your data to hardware or software errors, how much
time and money are you willing to spend to make sure you don't 
lose your data?*
ZFS prevents or mitigates many of the issues involved with disk
errors and bit rot. ECC prevents or mitigates many of the issues
involved with memory corruption. 

My recommendation is this: if you are playing around, fine, use
virtual machines for your data backup. If you want some amount
of real data backup security, address the issues of data corruption
on as many levels as you can. "Safe enough" is something only 
you can answer. My answer, for me and my data, is a separate
machine which does only data backup, which runs both ECC and
zfs, on new (and burnt-in) hardware, which runs only the data
management tasks to simplify the software interactions being run,
and that being two levels deep on different hardware setups, 
finally flushing out to offline DVDs which are themselves protected
by ECC (look up DVDisaster) and periodically scanned for errors
and recopied. 

That probably seems excessive. But I've been burned with subtle
data loss before. It only takes one or two flipped bits in the wrong
places to make a really ugly scenario. Losing an entire file is in 
many ways easier to live with than a quiet error that gets 
propagated silently into your backup stream. When that happens, 
you can't trust **any** file until you have manually checked it, if
that is even possible. Want a really paranoia inducing situation?
Think about what happens if you find a silent bit corruption in 
a file system that includes encrypted files.

Re: [zfs-discuss] non-ECC Systems and ZFS for home users (was: Please warn a home user against OpenSolaris under VirtualBox under WinXP ; ))

2010-09-23 Thread David Dyer-Bennet

On Thu, September 23, 2010 01:33, Alexander Skwar wrote:
> Hi.
>
> 2010/9/19 R.G. Keen 
>
>> and last-generation hardware is very, very cheap.
>
> Yes, of course, it is. But, actually, is that a true statement? I've read
> that it's *NOT* advisable to run ZFS on systems which do NOT have ECC
> RAM. And those cheapo last-gen hardware boxes quite often don't have
> ECC, do they?

Last-generation server hardware supports ECC, and was usually populated
with ECC.  Last-generation desktop hardware rarely supports ECC, and was
even more rarely populated with ECC.

The thing is, last-generation server hardware is, um, marvelously adequate
for most home setups (the problem *I* see with it, for many home setups,
is that it's *noisy*).  So, if you can get it cheap at a sound level that
fits your needs, that's not at all a bad choice.

I'm running a box I bought new as a home server, but it's NOW at least
last-generation hardware (2006), and it's still running fine; in
particular the CPU load remains trivial compared to what the box supports
(not doing compression or dedup on the main data pool, though I do
compress the backup pools on external USB disks).  (It does have ECC; even
before some of the cases leading to that recommendation were explained on
that list, I just didn't see the percentage in not protecting the memory.)

-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] non-ECC Systems and ZFS for home users

2010-09-23 Thread Richard Elling
On Sep 23, 2010, at 9:08 AM, Dick Hoogendijk wrote:
> On 23-9-2010 16:34, Frank Middleton wrote:
> 
> > For home use, used Suns are available at ridiculously low prices and
> > they seem to be much better engineered than your typical PC. Memory
> > failures are much more likely than winning the pick 6 lotto...
> 
> And about what SUN systems are you thinking for 'home use' ?

At one time, due to market pricing pressure, Sun actually sold a server without ECC.
Bad idea, didn't last long.  Unfortunately, the PeeCee market is just too cheap to 
value ECC.  So they take the risk and hope for the best.

> The likeliness of memory failures might be much higher than becoming a 
> millionair, but in the years past I have never had one. And my home sytems 
> are rather cheap. Mind you, not the cheapest, but rather cheap. I do buy good 
> memory though. So, to me, with a good backup I feel rather safe using ZFS. I 
> also had it running for quite some time on a 32bits machine and that also 
> worked out fine.

Part of the difference is the expected use.  For PCs which are only used 8 hours per
day, 40 hours per week, rebooting regularly, the risk of transient main memory errors
is low.  For servers running 24x7, rebooting once a year, the risk is much higher.

> The fact that a perfectly good file can not be read because of a bad checksum 
> is a design failure imho. There should be an option to overrule this 
> behaviour of ZFS.

It isn't a perfectly good file once it has been corrupted. But there are some 
ways to get at the file contents.  Remember, the blocks are checksummed, not the 
file. So if a bad block is in the file, you can skip over it.
http://blogs.sun.com/relling/entry/holy_smokes_a_holey_file
http://blogs.sun.com/relling/entry/dd_tricks_for_holey_files
http://blogs.sun.com/relling/entry/more_on_holey_files
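
A minimal sketch of that last trick (file names are hypothetical; bs is set to
the dataset's 128K recordsize so the zero-padding lines up with ZFS blocks):

        # copy what is readable, substituting zeros for the unreadable record
        dd if=/tank/data/damaged.file of=/tmp/salvaged.file bs=128k conv=noerror,sync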

 -- richard

-- 
OpenStorage Summit, October 25-27, Palo Alto, CA
http://nexenta-summit2010.eventbrite.com
ZFS and performance consulting
http://www.RichardElling.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] non-ECC Systems and ZFS for home users

2010-09-23 Thread Dick Hoogendijk

 On 23-9-2010 16:34, Frank Middleton wrote:


 For home use, used Suns are available at ridiculously low prices and
 they seem to be much better engineered than your typical PC. Memory
 failures are much more likely than winning the pick 6 lotto...


And about what SUN systems are you thinking for 'home use' ?
The likelihood of memory failures might be much higher than becoming a 
millionaire, but in the years past I have never had one. And my home 
systems are rather cheap. Mind you, not the cheapest, but rather cheap. I 
do buy good memory though. So, to me, with a good backup I feel rather 
safe using ZFS. I also had it running for quite some time on a 32-bit 
machine and that also worked out fine.


The fact that a perfectly good file cannot be read because of a bad 
checksum is a design failure, IMHO. There should be an option to overrule 
this behaviour of ZFS.


My 2çt

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] non-ECC Systems and ZFS for home users

2010-09-23 Thread Frank Middleton

On 09/23/10 03:01, Ian Collins wrote:


So, I wonder - what's the recommendation, or rather, experience as far
as home users are concerned? Is it "safe enough" now do use ZFS on
non-ECC-RAM systems (if backups are around)?


It's as safe as running any other OS.

The big difference is ZFS will tell you when there's a corruption. Most
users of other systems are blissfully unaware of data corruption!


This runs you into the possibility of perfectly good files becoming inaccessible
due to bad checksums being written to all the mirrors. As Richard Elling
wrote some time ago in "[zfs-discuss] You really do need ECC RAM", see
http://www.cs.toronto.edu/%7Ebianca/papers/sigmetrics09.pdf. There
were a couple of zfs-discuss threads quite recently about memory problems
causing serious issues. Personally, I wouldn't trust any valuable data to any
system without ECC, regardless of OS and file systems. For home use, used
Suns are available at ridiculously low prices and they seem to be much better
engineered than your typical PC. Memory failures are much more likely than
winning the pick 6 lotto...

FWIW Richard helped me diagnose a problem with checksum failures on
mirrored drives a while back and it turned out to be the CPU itself getting
the actual checksum wrong /only on one particular file/, and even then only
when the ambient temperature was high. So ZFS is good at ferreting out
obscure hardware problems :-).

Cheers -- Frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] non-ECC Systems and ZFS for home users (was: Please warn a home user against OpenSolaris under VirtualBox under WinXP ; ))

2010-09-23 Thread Casper . Dik

>  On 23-9-2010 10:25, casper@sun.com wrote:
>> I'm using ZFS on a system w/o ECC; it works (it's an Atom 230).
>
>I'm using ZFS on a non-ECC machine for years now without any issues. 
>Never had errors. Plus, like others said, other OS'ses have the same 
>problems and also run quite well. If not, you don't know it. With ZFS 
>you will know.
>I would say - just go for it. You will never want to go back.


Indeed.  While I mirror stuff on the same system, I'm now also making
backups using a USB connected disk (eSATA would be better but the box
only has USB).

My backup consists of:

# Snapshot each pool and replicate it incrementally into the "portable"
# backup pool ($pools, $lastsnapshot and $newsnapshot are defined elsewhere).
for pool in $pools
do
        zfs snapshot -r $pool@$newsnapshot
        zfs send -R -I $pool@$lastsnapshot $pool@$newsnapshot |
                zfs receive -v -u -d -F portable/$pool
done

then I export and store the portable pool somewhere else.
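
For reference, detaching and re-attaching the backup pool is just a zpool
export/import ("portable" being the pool name from the script above):

        zpool export portable   # flush and detach so the USB disk can be unplugged
        zpool import portable   # re-attach the disk before the next backup run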

I do run a once per two weeks scrub for all the pools, just in case.
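
A fortnightly scrub can be scheduled from root's crontab with entries along
these lines (pool names and times are examples, not necessarily the actual
setup described above):

        # start a scrub on the 1st and 15th of each month at 03:00
        0 3 1,15 * * /usr/sbin/zpool scrub rpool
        0 3 1,15 * * /usr/sbin/zpool scrub portable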

Casper

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] non-ECC Systems and ZFS for home users (was: Please warn a home user against OpenSolaris under VirtualBox under WinXP ; ))

2010-09-23 Thread Dick Hoogendijk

 On 23-9-2010 10:25, casper@sun.com wrote:

I'm using ZFS on a system w/o ECC; it works (it's an Atom 230).


I've been using ZFS on a non-ECC machine for years now without any issues. 
Never had errors. Plus, like others said, other OSes have the same 
problems and also run quite well. If not, you just don't know it. With ZFS 
you will know.

I would say - just go for it. You will never want to go back.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] non-ECC Systems and ZFS for home users (was: Please warn a home user against OpenSolaris under VirtualBox under WinXP ; ))

2010-09-23 Thread Casper . Dik


I'm using ZFS on a system w/o ECC; it works (it's an Atom 230).

Note that this is no different from using another OS; the difference is 
that ZFS will complain when memory errors lead to disk corruption; without ZFS 
you will still have the corruption, but you wouldn't know.

Is it helpful not knowing that you have memory corruption?  I don't think 
so.

I'd love to have a small (<40W) system with ECC but it is difficult to 
find one.

Casper

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] non-ECC Systems and ZFS for home users (was: Please warn a home user against OpenSolaris under VirtualBox under WinXP ; ))

2010-09-23 Thread Ian Collins

On 09/23/10 06:33 PM, Alexander Skwar wrote:

Hi.

2010/9/19 R.G. Keen

> and last-generation hardware is very, very cheap.

Yes, of course, it is. But, actually, is that a true statement? I've read
that it's *NOT* advisable to run ZFS on systems which do NOT have ECC
RAM. And those cheapo last-gen hardware boxes quite often don't have
ECC, do they?

So, I wonder - what's the recommendation, or rather, experience as far
as home users are concerned? Is it "safe enough" now to use ZFS on
non-ECC-RAM systems (if backups are around)?

It's as safe as running any other OS.

The big difference is ZFS will tell you when there's a corruption.  Most 
users of other systems are blissfully unaware of data corruption!
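
When it does detect corruption, the errors show up in zpool status, and -v
lists the affected files; the output below is illustrative only, not from a
real pool:

        # zpool status -v tank
          ...
                NAME        STATE     READ WRITE CKSUM
                tank        ONLINE       0     0     2
                  mirror-0  ONLINE       0     0     2
        errors: Permanent errors have been detected in the following files:
                /tank/home/photos/img_0412.jpg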


All my desktops use ZFS, none have ECC.

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] non-ECC Systems and ZFS for home users (was: Please warn a home user against OpenSolaris under VirtualBox under WinXP ; ))

2010-09-22 Thread Alexander Skwar
Hi.

2010/9/19 R.G. Keen 

> and last-generation hardware is very, very cheap.

Yes, of course, it is. But, actually, is that a true statement? I've read
that it's *NOT* advisable to run ZFS on systems which do NOT have ECC
RAM. And those cheapo last-gen hardware boxes quite often don't have
ECC, do they?

So, I wonder - what's the recommendation, or rather, experience as far
as home users are concerned? Is it "safe enough" now to use ZFS on
non-ECC-RAM systems (if backups are around)?

Regards,

Alexander
--
↯    Lifestream (Twitter, Blog, …) ↣ http://alexs77.soup.io/     ↯
↯ Chat (Jabber/Google Talk) ↣ a.sk...@gmail.com , AIM: alexws77  ↯
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss