storage vendors have a credibility problem. i think the big
storage vendors, as referenced in the op, sell you on many
things you don't need, for much more than one has to spend.
I went to a product demo from http://www.isilon.com/
They make a filesystem that spans multiple machines. They
8 bits/byte * 1e12 bytes / 1e14 bits/ure = 8%
Isn't that the probability of getting a bad sector when you
read a terabyte? In other words, this is not related to the
disk size but how much you read from the given disk. Granted
that when you resilver you have no choice but to read every surviving disk in full.
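To put a number on that point, here is a minimal sketch of the arithmetic under the usual simplifying assumptions: the 1e-14 bits/URE figure is the quoted desktop-drive spec, and bit errors are treated as independent. The probability of hitting at least one URE grows with how much you read, not with the drive's capacity; the 8% figure above is the expected error count for one terabyte, which slightly overstates the actual probability.

# Sketch of the URE arithmetic from the thread (assumed spec: 1 error
# per 1e14 bits read; bit errors treated as independent).
URE_RATE = 1e-14          # expected errors per bit read (desktop rating)

def p_bad_read(bytes_read, ure_rate=URE_RATE):
    """Probability of at least one URE while reading bytes_read bytes."""
    bits = 8 * bytes_read
    return 1 - (1 - ure_rate) ** bits

for tb in (1, 2, 5, 10):
    print(f"read {tb:2d} TB -> P(at least one URE) ~ {p_bad_read(tb * 1e12):.1%}")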
erik quanstrom wrote:
i think the lesson here is don't buy cheap drives;
Our top-of-the-line Sub-Zero and Thermador kitchen appliances are pure
junk. In fact, I can point to Consumer Reports data that shows an
inverse relationship between appliance cost and reliability.
One who works for
On Mon, 21 Sep 2009 14:02:40 EDT erik quanstrom quans...@quanstro.net wrote:
i would think this is acceptable. at these low levels, something
else is going to get you -- like drives not failing independently,
say because of power problems.
8% rate for an array rebuild may or may not
On Mon Sep 21 14:51:07 EDT 2009, w...@authentrus.com wrote:
erik quanstrom wrote:
Our top-of-the-line Sub-Zero and Thermador kitchen appliances are pure
junk. In fact, I can point to Consumer Reports data that shows an
inverse relationship between appliance cost and reliability.
storage
i think the lesson here is don't buy cheap drives; if you
have enterprise drives at 1e-15 error rate, the fail rate
will be 0.8%. of course if you don't have a raid, the fail
rate is 100%.
if that's not acceptable, then use raid 6.
Hopefully Raid 6 or zfs's raidz2 works well enough
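To see why the thread keeps reaching for double parity, here is a rough sketch of the rebuild case. The 6 x 1 TB geometry is an assumed example, not a figure from the thread, and bit errors are again treated as independent.

# Sketch of the single-disk rebuild case under assumed geometry.
def p_ure(bytes_read, ure_rate):
    """Probability of at least one unrecoverable read error."""
    return 1 - (1 - ure_rate) ** (8 * bytes_read)

drives, capacity = 6, 1e12          # 6 x 1 TB array (assumed for illustration)
to_read = (drives - 1) * capacity   # a rebuild reads every surviving drive

for label, rate in (("desktop 1e-14", 1e-14), ("enterprise 1e-15", 1e-15)):
    print(f"{label}: P(URE during rebuild) ~ {p_ure(to_read, rate):.1%}")

# With single parity (RAID-5/raidz1) any URE here means lost data;
# with double parity (RAID-6/raidz2) that sector can still be rebuilt
# from the second parity, so a lone URE during the rebuild is survivable.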
erik quanstrom wrote:
storage vendors have a credibility problem. i think the big
storage vendors, as referenced in the op, sell you on many
things you don't need, for much more than one has to spend.
Those of us who know something about Coraid understand that your company
doesn't engage in
erik quanstrom wrote:
i think the lesson here is don't buy cheap drives; if you
have enterprise drives at 1e-15 error rate, the fail rate
will be 0.8%. of course if you don't have a raid, the fail
rate is 100%.
if that's not acceptable, then use raid 6.
Hopefully Raid 6 or zfs's raidz2
On Mon, 21 Sep 2009 16:30:25 EDT erik quanstrom quans...@quanstro.net wrote:
i think the lesson here is don't buy cheap drives; if you
have enterprise drives at 1e-15 error rate, the fail rate
will be 0.8%. of course if you don't have a raid, the fail
rate is 100%.
if that's not
At work, we recently had a massive failure of our RAID array. After
much brown-nosing, I came to find that after many hard drives had been
shipped to our IT guy, and much head-scratching on his part, it was in fact the
RAID card itself that had failed (which takes out the whole array, plus
can take
What I haven't found is a decent, no-frills SATA/eSATA enclosure for a
home system.
Depending on where you are, where you can purchase from, and how much you
want to pay you may be able to get yourself ICY DOCK or Chieftec enclosures
that fit the description. ICY DOCK's 5-bay enclosure
Upon reading further into that study, it seems the Wikipedia editor has drawn
a distorted conclusion:
In our data sets, the replacement rates of SATA disks are not worse than
the replacement rates of SCSI or FC disks. This may indicate that
disk-independent factors, such as operating conditions,
Apparently, the distinction made between consumer and enterprise is
actually between technology classes, i.e. SCSI/Fibre Channel vs. SATA,
rather than between manufacturers' gradings, e.g. Seagate 7200 desktop
series vs. Western Digital RE3/RE4 enterprise drives.
yes this is very
On Mon, 14 Sep 2009 12:43:42 EDT erik quanstrom quans...@quanstro.net wrote:
I am going to try my hand at beating a dead horse :)
So when you create a Venti volume, it basically writes 0s to all the
blocks of the underlying device, right? If I put a venti volume on an AoE
device which
drive mfgrs don't report write error rates. i would consider any
drive with write errors to be dead as fried chicken. a more
interesting question is what is the chance you can read the
written data back correctly. in that case with desktop drives,
you have an
8 bits/byte * 1e12 bytes / 1e14 bits/ure = 8%
chance of getting an unrecoverable read error per terabyte read.
I am going to try my hand at beating a dead horse :)
So when you create a Venti volume, it basically writes 0s to all the
blocks of the underlying device, right? If I put a venti volume on an AoE
device which is a linux raid5, using normal desktop sata drives, what
are my chances of a
On Mon, Sep 14, 2009 at 8:50 AM, Jack Norton j...@0x6a.com wrote:
So when you create a Venti volume, it basically writes 0s to all the
blocks of the underlying device, right?
In case anyone decides to try the experiment,
venti hasn't done this for a few years. Better to try with dd.
Russ
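In case anyone does want to run the fill-and-verify experiment, here is a hypothetical sketch in the same spirit as the dd suggestion. The path, block size, and total size are placeholders; point it only at a scratch file or device you can safely overwrite, and note that without O_DIRECT (or GNU dd's iflag=direct/oflag=direct) the read-back may be served from the page cache rather than the disk.

# Hypothetical fill-and-verify sketch: write a reproducible pseudorandom
# pattern across a scratch target, then read it back and count mismatches.
import hashlib

PATH = "/tmp/scratch.img"      # placeholder target (assumed)
BLOCK = 1 << 20                # 1 MiB blocks
BLOCKS = 1024                  # 1 GiB total for the example

def pattern(i):
    """Deterministic 1 MiB block derived from its index."""
    seed = hashlib.sha256(i.to_bytes(8, "little")).digest()
    return (seed * (BLOCK // len(seed) + 1))[:BLOCK]

with open(PATH, "wb") as f:
    for i in range(BLOCKS):
        f.write(pattern(i))

bad = 0
with open(PATH, "rb") as f:
    for i in range(BLOCKS):
        if f.read(BLOCK) != pattern(i):
            bad += 1
print(f"{bad} of {BLOCKS} blocks failed verification")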
Russ Cox wrote:
On Mon, Sep 14, 2009 at 8:50 AM, Jack Norton j...@0x6a.com wrote:
So when you create a Venti volume, it basically writes 0s to all the
blocks of the underlying device, right?
In case anyone decides to try the experiment,
venti hasn't done this for a few years.
Thanks.
Erik Quanstrom, too, posted a link to that page, although it wasn't in HTML.
--On Monday, September 07, 2009 22:02 +0200 Uriel urie...@gmail.com wrote:
On Fri, Sep 4, 2009 at 3:56 PM, Eris Discordia eris.discor...@gmail.com
wrote:
if you have quanstro/sd installed, sdorion(3)
erik quanstrom wrote:
I think what he means is:
You are given an inordinate number of hard drives and some computers to
house them.
If Plan 9 is your only software, how would it be configured overall,
given that it has to perform as well or better?
Or put another way: your boss wants you to
I read the paper you wrote and I have some (probably naive) questions:
Section 6, labeled "core improvements", seems to suggest that the
fileserver is basically using the CPU/fileserver hybrid kernel (both
major changes are quoted as coming from the CPU kernel). Is this just a
one-off
erik quanstrom wrote:
Also, another probably dumb question: did the fileserver machine use
the AoE device as a kenfs volume or a fossil(+venti)?
s/did/does/. the fileserver is running today.
the fileserver provides the network with a regular 9p fileserver
with three attach points
On Fri, Sep 4, 2009 at 3:56 PM, Eris Discordia eris.discor...@gmail.com wrote:
if you have quanstro/sd installed, sdorion(3) discusses how it
controls the backplane lights.
Um, I don't have that because I don't have any running Plan 9 instances, but
I'll try finding it on the web (if it's been
I concur with Erik; I specced out a 20 TB server earlier this year, and
matching the throughput hits you in the wallet.
I'm amazed they are using PCIe 1x; it's kind of naive.
See what the guy from Sun says.
- a hot swap case with ses-2 lights so the tech doesn't
grab the wrong drive,
This caught my attention and you are the storage expert here. Is there an
equivalent technology on SATA disks for controlling enclosure facilities?
(Other than SMART, I mean, which seems to be only for monitoring
This caught my attention and you are the storage expert here. Is there an
equivalent technology on SATA disks for controlling enclosure facilities?
(Other than SMART, I mean, which seems to be only for monitoring and not
for control.)
SES-2/SGPIO typically interact with the backplane, not
Many thanks for the info :-)
if there's a single dual-duty led maybe this is the problem. how
many separate led packages do you have?
There's one multi-color (3-prong) LED responsible for this. Nominally,
green should mean drive running and okay, alternating red should mean
transfer, and
There's one multi-color (3-prong) LED responsible for this. Nominally,
green should mean drive running and okay, alternating red should mean
transfer, and orange (red + green) a disk failure. In the case of the 7200.11s
there's a standard for this
red fail
orange locate
green activity
I concur with Erik; I specced out a 20 TB server earlier this year, and
matching the throughput hits you in the wallet.
even if you're okay with low performance, please don't
set up a 20tb server without enterprise drives. it's no
guarantee, but it's the closest you can come. also,
the #1
On Sep 3, 2009, at 6:20 PM, erik quanstrom wrote:
On Thu Sep 3 20:53:13 EDT 2009, r...@sun.com wrote:
None of those technologies [NFS, iSCSI, FC] scales as cheaply,
reliably, goes as big, nor can be managed as easily as stand-alone
pods
with their own IP address waiting for requests on
On Sep 4, 2009, at 2:37 AM, matt wrote:
I concur with Erik; I specced out a 20 TB server earlier this year, and
matching the throughput hits you in the wallet.
I'm amazed they are using PCIe 1x; it's kind of naive.
See what the guy from Sun says.
*with*, not *on* right?
with. it's an appliance.
Now, the information above is quite useful, yet my question
was more along the lines of -- if one were to build such
a box using Plan 9 as the software -- would it:
1. be feasible
2. have any advantages over Linux + JFS
aoe is
erik quanstrom wrote:
*with*, not *on* right?
with. it's an appliance.
Now, the information above is quite useful, yet my question
was more along the lines of -- if one were to build such
a box using Plan 9 as the software -- would it:
1. be feasible
2. have any advantages
I think what he means is:
You are given an inordinate number of hard drives and some computers to
house them.
If Plan 9 is your only software, how would it be configured overall,
given that it has to perform as well or better?
Or put another way: your boss wants you to compete with
there's a standard for this
red fail
orange locate
green activity
maybe your enclosure's not standard.
That may be the case, as it's really sort of a cheap hack: the Chieftec
SNT-2131, a 3-in-2 solution for use in the 5.25" bays of desktop computer
cases. I hear ICY DOCK has better offerings but
erik quanstrom wrote:
i'm speaking for myself, and not for anybody else here.
i do work for coraid, and i do what i believe. so
caveat emptor.
We have a 15TB unit, nice bit of hardware.
oh, and the coraid unit works with plan 9. :-)
You guys should get some Glenda-themed packing tape.
None of those technologies [NFS, iSCSI, FC] scales as cheaply,
reliably, goes as big, nor can be managed as easily as stand-alone pods
with their own IP address waiting for requests on HTTPS.
http://blog.backblaze.com/2009/09/01/petabytes-on-a-budget-how-to-build-cheap-cloud-storage/
Apart
On Thu Sep 3 20:53:13 EDT 2009, r...@sun.com wrote:
None of those technologies [NFS, iSCSI, FC] scales as cheaply,
reliably, goes as big, nor can be managed as easily as stand-alone pods
with their own IP address waiting for requests on HTTPS.