According to the hard disk drive guide at
http://www.storagereview.com/guide2000/ref/hdd/index.html, a whopping
36% of data loss is due to human error, and 49% is due to hardware or
system malfunction. With proper pool design, ZFS addresses most of that
49% of data loss due to hardware or system malfunction.
On Sun, 2008-08-03 at 11:42 -0500, Bob Friesenhahn wrote:
ZFS makes human error really easy. For example:
$ zpool destroy mypool
Note that zpool destroy can be undone by zpool import -D (if you get
to it before the disks are overwritten).
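For reference, the recovery looks roughly like this (using the pool name
from the example above; -D only helps while the old labels are still on
the disks):

# zpool import -D          (lists destroyed pools that can still be imported)
# zpool import -D mypool   (re-imports the destroyed pool)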
Going back to your USB remove test: if you protect that disk
at the ZFS level, such as with a mirror, then when the disk is removed
it will be detected as removed, zpool status will show its
state as REMOVED and the pool as DEGRADED, but the pool will continue
to function, as expected.
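A minimal sketch of that kind of setup, assuming two hypothetical USB
devices c2t0d0 and c3t0d0:

# zpool create usbpool mirror c2t0d0 c3t0d0
# zpool status usbpool

After pulling one of the disks, that device should show up as REMOVED and
the pool as DEGRADED, while reads and writes keep working off the
remaining half of the mirror.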
-- richard
On Thu, Jul 31, 2008 at 11:03 PM, Ross [EMAIL PROTECTED] wrote:
Going back to your USB remove test: if you protect that disk
at the ZFS level, such as with a mirror, then when the disk is removed
it will be detected as removed, zpool status will show its
state as REMOVED and the pool as DEGRADED
Hey Brent,
On the Sun hardware like the Thumper you do get a nice bright blue ready-to-remove
LED as soon as you issue the cfgadm -c unconfigure xxx command. On
other hardware it takes a little more care; I'm labelling up our drive bays
*very* carefully to ensure we always remove the right drive.
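For reference, the sequence on a Thumper goes something like this (the
attachment point below is just a hypothetical example; cfgadm -al lists
the real ones on the box):

# cfgadm -al                      (list attachment points and their state)
# cfgadm -c unconfigure sata1/3   (hypothetical ap_id; the blue LED should then light)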
Dave wrote:
Enda O'Connor wrote:
As for Thumpers, once 138053-02 (the marvell88sx driver patch) releases
within the next two weeks (assuming no issues are found), the
Thumper platform running S10 updates will be up to date in terms of
marvell88sx driver fixes, which fixes some pretty
Hello Ross,
I personally know of many environments that have been using ZFS in production for
quite some time, quite often in business-critical settings.
Some of them are small, some of them are rather large (hundreds of
TBs), and some of them are clustered, with different usages like file servers,
MySQL on ZFS,
I have done a bit of testing, and so far so good really.
I have a Dell 1800 with a PERC 4e and a 14-drive Dell PowerVault 220S.
I have a RAID-Z2 pool named 'tank' that spans 6 drives, and I have made one drive
available to ZFS as a hot spare.
Normal array:
# zpool status
  pool: tank
 state: ONLINE
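For reference, a pool with that shape could have been created with
something along these lines (the device names are hypothetical
placeholders for the PowerVault disks):

# zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
    spare c2t6d0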
Hey folks,
I guess this is an odd question to be asking here, but I could do with some
feedback from anybody who's actually using ZFS in anger.
I'm about to go live with ZFS in our company on a new fileserver, but I have
some real concerns about whether I can really trust ZFS to keep my data safe.
Ross wrote:
Hey folks,
I guess this is an odd question to be asking here, but I could do with some
feedback from anybody who's actually using ZFS in anger.
I'm about to go live with ZFS in our company on a new fileserver, but I have
some real concerns about whether I can really trust
On Thu, Jul 31, 2008 at 16:25, Ross [EMAIL PROTECTED] wrote:
The problems with zpool status hanging concern me, knowing that I can't hot-plug
drives is an issue, and the long resilver times bug is also a potential
problem. I suspect I can work around the hot-plug drive bug with a big
On Thu, 2008-07-31 at 13:25 -0700, Ross wrote:
Hey folks,
I guess this is an odd question to be asking here, but I could do with some
feedback from anybody who's actually using ZFS in anger.
ZFS in anger? That's an interesting way of putting it :-)
but I have some real concerns about
We haven't had any real-life drive failures at work, but at home I
took some old flaky IDE drives and put them in a Pentium 3 box running
Nevada.
Similar story here. Some IDE and SATA drive burps under Linux (and
please don't tell me how wonderful Reiser4 is - 'cause it's banned in
this
On Jul 31, 2008, at 2:56 PM, Bob Netherton wrote:
On Thu, 2008-07-31 at 13:25 -0700, Ross wrote:
Hey folks,
I guess this is an odd question to be asking here, but I could do
with some
feedback from anybody who's actually using ZFS in anger.
ZFS in anger? That's an interesting way of
We have 50,000 users' worth of mail spool on ZFS.
So we've been trusting it for production usage for THE most critical, visible
enterprise app.
Works fine. Our stores are ZFS RAID-10 built of LUNs from pairs of 3510FCs.
Had an entire array go down once; the system kept going fine. Brought the
Enda O'Connor wrote:
As for Thumpers, once 138053-02 (the marvell88sx driver patch) releases
within the next two weeks (assuming no issues are found), the Thumper
platform running S10 updates will be up to date in terms of marvell88sx
driver fixes, which fixes some pretty important
r == Ross [EMAIL PROTECTED] writes:
 r> This is a big step for us, we're a 100% Windows company and
 r> I'm really going out on a limb by pushing Solaris.
I'm using it in anger. I'm angry at it, and can't afford anything
that's better.
Whatever I replaced ZFS with, I would make sure it
Ross wrote:
Hey folks,
I guess this is an odd question to be asking here, but I could do with some
feedback from anybody who's actually using ZFS in anger.
I've been using ZFS for nearly 3 years now. It has been my (mirrored :-)
home directory for that time. I've never lost any of that data.