Hello Joe,
Monday, February 23, 2009, 7:23:39 PM, you wrote:
MJ Mario Goebbels wrote:
One thing I'd like to see is an _easy_ option to fall back onto older
uberblocks when the zpool went belly up for a silly reason. Something
that doesn't involve esoteric parameters supplied to zdb.
Mario Goebbels wrote:
One thing I'd like to see is an _easy_ option to fall back onto older
uberblocks when the zpool went belly up for a silly reason. Something
that doesn't involve esoteric parameters supplied to zdb.
Between uberblock updates, there may be many write operations to a data
On Fri, Feb 13, 2009 at 9:47 PM, Richard Elling
richard.ell...@gmail.com wrote:
It has been my experience that USB sticks use FAT, which is an ancient
file system which contains few of the features you expect from modern
file systems. As such, it really doesn't do any write caching. Hence, it
Hey guys,
I'll let this die in a sec, but I just wanted to say that I've gone
and read the on disk document again this morning, and to be honest
Richard, without the description you just wrote, I really wouldn't
have known that uberblocks are in a 128 entry circular queue that's 4x
redundant.
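For anyone who wants to poke at those uberblocks themselves, zdb can dump the vdev labels and the active uberblock. A rough sketch with placeholder pool and device names, as an illustration rather than a recipe from this thread:

  zdb -l /dev/dsk/c1t0d0s0   # dump the vdev labels (two at the front, two at the back of the device)
  zdb -u tank                # print the currently active uberblock for the pool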
I am wondering: if the USB storage device is not reliable for ZFS usage, can the
situation be improved by putting the intent log on an internal SATA disk, to avoid
corruption while keeping the convenience of USB storage
at the same time?
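For context, adding a separate intent log (slog) device to an existing pool is a one-liner; the pool and device names below are placeholders. As the replies further down explain, though, a good slog does not protect a pool whose main devices ignore cache flush requests:

  zpool add tank log c2t0d0s0   # dedicate an internal SATA slice as the ZIL device
  zpool status tank             # the log device should now appear in the pool layout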
huh? but that loses the convenience of USB.
I've used USB drives without problems at all, just remember to zpool export
them before you unplug.
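In other words, the safe removal sequence is just the following (pool name is a placeholder):

  zpool export mypool    # flushes, unmounts and cleanly closes the pool
  # ...physically unplug the USB drive, carry it elsewhere...
  zpool import mypool    # bring it back online on the same or another host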
While mobility could be lost, USB storage still has the advantage of being
cheap and easy to install compared to installing internal disks in a PC, so if I
just want to use it to provide ZFS storage space for a home file server, can a
small intent log located on an internal SATA disk prevent the pool
On 2/13/2009 5:58 AM, Ross wrote:
huh? but that loses the convenience of USB.
I've used USB drives without problems at all, just remember to zpool export
them before you unplug.
I think there is a subcommand of cfgadm you should run to notify
Solaris that you intend to unplug the device.
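The cfgadm step being referred to is presumably something like the sketch below; the attachment point ID (usb0/1) is invented here and has to be taken from the -al listing on the actual system:

  cfgadm -al                     # list attachment points, find the USB mass-storage device
  cfgadm -c unconfigure usb0/1   # tell Solaris the device is about to be removed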
Having a separate intent log on good hardware will not prevent corruption
on a pool with bad hardware. By good I mean hardware that correctly
flushes its write caches when requested.
Note, a pool is always consistent (again when using good hardware).
The function of the intent log is not to
On Fri, Feb 13 at 9:14, Neil Perrin wrote:
Having a separate intent log on good hardware will not prevent corruption
on a pool with bad hardware. By good I mean hardware that correctly
flushes its write caches when requested.
Can someone please name a specific piece of bad hardware?
--eric
On Thu, Feb 12 at 19:43, Toby Thain wrote:
^^ Spec compliance is what we're testing for... We wouldn't know if this
special variant is working correctly either. :)
Time the difference between NCQ reads with and without FUA in the
presence of overlapped cached write data. That should have a
gm == Gary Mills mi...@cc.umanitoba.ca writes:
gm That implies that ZFS will have to detect removable devices
gm and treat them differently than fixed devices.
please, no more of this garbage, no more hidden unchangeable automatic
condescending behavior. The whole format vs rmformat
fc == Frank Cusack fcus...@fcusack.com writes:
Dropping a flush-cache command is just as bad as dropping a
write.
fc Not that it matters, but it seems obvious that this is wrong
fc or anyway an exaggeration. Dropping a flush-cache just means
fc that you have to wait until
t == Tim t...@tcsac.net writes:
t I would like to believe it has more to do with Solaris's
t support of USB than ZFS, but the fact remains it's a pretty
t glaring deficiency in 2009, no matter which part of the stack
t is at fault.
maybe, but for this job I don't much mind
Miles Nordin wrote:
gm That implies that ZFS will have to detect removable devices
gm and treat them differently than fixed devices.
please, no more of this garbage, no more hidden unchangeable automatic
condescending behavior. The whole format vs rmformat mess is just
ridiculous. And
On February 13, 2009 12:20:21 PM -0500 Miles Nordin car...@ivy.net wrote:
fc == Frank Cusack fcus...@fcusack.com writes:
Dropping a flush-cache command is just as bad as dropping a
write.
fc Not that it matters, but it seems obvious that this is wrong
fc or anyway an
On February 13, 2009 12:10:08 PM -0500 Miles Nordin car...@ivy.net wrote:
please, no more of this garbage, no more hidden unchangeable automatic
condescending behavior. The whole format vs rmformat mess is just
ridiculous.
thank you.
On February 13, 2009 12:41:12 PM -0500 Miles Nordin car...@ivy.net wrote:
fc == Frank Cusack fcus...@fcusack.com writes:
fc if you have 100TB of data, wouldn't you have a completely
fc redundant storage network
If you work for a ponderous leaf-eating brontosaurus maybe. If your
On Fri, 13 Feb 2009 17:53:00 +0100, Eric D. Mudama
edmud...@bounceswoosh.org wrote:
On Fri, Feb 13 at 9:14, Neil Perrin wrote:
Having a separate intent log on good hardware will not prevent corruption
on a pool with bad hardware. By good I mean hardware that correctly
flushes its write
fc == Frank Cusack fcus...@fcusack.com writes:
fc If you're misordering writes
fc isn't that a completely different problem?
no. ignoring the flush cache command causes writes to be misordered.
fc Even then, I don't see how it's worse than DROPPING a write.
fc The data
On February 13, 2009 1:10:55 PM -0500 Miles Nordin car...@ivy.net wrote:
fc == Frank Cusack fcus...@fcusack.com writes:
fc If you're misordering writes
fc isn't that a completely different problem?
no. ignoring the flush cache command causes writes to be misordered.
oh. can you
On February 13, 2009 10:29:05 AM -0800 Frank Cusack fcus...@fcusack.com
wrote:
On February 13, 2009 1:10:55 PM -0500 Miles Nordin car...@ivy.net wrote:
fc == Frank Cusack fcus...@fcusack.com writes:
fc If you're misordering writes
fc isn't that a completely different problem?
no.
fc == Frank Cusack fcus...@fcusack.com writes:
fc why would dropping a flush cache imply dropping every write
fc after the flush cache?
it wouldn't and probably never does. It was an imaginary scenario
invented to argue with you and to agree with the guy in the USB bug
who said
Superb news, thanks Jeff.
Having that will really raise ZFS up a notch, and align it much better with
people's expectations. I assume it'll work via zpool import, and let the user
know what's gone wrong?
If you think back to this case, imagine how different the user's response would
have been
On Fri, 13 Feb 2009, Ross wrote:
Something like that will have people praising ZFS' ability to
safeguard their data, and the way it recovers even after system
crashes or when hardware has gone wrong. You could even have a
"common causes of this are..." message, or a link to an online help
On Fri, Feb 13, 2009 at 10:29:05AM -0800, Frank Cusack wrote:
On February 13, 2009 1:10:55 PM -0500 Miles Nordin car...@ivy.net wrote:
fc == Frank Cusack fcus...@fcusack.com writes:
fc If you're misordering writes
fc isn't that a completely different problem?
no. ignoring the
On Fri, Feb 13, 2009 at 7:41 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Fri, 13 Feb 2009, Ross wrote:
Something like that will have people praising ZFS' ability to safeguard
their data, and the way it recovers even after system crashes or when
hardware has gone wrong. You
Bob Friesenhahn wrote:
On Fri, 13 Feb 2009, Ross wrote:
Something like that will have people praising ZFS' ability to
safeguard their data, and the way it recovers even after system
crashes or when hardware has gone wrong. You could even have a
"common causes of this are..." message, or a
On Fri, 13 Feb 2009, Ross Smith wrote:
You have to consider that even with improperly working hardware, ZFS
has been checksumming data, so if that hardware has been working for
any length of time, you *know* that the data on it is good.
You only know this if the data has previously been read.
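Which is what a scrub is for: it forces every allocated block to be read and verified against its checksum, so "the hardware has been working" becomes something that has actually been tested rather than assumed. For example (pool name is a placeholder):

  zpool scrub tank       # read and verify every block in the pool
  zpool status -v tank   # check progress and any checksum errors found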
On Fri, Feb 13, 2009 at 8:24 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Fri, 13 Feb 2009, Ross Smith wrote:
You have to consider that even with improperly working hardware, ZFS
has been checksumming data, so if that hardware has been working for
any length of time, you *know*
Greg Palmer wrote:
Miles Nordin wrote:
gm That implies that ZFS will have to detect removable devices
gm and treat them differently than fixed devices.
please, no more of this garbage, no more hidden unchangeable automatic
condescending behavior. The whole format vs rmformat mess is just
On Fri, 13 Feb 2009, Ross Smith wrote:
Also, that's a pretty extreme situation since you'd need a device that
is being written to but not read from to fail in this exact way. It
also needs to have no scrubbing being run, so the problem has remained
undetected.
On systems with a lot of RAM,
On Fri, 13 Feb 2009, Ross Smith wrote:
Thinking about this a bit more, you've given me an idea: Would it be
worth ZFS occasionally reading previous uberblocks from the pool, just
to check they are there and working ok?
That sounds like a good idea. However, how do you know for sure that
On Fri, Feb 13, 2009 at 02:00:28PM -0600, Nicolas Williams wrote:
Ordering matters for atomic operations, and filesystems are full of
those.
Also, note that ignoring barriers is effectively as bad as dropping
writes if there's any chance that some writes will never hit the disk
because of, say,
Richard Elling wrote:
Greg Palmer wrote:
Miles Nordin wrote:
gm That implies that ZFS will have to detect removable devices
gm and treat them differently than fixed devices.
please, no more of this garbage, no more hidden unchangeable automatic
condescending behavior. The whole format vs
You don't, but that's why I was wondering about time limits. You have
to have a cut off somewhere, but if you're checking the last few
minutes of uberblocks that really should cope with a lot. It seems
like a simple enough thing to implement, and if a pool still gets
corrupted with these checks
Tim wrote:
On Fri, Feb 13, 2009 at 4:21 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us
wrote:
On Fri, 13 Feb 2009, Ross Smith wrote:
However, I've just had another idea. Since the uberblocks are pretty
vital in recovering
On Fri, 13 Feb 2009, Tim wrote:
I don't think it hurts in the least to throw out some ideas. If
they aren't valid, it's not hard to ignore them and move on. It
surely isn't a waste of anyone's time to spend 5 minutes reading a
response and weighing if the idea is valid or not.
Today I sat
On February 13, 2009 7:58:51 PM -0600 Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
With this level of overhead, I am surprised that there is any remaining
development motion on ZFS at all.
come on now. with all due respect, you are attempting to stifle
relevant discussion and that is,
Hi Bob,
On Fri, 13 Feb 2009 19:58:51 -0600 (CST)
Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote:
On Fri, 13 Feb 2009, Tim wrote:
I don't think it hurts in the least to throw out some ideas. If
they aren't valid, it's not hard to ignore them and move on. It
surely isn't a waste
On Wed, February 11, 2009 18:16, Uwe Dippel wrote:
I need to disappoint you here: an LED that has been inactive for a few seconds is a
very bad indicator that there are no pending writes. I used to experience this on a
stick on Ubuntu, which was silent until the 'umount' and then it started to write
for some 10 seconds.
After all the statements read here, I just want to highlight another issue regarding
ZFS.
It was recommended here many times to set copies=2.
Installing Solaris 10 10/2008 or snv_107, you can choose either to use UFS or
ZFS.
If you choose ZFS, the rpool will be created by default with
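For reference, copies is a per-dataset property and only applies to data written after it is set; the dataset name below is a placeholder:

  zfs set copies=2 tank/home   # store two copies of every block written to this dataset from now on
  zfs get copies tank/home     # confirm the setting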
All that and yet the fact remains: I've never "ejected" a USB
drive from OS X or Windows, I simply pull it and go,
and I've never once lost data, or had it become
unrecoverable or even corrupted.

And yes, I do keep checksums of all the data
sitting on them and periodically
Hello Bob,
Wednesday, February 11, 2009, 11:25:12 PM, you wrote:
BF I agree. ZFS apparently syncs uncommitted writes every 5 seconds.
BF If there has been no filesystem I/O (including read I/O due to atime)
BF for at least 10 seconds, and there has not been more data
BF burst-written into
Ross wrote:
I can also state with confidence that very, very few of the 100 staff working
here will even be aware that it's possible to unmount a USB volume in windows.
They will all just pull the plug when their work is saved, and since they all
come to me when they have problems, I think I
On Thu, February 12, 2009 10:10, Ross wrote:
Of course, that does assume that devices are being truthful when they say
that data has been committed, but a little data loss from badly designed
hardware is, I feel, acceptable, so long as ZFS can have a go at recovering
corrupted pools when it
On Thu, Feb 12, 2009 at 11:31 AM, David Dyer-Bennet d...@dd-b.net wrote:
On Thu, February 12, 2009 10:10, Ross wrote:
Of course, that does assume that devices are being truthful when they say
that data has been committed, but a little data loss from badly designed
hardware is, I feel,
On Thu, Feb 12, 2009 at 11:53:40AM -0500, Greg Palmer wrote:
Ross wrote:
I can also state with confidence that very, very few of the 100 staff
working here will even be aware that it's possible to unmount a USB volume
in windows. They will all just pull the plug when their work is saved,
Right, well I can't imagine it's impossible to write a small app that can
test whether or not drives are honoring cache flushes correctly by issuing a commit and
immediately reading back to see if it was indeed committed or not. Like a
zfs test cXtX. Of course, then you can't just blame the hardware
That would be the ideal, but really I'd settle for just improved error
handling and recovery for now. In the longer term, disabling write
caching by default for USB or Firewire drives might be nice.
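Disabling a drive's write cache by hand is possible for disks the sd driver fully supports, via format's expert mode, though whether a given USB bridge actually honours it is exactly the open question in this thread. A sketch, assuming the disk shows up in format and exposes the cache menu:

  format -e
  # then, interactively: select the disk, and at the prompts:
  # format> cache
  # cache> write_cache
  # write_cache> disable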
On Thu, Feb 12, 2009 at 8:35 PM, Gary Mills mi...@cc.umanitoba.ca wrote:
On Thu, Feb 12, 2009
Is this the crux of the problem?
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6424510
'For usb devices, the driver currently ignores DKIOCFLUSHWRITECACHE.
This can cause catastrophic data corruption in the event of power loss,
even for filesystems like ZFS that are designed to
That does look like the issue being discussed.
It's a little alarming that the bug was reported against snv54 and is
still not fixed :(
Does anyone know how to push for resolution on this? USB is pretty
common, like it or not for storage purposes - especially amongst the
laptop-using dev crowd
On Thu, February 12, 2009 14:02, Tim wrote:
Right, well I can't imagine it's impossible to write a small app that can
test whether or not drives are honoring cache flushes correctly by issuing a commit and
immediately reading back to see if it was indeed committed or not. Like a
zfs test cXtX. Of
I just tried putting a pool on a USB flash drive, writing a file to it, and
then yanking it. I did not lose any data or the pool, but I had to reboot
before I could get any zpool command to complete without freezing. I also had
the OS reboot once on its own, when I tried to issue a zpool command
On Thu, 2009-02-12 at 17:35 -0500, Blake wrote:
That does look like the issue being discussed.
It's a little alarming that the bug was reported against snv54 and is
still not fixed :(
bugs.opensolaris.org's information about this bug is out of date.
It was fixed in snv_54:
changeset:
On 12-Feb-09, at 3:02 PM, Tim wrote:
On Thu, Feb 12, 2009 at 11:31 AM, David Dyer-Bennet d...@dd-b.net
wrote:
On Thu, February 12, 2009 10:10, Ross wrote:
Of course, that does assume that devices are being truthful when they say
that data has been committed, but a little data loss
I'm sure it's very hard to write good error handling code for hardware
events like this.
I think, after skimming this thread (a pretty wild ride), we can at
least decide that there is an RFE for a recovery tool for zfs -
something to allow us to try to pull data from a failed pool. That
seems
On Thu, Feb 12 at 21:45, Mattias Pantzare wrote:
A read of data in the disk cache will be read from the disk cache. You
can't tell the disk to ignore its cache and read directly from the
platter.
The only way to test this is to write and then remove the power from
the disk. Not easy in software.
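A crude version of that pull-the-plug test might look like the sketch below: write a known pattern, ask for a flush, cut power externally, then check after the pool comes back whether the supposedly committed data is really there. Paths and the pool mountpoint are placeholders, and the power cut itself obviously cannot be scripted from the host under test:

  dd if=/dev/urandom of=/tank/testfile bs=1024 count=1024   # write ~1 MB of test data to the pool under test
  digest -a md5 /tank/testfile > /var/tmp/testfile.md5      # keep the checksum on the system disk
  sync                                                      # request that everything be flushed to stable storage
  # now cut power to the external drive -- pull its plug, not a software shutdown
  # after power-cycling and re-importing the pool:
  digest -a md5 /tank/testfile                              # should match /var/tmp/testfile.md5 if the flush was honoured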
Blake wrote:
I'm sure it's very hard to write good error handling code for hardware
events like this.
I think, after skimming this thread (a pretty wild ride), we can at
least decide that there is an RFE for a recovery tool for zfs -
something to allow us to try to pull data from a failed pool.
On 12-Feb-09, at 7:02 PM, Eric D. Mudama wrote:
On Thu, Feb 12 at 21:45, Mattias Pantzare wrote:
A read of data in the disk cache will be read from the disk cache. You
can't tell the disk to ignore its cache and read directly from the
platter.
The only way to test this is to write and then
Blake,
On Thu, Feb 12, 2009 at 05:35:14PM -0500, Blake wrote:
That does look like the issue being discussed.
It's a little alarming that the bug was reported against snv54 and is
still not fixed :(
Looks like the bug-report is out of sync.
I see that the bug has been fixed in B54. Here is
bcirvin,
you proposed "something to allow us to try to pull data from a failed pool."
Yes and no. 'Yes' as a pragmatic solution; 'no' for what ZFS was 'sold' to be:
the last filesystem mankind would need. It was conceived as a filesystem that
does not need recovery, due to its guaranteed
On February 12, 2009 1:44:34 PM -0800 bdebel...@intelesyscorp.com wrote:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6424510
...
Dropping a flush-cache command is just as bad as dropping a write.
Not that it matters, but it seems obvious that this is wrong or
anyway an
Uwe Dippel wrote:
We have seen some unfortunate miscommunication here, and misinterpretation.
This extends into differences of culture. One of the vocal people in here is
surely not 'Anti-xyz'; rather I sense his intense desire to further the
progress by pointing his finger at some potential
I'm rather tired of hearing this mantra.
[...]
"Every file system needs a repair utility"
Hey, wait a minute -- that's a mantra too!
I don't think there's actually any substantive disagreement here -- stating
that one doesn't need a separate program called /usr/sbin/fsck is not the
same as
Mario Goebbels wrote:
The good news is that ZFS is getting popular enough on consumer-grade
hardware. The bad news is that said hardware has a different set of
failure modes, so it takes a bit of work to become resilient to them.
This is pretty high on my short list.
One
g == Gino dandr...@gmail.com writes:
g we lost many zpools with multimillion$ EMC, Netapp and
g HDS arrays just simulating fc switches power fails.
g The problem is that ZFS can't properly recover itself.
I don't like what you call ``the problem''---I think it assumes too
much.
This is CR 6667683
http://bugs.opensolaris.org/view_bug.do?bug_id=6667683
I think that would solve 99% of ZFS corruption problems!
Based on the reports I've seen to date, I think you're right.
Is there any EDT for this patch?
Well, because of this thread, this has gone from on my list
[Still waiting for answers on my earlier questions]
So I take it that ZFS solves one problem perfectly well: Integrity of data
blocks. It uses checksums and atomic writes for this purpose, and as far as I could
follow this list, nobody has ever had any problems in this respect.
However, it also - at
This is CR 6667683
http://bugs.opensolaris.org/view_bug.do?bug_id=6667683
I think that would solve 99% of ZFS corruption problems!
Based on the reports I've seen to date, I think you're right.
Is there any EDT for this patch?
Well, because of this thread, this has gone from
On 2/10/2009 3:37 PM, D. Eckert wrote:
(...)
Possibly so. But if you had that ufs/reiserfs on a LVM or on a RAID0
spanning removable drives, you probably wouldn't have been so lucky.
(...)
we are not talking about a RAID 5 array or an LVM. We are talking about a
single FS setup as a zpool over
On 2/10/2009 4:48 PM, Roman V. Shaposhnik wrote:
On Wed, 2009-02-11 at 09:49 +1300, Ian Collins wrote:
These posts do sound like someone who is blaming their parents after
breaking a new toy before reading the instructions.
It looks like there's a serious denial of the fact that bad
On Tue, 10 Feb 2009 21:43:00 PST
Uwe Dippel udip...@gmail.com wrote:
Back to where I started from, with some questions:
1. Can the relevant people confirm that drives might turn dead when
leaving a pool at unfortunate moments? Despite complete physical
integrity?
I have not experienced
On Tue, February 10, 2009 23:43, Uwe Dippel wrote:
1. Can the relevant people confirm that drives might turn dead when
leaving a pool at unfortunate moments? Despite complete physical
integrity? [I'd really appreciate an answer here, because this is what I
am starting to implement here:
On 11-Feb-09, at 10:08 AM, David Dyer-Bennet wrote:
On Tue, February 10, 2009 23:43, Uwe Dippel wrote:
1. Can the relevant people confirm that drives might turn dead when
leaving a pool at unfortunate moments? Despite complete physical
integrity? [I'd really appreciate an answer here,
On Tue, Feb 10, 2009 at 11:44 PM, Fredrich Maney fredrichma...@gmail.com wrote:
Ah... an illiterate AND idiotic bigot. Have you even read the manual
or *ANY* of the replies to your posts? *YOU* caused the situation that
resulted in your data being corrupted. Not Sun, not OpenSolaris, not
ZFS
Tim;
The proper procedure for ejecting a USB drive in Windows is to right
click the device icon and eject the appropriate listed device.
I've done this before without ejecting, and lost data.
My personal experience with ZFS is that it is a very reliable FS. I've
not lost data on it yet
(...)
Good. It looks like this thread can finally die. I received the
following in response to my message below:
(...)
I apologize that your eMail could not be delivered.
This is due to either: the mail server you use is considered a machine from a
dynamic IP pool, or your mail server is anywhere
On Wed, Feb 11, 2009 at 10:33 AM, Steven Sim unixan...@gmail.com wrote:
Tim;
The proper procedure for ejecting a USB drive in Windows is to right click
the device icon and eject the appropriate listed device.
I'm well aware of what the proper procedure is. My point is, I've done it
for
On Wed, 11 Feb 2009, David Dyer-Bennet wrote:
This all-or-nothing behavior of ZFS pools is kinda scary. Turns out I'd
rather have 99% of my data than 0% -- who knew? :-) I'd much rather have
100.00% than either of course, and I'm running ZFS with mirroring, and
doing regular backups, because
(...)
Ah... an illiterate AND idiotic bigot.
(...)
I apologize for my poor English. Yes, it's not my mother tongue, but I have no
doubt at all that this
discussion could be continued in German as well.
But just to make it clear:
Finally I did understand very well where I went wrong. But it
On Wed, 11 Feb 2009, Tim wrote:
All that and yet the fact remains: I've never ejected a USB drive from OS
X or Windows, I simply pull it and go, and I've never once lost data, or had
it become unrecoverable or even corrupted.
And yes, I do keep checksums of all the data sitting on them and
On 11-Feb-09, at 11:19 AM, Tim wrote:
...
And yes, I do keep checksums of all the data sitting on them and
periodically check it. So, for all of your ranting and raving, the
fact remains even a *crappy* filesystem like fat32 manages to
handle a hot unplug without any prior notice
On 2/11/2009 12:35 PM, Toby Thain wrote:
On 11-Feb-09, at 11:19 AM, Tim wrote:
...
And yes, I do keep checksums of all the data sitting on them and
periodically check it. So, for all of your ranting and raving, the
fact remains even a *crappy* filesystem like fat32 manages to handle
a hot
On Wed, February 11, 2009 11:21, Bob Friesenhahn wrote:
On Wed, 11 Feb 2009, Tim wrote:
All that and yet the fact remains: I've never ejected a USB drive from OS
X or Windows, I simply pull it and go, and I've never once lost data, or had
it become unrecoverable or even corrupted.
And
On Wed, February 11, 2009 11:35, Toby Thain wrote:
On 11-Feb-09, at 11:19 AM, Tim wrote:
...
And yes, I do keep checksums of all the data sitting on them and
periodically check it. So, for all of your ranting and raving, the
fact remains even a *crappy* filesystem like fat32 manages to
On Wed, February 11, 2009 10:49, Bob Friesenhahn wrote:
On Wed, 11 Feb 2009, David Dyer-Bennet wrote:
This all-or-nothing behavior of ZFS pools is kinda scary. Turns out I'd
rather have 99% of my data than 0% -- who knew? :-) I'd much rather have
100.00% than either of course, and I'm
On Wed, 11 Feb 2009, David Dyer-Bennet wrote:
Then again, I've never lost data during the learning period, nor on the
rare occasions where I just get it wrong. This is good; not quite
remembering to eject a USB memory stick is *so* easy.
With Windows and OS-X, it is up to the *user* to
On Wed, February 11, 2009 12:23, Bob Friesenhahn wrote:
On Wed, 11 Feb 2009, David Dyer-Bennet wrote:
Then again, I've never lost data during the learning period, nor on the
rare occasions where I just get it wrong. This is good; not quite
remembering to eject a USB memory stick is *so*
On February 11, 2009 12:21:03 PM -0600 David Dyer-Bennet d...@dd-b.net
wrote:
I've spent $2000 on hardware and, by now, hundreds of hours of my time
trying to get and keep a ZFS-based home NAS working. Because it's the
only affordable modern practice, my backups are on external drives (USB
On Wed, Feb 11, 2009 at 11:19 AM, Tim t...@tcsac.net wrote:
On Tue, Feb 10, 2009 at 11:44 PM, Fredrich Maney fredrichma...@gmail.com
wrote:
Ah... an illiterate AND idiotic bigot. Have you even read the manual
or *ANY* of the replies to your posts? *YOU* caused the situation that
resulted in
On February 11, 2009 2:07:47 AM -0800 Gino dandr...@gmail.com wrote:
I agree but I'd like to point out that the MAIN problem with ZFS is that
because of a corruption you'll lose ALL your data and there is no way to
recover it. Consider an example where you have 100TB of data and a fc
switch
David Dyer-Bennet wrote:
I've spent $2000 on hardware and, by now, hundreds of hours of my time
trying to get and keep a ZFS-based home NAS working.
Hundreds of hours doing what? I just plugged in the drives, built the
pool and left the box in a corner for the past couple of years. It's
On Wed, February 11, 2009 13:45, Ian Collins wrote:
David Dyer-Bennet wrote:
I've spent $2000 on hardware and, by now, hundreds of hours of my time
trying to get and keep a ZFS-based home NAS working.
Hundreds of hours doing what? I just plugged in the drives, built the
pool and left the
On Wed, Feb 11, 2009 at 11:46 AM, Kyle McDonald kmcdon...@egenera.com wrote:
Yep. I've never unplugged a USB drive on purpose, but I have left a drive
plugged into the docking station, hibernated Windows XP Professional,
undocked the laptop, and then woken it up later undocked. It routinely
On Wed, Feb 11, 2009 at 1:36 PM, Frank Cusack fcus...@fcusack.com wrote:
if you have 100TB of data, wouldn't you have a completely redundant
storage network -- dual FC switches on different electrical supplies,
etc. i've never designed or implemented a storage network before but
such
On February 11, 2009 3:02:48 PM -0600 Tim t...@tcsac.net wrote:
On Wed, Feb 11, 2009 at 1:36 PM, Frank Cusack fcus...@fcusack.com wrote:
if you have 100TB of data, wouldn't you have a completely redundant
storage network -- dual FC switches on different electrical supplies,
etc. i've never
On Wed, 11 Feb 2009, Tim wrote:
Right, except the OP stated he unmounted the filesystem in question, and it
was the *ONLY* one on the drive, meaning there is absolutely 0 chance of
there being pending writes. There's nothing to write to.
This is an interesting assumption leading to a wrong
On Wed, February 11, 2009 15:51, Frank Cusack wrote:
On February 11, 2009 3:02:48 PM -0600 Tim t...@tcsac.net wrote:
It's hardly uncommon for an entire datacenter to go down, redundant power
or not. When it does, if it means I have to restore hundreds of
terabytes if not petabytes from