On Wed, February 11, 2009 15:52, Bob Friesenhahn wrote:
On Wed, 11 Feb 2009, Tim wrote:
Right, except the OP stated he unmounted the filesystem in question, and it
was the *ONLY* one on the drive, meaning there is absolutely 0 chance of
there being pending writes. There's nothing to write
On Wed, 11 Feb 2009, David Dyer-Bennet wrote:
As a practical matter, it seems unreasonable to me that there would be
uncommitted data in the pool after some quite short period of time when
there's no new IO activity to the pool (not just the filesystem). 5 or 10
seconds, maybe? (Possibly
I need to disappoint you here: an LED that has been inactive for a few seconds
is a very bad indicator that there are no pending writes. I used to experience
this with a stick on Ubuntu, which was silent until the 'umount' and then
started to write for some 10 seconds.
On the other hand, you are spot-on w.r.t. 'umount'. Once
On 11-Feb-09, at 5:52 PM, David Dyer-Bennet wrote:
On Wed, February 11, 2009 15:52, Bob Friesenhahn wrote:
On Wed, 11 Feb 2009, Tim wrote:
Right, except the OP stated he unmounted the filesystem in question, and it
was the *ONLY* one on the drive, meaning there is absolutely 0 chance
On 11-Feb-09, at 7:16 PM, Uwe Dippel wrote:
I need to disappoint you here: an LED that has been inactive for a few
seconds is a very bad indicator that there are no pending writes. I used to
experience this with a stick on Ubuntu, which was silent until the 'umount'
and then started to write for some 10 seconds.
On the
Toby,
sad that you fall for the last resort of the marketing droids here. All
manufacturers (and there are only a few left) will sue the hell out of you if
you state that their drives don't 'sync'. And each and every drive I have ever
used did. So the talk about a distinct borderline between
On Wed, February 11, 2009 17:25, Bob Friesenhahn wrote:
Regardless, it seems that the ZFS problems with crummy hardware are
primarily due to the crummy hardware writing the data to the disk in
a different order than expected. ZFS expects that after a sync that
all pending writes are
On Wed, February 11, 2009 18:25, Toby Thain wrote:
Absolutely. You should never get actual corruption (inconsistency)
at any time *except* in the case Jeff Bonwick explained: i.e. faulty/
misbehaving hardware! (That's one meaning of always consistent on
disk.)
I think this is well
May I doubt that there are drives that don't 'sync'? That means you have a good
chance of corrupted data at a normal 'reboot'; or just at a 'umount' (without
considering ZFS here).
May I doubt the marketing drivel that you need to buy a USCSI or whatnot to have
functional 'sync' at a shutdown or
There is no substitute for cord-yank tests - many
and often. The
weird part is, the ZFS design team simulated
millions of them.
So the full explanation remains to be uncovered?
We simulated power failure; we did not simulate disks
that simply
blow off write ordering. Any disk that
On Mon, 09 Feb 2009 01:46:01 PST
D. Eckert cont...@desystems.cc wrote:
after working for 1 month with ZFS on 2 external USB drives I have
experienced that the all-new zfs filesystem is the most unreliable
FS I have ever seen.
Since working with zfs, I have lost data from:
1 80 GB
On Mon, 09 Feb 2009 01:46:01 PST
D. Eckert cont...@desystems.cc wrote:
after working for 1 month with ZFS on 2 external USB drives I have
experienced that the all-new zfs filesystem is the most unreliable
FS I have ever seen.
Since working with zfs, I have lost data from:
What filesystem likes it when disks are pulled out from a LIVE
filesystem? Try that on UFS and you're f** up too.
Pulling a disk from a live filesystem is the same as pulling the power
from the computer. All modern filesystems can handle that just fine.
UFS with logging on does not even need
The good news is that ZFS is getting popular enough on consumer-grade
hardware. The bad news is that said hardware has a different set of
failure modes, so it takes a bit of work to become resilient to them.
This is pretty high on my short list.
So does this basically mean zfs rolls-back
However, I just want to state a warning that ZFS is far from being what it
promises, and so far, from my sum of experience, I can't recommend at all
using zfs on a professional system.
Or, perhaps, you've given ZFS disks which are so broken that they are
really unusable; it
Jeff, what do you mean by disks that simply "blow off write ordering"?
My experience is that most enterprise disks are some flavor of SCSI, and
host SCSI drivers almost ALWAYS use simple queue tags, implying the
target is free to re-order the commands for performance. Are you talking
about something
YES! I recently discovered that VirtualBox apparently defaults to
ignoring flushes, which would, if true, introduce a failure mode
generally absent from real hardware (and eventually resulting in
consistency problems quite unexpected to the user who carefully
configured her journaled
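(If I read the VirtualBox documentation of that era correctly, flush handling can be re-enabled per disk with something like the line below; the VM name and LUN number are placeholders, and the exact extradata key is an assumption to verify against the manual before relying on it.)

  VBoxManage setextradata "MyVM" \
    "VBoxInternal/Devices/piix3ide/0/LUN#0/Config/IgnoreFlush" 0
  # 0 = honour the guest's flush requests; the default reportedly ignores them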
And again: Why should a 2-week-old Seagate HDD suddenly be damaged, if there
was no shock, hit or any other event like that?
I have no information about your particular situation, but you have to
remember that ZFS uncovers problems that otherwise go unnoticed. Just
personally on my private
on a UFS or reiserfs such errors could be corrected.
In general, UFS has zero capability to actually fix real corruption in
any reliable way.
What you normally do with fsck is repairing *expected* inconsistencies
that the file system was *designed* to produce in the event of e.g. a
sudden
On 10-Feb-09, at 1:03 PM, Charles Binford wrote:
Jeff, what do you mean by disks that simply "blow off write
ordering"?
My experience is that most enterprise disks are some flavor of
SCSI, and
host SCSI drivers almost ALWAYS use simple queue tags, implying the
target is free to re-order the
jb == Jeff Bonwick jeff.bonw...@sun.com writes:
jb We simulated power failure; we did not simulate disks that
jb simply blow off write ordering. Any disk that you'd ever
jb deploy in an enterprise or storage appliance context gets this
jb right.
Did you simulate power failure
g == Gino dandr...@gmail.com writes:
g we lost many zpools with multimillion$ EMC, Netapp and
g HDS arrays just simulating fc switches power fails.
g The problem is that ZFS can't properly recover itself.
I don't like what you call ``the problem''---I think it assumes too
ps == Peter Schuller peter.schul...@infidyne.com writes:
ps This is a recommendation I would give even when you purchase
ps non-cheap battery backed hardware RAID controllers (I won't
ps mention any names or details to avoid bashing as I'm sure it's
ps not specific to the
(..)
Dave made a mistake pulling out the drives without exporting them first.
For sure also UFS/XFS/EXT4/.. doesn't like that kind of operation, but only
with ZFS you risk losing ALL your data.
that's the point!
(...)
I did that many times after performing the umount cmd with ufs/reiserfs
I disagree, see posting above.
ZFS just accepts it 2 or 3 times. After that, your data has passed away to
nirvana for no reason.
And it should be legal to have an external USB drive with ZFS. With all
respect, why should a user always care for redundancy, e.g. set up a mirror on
a single
On 2/10/2009 2:50 PM, D. Eckert wrote:
(..)
Dave made a mistake pulling out the drives without exporting them first.
For sure also UFS/XFS/EXT4/.. doesn't like that kind of operation, but only
with ZFS you risk losing ALL your data.
that's the point!
(...)
I did that many times after
(...)
If anyone asks questions, they get no actual information, but a huge
amount of blame heaped on the sysadmin. Your post is a great example
of the typical way this problem is handled because it does both: deny
information and blame the sysadmin. Though I'm really picking on you
way too much
On Feb 9, 2009, at 7:06 PM, Jeff Bonwick wrote:
There is no substitute for cord-yank tests - many and often. The
weird part is, the ZFS design team simulated millions of them.
So the full explanation remains to be uncovered?
We simulated power failure; we did not simulate disks that simply
blow
On 2/10/2009 2:54 PM, D. Eckert wrote:
I disagree, see posting above.
ZFS just accepts it 2 or 3 times. After that, your data has passed away to
nirvana for no reason.
And it should be legal to have an external USB drive with ZFS. With all
respect, why should a user always care for
Hi,
I've followed this thread a bit and I think there are some correct
points on every side of the discussion, but here I see a misconception (at
least I think it is one):
D. Eckert schrieb:
(..)
Dave made a mistake pulling out the drives without exporting them first.
For sure also UFS/XFS/EXT4/..
On Tue, Feb 10, 2009 at 12:46 PM, Miles Nordin car...@ivy.net wrote:
It's likely other filesystems are affected by ``the problem'' as I
define it, just much less so. If that's the case, it'd be much better
IMHO to fix the real problem once and for all, and find it so that it
stays fixed,
rs == Roman Shaposhnik r...@sun.com writes:
rs1. as a forensics tool that would let you retrieve as much
rs information as possible from a physically ill device
a nit, but I've never found fsck alone useful for this. Maybe for ``a
filesystem trashed by bad RAM/CPU/bugs'' it is
(...)
You don't move a pool with 'zfs umount', that only unmounts a single zfs
filesystem within a pool, but the pool is still active.. 'zpool export'
releases the pool from the OS, then 'zpool import' on the other machine.
(...)
with all respect: I never read such illogical, ridiculous ...
I
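(For the record, the workflow the quoted advice describes is roughly the following; 'usbhdd1' is the pool name used elsewhere in this thread, and the import step runs on the machine the disk is moved to.)

  zpool export usbhdd1    # on the source machine: flushes and releases the pool
  # ...physically move the USB disk to the other machine...
  zpool import usbhdd1    # on the destination machine: takes ownership of the pool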
(...)
Possibly so. But if you had that ufs/reiserfs on a LVM or on a RAID0
spanning removable drives, you probably wouldn't have been so lucky.
(...)
we are not talking about a RAID 5 array or an LVM. We are talking about a
single FS setup as a zpool over the entire available disk space on an
D. Eckert wrote:
(...)
Possibly so. But if you had that ufs/reiserfs on a LVM or on a RAID0
spanning removable drives, you probably wouldn't have been so lucky.
(...)
we are not talking about a RAID 5 array or an LVM. We are talking about a
single FS setup as a zpool over the entire available
D. Eckert wrote:
(...)
You don't move a pool with 'zfs umount', that only unmounts a single zfs
filesystem within a pool, but the pool is still active.. 'zpool export'
releases the pool from the OS, then 'zpool import' on the other machine.
(...)
with all respect: I never read such a non
On 10-Feb-09, at 1:05 PM, Peter Schuller wrote:
YES! I recently discovered that VirtualBox apparently defaults to
ignoring flushes, which would, if true, introduce a failure mode
generally absent from real hardware (and eventually resulting in
consistency problems quite unexpected to the user
The good news is that ZFS is getting popular enough on consumer-grade
hardware. The bad news is that said hardware has a different set of
failure modes, so it takes a bit of work to become resilient to them.
This is pretty high on my short list.
One thing I'd like to see is an _easy_ option
DE - could you please post the output of your 'zpool umount usbhdd1'
command? I believe the output will prove useful to the point being
discussed below.
Charles
D. Eckert wrote:
(...)
You don't move a pool with 'zfs umount', that only unmounts a single zfs
filesystem within a pool, but the
On Tue, Feb 10, 2009 at 12:31:05PM -0800, D. Eckert wrote:
(...)
You don't move a pool with 'zfs umount', that only unmounts a single zfs
filesystem within a pool, but the pool is still active.. 'zpool export'
releases the pool from the OS, then 'zpool import' on the other machine.
(...)
I think you are not reading carefully enough, and I
can trace from your reply a typically American
arrogant behavior.
WE, THE PROUDEST AND infallibles on earth DID NEVER MAKE
a mistake. It is just the stupid user who did not read the
fucking manual carefully enough.
Hello? Did you
ps This is a recommendation I would give even when you purchase
ps non-cheap battery backed hardware RAID controllers (I won't
ps mention any names or details to avoid bashing as I'm sure it's
ps not specific to the particular vendor I had problems with most
ps recently).
I'll make a meta comment on the thread itself, not on the ZFS issue.
There is more bashing and more broad accusation than would normally happen in a
professional usage situation. Maybe a board admin can run a script on the IP
addresses logged and find a more subtle meaning... I don't know, I'm
if you are interested in my IP Address: no problem:
83.236.164.80
it just confirms my assumption that it's best and easier for someone - if
he's in the right position - to press a big plaster over someone's mouth to
avoid hearing legitimate criticism instead of discussing the problem to
de == D Eckert cont...@desystems.cc writes:
de from your reply a typically American arrogant behavior.
de WE, THE PROUDEST AND infallibles on earth DID NEVER MAKE a
de mistake.
Maybe I should speak up since I defended you at the start. To my
view:
REASONABLE:
* expect that
Mario Goebbels wrote:
The good news is that ZFS is getting popular enough on consumer-grade
hardware. The bad news is that said hardware has a different set of
failure modes, so it takes a bit of work to become resilient to them.
This is pretty high on my short list.
One thing I'd like
On Tue, 10 Feb 2009 13:14:57 PST
D. Eckert cont...@desystems.cc wrote:
Hello? Did you already recognize the sound of the shot??
I learned my lesson well, and in future this won't happen
again, because we will no longer use zfs, but we have a legal
interest, to get back our data we stored in
Roman V. Shaposhnik wrote:
On Wed, 2009-02-11 at 09:49 +1300, Ian Collins wrote:
These posts do sound like someone who is blaming their parents after
breaking a new toy before reading the instructions.
It looks like there's a serious denial of the fact that bad things
do happen to
On February 10, 2009 1:14:57 PM -0800 D. Eckert cont...@desystems.cc
wrote:
I hope I've made myself very clear.
Very. Rarely has the adage "what one says reveals more about the
speaker than the subject" been more evident.
And the more postings we have to read in a tone like yours, the more we
We have seen some unfortunate miscommunication here, and misinterpretation.
This extends into differences of culture. One of the vocal people in here is
surely not 'Anti-xyz'; rather I sense his intense desire to further the
progress by pointing his finger to some potential wounds.
May I repeat
On Tue, Feb 10, 2009 at 4:14 PM, D. Eckert cont...@desystems.cc wrote:
I think you are not reading carefully enough, and I
can trace from your reply a typically American
arrogant behavior.
WE, THE PROUDEST AND infallibles on earth DID NEVER MAKE
a mistake. It is just the stupid user who did
Good. It looks like this thread can finally die. I received the
following in response to my message below:
This is an automatically generated Delivery Status Notification
Delivery to the following recipient failed permanently:
cont...@desystems.cc
Technical details of permanent failure:
In other words:
Don't feed the troll.
Greets
Jan Dreyer
zfs-discuss-boun...@opensolaris.org wrote :
Good. It looks like this thread can finally die. I received the
following in response to my message below:
This is an automatically generated Delivery Status Notification
Delivery
Fsck can only repair known faults; known discrepancies in the meta data.
Since ZFS doesn't have such known discrepancies, there's nothing to repair.
I'm rather tired of hearing this mantra.
If ZFS detects an error in part of its data structures, then there is clearly
something to repair.
Hi,
after working for 1 month with ZFS on 2 external USB drives I have experienced
that the all-new zfs filesystem is the most unreliable FS I have ever seen.
Since working with zfs, I have lost data from:
1 80 GB external drive
1 1 Terabyte external drive
It is a shame that zfs has
On 09 February, 2009 - D. Eckert sent me these 1,5K bytes:
Hi,
after working for 1 month with ZFS on 2 external USB drives I have
experienced that the all-new zfs filesystem is the most unreliable FS I have
ever seen.
Since working with zfs, I have lost data from:
1 80 GB
However, I just want to state a warning that ZFS is far from being what it
promises, and so far, from my sum of experience, I can't recommend at all
using zfs on a professional system.
Or, perhaps, you've given ZFS disks which are so broken that they are
really unusable; it is USB,
Hi Caspar,
thanks for your reply.
I completely disagree with your opinion that it is USB. And it seems as well
that I am not the only one having this opinion regarding ZFS.
However, the hardware used is:
1 Sun Fire 280R Solaris 10 generic 10-08 latest updates
1 Lenovo T61 Notebook running Solaris
D. Eckert wrote:
Hi Caspar,
thanks for your reply.
I completely disagree with your opinion that it is USB. And it seems as well
that I am not the only one having this opinion regarding ZFS.
However, the hardware used is:
1 Sun Fire 280R Solaris 10 generic 10-08 latest updates
1 Lenovo T61
Unmount is not sufficient.
Well, umount is not the right way to do it, so he'd be simulating a
power-loss/system-crash. That still doesn't explain why massive data loss
would occur ? I would understand the last txg being lost, but 90% according
to OP ?!
Well, umount is not the right way to do it, so he'd be simulating a
power-loss/system-crash. That still doesn't explain why massive data loss
would occur ? I would understand the last txg being lost, but 90% according
to OP ?!
On USB or? I think he was trying to properly unmount the USB
On Mon, 09 Feb 2009 03:10:21 -0800 (PST)
D. Eckert cont...@desystems.cc wrote:
ok, so far so good.
but how can I get my pool up and running
I can't help you with this bit
bash-3.00# zpool status -xv usbhdd1
Pool: usbhdd1
Status: ONLINE
State: On at least one device there is
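(A hedged triage sketch rather than a recovery recipe; these are the usual first things to look at on Solaris 10 when a pool starts reporting errors.)

  zpool status -v usbhdd1   # lists files with unrecoverable errors, if any
  fmdump -eV | tail -40     # recent fault-management error telemetry
  zpool clear usbhdd1       # reset error counters once the device is stable
  zpool scrub usbhdd1       # re-read and verify every block in the pool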
on a UFS or reiserfs such errors could be corrected.
I think some of these people are assuming your hard drive is broken. I'm not
sure what you're assuming, but if the hard drive is broken, I don't think ANY
file system can do anything about that.
At best, if the disk was in a RAID 5 array,
bash-3.00# zfs mount usbhdd1
cannot mount 'usbhdd1': E/A-Fehler (I/O error)
bash-3.00#
Why is there an I/O error?
Is there any information logged to /var/adm/messages when this
I/O error is reported? E.g. timeout errors for the USB storage device?
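(Along the lines of that question, something like the following would show recent transport or timeout complaints; the grep pattern is only a guess at the relevant keywords.)

  egrep -i 'usb|scsi|timeout|transport|retryable' /var/adm/messages | tail -30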
James,
on a UFS or reiserfs such errors could be corrected.
That's not true. That depends on the nature of the error.
I've seen quite a few problems on UFS with corrupted file contents;
such filesystems always are clean. Yet the filesystems are corrupted.
And no tool can fix those
too many words wasted, but not a single word on how to restore the data.
I have read the man pages carefully. But again: there's nothing said that on
USB drives 'zfs umount pool' is not allowed.
So how on earth should a simple user know that, if he knows that filesystems
properly unmounted using
Hi Dave,
Having read through the whole thread, I think there are several things
that could all be adding to your problems.
At least some of which are not related to ZFS at all.
You mentioned the ZFS docs not warning you about this, and yet I know
the docs explicitly tell you that:
1. While a
First: It sucks to lose data. That's very uncool...BUT
I don't know how ZFS should be able to recover data with no mirror to copy
from. If you have some kind of a RAID level you're easily able to recover your
data. I saw that several times. Without any problems and even with nearly no
too many words wasted, but not a single word on how to restore the data.
I have read the man pages carefully. But again: there's nothing said that on
USB drives 'zfs umount pool' is not allowed.
You cannot unmount a pool.
You can only unmount a filesystem.
That the default name of the pool's
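(To illustrate the distinction being made: the top-level dataset shares the pool's name, which is why 'zfs umount usbhdd1' looks like it unmounts the pool but only unmounts that dataset, while the pool itself stays imported until exported. A rough sketch, not output from the poster's system.)

  zfs umount usbhdd1                            # unmounts the top-level dataset only
  zfs list -o name,mounted,mountpoint usbhdd1   # 'mounted' now reads 'no'
  zpool list usbhdd1                            # ...but the pool is still imported
  zpool export usbhdd1                          # this is what actually releases it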
Full of sympathy, I still feel you might as well relax a bit.
It is the XkbVariant that starts X without any chance to return.
But look at the many boot stops after the third line, and from my side, the
non-working network settings, even without nwam.
The worst part was a so-called engineer
D. Eckert wrote:
too many words wasted, but not a single word on how to restore the data.
I have read the man pages carefully. But again: there's nothing said that on
USB drives 'zfs umount pool' is not allowed.
It is allowed. But it's not enough. You need to read both the 'zpool'
and
too many words wasted, but not a single word on how to restore the data.
I have read the man pages carefully. But again: there's nothing said
that on USB drives 'zfs umount pool' is not allowed.
You misunderstand. This particular point has nothing to do with USB;
it's the same for any ZFS
Kyle McDonald wrote:
D. Eckert wrote:
too many words wasted, but not a single word on how to restore the data.
I have read the man pages carefully. But again: there's nothing said that
on USB drives 'zfs umount pool' is not allowed.
It is allowed. But it's not enough. You need to
On Mon, 9 Feb 2009, D. Eckert wrote:
A good practice would be to care first for proper documentation.
There's nothing stated in the man pages that, if USB zpools are used,
zfs mount/unmount is NOT recommended and zpool export
should be used instead.
I have been using USB mirrored
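(Since mirrored USB pools keep coming up: creating one is a one-liner; the device names below are placeholders for whatever the two USB disks enumerate as on a given system.)

  zpool create usbmirror mirror c2t0d0 c3t0d0   # two-way mirror across the USB disks
  zpool status usbmirror                        # confirm both sides are ONLINE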
Seagate7,
You are not using ZFS correctly. You have misunderstood how it is used. If you
dont follow the manual (which you havent) then any filesystem will cause
problems and corruption, even ZFS or ntfs or FAT32, etc. You must use ZFS
correctly. Start by reading the manual.
For ZFS to be
* Orvar Korvar (knatte_fnatte_tja...@yahoo.com) wrote:
Seagate7,
You are not using ZFS correctly. You have misunderstood how it is
used. If you don't follow the manual (which you haven't) then any
filesystem will cause problems and corruption, even ZFS or ntfs or
FAT32, etc. You must use ZFS
ok == Orvar Korvar knatte_fnatte_tja...@yahoo.com writes:
ok You are not using ZFS correctly.
ok You have misunderstood how it is used. If you don't follow the
ok manual (which you haven't) then any filesystem will cause
ok problems and corruption, even ZFS or ntfs or FAT32, etc.
On 9-Feb-09, at 6:17 PM, Miles Nordin wrote:
ok == Orvar Korvar knatte_fnatte_tja...@yahoo.com writes:
ok You are not using ZFS correctly.
ok You have misunderstood how it is used. If you don't follow the
ok manual (which you haven't) then any filesystem will cause
ok
There is no substitute for cord-yank tests - many and often. The
weird part is, the ZFS design team simulated millions of them.
So the full explanation remains to be uncovered?
We simulated power failure; we did not simulate disks that simply
blow off write ordering. Any disk that you'd