On Dec 14, 2007 1:12 AM, can you guess?
[EMAIL PROTECTED] wrote:
yes. far rarer and yet home users still see
them.
I'd need to see evidence of that for current
hardware.
What would constitute evidence? Do anecdotal tales
from home users
qualify? I have two disks (and one
...
though I'm not familiar with any recent examples in
normal desktop environments
One example found during early use of zfs in Solaris
engineering was
a system with a flaky power supply.
It seemed to work just fine with ufs but when zfs was
installed the
sata drives started to
the next obvious question is, what is
causing the ZFS checksum errors? And (possibly of
some help in answering that question) is the disk
seeing CRC transfer errors (which show up in its
SMART data)?
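The end-to-end check being discussed here is easy to sketch: store a checksum per block at write time, then re-verify on read or during a scrub. The block size and hash below are illustrative only, not ZFS's actual on-disk parameters:

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative; ZFS uses variable block sizes

def checksum_blocks(data: bytes) -> list[str]:
    """Per-block SHA-256 checksums, as recorded at write time."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def scrub(data: bytes, stored: list[str]) -> list[int]:
    """Return indices of blocks that no longer match their stored checksums."""
    return [i for i, c in enumerate(checksum_blocks(data)) if c != stored[i]]

original = bytes(2 * BLOCK_SIZE)
stored = checksum_blocks(original)

# Simulate silent corruption: flip one bit in the second block. The disk's
# own CRC would not catch this if the flip happened before the data reached
# the drive, but the stored checksum does.
damaged = bytearray(original)
damaged[BLOCK_SIZE + 100] ^= 0x01

bad_blocks = scrub(bytes(damaged), stored)
```

A checksum mismatch tells you *that* a block is bad, not *why* - which is exactly why the SMART/CRC question above is the right follow-up.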
The memory is ECC in this machine, and Memtest passed
it for five
days. The disk was
Hello can,
Thursday, December 13, 2007, 12:02:56 AM, you wrote:
cyg On the other hand, there's always the
possibility that someone
cyg else learned something useful out of this. And
my question about
To be honest - there's basically nothing useful in
the thread,
perhaps except one
Would you two please SHUT THE F$%K UP.
Just for future reference, if you're attempting to squelch a public
conversation it's often more effective to use private email to do it rather
than contribute to the continuance of that public conversation yourself.
Have a nice day!
- bill
Are there benchmarks somewhere showing a RAID10
implemented on an LSI card with, say, 128MB of cache
being beaten in terms of performance by a similar
zraid configuration with no cache on the drive
controller?
Somehow I don't think they exist. I'm all for data
scrubbing, but this
...
when the impact of an unrecoverable single
bit error is not just
1 bit but the entire file, or corruption of an entire
database row (etc.),
those small and infrequent errors are an extremely
big deal.
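The "one flipped bit ruins the whole file" effect is easy to demonstrate with any compressed format; here zlib stands in for a compressed photo or database page (the payload is made up):

```python
import zlib

payload = b"row: id=42, balance=1000.00; " * 200   # made-up stand-in data
packed = zlib.compress(payload)

# Flip a single bit in the middle of the compressed stream.
damaged = bytearray(packed)
damaged[len(damaged) // 2] ^= 0x01

# One bad bit does not cost one bit of data: decompression either fails
# outright or yields something other than the original payload.
try:
    intact = zlib.decompress(bytes(damaged)) == payload
except zlib.error:
    intact = False
```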
You are confusing unrecoverable disk errors (which are rare but orders of
...
If the RAID card scrubs its disks
A scrub without checksum puts a huge burden on disk
firmware and
error reporting paths :-)
Actually, a scrub without checksum places far less burden on the disks and
their firmware than ZFS-style scrubbing does, because it merely has to scan the
Great questions.
1) First issue relates to the überblock. Updates to
it are assumed to be atomic, but if the replication
block size is smaller than the überblock then we
can't guarantee that the whole überblock is
replicated as an entity. That could in theory result
in a corrupt überblock
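For what it's worth, ZFS hedges against exactly this by keeping multiple checksummed überblock copies and, at import, activating the newest copy whose checksum verifies - a torn write just invalidates one candidate. A minimal sketch (the dict layout is invented for illustration):

```python
import hashlib

def make_uberblock(txg: int, body: bytes) -> dict:
    """One überblock copy: transaction-group number, payload, self-checksum."""
    digest = hashlib.sha256(txg.to_bytes(8, "big") + body).hexdigest()
    return {"txg": txg, "body": body, "sum": digest}

def is_valid(ub: dict) -> bool:
    digest = hashlib.sha256(ub["txg"].to_bytes(8, "big") + ub["body"]).hexdigest()
    return digest == ub["sum"]

def active_uberblock(ring: list[dict]) -> dict:
    """Pick the highest-txg copy that checksums correctly, skipping torn ones."""
    return max((ub for ub in ring if is_valid(ub)), key=lambda ub: ub["txg"])

ring = [make_uberblock(10, b"old state"), make_uberblock(11, b"new state")]
ring[1]["body"] = b"torn"     # simulate a partially replicated newest copy
chosen = active_uberblock(ring)   # falls back to the older, intact copy
```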
...
Now it seems to me that without parity/replication,
there's not much
point in doing the scrubbing, because you could
just wait for the error
to be detected when someone tries to read the data
for real. It's
only if you can repair such an error (before the
data is needed) that
(apologies if this gets posted twice - it disappeared the first time, and it's
not clear whether that was intentional)
Hello can,
Tuesday, December 11, 2007, 6:57:43 PM, you wrote:
Monday, December 10, 2007, 3:35:27 AM, you wrote:
cyg and it
made them slower
cyg That's the second time you've claimed that, so you'll really at
cyg least have to describe *how* you measured this even if the
cyg detailed
...
Bill - I don't think there's a point in continuing
that discussion.
I think you've finally found something upon which we can agree. I still
haven't figured out exactly where on the stupid/intellectually dishonest
spectrum you fall (lazy is probably out: you have put some effort in to
Monday, December 10, 2007, 3:35:27 AM, you wrote:
cyg and it
made them slower
cyg That's the second time you've claimed that, so you'll really at
cyg least have to describe *how* you measured this even if the
cyg detailed results of those measurements may be lost in the mists of time.
...
I remember trying to help customers move
their
applications from
TOPS-20 to VMS, back in the early 1980s, and
finding
that the VMS I/O
capabilities were really badly lacking.
Funny how that works: when you're not familiar
with something, you often mistake your own
why don't you put your immense experience and
knowledge to contribute
to what is going to be
the next and only filesystems in modern operating
systems,
Ah - the pungent aroma of teenage fanboy wafts across the Net.
ZFS is not nearly good enough to become what you suggest above, nor is it
[EMAIL PROTECTED] wrote:
Darren,
Do you happen to have any links for this? I
have not seen anything
about NTFS and CAS/dedupe besides some of the third
party apps/services
that just use NTFS as their backing store.
Single Instance Storage is what Microsoft uses to
refer to
from the description here
http://www.djesys.com/vms/freevms/mentor/rms.html
so who cares here ?
RMS is not a filesystem, but more a CAS type of data
repository
Since David begins his description with the statement RMS stands for Record
Management Services. It is the underlying file
Yet another prime example.
Ah - yet another brave denizen (and top-poster) who's more than happy to dish
it out but squeals for administrative protection when receiving a response in
kind.
The fact that your pleas seem to be going unanswered actually reflects rather
well on whoever is
can you run a database on RMS?
As well as you could on most Unix file systems. And you've been able to do so
for almost three decades now (whereas features like asynchronous and direct I/O
are relative newcomers in the Unix environment).
I guess its not suited
And you guess wrong: that's
can you guess? wrote:
can you run a database on RMS?
As well as you could on most Unix file systems.
And you've been able to do so for almost three
decades now (whereas features like asynchronous and
direct I/O are relative newcomers in the Unix
environment).
You have me at a disadvantage here, because I'm
not
even a Unix (let alone Solaris and Linux)
aficionado.
But don't Linux snapshots in conjunction with
rsync
(leaving aside other possibilities that I've never
heard of) provide rather similar capabilities
(e.g.,
incremental backup
I keep getting ETOOMUCHTROLL errors thrown while
reading this list, is
there a list admin that can clean up the mess? I
would hope that repeated
personal attacks could be considered grounds for
removal/blocking.
Actually, most of your more unpleasant associates here seem to suffer
Once again, profuse apologies for having taken so long (well over 24 hours by
now - though I'm not sure it actually appeared in the forum until a few hours
after its timestamp) to respond to this.
can you guess? wrote:
Primarily its checksumming features, since other
open source solutions
Allowing a filesystem to be rolled back without unmounting it sounds unwise,
given the potentially confusing effect on any application with a file currently
open there.
And if a user can't roll back their home directory filesystem, is that so bad?
Presumably they can still access snapshot
So name these mystery alternatives that come anywhere
close to the protection,
If you ever progress beyond counting on your fingers you might (with a lot of
coaching from someone who actually cares about your intellectual development)
be able to follow Anton's recent explanation of this
is worth a visit.
On Dec 5, 2007, at 19:42, bill todd - aka can you
guess? wrote:
what are you terming as ZFS' incremental risk
reduction? ..
(seems like a leading statement toward a
particular assumption)
Primarily its checksumming features, since other
open source
solutions
can you guess? wrote:
There aren't free alternatives in linux or freebsd
that do what zfs does, period.
No one said that there were: the real issue is
that there's not much reason to care, since the
available solutions don't need to be *identical* to
offer *comparable
(Can we
declare this thread
dead already?)
Many have already tried, but it seems to have a great deal of staying power.
You, for example, have just contributed to its continued vitality.
Others seem to care.
*identical* to offer *comparable* value (i.e., they
each have
different
can you guess? wrote:
There aren't free alternatives in linux or freebsd
that do what zfs does, period.
No one said that there were: the real issue is
that there's not much reason to care, since the
available solutions don't need to be *identical* to
offer *comparable* value
I
suspect ZFS will change that game in the future.
In
particular for someone doing lots of editing,
snapshots can help recover from user error.
Ah - so now the rationalization has changed to
snapshot support.
Unfortunately for ZFS, snapshot support is pretty
commonly available
my personal-professional data are important (this is
my valuation, and it's an assumption you can't
dispute).
Nor was I attempting to: I was trying to get you to evaluate ZFS's incremental
risk reduction *quantitatively* (and if you actually did so you'd likely be
surprised at how little
I was trying to get you
to evaluate ZFS's
incremental risk reduction *quantitatively* (and if
you actually
did so you'd likely be surprised at how little
difference it makes
- at least if you're at all rational about
assessing it).
ok .. i'll bite since there's no ignore
he isn't being
paid by NetApp.. think bigger
O frabjous day! Yet *another* self-professed psychic, but one whose internal
voices offer different counsel.
While I don't have to be psychic myself to know that they're *all* wrong
(that's an advantage of fact-based rather than faith-based
...
Hi bill, only a question:
I'm an ex linux user migrated to solaris for zfs
and
its checksumming;
So the question is: do you really need that
feature (please
quantify that need if you think you do), or do you
just like it
because it makes you feel all warm and safe?
On Tue, 4 Dec 2007, Stefano Spinucci wrote:
On 11/7/07, can you guess?
[EMAIL PROTECTED]
wrote:
However, ZFS is not the *only* open-source
approach
which may allow that to happen, so the real
question
becomes just how it compares with equally
inexpensive
current and potential
Literacy has nothing to do with the glaringly obvious
BS you keep spewing.
Actually, it's central to the issue: if you were capable of understanding what
I've been talking about (or at least sufficiently humble to recognize the
depths of your ignorance), you'd stop polluting this forum with
Now, not being a psychic myself, I can't state
with
authority that Stefano really meant to ask the
question that he posed rather than something else.
In retrospect, I suppose that some of his
surrounding phrasing *might* suggest that he was
attempting (however unskillfully) to twist
I suppose we're all just wrong.
By George, you've got it!
- bill
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Your response here appears to refer to a different post in this thread.
I never said I was a typical consumer.
Then it's unclear how your comment related to the material which you quoted
(and hence to which it was apparently responding).
If you look around photo forums, you'll see an
And some results (for OLTP workload):
http://przemol.blogspot.com/2007/08/zfs-vs-vxfs-vs-ufs-on-scsi-array.html
While I was initially hardly surprised that ZFS offered only 11% - 15% of the
throughput of UFS or VxFS, a quick glance at Filebench's OLTP workload seems to
indicate that it's
On 11/7/07, can you guess?
[EMAIL PROTECTED]
wrote:
However, ZFS is not the *only* open-source
approach
which may allow that to happen, so the real
question
becomes just how it compares with equally
inexpensive
current and potential alternatives (and that would
make
We will be using Cyrus to store mail on 2540 arrays.
We have chosen to build 5-disk RAID-5 LUNs in 2
arrays which are both connected to same host, and
mirror and stripe the LUNs. So a ZFS RAID-10 set
composed of 4 LUNs. Multi-pathing also in use for
redundancy.
Sounds good so far: lots
Any reason why you are using a mirror of RAID-5
LUNs?
Some people aren't willing to run the risk of a double failure - especially
when recovery from a single failure may take a long time. E.g., if you've
created a disaster-tolerant configuration that separates your two arrays and a
fire
If it's just performance you're after for small
writes, I wonder if you've considered putting the ZIL
on an NVRAM card? It looks like this can give
something like a 20x performance increase in some
situations:
http://blogs.sun.com/perrin/entry/slog_blog_or_blogging_on
That's certainly
[Zombie thread returns from the grave...]
Getting back to 'consumer' use for a moment,
though,
given that something like 90% of consumers entrust
their PC data to the tender mercies of Windows, and
a
large percentage of those neither back up their
data,
nor use RAID to guard against
Hi Bill,
...
lots of small files in a
largish system with presumably significant access
parallelism makes RAID-Z a non-starter,
Why does lots of small files in a largish system
with presumably
significant access parallelism make RAID-Z a
non-starter?
thanks,
max
Every ZFS block in
We are running Solaris 10u4; is the log option in
there?
Someone more familiar with the specifics of the ZFS releases will have to
answer that.
If this ZIL disk also goes dead, what is the failure
mode and recovery option then?
The ZIL should at a minimum be mirrored. But since that
I think the point of dual battery-backed controllers
is
that data should never be lost. Am I wrong?
That depends upon exactly what effect turning off the ZFS cache-flush mechanism
has. If all data is still sent to the controllers as 'normal' disk writes and
they have no concept of, say,
Bill, you have a long-winded way of saying I don't
know. But thanks for elucidating the possibilities.
Hmmm - I didn't mean to be *quite* as noncommittal as that suggests: I was
trying to say (without intending to offend) FOR GOD'S SAKE, MAN: TURN IT BACK
ON!, and explaining why (i.e.,
That depends upon exactly what effect turning off
the
ZFS cache-flush mechanism has.
The only difference is that ZFS won't send a
SYNCHRONIZE CACHE command at the end of a transaction
group (or ZIL write). It doesn't change the actual
read or write commands (which are always sent as
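At the application level, the analogous distinction is a plain write() versus write() plus fsync(): the flush is what makes the data durable rather than merely queued. A small sketch (whether the device itself honors the flush is platform- and hardware-dependent):

```python
import os
import tempfile

# write() alone may leave data in volatile caches; fsync() is the
# application-level analogue of ending a transaction group with a
# SYNCHRONIZE CACHE command. Disabling it trades durability for speed.
fd, path = tempfile.mkstemp()
os.write(fd, b"txg payload")
os.fsync(fd)     # ask the OS (and, where honored, the device) to persist it
os.close(fd)

with open(path, "rb") as f:
    content = f.read()
os.remove(path)
```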
In order to be reasonably representative of a real-world situation, I'd suggest
the following additions:
1) create a large file (bigger than main memory) on
an empty ZFS pool.
1a. The pool should include entire disks, not small partitions (else seeks
will be artificially short).
1b. The
...
This needs to be proven with a reproducible,
real-world workload before it
makes sense to try to solve it. After all, if we
cannot measure where
we are,
how can we prove that we've improved?
Ah - Tests Measurements types: you've just gotta love 'em.
Wife: Darling, is there really
I'm going to combine three posts here because they all involve jcone:
First, as to my message heading:
The 'search forum' mechanism can't find his posts under the 'jcone' name (I was
curious, because they're interesting/strange, depending on how one looks at
them). I've also noticed (once in
The B-trees I'm used to tree divide in arbitrary
places across the whole
key, so doing partial-key queries is painful.
While the b-trees in DEC's Record Management Services (RMS) allowed
multi-segment keys, they treated the entire key as a byte-string as far as
prefix searches went (i.e.,
...
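Treating the whole (possibly multi-segment) key as one byte string is what makes prefix queries cheap: a sorted index answers them with two binary searches. A toy sketch (the keys and the upper-bound sentinel are invented):

```python
from bisect import bisect_left

def prefix_matches(keys: list[bytes], prefix: bytes) -> list[bytes]:
    """All keys starting with `prefix` in a sorted byte-string index."""
    lo = bisect_left(keys, prefix)
    # Illustrative upper bound: correct as long as no real key extends the
    # prefix with 16 consecutive 0xff bytes.
    hi = bisect_left(keys, prefix + b"\xff" * 16)
    return keys[lo:hi]

# Multi-segment keys flattened to byte strings (\x00 separates segments).
keys = sorted([b"alpha\x00one", b"alpha\x00two", b"beta\x00one", b"gamma"])
hits = prefix_matches(keys, b"alpha")
```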
My understanding of ZFS (in short: an upside down
tree) is that each block is referenced by its
parent. So regardless of how many snapshots you take,
each block is only ever referenced by one other, and
I'm guessing that the pointer and checksum are both
stored there.
If that's the
...
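That "upside down tree" reading matches the Merkle-tree idea: each parent records a checksum covering its children, so corruption anywhere below changes what the root computes. A minimal sketch (recomputing on demand rather than storing pointers on disk, purely for illustration):

```python
import hashlib

class Node:
    """Each block is referenced by its parent, which also records a checksum
    covering the child's contents and, transitively, everything below it."""
    def __init__(self, data: bytes, children=None):
        self.data = data
        self.children = children or []

    def checksum(self) -> str:
        child_sums = b"".join(c.checksum().encode() for c in self.children)
        return hashlib.sha256(self.data + child_sums).hexdigest()

leaf_a = Node(b"block A")
leaf_b = Node(b"block B")
root = Node(b"root metadata", [leaf_a, leaf_b])

before = root.checksum()
leaf_b.data = b"block B, silently corrupted"   # corruption deep in the tree
after = root.checksum()                        # root checksum no longer matches
```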
With regards sharing the disk resources with other
programs, obviously it's down to the individual
admins how they would configure this,
Only if they have an unconstrained budget.
but I would
suggest that if you have a database with heavy enough
requirements to be suffering noticeable
Rats - I was right the first time: there's a messy problem with snapshots.
The problem is that the parent of the child that you're about to update in
place may *already* be in one or more snapshots because one or more of its
*other* children was updated since each snapshot was created. If so,
But the whole point of snapshots is that they don't
take up extra space on the disk. If a file (and
hence a block) is in every snapshot it doesn't mean
you've got multiple copies of it. You only have one
copy of that block, it's just referenced by many
snapshots.
I used the wording copies
...
just rearrange your blocks sensibly -
and to at least some degree you could do that while
they're still cache-resident
Lots of discussion has passed under the bridge since that observation above,
but it may have contained the core of a virtually free solution: let your
table become
OTOH, when someone whom I don't know comes across as
a pushover, he loses
credibility.
It may come as a shock to you, but some people couldn't care less about those
who assess 'credibility' on the basis of form rather than on the basis of
content - which means that you can either lose out
: Big talk from someone who seems so intent on hiding
: their credentials.
: Say, what? Not that credentials mean much to me since I evaluate people
: on their actual merit, but I've not been shy about who I am (when I
: responded 'can you guess?' in registering after giving billtodd
Regardless of the merit of the rest of your proposal, I think you have put your
finger on the core of the problem: aside from some apparent reluctance on the
part of some of the ZFS developers to believe that any problem exists here at
all (and leaving aside the additional monkey wrench that
until I remembered that you said that you were speaking for others as well and
decided that I'd like to speak to them too.
As I said in a different thread, I really do try to respond to people in the
manner that they deserve (and believe that in most cases here I have done so):
even though I
You've been trolling from the get-go and continue to
do so.
Y'know, cookie, before letting the drool onto your keyboard you really ought to
learn to research it.
I said a good deal of what I've said recently well over a year ago here (and in
fact had forgotten how much detail I went into
Ah - no references to back up your drivel, I see.
No surprise there, of course - but thanks for playing.
- bill
Big talk from someone who seems so intent on hiding
their credentials.
Say, what? Not that credentials mean much to me since I evaluate people on
their actual merit, but I've not been shy about who I am (when I responded 'can
you guess?' in registering after giving billtodd as my member name
I've been observing two threads on zfs-discuss with
the following
Subject lines:
Yager on ZFS
ZFS + DB + fragments
and have reached the rather obvious conclusion that
the author can
you guess? is a professional spinmeister,
Ah - I see we have another incompetent psychic chiming
...
I personally believe that since most people will have
hardware LUNs
(with underlying RAID) and cache, it will be
difficult to notice
anything. Given that those hardware LUNs might be
busy with their own
wizardry ;) You will also have to minimize the effect
of the database
cache ...
can you guess? billtodd at metrocast.net writes:
You really ought to read a post before responding
to it: the CERN study
did encounter bad RAM (and my post mentioned that)
- but ZFS usually can't
do a damn thing about bad RAM, because errors tend
to arise either
before ZFS ever
can you guess? wrote:
For very read intensive and position sensitive
applications, I guess
this sort of capability might make a difference?
No question about it. And sequential table scans
in databases
are among the most significant examples, because
(unlike things
like
...
Well, ZFS allows you to put its ZIL on a separate
device which could
be NVRAM.
And that's a GOOD thing (especially because it's optional rather than requiring
that special hardware be present). But if I understand the ZIL correctly, it's
not as effective as using NVRAM as a more general kind
...
At home the biggest reason I
went with ZFS for my
data is ease of management. I split my data up based
on what it is ...
media (photos, movies, etc.), vendor stuff (software,
datasheets,
etc.), home directories, and other misc. data. This
gives me a good
way to control backups based
Richard Elling wrote:
...
there are
really two very different configurations used to
address different
performance requirements: cheap and fast. It seems
that when most
people first consider this problem, they do so from
the cheap
perspective: single disk view. Anyone who strives
for
Adam Leventhal wrote:
On Thu, Nov 08, 2007 at 07:28:47PM -0800, can you guess? wrote:
How so? In my opinion, it seems like a cure for the brain damage of RAID-5.
Nope.
A decent RAID-5 hardware implementation has no 'write hole' to worry about,
and one can make a software implementation
...
For modern disks, media bandwidths are now getting to
be 100 MBytes/s.
If you need 500 MBytes/s of sequential read, you'll
never get it from
one disk.
And no one here even came remotely close to suggesting that you should try to.
You can get it from multiple disks, so the questions
This question triggered some silly questions in my
mind:
Actually, they're not silly at all.
Lots of folks are determined that the whole
COW-to-different-locations approach
is a Bad Thing(tm), and in some cases, I guess it
might actually be...
What if ZFS had a pool / filesystem property
some businesses do not accept any kind of risk
Businesses *always* accept risk: they just try to minimize it within the
constraints of being cost-effective. Which is a good thing for ZFS, because it
can't eliminate risk either, just help to minimize it cost-effectively.
However, the subject
...
And how about FAULTS?
hw/firmware/cable/controller/ram/...
If you had read either the CERN study or what I
already said about
it, you would have realized that it included the
effects of such
faults.
...and ZFS is the only prophylactic available.
You don't *need* a
can you guess? wrote:
at the moment only ZFS can give this assurance,
plus
the ability to
self correct detected
errors.
You clearly aren't very familiar with WAFL (which
can do the same).
...
so far as I can tell it's quite
irrelevant to me at home; I
can't
Nathan Kroenert wrote:
...
What if it did a double update: One to a
staged area, and another
immediately after that to the 'old' data blocks.
Still always have
on-disk consistency etc, at a cost of double the
I/O's...
This is a non-starter. Two I/Os is worse than one.
Well, that
On 14-Nov-07, at 7:06 AM, can you guess? wrote:
...
And how about FAULTS?
hw/firmware/cable/controller/ram/...
If you had read either the CERN study or what I
already said about
it, you would have realized that it included the
effects of such
faults.
...and ZFS
...
Well, single-bit error rates may be rare in hard
drives under normal operation, but from a systems
perspective, data can be corrupted anywhere
between disk and CPU.
The CERN study found that such errors (if they
found any at all,
which they couldn't really be sure of) were
In the previous and current responses, you seem quite
determined of
others misconceptions.
I'm afraid that your sentence above cannot be parsed grammatically. If you
meant that I *have* determined that some people here are suffering from various
misconceptions, that's correct.
Given
Thanks for taking the time to flesh these points out. Comments below:
...
The compression I see varies from something like 30%
to 50%, very
roughly (files reduced *by* 30%, not files reduced
*to* 30%). This is
with the Nikon D200, compressed NEF option. On some
of the lower-level
Well, I guess we're going to remain stuck in this sub-topic for a bit longer:
The vast majority of what ZFS can detect (save for
*extremely* rare
undetectable bit-rot and for real hardware
(path-related) errors that
studies like CERN's have found to be very rare -
and you have yet to
Chill. It's a filesystem. If you don't like it,
don't use it.
Hey, I'm cool - it's mid-November, after all. And it's not about liking or not
liking ZFS: it's about actual merits vs. imagined ones, and about legitimate
praise vs. illegitimate hype.
Some of us have a professional interest
Hallelujah! I don't know when this post actually appeared in the forum, but it
wasn't one I'd seen until right now. If it didn't just appear due to whatever
kind of fluke made the 'disappeared' post appear right now too, I apologize for
having missed it earlier.
In a compressed raw file,
...
Having had
my MP3 collection
fucked up thanks to neither Windows nor NTFS
being able to
properly detect and report in-flight data corruption
(i.e. bad cable),
after copying it from one drive to another to replace
one of them, I'm
really glad that I've ZFS to manage my data these
On 9-Nov-07, at 3:23 PM, Scott Laird wrote:
Most video formats are designed to handle
errors--they'll drop a frame
or two, but they'll resync quickly. So, depending
on the size of the
error, there may be a visible glitch, but it'll
keep working.
Interestingly enough, this
On 9-Nov-07, at 2:45 AM, can you guess? wrote:
...
This suggests that in a ZFS-style installation
without a hardware
RAID controller they would have experienced at
worst a bit error
about every 10^14 bits or 12 TB
And how about FAULTS?
hw/firmware/cable/controller/ram
- bill
can you guess? wrote:
...
Most of the balance of your post isn't
addressed
in
any detail because it carefully avoids the
fundamental issues that I raised:
Not true; and by selective quoting you have
removed
my specific
responses to most
No, you aren't cool, and no it isn't about zfs or
your interest in it. It was clear from the get-go
that netapp was paying you to troll any discussion on
it,
It's (quite literally) amazing how the most incompetent individuals turn out to
be those who are the most certain of their
This is a bit weird: I just wrote the following response to a dd-b post that
now seems to have disappeared from the thread. Just in case that's a temporary
aberration, I'll submit it anyway as a new post.
can you guess? wrote:
Ah - thanks to both of you. My own knowledge of
video format
can you guess? wrote:
...
If you include
'image files of various
sorts', as he did (though this also raises the
question of whether we're
still talking about 'consumers'), then you also
have to specify exactly
how damaging single-bit errors are to those various
'sorts' (one might
can you guess? wrote:
This is a bit weird: I just wrote the following
response to a dd-b post that now seems to have
disappeared from the thread. Just in case that's a
temporary aberration, I'll submit it anyway as a new
post.
Strange things certainly happen here now
can you guess? wrote:
...
Most of the balance of your post isn't addressed in
any detail because it carefully avoids the
fundamental issues that I raised:
Not true; and by selective quoting you have removed
my specific
responses to most of these issues.
While I'm naturally
Thanks for the detailed reply, Robert. A significant part of it seems to be
suggesting that high-end array hardware from multiple vendors may be
*introducing* error sources that studies like CERN's (and Google's, and CMU's)
never encountered (based, as they were, on low-end hardware).
If so,
bull*
-- richard
Hmmm.
Was that bull* as in
Numbers? We don't need no stinking numbers! We're so cool that we work for a
guy who thinks he's Steve Jobs!
or
Silly engineer! Can't you see that I've got my rakish Marketing hat on?
Backwards!
or
I jes got back from an early start on my