On Wed, 2006-05-03 at 17:20, Matthew Ahrens wrote:
We appreciate your suggestion that we implement a higher-performance
method for storing additional metadata associated with files. This will
most likely not be possible within the extended attribute interface, and
will require that we design
On Wed, 2006-05-24 at 12:22, James Dickens wrote:
how about changing the name of the file to uid or username-filename
this at least gives each user the ability to
delete their own file; shouldn't be much work. Another possible
enhancement would be adding anything field in
On Tue, 2006-05-30 at 16:47, Haik Aftandilian wrote:
Technically, this bug should not be marked as fixed. It should be closed
as a dupe or just closed as will not fix with a comment indicating it
was fixed by 6410698.
In past cases like this, I was told to close it as unreproducible
rather
On Wed, 2006-05-31 at 10:48, Anton Rang wrote:
We generally take one interrupt for each I/O
(if the CPU is fast enough), so instead of taking one
interrupt for 8 MB (for instance), we take 64.
Hunh. Gigabit ethernet devices typically implement some form of
interrupt blanking or
On Thu, 2006-06-01 at 04:36, Jeff Bonwick wrote:
It would be far
better, when allocating a B-byte intent log block in an N-disk
RAID-Z group, to allocate B*N bytes but only write to one disk
(or two if you want to be paranoid). This simple change should
make synchronous I/O on N-way RAID-Z
On Wed, 2006-06-07 at 17:31, Nicolas Williams wrote:
Views would be faster, initially (they could be O(1) to create),
if you're not incrementally maintaining indexes, and you want O(1)
creation and O(D) readdir (where D is the size of the directory) of
directories, I don't think you'll be able
On Wed, 2006-06-21 at 14:15, Neil Perrin wrote:
Of course we would need to stress the dangers of setting 'deferred'.
What do you guys think?
I can think of a use case for deferred: improving the efficiency of a
large mega-transaction/batch job such as a nightly build.
You create an initially
On Thu, 2006-06-22 at 03:55, Roch wrote:
How about the 'deferred' option be on a leased basis with a
deadline to revert to normal behavior; at most 24hrs at a
time.
why?
Console output every time the option is enabled.
in general, no. error messages to the console should be reserved
On Thu, 2006-06-22 at 13:01, Roch wrote:
Is there a sync command that targets individual FS ?
Yes. lockfs -f
- Bill
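For readers following along, a minimal sketch of the suggestion above (the mount point is hypothetical):
    # flush (sync) a single mounted filesystem without locking it
    lockfs -f /export/home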
On Thu, 2006-06-22 at 13:19, Darren J Moffat wrote:
Yes. lockfs -f
Does lockfs work with ZFS ? The man page appears to indicate it is very
UFS specific.
all of lockfs does not. but, if truss is to be believed, the ioctl used by
lockfs -f appears to. or at least, it returns without error.
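A hedged illustration of that truss observation (the ZFS mount point is hypothetical):
    # watch which ioctls lockfs -f issues against a ZFS mount
    truss -t ioctl lockfs -f /tank/home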
On Thu, 2006-06-22 at 10:55, [EMAIL PROTECTED] wrote:
To me, a PRIV_OBJECT_MODIFY which is required for any file modifying
operation would seem to be more useful as often a read-only user is
a worthwhile thing to have; perhaps mirrored with a PRIV_OBJECT_ACCESS
in case you want to prevent any
On Fri, 2006-07-14 at 07:03, Darren J Moffat wrote:
The current plan is that encryption must be turned on when the file
system is created and can't be turned on later. This means that the
zfs-crypto work depends on the RFE to set properties at file system
creation time.
You also won't
On Sun, 2006-07-16 at 15:21, Darren J Moffat wrote:
Jeff Victor wrote:
Why? Is the 'data is encrypted' flag only stored in filesystem
metadata, or is that flag stored in each data block?
Like compression and the checksum algorithm choice, it will be stored in
every dmu object.
I thought
On Tue, 2006-07-25 at 13:45, Rainer Orth wrote:
At other times, the kernel time can be even as high as 80%. Unfortunately,
I've not been able to investigate how usec_delay is called since there's no
fbt provider for that function (nor for the alternative entry point
drv_usecwait found in
On Tue, 2006-07-25 at 14:36, Rainer Orth wrote:
Perhaps lockstat(1M) should be updated to include something like
this in the EXAMPLES section.
I filed 6452661 with this suggestion.
Any word when this might be fixed?
I can't comment in terms of time, but the engineer working on it has a
On Tue, 2006-08-15 at 12:47 -0700, Eric Schrock wrote:
The copy-on-write nature of ZFS makes this extremely difficult,
particularly w.r.t. snapshots. That's not to say it can't be solved,
only that it won't be solved in the near term (i.e. within the next
year). The timeframe for ZFS
On Wed, 2006-08-16 at 11:49 -0400, Eric Enright wrote:
On 8/16/06, William Fretts-Saxton [EMAIL PROTECTED] wrote:
I'm having trouble finding information on any hooks into ZFS. Is
there information on a ZFS API so I can access ZFS information
directly as opposed to having to constantly
On Wed, 2006-08-23 at 14:38 -0700, Darren Dunham wrote:
For those folks that like to live just *over* the edge and would like to
use ACL-less backups on ZFS with existing networker clients, what is the
possibility of creating a pre-loadable library that wrapped acl(2)?
I may regret admitting
On Fri, 2006-09-01 at 06:03 -0700, Marlanne DeLaSource wrote:
As I understand it, the snapshot of a set is used as a reference by
the clone.
So the clone is initially a set of pointers to the snapshot. That's
why it is so fast to create.
How can I separate it from the snapshot ? (so that
On Wed, 2006-09-13 at 02:30, Richard Elling wrote:
The field data I have says that complete disk failures are the exception.
I hate to leave this as a teaser, I'll expand my comments later.
That matches my anecdotal experience with laptop drives; maybe I'm just
lucky, or maybe I'm just paying
I would go with:
(3) Three 4D+1P h/w RAID-5 groups, no hot spare, mapped to one LUN each.
Set up a ZFS pool of one RAID-Z group consisting of those three LUNs.
Only ~3200GB available space, but what looks like very good resiliency
in the face of multiple disk failures.
IMHO building
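A sketch of option (3), with hypothetical LUN device names (each LUN is one hardware RAID-5 group):
    # three hardware RAID-5 LUNs combined into a single raidz vdev
    zpool create tank raidz c2t0d0 c2t1d0 c2t2d0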
On Fri, 2006-10-06 at 00:07 -0700, Richard L. Hamilton wrote:
Some people are making money on the concept, so I
suppose there are those who perceive benefits:
http://en.wikipedia.org/wiki/Rational_ClearCase
(I dimly remember DSEE on the Apollos; ...)
I used both fairly extensively. Much
On Sun, 2006-10-15 at 20:54 -0700, Matthew Ahrens wrote:
Frank Cusack wrote:
Someone said being able to add devices to a raidz vdev is on the todo list.
Don't hold your breath. This isn't being worked on, or on anyone's todo
list at this point, and implementing it would be pretty
On Thu, 2006-10-19 at 08:09 +0800, Hong Wei Liam wrote:
I understand that ZFS leaves multipathing to MPXIO or the like. For a
combination of dual-path A5200 with QLGC2100 HBAs ...
I'd actually not bother with multipathing A5200's for ZFS
I have a pair of A5200's which I'm using with zfs. I
On Thu, 2006-11-09 at 19:18 -0800, Erblichs wrote:
Bill Sommerfield,
Again, that's not how my name is spelled.
With some normal sporadic read failure, accessing
the whole spool may force repeated reads for
the replace.
please look again at the iostat I posted:
On Thu, 2006-11-09 at 21:19 -0800, Erblichs wrote:
Bill Sommerfeld, sorry,
However, I am trying to explain what I think is
happening on your system and why I consider this
normal.
I'm not interested in speculation. Please do not respond to this
message.
To copy
On Fri, 2006-11-10 at 14:40 -0800, Thomas Maier-Komor wrote:
Might it be possible to add an extension that would make it possible
to support dumping without the whole ZFS cache? I guess this would
make kernel live dumps smaller again, as they used to be...
It's just a bug:
4894692 caching
On Tue, 2006-11-14 at 03:50 -0600, Chris Csanady wrote:
After examining the source, it clearly wipes the vdev label during a detach.
I suppose it does this so that the machine can't get confused at a later date.
It would be nice if the detach simply renamed something, rather than
destroying
On Thu, 2006-11-16 at 16:08 -0800, Fred Zlotnick wrote:
I'm curious to hear of any migration success stories - or not - that
folks on this alias have experienced. You can send them to me and
I'll summarize to the alias.
I sent one to this list some months ago.
To recap, I used a variant of
On Sat, 2006-12-02 at 00:08 -0500, Theo Schlossnagle wrote:
I had a disk malfunction in a raidz pool today. I had an extra one in
the enclosure and performed a: zpool replace pool old new and several
unexpected behaviors have transpired:
the zpool replace command hung for 52 minutes
On Mon, 2006-12-04 at 13:56 -0500, Krzys wrote:
mypool2/[EMAIL PROTECTED] 34.4M - 151G -
mypool2/[EMAIL PROTECTED] 141K - 189G -
mypool2/d3 492G 254G 11.5G legacy
I am so confused by all of this... Why is it taking so long to replace that one bad
On Wed, 2006-12-13 at 10:24 -0800, Richard Elling wrote:
I've seen two cases of disk failure where errors only occurred during
random I/O; all blocks were readable sequentially; in both cases, this
permitted the disk to be replaced without data loss and without
resorting to backups by
On Thu, 2006-12-14 at 11:33 +0100, Roch - PAE wrote:
We did have a use case for zil synchronicity which was a
big user-controlled transaction:
turn zil off
do tons of things to the filesystem.
big sync
turn zil back on
Yep. The bulk of the heavy lifting on
On Mon, 2006-12-18 at 16:05 +, Darren J Moffat wrote:
6) When modifying any file you want to bleach the old blocks in a way
very similar to case 1 above.
I think this is the crux of the problem. If you fail to solve it, you
can't meaningfully say that all blocks which once contained parts
On Tue, 2006-12-19 at 16:19 -0800, Matthew Ahrens wrote:
Darren J Moffat wrote:
I believe that ZFS should provide a method of bleaching a disk or part
of it that works without crypto having ever been involved.
I see two use cases here:
I agree with your two, but I think I see a third use
On Wed, 2006-12-20 at 03:21 -0800, james hughes wrote:
This would be mostly a vanity erase not really a serious security
erase since it will not over write the remnants of remapped sectors.
Yup. As usual, your mileage will vary depending on your threat model.
My gut feel is that there's
On Tue, 2006-12-26 at 14:01 +0300, Victor Latushkin wrote:
What happens if a fatal failure occurs after the txg which frees the blocks
has been written, but before the txg doing the bleaching has been
started/completed?
clearly you'd need to store the unbleached list persistently in the
pool.
On Tue, 2006-12-26 at 13:59 -0500, Torrey McMahon wrote:
clearly you'd need to store the unbleached list persistently in the
pool.
Which could then be easily referenced to find all the blocks that were
recently deleted but not yet bleached? Is my paranoia running a bit too
high?
I
Note that you'd actually have to verify that the blocks were the same;
you cannot count on the hash function. If you didn't do this, anyone
discovering a collision could destroy the colliding blocks/files.
Given that nobody knows how to find sha256 collisions, you'd of course
need to test
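As a rough shell-level illustration of that verify step (file names hypothetical): even when the sha256 digests match, the candidate blocks are still compared byte-for-byte before being coalesced.
    # assume blockA and blockB hashed to the same sha256 value
    if cmp -s blockA blockB; then
        echo "identical contents -- safe to share"
    else
        echo "hash collision -- keep both copies"
    fi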
On Thu, 2007-01-25 at 10:16 -0500, Torrey McMahon wrote:
So there's no way to treat a 6140 as JBOD? If you wanted to use a 6140
with ZFS, and really wanted JBOD, your only choice would be a RAID 0
config on the 6140?
Why would you want to treat a 6140 like a JBOD? (See the previous
On Fri, 2007-01-26 at 06:16 -0800, Jeffery Malloch wrote:
2. Recommended config.
1) Since this is a system that many users will depend on, use
zfs-managed redundancy, either mirroring or raid-z, between the LUNs
exported by the storage system. You may think your storage system is
perfect,
On Mon, 2007-01-29 at 14:15 -0800, Matt Ingenthron wrote:
inside the sausage factory
btw - just wondering - is this some English phrase or some running gag? I
saw it a while ago on another blog and so I'm wondering
greetings from the beer and sausage nation ;)
On Thu, 2007-04-12 at 14:09 -0600, Mark Shellenbaum wrote:
Pawel Jakub Dawidek wrote:
What are your suggestions?
I am currently working on adding a number of the BSD flags into ZFS.
The existence of the FreeBSD port plus the desire to add a subset of the
BSD file flags to solaris means
On Thu, 2007-04-12 at 17:26 -0500, Nicolas Williams wrote:
(the system flags on *BSD are tied to securelevel; the closest Solaris
equivalent would be to define new 'set system flag' and 'clear system
flag' privileges).
There'd have to be a way to drop these privs from L on all running
On Thu, 2007-04-19 at 11:59 -0700, Mario Goebbels wrote:
Is it possible to gracefully and permanently remove a vdev from a pool
without data loss?
Not yet. But it's on lots of people's wishlists, there's an open RFE,
and members of the zfs team have said on this list that they're working
on
On Tue, 2007-04-17 at 17:25 -0500, Shawn Walker wrote:
I would think the average person would want
to have access to 1000s of DVDs / CDs within
a small box versus taking up the full wall.
This is already being done now, and most of the companies doing it are
On Wed, 2007-04-25 at 21:30 -0700, Richard Elling wrote:
Brian Gupta wrote:
Maybe a dumb question, but why would anyone ever want to dump to an
actual filesystem? (Or is my head thinking too Solaris)
IMHO, only a few people in the world care about dumps at all (and you
know who you are
On Mon, 2007-04-30 at 16:53 -0700, Darren Dunham wrote:
zpool list doesn't reflect pool usage stats instantly. Why?
This is no different to how UFS behaves.
It is different now (although I spent about 5 minutes looking for an old
bug that would point to *when* the UFS change went in, I
On Thu, 2007-05-10 at 10:10 -0700, Jürgen Keil wrote:
Btw: In one experiment I tried to boot the kernel under kmdb
control (-kd), patched minclsyspri := 61 and used a
breakpoint inside spa_active() to patch the spa_zio_* taskq
to use prio 60 when importing the gzip compressed pool
(so that
On Fri, 2007-05-25 at 10:20 -0600, Lori Alt wrote:
We've been kicking around the question of whether or
not zfs root mounts should appear in /etc/vfstab (i.e., be
legacy mount) or use the new zfs approach to mounts.
Instead of writing up the issues again, here's a blog
entry that I just
On Tue, 2007-05-29 at 18:48 -0700, Richard Elling wrote:
The belief is that COW file systems which implement checksums and data
redundancy (eg, ZFS and the ZFS copies option) will be redundant over
CF's ECC and wear leveling *at the block level.* We believe ZFS will
excel in this area, but
On Thu, 2007-05-31 at 13:27 +0100, Darren J Moffat wrote:
What errors and error rates have you seen?
I have seen switches flip bits in NFS traffic such that the TCP checksum
still matched yet the data was corrupted. One of the ways we saw this was
when files were being checked out of
On Mon, 2007-06-11 at 00:57 -0700, Frank Batschulat wrote:
a directory is strictly speaking not a regular file and this is in a way
enforced by ZFS,
the standards wording further defines later on..
So, yes, the standards allow this behavior -- but it's important to
distinguish between
On Mon, 2007-06-11 at 23:03 +0200, [EMAIL PROTECTED] wrote:
Maybe some additional pragmatism is called for here. If we want NFS
over ZFS to work well for a variety of clients, maybe we should set
st_size to larger values..
+1; let's teach the admins to do st_size /= 24 mentally :-)
Mental
On Thu, 2007-06-14 at 09:09 +0200, [EMAIL PROTECTED] wrote:
The implication of which, of course, is that any app built for Solaris 9
or before which uses scandir may have picked up a broken one.
or any app which includes its own copy of the BSD scandir code, possibly
under a different name,
On Thu, 2007-06-14 at 17:45 -0700, Bart Smaalders wrote:
This is how I run my home server w/ 4 500GB drives - a small
40GB IDE drive provides root swap/dump device, the 4 500 GB
drives are RAIDZ and contain all the data. I ran out of drive
bays, so I used one of those 5 1/4 - 3.5 adaptor
On Wed, 2007-06-20 at 12:45 +0200, Pawel Jakub Dawidek wrote:
Will be nice to not EFI label disks, though:) Currently there is a
problem with this - zpool created on Solaris is not recognized by
FreeBSD, because FreeBSD claims GPT label is corrupted.
Hmm. I'd think the right answer here is to
On Sun, 2007-06-24 at 16:58 -0700, dave johnson wrote:
The most common non-proprietary hash calc for file-level deduplication seems
to be the combination of the SHA1 and MD5 together. Collisions have been
shown to exist in MD5 and theorized to exist in SHA1 by extrapolation, but
the
[This is version 2. the first one escaped early by mistake]
On Sun, 2007-06-24 at 16:58 -0700, dave johnson wrote:
The most common non-proprietary hash calc for file-level deduplication seems
to be the combination of the SHA1 and MD5 together. Collisions have been
shown to exist in MD5 and
On Thu, 2007-07-12 at 10:45 -0700, Bart Smaalders wrote:
aside
For those of us who've been swapping to zvols for some time, can
you describe the failure modes?
/aside
I asked about this during the zfs boot inception review -- the high
level answer is occasional deadlock in low-memory
On Thu, 2007-07-12 at 16:27 -0700, Richard Elling wrote:
I think we should up-level this and extend to the community for comments.
The proposal, as I see it, is to create a simple,
yes
contiguous (?)
as I understand the proposal, not necessarily contiguous.
space which sits in a zpool.
On Mon, 2007-07-16 at 18:19 -0700, Russ Petruzzelli wrote:
Or am I just getting myself into shark infested waters?
configurations that might be interesting to play with:
(emphasis here on play...)
1) use the T3's management CLI to reconfigure the T3 into two raid-0
volumes, and mirror them
On Mon, 2007-08-13 at 16:42 +0100, Paul Lippai wrote:
If so, could you point me in the direction of where I can obtain
details of this new feature?
proposed specs and architecture review discussion can be found at:
http://www.opensolaris.org/os/community/arc/caselog/2006/525/
On Wed, 2007-08-29 at 09:41 -0700, Eric Schrock wrote:
Note that 'fstyp -v' does the same thing as 'zdb -l', and
is marginally more stable. The output is still technically subject to
change, but it's highly unlikely (given the pain such a change would
cause).
If other programs depend on
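For reference, the two commands being compared (device path hypothetical):
    # dump the ZFS labels on a vdev
    zdb -l /dev/rdsk/c0t0d0s0
    # fstyp -v prints much the same label information
    fstyp -v /dev/rdsk/c0t0d0s0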
On Wed, 2007-09-05 at 14:26 -0700, Richard Elling wrote:
AFAIK, nobody has characterized resilvering, though this is about the 4th
time this week someone has brought the topic up. Has anyone done work here
that we don't know about? If so, please speak up :-)
I haven't been conducting
On Tue, 2007-09-11 at 13:43 -0700, Gino wrote:
-ZFS+FC JBOD: failed hard disk needs a reboot :(
(frankly unbelievable in 2007!)
So, I've been using ZFS with some creaky old FC JBODs (A5200's) and old
disks which have been failing regularly and haven't seen that; the worst
I've seen
On Tue, 2007-09-25 at 10:14 -0700, Vincent Fox wrote:
Where is ZFS with regards to the NVRAM cache present on arrays?
I have a pile of 3310 with 512 megs cache, and even some 3510FC with
1-gig cache. It seems silly that it's going to waste. These are
dual-controller units so I have no
On Wed, 2007-09-26 at 08:26 +1000, James C. McPherson wrote:
How would you gather that information?
the tools to use would be dependent on the actual storage device in use.
luxadm for A5x00 and V8x0 internal storage, sccli for 3xxx, etc., etc.,
How would you ensure that it stayed accurate in
On Thu, 2007-10-18 at 08:04 -0500, Gary Mills wrote:
Here's a suggestion on the cause:
The root problem seems to be an interaction between Solaris' concept
of global memory consistency and the fact that Cyrus spawns many
processes that all memory map (mmap) the same file. Whenever any
On Wed, 2007-10-31 at 14:15 -0700, Ben Rockwood wrote:
I've run across an odd issue with ZFS quotas. This is an snv_43 system with
several zones/zfs datasets, but only one affected. The dataset shows 10GB
used, 12GB referred, but when counting the files only has 6.7GB of data:
zones/ABC
On Thu, 2007-11-01 at 18:15 -0700, Denis wrote:
But after the reboot, where the resilvering restarted by itself
without a problem I noticed that it started from the beginning!?
that's expected behavior today. it remembers it has work to do but not
where it left off.
Why is that the case
On Thu, 2007-11-01 at 08:08 -0700, Scott Spyrison wrote:
Given 4 internal drives in a server, what kind of ZFS layout would you use?
Assuming you needed more than one disk's worth of ZFS space after
mirroring:
disks 0+1: partition them with a space hog partition at the start of
the disk
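One possible reading of the layout being sketched above, heavily hedged since the rest of the advice is truncated here (disk and slice names hypothetical):
    # mirrored root pool on small slices of disks 0+1
    zpool create rpool mirror c0t0d0s0 c0t1d0s0
    # mirrored data pool using the large "space hog" slices plus disks 2+3
    zpool create data mirror c0t0d0s3 c0t1d0s3 mirror c0t2d0 c0t3d0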
On Fri, 2007-11-02 at 11:20 -0700, Chris Williams wrote:
I have a 9-bay JBOD configured as a raidz2. One of the disks, which
is on-line and fine, needs to be swapped out and replaced. I have
been looking though the zfs admin guide and am confused on how I
should go about swapping out. I
On Sat, 2007-12-15 at 22:00 -0800, Sasidhar Kasturi wrote:
If I want to make some modifications in the code, can I do it
for the /xpg4/bin commands, or should I do it for the /usr/bin commands?
If possible (if there's no inherent conflict with either the applicable
standards or existing practice)
On Thu, 2008-02-21 at 11:06 -0800, John Tracy wrote:
I've read that this behavior can be expected depending on how the LAG
is set up, whether it divides/hashes the data on a per-packet or
per-source/destination basis, or other options.
(this is a generic answer, not specific to zfs exported
On Wed, 2008-02-27 at 13:43 -0500, Kyle McDonald wrote:
How was it MVFS could do this without any changes to the shells or any
other programs?
In ClearCase I could 'grep FOO /dir1/dir2/file@@/main/*' to see which
version of 'file' added FOO.
(I think @@ was the special hidden key. It might
On Fri, 2008-03-14 at 18:11 -0600, Mark Shellenbaum wrote:
I think it is a misnomer to call the current
implementation of ZFS a pure ACL system, as clearly the ACLs are heavily
contaminated by legacy mode bits.
Feel free to open an RFE. It may be a tough sell with PSARC, but maybe
if
On Fri, 2008-05-23 at 13:45 -0700, Orvar Korvar wrote:
Ok, so I make one vdev out of 8 disks, and I combine all vdevs into one large
zpool? Is that correct?
I have an 8-port SATA card. I have 4 drives in one zpool.
zpool create mypool raidz1 disk0 disk1 disk2 disk3
you have a pool consisting
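A sketch of extending that to all 8 ports as two raidz1 vdevs in one pool (disk names hypothetical):
    # initial pool: one 4-disk raidz1 vdev
    zpool create mypool raidz1 disk0 disk1 disk2 disk3
    # add a second 4-disk raidz1 vdev; zfs stripes writes across the two vdevs
    zpool add mypool raidz1 disk4 disk5 disk6 disk7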
On Wed, 2008-06-04 at 23:12 +, A Darren Dunham wrote:
Best story I've heard is that it dates from before the time when
modifiable (or at least *easily* modifiable) slices existed. No
hopping into 'format' or using 'fmthard'. Instead, your disk came with
an entry in 'format.dat' with
On Thu, 2008-06-05 at 23:04 +0300, Cyril Plisko wrote:
1. Are there any reasons to *not* enable compression by default ?
Not exactly an answer:
Most of the systems I'm running today on ZFS root have compression=on
and copies=2 for rpool/ROOT
2. How can I do it ? (I think I can run zfs set
On Wed, 2008-06-11 at 07:40 -0700, Richard L. Hamilton wrote:
I'm not even trying to stripe it across multiple
disks, I just want to add another partition (from the
same physical disk) to the root pool. Perhaps that
is a distinction without a difference, but my goal is
to grow my root
On Tue, 2008-06-24 at 09:41 -0700, Richard Elling wrote:
IMHO, you can make dump optional, with no dump being default.
Before Sommerfeld pounces on me (again :-))
actually, in the case of virtual machines, doing the dump *in* the
virtual machine into preallocated virtual disk blocks is silly.
On Tue, 2008-07-15 at 15:32 -0500, Bob Friesenhahn wrote:
On Tue, 15 Jul 2008, Ross Smith wrote:
It sounds like you might be interested to read up on Eric Schrock's work.
I read today about some of the stuff he's been doing to bring integrated
fault management to Solaris:
I ran a scrub on a root pool after upgrading to snv_94, and got checksum
errors:
pool: r00t
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are
unaffected.
action: Determine if the device needs
On Fri, 2008-07-18 at 10:28 -0700, Jürgen Keil wrote:
I ran a scrub on a root pool after upgrading to snv_94, and got checksum
errors:
Hmm, after reading this, I started a zpool scrub on my mirrored pool,
on a system that is running post snv_94 bits: It also found checksum errors
#
On Sun, 2008-08-03 at 11:42 -0500, Bob Friesenhahn wrote:
Zfs makes human error really easy. For example
$ zpool destroy mypool
Note that zpool destroy can be undone by zpool import -D (if you get
to it before the disks are overwritten).
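A minimal illustration of that recovery path (pool name from the example above):
    zpool destroy mypool      # oops
    zpool import -D           # list destroyed pools still discoverable on disk
    zpool import -D mypool    # re-import the destroyed pool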
On Tue, 2008-08-05 at 12:11 -0700, soren wrote:
soren wrote:
ZFS has detected that my root filesystem has a
small number of errors. Is there a way to tell which
specific files have been corrupted?
After a scrub a zpool status -v should give you a
list of files with
unrecoverable
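In shell terms, the sequence being suggested is roughly (pool name hypothetical):
    zpool scrub rpool       # re-read and verify every block in the pool
    zpool status -v rpool   # after the scrub, lists any files with unrecoverable errors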
See the long thread titled ZFS deduplication, last active
approximately 2 weeks ago.
On Thu, 2008-08-07 at 11:34 -0700, Richard Elling wrote:
How would you describe the difference between the data recovery
utility and ZFS's normal data recovery process?
I'm not Anton but I think I see what he's getting at.
Assume you have disks which once contained a pool but all of the
On Thu, 2008-08-21 at 21:15 -0700, mike wrote:
I've seen that 5-6 disk zpools are the most recommended setup.
This is incorrect.
Much larger zpools built out of striped redundant vdevs (mirror, raidz1,
raidz2) are recommended and also work well.
raidz1 or raidz2 vdevs of more than a single-digit
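A hedged sketch of the kind of larger layout being described (disk names hypothetical): several narrow raidz2 vdevs striped together rather than one very wide vdev.
    zpool create bigpool \
        raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
        raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0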
On Thu, 2008-08-28 at 13:05 -0700, Eric Schrock wrote:
A better option would be to not use this to perform FMA diagnosis, but
instead work into the mirror child selection code. This has already
been alluded to before, but it would be cool to keep track of latency
over time, and use this to
On Sun, 2008-08-31 at 12:00 -0700, Richard Elling wrote:
2. The algorithm *must* be computationally efficient.
We are looking down the tunnel at I/O systems that can
deliver on the order of 5 Million iops. We really won't
have many (any?) spare cycles to play with.
On Sun, 2008-08-31 at 15:03 -0400, Miles Nordin wrote:
It's sort of like network QoS, but not quite, because:
(a) you don't know exactly how big the ``pipe'' is, only
approximately,
In an ip network, end nodes generally know no more than the pipe size of
the first hop -- and in
On Fri, 2008-09-05 at 09:41 -0700, Richard Elling wrote:
Also does the resilver deliberately pause? Running iostat I see
that it will pause for five to ten seconds where no IO is done at all,
then it continues on at a more reasonable pace.
I have not seen such behaviour during resilver
On Wed, 2008-10-01 at 11:54 -0600, Robert Thurlow wrote:
like they are not good enough though, because unless this broken
router that Robert and Darren saw was doing NAT, yeah, it should not
have touched the TCP/UDP checksum.
NAT was not involved.
I believe we proved that the problem bit
On Mon, 2008-10-20 at 16:57 -0500, Nicolas Williams wrote:
I've a report that the mismatch between SQLite3's default block size and
ZFS' causes some performance problems for Thunderbird users.
I was seeing a severe performance problem with sqlite3 databases as used
by evolution (not
On Wed, 2008-10-22 at 10:30 +0100, Darren J Moffat wrote:
I'm assuming this is local filesystem rather than ZFS backed NFS (which
is what I have).
Correct, on a laptop.
What has setting the 32KB recordsize done for the rest of your home
dir, or did you give the evolution directory its own
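For anyone wanting to try the same tuning, a hedged example (dataset names and the 32KB value are illustrative; match recordsize to the application's page size):
    # give the mail client's sqlite databases their own dataset with a smaller recordsize
    zfs create -o recordsize=32K tank/home/user/.evolution
    # or adjust an existing dataset; only newly written blocks pick up the new size
    zfs set recordsize=32K tank/home/user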
On Wed, 2008-10-22 at 10:45 -0600, Neil Perrin wrote:
Yes: 6280630 zil synchronicity
Though personally I've been unhappy with the exposure that zil_disable has
got.
It was originally meant for debug purposes only. So providing an official
way to make synchronous behaviour asynchronous is
On Wed, 2008-10-22 at 09:46 -0700, Mika Borner wrote:
If I turn zfs compression on, does the recordsize influence the
compressratio in anyway?
zfs conceptually chops the data into recordsize chunks, then compresses
each chunk independently, allocating on disk only the space needed to
store each
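A small example of inspecting the interaction described above (dataset name hypothetical):
    zfs set compression=on tank/data          # each recordsize-sized chunk is compressed independently
    zfs get recordsize,compressratio tank/data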
On Tue, 2009-01-06 at 22:18 -0700, Neil Perrin wrote:
I vaguely remember a time when UFS had limits to prevent
ordinary users from consuming past a certain limit, allowing
only the super-user to use it. Not that I'm advocating that
approach for ZFS.
looks to me like zfs already provides a