On Wed, May 03, 2006 at 03:34:56PM -0700, Ed Gould wrote:
I think this might be a case where a structured record (like the
compact XML suggestion made earlier) would help. At least having
distinguished start and end markers (whether they be one byte each,
or XML constructs) for a record
On Thu, May 04, 2006 at 12:39:59AM -0700, Jeff Bonwick wrote:
Why not use the Solaris audit facility?
Several reasons:
(1) We want the history to follow the data, not the host. If you
export the pool from one host and import it on another, we want
the command history to move
On Fri, May 05, 2006 at 09:43:05AM -0700, Marion Hakanson wrote:
Interesting discussion. I've often been impressed at how NetApp-like
the overall ZFS feature-set is (which implies that I like NetApp's). Is it
verboten to compare ZFS to NetApp? I hope not
It's a public list, you can do the
On Mon, May 08, 2006 at 10:23:44PM +0100, Tim Foster wrote:
This could be easily implemented via a set of SMF instances which
create/destroy cron jobs which would themselves call a simple script
responsible for taking the snapshots.
There was talk a while back about an extended cron service
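The cron-driven scheme Tim describes can be sketched in a few lines of shell. This is a minimal sketch, not the actual SMF method script; the dataset argument and the "auto-" snapshot prefix are assumptions.

```shell
#!/bin/sh
# snapname DATASET [STAMP] - build a date-stamped snapshot name.
snapname() {
    stamp=${2:-$(date +%Y%m%d-%H%M)}
    echo "$1@auto-$stamp"
}

# A cron entry created/destroyed by an SMF instance might then run:
#   zfs snapshot "$(snapname tank/home)"
snapname tank/home
```

The SMF instances would only manage the crontab entries; all the snapshot logic lives in the script.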
6370738 zfs diffs filesystems
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On Thu, May 11, 2006 at 03:38:59PM +0100, Darren J Moffat wrote:
What would the output of zfs diffs be ?
My original conception was:
- dnode # + changed blocks
- + some naming hints so that one could quickly find changed dnodes in
clones
I talked about this with Bill Moore and he came up
On Thu, May 11, 2006 at 11:15:12AM -0400, Bill Sommerfeld wrote:
This situation is analogous to the merge with common ancestor
operations performed on source code by most SCM systems; with a named
snapshot as the clone base, the ancestor is preserved and can easily be
retrieved.
Yes, and in
On Fri, May 12, 2006 at 05:23:53PM +0200, Roch Bourbonnais - Performance
Engineering wrote:
For reads it is an interesting concept, since
- reading into the cache,
- then copying into user space,
- then keeping the data around but never using it
is not optimal.
So 2 issues, there is the cost
On Fri, May 12, 2006 at 06:33:00PM +0200, Roch Bourbonnais - Performance
Engineering wrote:
Directio is non-POSIX anyway, and given that people have been
trained to inform the system that the cache won't be useful,
and that it's a hard problem to detect automatically, let's
avoid the copy and save
The ZFS discuss list is re-delivering old messages.
Have the problems with the archives been fixed?
Nico
On Fri, May 12, 2006 at 09:59:56AM -0700, Richard Elling wrote:
On Fri, 2006-05-12 at 10:42 -0500, Anton Rang wrote:
Now latency wise, the cost of copy is small compared to the
I/O; right ? So it now turns into an issue of saving some
CPU cycles.
CPU cycles and memory bandwidth
On Sat, May 13, 2006 at 08:23:55AM +0200, Franz Haberhauer wrote:
Given that ISV apps can only be changed by the ISV, who may or may
not be willing to use such a new interface, having a no-cache
property for the file - or, given that filesystems are now really
cheap with ZFS, for the
On Mon, May 15, 2006 at 07:16:38PM +0200, Franz Haberhauer wrote:
Nicolas Williams wrote:
Yes, but remember, DB vendors have adopted new features before -- they
want to have the fastest DB. Same with open source web servers. So I'm
a bit optimistic.
Yes, but they usually adopt it only
On Mon, May 15, 2006 at 11:17:17AM -0700, Bart Smaalders wrote:
Perhaps an fadvise call is in order?
We already have directio(3C).
(That was a surprise for me also.)
On Thu, May 18, 2006 at 02:23:55PM -0600, Gregory Shaw wrote:
I'd agree except for backups. If the pools are going to grow beyond a
reasonable-to-backup and reasonable-to-restore threshold (measured by
the backup window), it would be practical to break it into smaller
pools.
Speaking of
On Thu, May 18, 2006 at 03:41:13PM -0700, Erik Trimble wrote:
On the topic of ZFS snapshots:
does the snapshot just capture the changed _blocks_, or does it
effectively copy the entire file if any block has changed?
Incremental sends capture changed blocks.
Snapshots capture all of the FS
On Sat, May 27, 2006 at 08:29:05AM +1000, grant beattie wrote:
is raidz double parity optional or mandatory?
Backwards compatibility dictates that it will be optional.
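For context, the opt-in looks like this at pool creation time; single-parity raidz is unchanged and double parity is requested explicitly. Pool and device names below are illustrative, not from the thread.

```shell
# Single-parity raidz (tolerates one failed disk per vdev).
zpool create tank  raidz  c0t0d0 c0t1d0 c0t2d0
# Double-parity raidz2 (tolerates two failed disks per vdev), opt-in.
zpool create tank2 raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0
```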
On Tue, May 30, 2006 at 06:19:16AM +0200, [EMAIL PROTECTED] wrote:
The requirement is not that inodes and data are separate; the requirement
is a specific upper bound on disk transactions. The question therefore
is not when ZFS will be able to separate inodes and data; the question
is when ZFS
On Wed, Jun 07, 2006 at 11:15:43AM -0700, Philip Brown wrote:
Also, why shouldn't lofs grow similar support?
aha!
This to me sounds much, much better. Put all the funky, potentially
disastrous code in lofs, not in zfs, please :-) plus that way any
filesystem will potentially get the
On Wed, Jun 07, 2006 at 06:48:01PM -0400, Bill Sommerfeld wrote:
On Wed, 2006-06-07 at 17:31, Nicolas Williams wrote:
Views would be faster, initially (they could be O(1) to create),
if you're not incrementally maintaining indexes, and you want O(1)
creation and O(D) readdir (where D
On Wed, Jun 21, 2006 at 10:41:50AM -0600, Neil Perrin wrote:
Why is this option available then? (Yes, that's a loaded question.)
I wouldn't call it an option, but an internal debugging switch that I
originally added to allow progress when initially integrating the ZIL.
As Roch says it really
On Thu, Jun 22, 2006 at 01:01:38AM +0200, [EMAIL PROTECTED] wrote:
I'm not sure if I like the name, then; nor the emphasis on the
euid/egid (as those terms are not commonly used in the kernel;
there's a reason why the effective uid was cr->cr_uid and not cr_euid).
In other words, what you are
On Thu, Jun 22, 2006 at 02:45:50AM +0200, Nicolai Johannes wrote:
So, as I have understood you, explaining the new privileges with the
term anonymous user would be better? I actually thought about that
idea, but there is a subtle difference:
Hmmm, no I have no good name for it.
Concerning
On Thu, Jun 22, 2006 at 11:55:05AM +0200, [EMAIL PROTECTED] wrote:
Yes. It's kind of enticing.
I'm not entirely clear as to the problem which it solves; I think
I'd much rather have a user which cannot modify anything.
The canonical example would be, I think, ssh-agent(1), although I'm
not
On Thu, Jun 22, 2006 at 12:54:32PM +0200, Nicolai Johannes wrote:
Concerning the reopen problem of files created in world writable
directories: One may use the following algorithm: First compute the
permissions of the newly created file. For every permission granted
to the user or group,
On Thu, Jun 22, 2006 at 11:06:48AM -0700, Jonathan Adams wrote:
On Thu, Jun 22, 2006 at 12:49:03PM -0500, Nicolas Williams wrote:
On Thu, Jun 22, 2006 at 12:54:32PM +0200, Nicolai Johannes wrote:
Concerning the reopen problem of files created in world writable
directories: One may use
On Thu, Jun 22, 2006 at 04:22:22PM -0700, Joe Little wrote:
Again, the issue is the multiple fsyncs that NFS requires, and likely
the serialization of those iscsi requests. Apparently, there is a
basic latency in iscsi that one could improve upon with FC, but we are
definitely in the all
On Fri, Jun 23, 2006 at 02:20:54PM -0400, Dale Ghent wrote:
Second, while there is a way for Joe Random to submit a bug, there is
zero way for Joe Random to interact with a bug. No voting to bump or
drop a priority, no easy way to find hot topic bugs, no way to add
one's own notes to the
On Tue, Jun 27, 2006 at 09:41:10AM -0600, Gregory Shaw wrote:
This is getting pretty picky. You're saying that ZFS will detect any
errors introduced after ZFS has gotten the data. However, as stated
in a previous post, that doesn't guarantee that the data given to ZFS
wasn't already
On Wed, Jun 21, 2006 at 04:34:59PM -0600, Mark Shellenbaum wrote:
Can you give us an example of a 'file' the ssh-agent wishes to open and
what the permissions are on the file and also what privileges the
ssh-agent has, and what the expected results are.
ssh-agent(1) should need to open no
On Mon, Jul 17, 2006 at 10:11:35AM -0700, Matthew Ahrens wrote:
I want root to create a new filesystem for a new user under
the /export/home filesystem, but then have that user get the
right privs via inheritance rather than requiring root to run
a set of zfs commands.
In that case, how
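A hedged sketch of what that delegation looks like with zfs allow; the user and dataset names are assumptions, and the exact permission set a site would grant is a policy choice.

```shell
# Root creates the dataset once, then delegates routine operations
# to the user so root need not run zfs commands afterwards.
zfs create pool/export/home/alice
zfs allow alice create,destroy,mount,snapshot pool/export/home/alice
# With -d the permissions apply to descendent datasets, which gives
# the inheritance behavior asked for above.
zfs allow -d alice create,mount pool/export/home/alice
```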
On Thu, Jul 27, 2006 at 10:17:47AM -0700, Praveen Mogili wrote:
That S10 and ZFS are open source is great, but if there is
some solid material with technical details I would
really appreciate it.
The ZFS on-disk file format is here:
On Thu, Aug 03, 2006 at 01:35:54AM -0700, Tom Simpson wrote:
Well,
You're spot on. Turns out that our datacentre boys change the umask of root
to 0027.
:-(
Many years ago, back in the days of Solaris 2.5.1, changing root's umask
to 027 caused problems if you, say, restarted the
On Tue, Aug 15, 2006 at 06:12:47PM -0400, Bill Sommerfeld wrote:
On Tue, 2006-08-15 at 12:47 -0700, Eric Schrock wrote:
The copy-on-write nature of ZFS makes this extremely difficult,
particularly w.r.t. snapshots. That's not to say it can't be solved,
only that it won't be solved in
On Thu, Aug 24, 2006 at 10:46:27AM -0700, Noel Dellofano wrote:
ZFS actually uses the ZAP to handle directory lookups. The ZAP is
not a btree but a specialized hash table where a hash for each
directory entry is generated based on that entry's name. Hence you
won't be doing any sort of
On Wed, Aug 30, 2006 at 07:51:45PM +0100, Dick Davies wrote:
On 30/08/06, Matthew Ahrens [EMAIL PROTECTED] wrote:
'zfs send' is *incredibly* faster than rsync.
That's interesting. We had considered it as a replacement for a
certain task (publishing a master docroot to multiple webservers)
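The send-based replacement being considered would look roughly like this; snapshot names, the dataset, and the web-server host are illustrative.

```shell
# Full initial copy of the docroot to a webserver.
zfs snapshot tank/docroot@mon
zfs send tank/docroot@mon | ssh web1 zfs recv tank/docroot
# Next publish: send only the blocks changed between the two
# snapshots, which is where the speed advantage over rsync comes from.
zfs snapshot tank/docroot@tue
zfs send -i tank/docroot@mon tank/docroot@tue | ssh web1 zfs recv tank/docroot
```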
On Tue, Sep 12, 2006 at 05:57:33PM +1000, Boyd Adamson wrote:
On 12/09/2006, at 1:28 AM, Nicolas Williams wrote:
Now you have a persistent SSH connection to remote-host that forwards
connections to localhost:12345 to port 56789 on remote-host.
So now you can use your Perl scripts more
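The forwarding described above can be set up with standard ssh options; the -f and -N flags are an assumption about how the connection is kept persistent in the background.

```shell
# -f: background after authentication; -N: no remote command, forward only.
# -L: connections to localhost:12345 go to port 56789 on remote-host.
ssh -fN -L 12345:localhost:56789 remote-host
```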
On Tue, Sep 12, 2006 at 10:36:30AM +0100, Darren J Moffat wrote:
Mike Gerdts wrote:
Is there anything in the works to compress (or encrypt) existing data
after the fact? For example, a special option to scrub that causes
the data to be re-written with the new properties could potentially do
On Thu, Sep 14, 2006 at 10:32:59PM +0200, Henk Langeveld wrote:
Bady, Brant RBCM:EX wrote:
Part of the archiving process is to generate checksums (I happen to use
MD5), and store them with other metadata about the digital object in
order to verify data integrity and demonstrate the
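On Solaris the checksum step of such an archiving process can be done with digest(1); the file names are illustrative.

```shell
# Record an MD5 at archive time, stored alongside the object's metadata.
digest -a md5 object.tif > object.tif.md5
# Later, verify integrity by recomputing and comparing.
[ "$(digest -a md5 object.tif)" = "$(cat object.tif.md5)" ] && echo verified
```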
On Thu, Sep 14, 2006 at 06:26:46PM -0500, Mike Gerdts wrote:
On 9/14/06, Chad Lewis [EMAIL PROTECTED] wrote:
Better still would be the forthcoming cryptographic extensions in some
kind of digital-signature mode.
When I first saw extended attributes I thought that would be a great
place to
On Fri, Sep 15, 2006 at 09:31:04AM +0100, Ceri Davies wrote:
On Thu, Sep 14, 2006 at 05:08:18PM -0500, Nicolas Williams wrote:
Yes, but the checksum is stored with the pointer.
So then, for each file/directory there's a dnode, and that dnode has
several block pointers to data blocks
On Thu, Sep 28, 2006 at 05:29:27PM +0200, [EMAIL PROTECTED] wrote:
Any mkdir in a builds directory on a shared build machine. It would be
very cool because then every user/project automatically gets a ZFS
filesystem.
Why map it to mkdir rather than using zfs create ? Because mkdir means
On Wed, Sep 27, 2006 at 08:55:48AM -0600, Mark Maybee wrote:
Patrick wrote:
So ... how about an automounter? Is this even possible? Does it exist ?
*sigh*, one of the issues we recognized when we introduced the new
cheap/fast file system creation was that this new model would stress
the
On Thu, Sep 28, 2006 at 05:36:16PM +0200, Robert Milkowski wrote:
Hello Chris,
Thursday, September 28, 2006, 4:55:13 PM, you wrote:
CG I keep thinking that it would be useful to be able to define a
CG zfs file system where all calls to mkdir resulted not just in a
CG directory but in a
On Fri, Oct 06, 2006 at 11:25:29PM +0800, Jeremy Teo wrote:
A couple of use cases I was considering off hand:
1. Oops, I truncated my file.
2. Oops, I saved over my file.
3. Oops, an app corrupted my file.
4. Oops, I rm -rf'ed the wrong directory.
All of which can be solved by periodic snapshots,
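Assuming a periodic snapshot exists, each "oops" case above reduces to one of two recoveries; the dataset, snapshot, and file names are illustrative.

```shell
# Single file: copy it back out of the read-only snapshot directory.
cp /tank/home/.zfs/snapshot/auto-20061006/report.odt /tank/home/
# Whole dataset: roll back to the snapshot (discards everything
# written to the dataset since that snapshot was taken).
zfs rollback tank/home@auto-20061006
```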
On Fri, Oct 06, 2006 at 09:18:16AM -0700, Anton B. Rang wrote:
ClearCase is a version control system, though — not the same as file
versioning.
But they have a filesystem interface. Crucially, this involves
additional interfaces. VC cannot be automatic.
On Fri, Oct 06, 2006 at 03:30:20PM -0600, Chad Leigh -- Shire.Net LLC wrote:
On Oct 6, 2006, at 3:08 PM, Erik Trimble wrote:
OK. So, now we're on to FV. As Nico pointed out, FV is going to
need a new API. Using the VMS convention of simply creating file
names with a version string
On Fri, Oct 06, 2006 at 04:06:37PM -0600, Chad Leigh -- Shire.Net LLC wrote:
On Oct 6, 2006, at 3:53 PM, Nicolas Williams wrote:
On Fri, Oct 06, 2006 at 03:30:20PM -0600, Chad Leigh -- Shire.Net
LLC wrote:
On Oct 6, 2006, at 3:08 PM, Erik Trimble wrote:
OK. So, now we're on to FV. As Nico
On Fri, Oct 06, 2006 at 06:22:01PM -0700, Joseph Mocker wrote:
Nicolas Williams wrote:
Automatically capturing file versions isn't possible in the general case
with applications that aren't aware of FV.
Don't snapshots have the same problem. A snapshot could potentially be
taken when
On Sat, Oct 07, 2006 at 01:43:29PM +0200, Joerg Schilling wrote:
The only idea I have that matches these criteria is to have the versions
in the extended attribute name space.
Indeed. All that's needed then, CLI UI-wise, beyond what we have now is
a way to rename versions extended attributes to
On Thu, Oct 05, 2006 at 05:25:17PM -0700, David Dyer-Bennet wrote:
No, any sane VC protocol must specifically forbid the checkin of the
stuff I want versioning (or file copies or whatever) for. It's
partial changes, probably doesn't compile, nearly certainly doesn't
work. This level of work
On Mon, Oct 09, 2006 at 09:27:14AM +0800, Wee Yeh Tan wrote:
On 10/7/06, David Dyer-Bennet [EMAIL PROTECTED] wrote:
I've never encountered branch being used that way, anywhere. It's
used for things like developing release 2.0 while still supporting 1.5
and 1.6.
However, especially with
On Sun, Oct 08, 2006 at 10:28:06PM -0400, Jonathan Edwards wrote:
On Oct 8, 2006, at 21:40, Wee Yeh Tan wrote:
On 10/7/06, Ben Gollmer [EMAIL PROTECTED] wrote:
Hmm, what about file.txt -> ._file.txt.1, ._file.txt.2, etc.? If you
don't like the _ you could use @ or some other character.
It
On Sun, Oct 08, 2006 at 11:16:21PM -0400, Jonathan Edwards wrote:
On Oct 8, 2006, at 22:46, Nicolas Williams wrote:
You're arguing for treating FV as extended/named attributes :)
kind of - but one of the problems with EAs is the increase/bloat in
the inode/dnode structures
On Sun, Oct 08, 2006 at 03:38:54PM -0700, Erik Trimble wrote:
Joseph Mocker wrote:
Which brings me back to the point of file versioning. If an
implementation were based on something like when a file is open()ed
with write bits set, there would be no potential for broken files like
this.
On Fri, Oct 06, 2006 at 07:37:47PM -0600, Chad Leigh -- Shire.Net LLC wrote:
On Oct 6, 2006, at 7:33 PM, Erik Trimble wrote:
This is what Nico and I are talking about: if you turn on file
versioning automatically (even for just a directory, and not a
whole filesystem), the number of files
On Mon, Oct 09, 2006 at 12:44:34PM +0200, Joerg Schilling wrote:
Nicolas Williams [EMAIL PROTECTED] wrote:
You're arguing for treating FV as extended/named attributes :)
I think that'd be the right thing to do, since we have tools that are
aware of those already. Of course, we're
On Fri, Oct 13, 2006 at 11:03:51AM +0200, Joerg Schilling wrote:
Nicolas Williams [EMAIL PROTECTED] wrote:
On Wed, Oct 11, 2006 at 08:24:13PM +0200, Joerg Schilling wrote:
Before we start defining the first official functionality for this Sun
feature, we should define a mapping
On Fri, Oct 13, 2006 at 08:30:27AM -0700, Matthew Ahrens wrote:
Jeremy Teo wrote:
Would it be worthwhile to implement heuristics to auto-tune
'recordsize', or would that not be worth the effort?
It would be really great to automatically select the proper recordsize
for each file! How do
On Fri, Oct 13, 2006 at 09:22:53PM -0700, Erblichs wrote:
For extremely large files (25 to 100 GB) that are accessed
sequentially for both read and write, I would expect 64k or 128k.
Larger files accessed sequentially don't need any special heuristic for
record size determination:
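Which is why recordsize tuning is usually discussed for small random I/O instead; a hedged example, with the dataset name and block size as assumptions:

```shell
# A database doing 8k random I/O benefits from a matching recordsize;
# large sequential files do fine with the 128k default.
zfs set recordsize=8k tank/db
zfs get recordsize tank/db
```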
On Tue, Nov 14, 2006 at 07:32:08PM +0100, [EMAIL PROTECTED] wrote:
Actually, we have considered this. On both SPARC and x86, there will be
a way to specify the root file system (i.e., the bootable dataset) to be
booted,
at either the GRUB prompt (for x86) or the OBP prompt (for SPARC).
If
On Wed, Nov 15, 2006 at 11:00:01AM +, Darren J Moffat wrote:
I think we first need to define what state "up" actually is. Is it the
kernel booted? Is it the root file system mounted? Is it we reached
milestone all? Is it we reached milestone all with no services in
maintenance?
On Wed, Nov 15, 2006 at 09:58:35PM +, Ceri Davies wrote:
On Wed, Nov 15, 2006 at 12:10:30PM +0100, [EMAIL PROTECTED] wrote:
I think we first need to define what state "up" actually is. Is it the
kernel booted? Is it the root file system mounted? Is it we reached
milestone all?
On Tue, Nov 28, 2006 at 03:02:59PM -0500, Elizabeth Schwartz wrote:
So I rebuilt my production mail server as Solaris 10 06/06 with zfs, it ran
for three months, and it's had no hardware errors. But my zfs file system
seems to have died a quiet death. Sun engineering response was to point to
On Tue, Nov 28, 2006 at 08:03:33PM -0500, Toby Thain wrote:
As others have pointed out, you wouldn't have reached this point with
redundancy - the file would have remained intact despite the hardware
failure. It is strictly correct that to restore the data you'd need
to refer to a
IMO:
- The hardest problem in the case of bleaching individual files or
datasets is dealing with snapshots/clones:
- blocks not shared with parent/child snapshots can be bleached with
little trouble, of course.
- But what about shared blocks?
IMO we have two options:
On Mon, Dec 18, 2006 at 05:44:08PM -0500, Jeffrey Hutzelman wrote:
On Monday, December 18, 2006 11:32:37 AM -0600 Nicolas Williams
[EMAIL PROTECTED] wrote:
I'd say go for both, (a) and (b). Of course, (b) may not be easy to
implement.
Another option would be to warn the user and set
On Mon, Dec 18, 2006 at 06:46:09PM -0500, Jeffrey Hutzelman wrote:
On Monday, December 18, 2006 05:16:28 PM -0600 Nicolas Williams
[EMAIL PROTECTED] wrote:
Or an iovec-style specification. But really, how often will one prefer
this to truncate-and-bleach? Also, the to-be-bleached
On Tue, Dec 19, 2006 at 02:04:37PM +, Darren J Moffat wrote:
In case it wasn't clear I am NOT proposing a UI like this:
$ zfs bleach ~/Documents/company-finance.odp
Instead ~/Documents or ~ would be a ZFS file system with a policy set
something like this:
# zfs set erase=file:zero
On Tue, Dec 19, 2006 at 04:37:36PM +, Darren J Moffat wrote:
I think you are saying it should have INHERIT set to YES and EDIT set
to NO. We don't currently have any properties like that but crypto will
need this as well - for a very similar reason with clones.
What I mean is that if
On Tue, Dec 19, 2006 at 03:09:03PM -0500, Jeffrey Hutzelman wrote:
On Tuesday, December 19, 2006 01:54:56 PM + Darren J Moffat
[EMAIL PROTECTED] wrote:
While I think having this in the VOP/FOP layer is interesting it isn't
the problem I was trying to solve and to be completely
On Thu, Dec 21, 2006 at 03:31:59PM +, Darren J Moffat wrote:
Pawel Jakub Dawidek wrote:
I like the idea, I really do, but it will be so expensive because of
ZFS' COW model. Not only will file removal or truncation trigger
bleaching, but every single file system modification will... Heh, well,
On Thu, Dec 21, 2006 at 03:47:07PM +, Darren J Moffat wrote:
Nicolas Williams wrote:
James makes a good argument that this scheme won't suffice for customers
who need that level of assurance. I'm inclined to agree. For customers
who don't need that level of assurance then encryption
On Wed, Dec 27, 2006 at 08:45:23AM -0500, Bill Sommerfeld wrote:
I think your paranoia is indeed running a bit high if the alternative is
that some blocks escape bleaching forever when they were freed shortly
before a crash.
Lazy bg bleaching of freed blocks is not enough if you're really
On Tue, Jan 23, 2007 at 04:49:38PM +, Darren J Moffat wrote:
Jeremy Teo wrote:
I'm defining zpool split as the ability to divide a pool into two
separate pools, each with identical FSes. The typical use case would
be to split an N-disk mirrored pool into an N-1 disk pool and a 1-disk
pool, and
On Thu, Jan 25, 2007 at 10:57:17AM +0800, Wee Yeh Tan wrote:
On 1/25/07, Bryan Cantrill [EMAIL PROTECTED] wrote:
...
after all, what was ZFS going to do with that expensive but useless
hardware RAID controller? ...
I almost rolled over reading this.
This is exactly what I went through
On Thu, Jan 25, 2007 at 09:52:05AM -0800, Richard Elling wrote:
Nicolas Williams wrote:
The only benefit of using a HW RAID controller with ZFS is that it
reduces the I/O that the host needs to do, but the trade off is that ZFS
cannot do combinatorial parity reconstruction so that it could
On Tue, Jan 30, 2007 at 06:32:14PM +0100, Roch - PAE wrote:
The only benefit of using a HW RAID controller with ZFS is that it
reduces the I/O that the host needs to do, but the trade off is that ZFS
cannot do combinatorial parity reconstruction so that it could only
detect errors, not
On Tue, Jan 30, 2007 at 06:41:25PM +0100, Roch - PAE wrote:
I think I got the point. Mine was that if the data travels a
single time toward the storage and is corrupted along the
way then there will be no hope of recovering it since the
array was given bad data. Having the data travel twice
On Wed, Jan 31, 2007 at 01:11:52PM -0800, Brian Gao wrote:
Which structure in ZFS stores file property info such as permissions, owner
etc? What is its relationship with uberblock, block pointer or metadnode etc?
I thought it would be dnode. However, I don't know which structure in dnode
is
On Wed, Jan 31, 2007 at 09:31:34PM +, James Blackburn wrote:
Or look at pages 46-50 of the ZFS on-disk format document:
http://opensolaris.org/os/community/zfs/docs/ondiskformatfinal.pdf
There's a final version? That link appears to be broken (and the
latest version linked from the
On Fri, Feb 02, 2007 at 08:46:34AM +, Darren J Moffat wrote:
My current plan is that, once set, the encryption property that describes
which algorithm (mechanism actually: algorithm, key length, and mode, e.g.
aes-128-ccm) cannot be changed; it would be inherited by any clones.
Creating new
On Fri, Feb 02, 2007 at 12:25:04AM +0100, Pawel Jakub Dawidek wrote:
On Thu, Feb 01, 2007 at 11:00:07AM +, Darren J Moffat wrote:
Neil Perrin wrote:
No it's not the final version or even the latest!
The current on disk format version is 3. However, it hasn't
diverged much and the
On Fri, Jan 26, 2007 at 05:15:28PM -0700, Jason J. W. Williams wrote:
Could the replication engine eventually be integrated more tightly
with ZFS? That would be slick alternative to send/recv.
But a continuous zfs send/recv would be cool too. In fact, I think ZFS
tightly integrated with SNDR
On Wed, Feb 21, 2007 at 04:20:58PM -0800, Eric Schrock wrote:
Seems like there are two pieces you're suggesting here:
1. Some sort of background process to proactively find errors on disks
in use by ZFS. This will be accomplished by a background scrubbing
option, dependent on the
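The background scrubbing option referred to is driven from the command line; the pool name is illustrative.

```shell
zpool scrub tank       # walk and verify every block in the background
zpool status -v tank   # report scrub progress and any errors found
```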
On Sat, Feb 24, 2007 at 11:15:33AM -0800, Tor wrote:
I have read the FAQ, and it states that encrypted data can't be
compressed. But is there any point in using compression on my media
file server that will store ripped DVDs (which are compressed in
their default state), our digital photos
On Sun, Feb 25, 2007 at 10:46:38PM -0800, Vikrant Kumar Choudhary wrote:
I am using Solaris 10 and I am not a super user. How do I know which
filesystem I am using? And can I use the ZFS filesystem locally? I mean,
in case my admin is not using it, can I test it locally?
df(1M) and mount(1M) show
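Concretely, a non-root user can answer both questions from the command line; the path is illustrative.

```shell
df -n /export/home    # prints the mount point's filesystem type
zfs list              # lists ZFS datasets visible to you, if any exist
```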
On Mon, Feb 26, 2007 at 10:05:08AM -0800, Eric Schrock wrote:
The slow part of zpool import is actually discovering the pool
configuration. This involves examining every device on the system (or
every device within a 'import -d' directory) and seeing if it has any
labels. Internally, the
On Mon, Feb 26, 2007 at 10:10:15AM -0800, Eric Schrock wrote:
On Mon, Feb 26, 2007 at 12:06:14PM -0600, Nicolas Williams wrote:
Couldn't all that tasting be done in parallel?
Yep, that's certainly possible. Sounds like a perfect feature for
someone in the community to work on :-) Simply
On Mon, Feb 26, 2007 at 10:32:22AM -0800, Eric Schrock wrote:
On Mon, Feb 26, 2007 at 12:27:48PM -0600, Nicolas Williams wrote:
What is slow, BTW? The open(2)s of the devices? Or the label reading?
And is there a way to do async open(2)s w/o a thread per-open? The
open(2) man page
On Thu, Mar 01, 2007 at 11:05:44AM -0500, ozan s. yigit wrote:
i am forced to reinstall s10u3 on my x4500. SP 1.1.1. exported zpool,
and discovered during the reinstall that the controller numbers have
changed. what used to be c5t0d0 is now c6t0d0. as it happens the exported
zpool is using
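Because import identifies pools by their on-disk labels rather than by device paths, the renumbering itself is usually harmless; the pool name below is illustrative.

```shell
zpool import          # scan devices, list exported pools found by label
zpool import tank     # import by name even though c5t0d0 became c6t0d0
# If the devices live somewhere non-default, point import at them:
zpool import -d /dev/dsk tank
```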
On Wed, Mar 28, 2007 at 06:55:17PM -0700, Anton B. Rang wrote:
It's not defined by POSIX (or Solaris). You can rely on being able to
atomically write a single disk block (512 bytes); anything larger than
that is risky. Oh, and it has to be 512-byte aligned.
File systems with overwrite
On Mon, Apr 02, 2007 at 03:27:39PM -0700, Anton B. Rang wrote:
All file systems provide writes by default which are
atomic with respect to readers of the file.
Surely, only in the absence of a crash - otherwise,
POSIX would require implementation of transactional
write semantics in
On Wed, Apr 04, 2007 at 10:08:07AM -0700, Adam Leventhal wrote:
On Wed, Apr 04, 2007 at 03:34:13PM +0200, Constantin Gonzalez wrote:
- RAID-Z is _very_ slow when one disk is broken.
Do you have data on this? The reconstruction should be relatively cheap
especially when compared with the
On Thu, Apr 12, 2007 at 05:47:46PM -0400, Bill Sommerfeld wrote:
On Thu, 2007-04-12 at 14:09 -0600, Mark Shellenbaum wrote:
Pawel Jakub Dawidek wrote:
What are your suggestions?
I am currently working on adding a number of the BSD flags into ZFS.
The existence of the FreeBSD
On Thu, Apr 12, 2007 at 06:59:45PM -0300, Toby Thain wrote:
On 12-Apr-07, at 12:15 AM, Rayson Ho wrote:
On 4/11/07, Toby Thain [EMAIL PROTECTED] wrote:
I hope this isn't turning into a License flame war. But why do Linux
contributors not deserve the right to retain their choice of license
On Thu, Apr 12, 2007 at 07:07:33PM -0300, Toby Thain wrote:
Now, all we have to do is respect each other. End of problem.
I think this sub-thread started with a comment by you about someone
else's kneejerk anti-GPL comments.
I don't recall any such comments in this thread. I think you might
On Tue, Apr 17, 2007 at 01:00:00PM -0400, Rayson Ho wrote:
Apple is integrating DTrace too, and yet I don't see more than 10% of
the Mac users writing D programs.
But 100% of MacOS users might end up using DTrace without knowing it.
On Wed, Apr 18, 2007 at 03:47:55PM -0400, Dennis Clarke wrote:
Maybe with a definition of what a backup is and then some way to
achieve it. As far as I know the only real backup is one that can be
tossed into a vault and locked away for seven years. Or any arbitrary
amount of time within in
On Wed, Apr 18, 2007 at 04:32:18PM -0400, Dennis Clarke wrote:
I just finished installing Solaris 10 and ZFS at a manufacturing site
that needs fast, cheap storage. It's real tough to argue with ZFS once
you see it in action. They were sold and I went ahead with a few
terabytes of storage
1 - 100 of 361 matches