On Wed, Nov 15, 2006 at 09:58:35PM +, Ceri Davies wrote:
On Wed, Nov 15, 2006 at 12:10:30PM +0100, [EMAIL PROTECTED] wrote:
I think we first need to define what state "up" actually is. Is it that
the kernel has booted? Is it that the root file system is mounted? Is it
that we have reached milestone all?
On Fri, Oct 13, 2006 at 09:22:53PM -0700, Erblichs wrote:
For extremely large files (25 to 100 GB) that are accessed
sequentially for both read and write, I would expect 64k or 128k.
Larger files accessed sequentially don't need any special heuristic for
record size determination:
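Until such a heuristic exists, record size is tuned by hand per dataset; a minimal sketch (the pool/dataset names here are placeholders):

```shell
# Large, sequentially accessed files: a large record size amortizes
# per-block overhead (128k is the largest recordsize ZFS accepts).
zfs set recordsize=128k tank/bigfiles

# Database-style small random I/O: match the application's block size.
zfs set recordsize=8k tank/db
```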
On Fri, Oct 13, 2006 at 11:03:51AM +0200, Joerg Schilling wrote:
Nicolas Williams [EMAIL PROTECTED] wrote:
On Wed, Oct 11, 2006 at 08:24:13PM +0200, Joerg Schilling wrote:
Before we start defining the first official functionality for this Sun
feature,
we should define a mapping
On Fri, Oct 13, 2006 at 08:30:27AM -0700, Matthew Ahrens wrote:
Jeremy Teo wrote:
Would it be worthwhile to implement heuristics to auto-tune
'recordsize', or would that not be worth the effort?
It would be really great to automatically select the proper recordsize
for each file! How do
On Mon, Oct 09, 2006 at 12:44:34PM +0200, Joerg Schilling wrote:
Nicolas Williams [EMAIL PROTECTED] wrote:
You're arguing for treating FV as extended/named attributes :)
I think that'd be the right thing to do, since we have tools that are
aware of those already. Of course, we're
On Fri, Oct 06, 2006 at 06:22:01PM -0700, Joseph Mocker wrote:
Nicolas Williams wrote:
Automatically capturing file versions isn't possible in the general case
with applications that aren't aware of FV.
Don't snapshots have the same problem? A snapshot could potentially be
taken when
On Sat, Oct 07, 2006 at 01:43:29PM +0200, Joerg Schilling wrote:
The only idea I get that matches this criteria is to have the versions
in the extended attribute name space.
Indeed. All that's needed then, CLI UI-wise, beyond what we have now is
a way to rename versions extended attributes to
On Thu, Oct 05, 2006 at 05:25:17PM -0700, David Dyer-Bennet wrote:
No, any sane VC protocol must specifically forbid the checkin of the
stuff I want versioning (or file copies or whatever) for. It's
partial changes, probably doesn't compile, nearly certainly doesn't
work. This level of work
On Mon, Oct 09, 2006 at 09:27:14AM +0800, Wee Yeh Tan wrote:
On 10/7/06, David Dyer-Bennet [EMAIL PROTECTED] wrote:
I've never encountered branch being used that way, anywhere. It's
used for things like developing release 2.0 while still supporting 1.5
and 1.6.
However, especially with
On Sun, Oct 08, 2006 at 10:28:06PM -0400, Jonathan Edwards wrote:
On Oct 8, 2006, at 21:40, Wee Yeh Tan wrote:
On 10/7/06, Ben Gollmer [EMAIL PROTECTED] wrote:
Hmm, what about file.txt -> ._file.txt.1, ._file.txt.2, etc? If you
don't like the _ you could use @ or some other character.
It
On Sun, Oct 08, 2006 at 11:16:21PM -0400, Jonathan Edwards wrote:
On Oct 8, 2006, at 22:46, Nicolas Williams wrote:
You're arguing for treating FV as extended/named attributes :)
kind of - but one of the problems with EAs is the increase/bloat in
the inode/dnode structures
On Sun, Oct 08, 2006 at 03:38:54PM -0700, Erik Trimble wrote:
Joseph Mocker wrote:
Which brings me back to the point of file versioning. If an
implementation were based on something like when a file is open()ed
with write bits set. There would be no potential for broken files like
this.
On Fri, Oct 06, 2006 at 07:37:47PM -0600, Chad Leigh -- Shire.Net LLC wrote:
On Oct 6, 2006, at 7:33 PM, Erik Trimble wrote:
This is what Nico and I are talking about: if you turn on file
versioning automatically (even for just a directory, and not a
whole filesystem), the number of files
On Fri, Oct 06, 2006 at 11:25:29PM +0800, Jeremy Teo wrote:
A couple of use cases I was considering off hand:
1. Oops, I truncated my file.
2. Oops, I saved over my file.
3. Oops, an app corrupted my file.
4. Oops, I rm -rf'd the wrong directory.
All of which can be solved by periodic snapshots,
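A minimal periodic-snapshot sketch along those lines (the dataset name, schedule, and snapshot names are placeholders):

```shell
# Take a timestamped snapshot; run from cron, e.g. hourly.
zfs snapshot tank/home@$(date +%Y-%m-%d-%H%M)

# Recover a clobbered file from the read-only snapshot directory:
cp /tank/home/.zfs/snapshot/2006-10-06-1200/file.txt /tank/home/file.txt
```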
On Fri, Oct 06, 2006 at 09:18:16AM -0700, Anton B. Rang wrote:
ClearCase is a version control system, though — not the same as file
versioning.
But they have a filesystem interface. Crucially, this involves
additional interfaces. VC cannot be automatic.
On Fri, Oct 06, 2006 at 03:30:20PM -0600, Chad Leigh -- Shire.Net LLC wrote:
On Oct 6, 2006, at 3:08 PM, Erik Trimble wrote:
OK. So, now we're on to FV. As Nico pointed out, FV is going to
need a new API. Using the VMS convention of simply creating file
names with a version string
On Fri, Oct 06, 2006 at 04:06:37PM -0600, Chad Leigh -- Shire.Net LLC wrote:
On Oct 6, 2006, at 3:53 PM, Nicolas Williams wrote:
On Fri, Oct 06, 2006 at 03:30:20PM -0600, Chad Leigh -- Shire.Net
LLC wrote:
On Oct 6, 2006, at 3:08 PM, Erik Trimble wrote:
OK. So, now we're on to FV. As Nico
On Thu, Sep 28, 2006 at 05:29:27PM +0200, [EMAIL PROTECTED] wrote:
Any mkdir in a builds directory on a shared build machine would be
very cool, because then every user/project automatically gets a ZFS
filesystem.
Why map it to mkdir rather than using zfs create? Because mkdir means
On Wed, Sep 27, 2006 at 08:55:48AM -0600, Mark Maybee wrote:
Patrick wrote:
So ... how about an automounter? Is this even possible? Does it exist?
*sigh*, one of the issues we recognized, when we introduced the new
cheap/fast file system creation, was that this new model would stress
the
On Thu, Sep 28, 2006 at 05:36:16PM +0200, Robert Milkowski wrote:
Hello Chris,
Thursday, September 28, 2006, 4:55:13 PM, you wrote:
CG I keep thinking that it would be useful to be able to define a
CG zfs file system where all calls to mkdir resulted not just in a
CG directory but in a
On Fri, Sep 15, 2006 at 09:31:04AM +0100, Ceri Davies wrote:
On Thu, Sep 14, 2006 at 05:08:18PM -0500, Nicolas Williams wrote:
Yes, but the checksum is stored with the pointer.
So then, for each file/directory there's a dnode, and that dnode has
several block pointers to data blocks
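The checksum-in-the-pointer layout described above can be sketched like this (a toy model, not ZFS's actual blkptr_t; sha256 stands in for the configurable checksum):

```python
import hashlib

def checksum(data: bytes) -> str:
    # ZFS uses fletcher or sha256; sha256 here as a stand-in.
    return hashlib.sha256(data).hexdigest()

class BlockPointer:
    """The checksum of a block lives in the pointer to it, not in the
    block itself, so a read can be validated against the parent."""
    def __init__(self, data: bytes):
        self.data = data
        self.cksum = checksum(data)

    def read(self) -> bytes:
        # Verify the block against the checksum stored in the pointer.
        if checksum(self.data) != self.cksum:
            raise IOError("checksum mismatch: block is corrupt")
        return self.data
```

Because every pointer carries its child's checksum, corruption anywhere below is detected on the way down.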
On Thu, Sep 14, 2006 at 10:32:59PM +0200, Henk Langeveld wrote:
Bady, Brant RBCM:EX wrote:
Part of the archiving process is to generate checksums (I happen to use
MD5), and store them with other metadata about the digital object in
order to verify data integrity and demonstrate the
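That checksum step, sketched with Python's hashlib (chunked reading so large archival objects don't have to fit in memory; the chunk size is arbitrary):

```python
import hashlib

def md5_of_file(path, chunk_size=65536):
    """Compute the MD5 digest of a file, reading in fixed-size chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```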
On Thu, Sep 14, 2006 at 06:26:46PM -0500, Mike Gerdts wrote:
On 9/14/06, Chad Lewis [EMAIL PROTECTED] wrote:
Better still would be the forthcoming cryptographic extensions in some
kind of digital-signature mode.
When I first saw extended attributes I thought that would be a great
place to
On Tue, Sep 12, 2006 at 05:57:33PM +1000, Boyd Adamson wrote:
On 12/09/2006, at 1:28 AM, Nicolas Williams wrote:
Now you have a persistent SSH connection to remote-host that forwards
connections to localhost:12345 to port 56789 on remote-host.
So now you can use your Perl scripts more
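The setup described reduces to something like the following (the -f/-N flags background the connection without running a remote command; port numbers are from the quoted message, the hostname is a placeholder):

```shell
# Persistent background SSH connection: connections to local port
# 12345 are forwarded to port 56789 on remote-host.
ssh -f -N -L 12345:localhost:56789 remote-host
```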
On Tue, Sep 12, 2006 at 10:36:30AM +0100, Darren J Moffat wrote:
Mike Gerdts wrote:
Is there anything in the works to compress (or encrypt) existing data
after the fact? For example, a special option to scrub that causes
the data to be re-written with the new properties could potentially do
On Wed, Aug 30, 2006 at 07:51:45PM +0100, Dick Davies wrote:
On 30/08/06, Matthew Ahrens [EMAIL PROTECTED] wrote:
'zfs send' is *incredibly* faster than rsync.
That's interesting. We had considered it as a replacement for a
certain task (publishing a master docroot to multiple webservers)
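For a task like that, the send/receive pipeline looks roughly like this (hostnames, dataset, and snapshot names are hypothetical):

```shell
# Initial full copy to a webserver:
zfs snapshot tank/docroot@monday
zfs send tank/docroot@monday | ssh web1 zfs receive tank/docroot

# Later: send only the blocks changed since the previous snapshot.
zfs snapshot tank/docroot@tuesday
zfs send -i tank/docroot@monday tank/docroot@tuesday | \
    ssh web1 zfs receive tank/docroot
```

Unlike rsync, the incremental send never has to walk or compare file trees; it already knows which blocks changed.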
On Thu, Aug 24, 2006 at 10:46:27AM -0700, Noel Dellofano wrote:
ZFS actually uses the ZAP to handle directory lookups. The ZAP is
not a btree but a specialized hash table where a hash for each
directory entry is generated based on that entry's name. Hence you
won't be doing any sort of
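A toy illustration of that name-hash lookup idea (not the real ZAP data structure; crc32 stands in for the real hash):

```python
import zlib

class ToyZap:
    """Hash-table directory: entries are found by hashing the name,
    so lookup cost doesn't grow linearly with directory size."""
    def __init__(self, nbuckets=64):
        self.nbuckets = nbuckets
        self.buckets = [[] for _ in range(nbuckets)]

    def _bucket(self, name: str):
        # The entry's bucket is derived from its name alone.
        return self.buckets[zlib.crc32(name.encode()) % self.nbuckets]

    def add(self, name: str, obj_id: int):
        self._bucket(name).append((name, obj_id))

    def lookup(self, name: str):
        # Only one bucket is scanned, regardless of directory size.
        for n, obj_id in self._bucket(name):
            if n == name:
                return obj_id
        raise FileNotFoundError(name)
```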
On Tue, Aug 15, 2006 at 06:12:47PM -0400, Bill Sommerfeld wrote:
On Tue, 2006-08-15 at 12:47 -0700, Eric Schrock wrote:
The copy-on-write nature of ZFS makes this extremely difficult,
particularly w.r.t. to snapshots. That's not to say it can't be solved,
only that it won't be solved in
On Thu, Aug 03, 2006 at 01:35:54AM -0700, Tom Simpson wrote:
Well,
You're spot on. Turns out that our datacentre boys change the umask of root
to 0027.
:-(
Many years ago, back in the days of Solaris 2.5.1, changing root's umask
to 027 caused problems if you, say, restarted the
On Thu, Jul 27, 2006 at 10:17:47AM -0700, Praveen Mogili wrote:
That S10 and ZFS are open source is great, but if there is
some solid material with technical details I would
really appreciate it.
The ZFS on-disk file format is here:
On Mon, Jul 17, 2006 at 10:11:35AM -0700, Matthew Ahrens wrote:
I want root to create a new filesystem for a new user under
the /export/home filesystem, but then have that user get the
right privs via inheritance rather than requiring root to run
a set of zfs commands.
In that case, how
On Wed, Jun 21, 2006 at 04:34:59PM -0600, Mark Shellenbaum wrote:
Can you give us an example of a 'file' the ssh-agent wishes to open and
what the permissions are on the file and also what privileges the
ssh-agent has, and what the expected results are.
ssh-agent(1) should need to open no
On Tue, Jun 27, 2006 at 09:41:10AM -0600, Gregory Shaw wrote:
This is getting pretty picky. You're saying that ZFS will detect any
errors introduced after ZFS has gotten the data. However, as stated
in a previous post, that doesn't guarantee that the data given to ZFS
wasn't already
On Fri, Jun 23, 2006 at 02:20:54PM -0400, Dale Ghent wrote:
Second, while there is a way for Joe Random to submit a bug, there is
zero way for Joe Random to interact with a bug. No voting to bump or
drop a priority, no easy way to find hot topic bugs, no way to add
one's own notes to the
On Thu, Jun 22, 2006 at 11:55:05AM +0200, [EMAIL PROTECTED] wrote:
Yes. It's kind of enticing.
I'm not entirely clear as to the problem which it solves; I think
I'd much rather have a user which cannot modify anything.
The canonical example would be, I think, ssh-agent(1), although I'm
not
On Thu, Jun 22, 2006 at 12:54:32PM +0200, Nicolai Johannes wrote:
Concerning the reopen problem of files created in world writable
directories: One may use the following algorithm: First compute the
permissions of the newly created file. For every permission granted
to the user or group,
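One common shape for that reopen check, sketched in Python (O_NOFOLLOW plus an fstat ownership test; this illustrates the general pattern, not the specific algorithm from the quoted message):

```python
import os
import stat

def reopen_own_file(path):
    """Reopen a file in a world-writable directory, refusing symlinks
    and files owned by someone else (guards against link attacks)."""
    fd = os.open(path, os.O_RDWR | os.O_NOFOLLOW)
    st = os.fstat(fd)
    # Check the already-open descriptor, not the path, to avoid a
    # race between the check and the open.
    if st.st_uid != os.getuid() or not stat.S_ISREG(st.st_mode):
        os.close(fd)
        raise PermissionError("unexpected owner or file type: %s" % path)
    return fd
```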
On Thu, Jun 22, 2006 at 11:06:48AM -0700, Jonathan Adams wrote:
On Thu, Jun 22, 2006 at 12:49:03PM -0500, Nicolas Williams wrote:
On Thu, Jun 22, 2006 at 12:54:32PM +0200, Nicolai Johannes wrote:
Concerning the reopen problem of files created in world writable
directories: One may use
On Thu, Jun 22, 2006 at 04:22:22PM -0700, Joe Little wrote:
Again, the issue is the multiple fsyncs that NFS requires, and likely
the serialization of those iscsi requests. Apparently, there is a
basic latency in iscsi that one could improve upon with FC, but we are
definitely in the all
On Wed, Jun 21, 2006 at 10:41:50AM -0600, Neil Perrin wrote:
Why is this option available then? (Yes, that's a loaded question.)
I wouldn't call it an option, but an internal debugging switch that I
originally added to allow progress when initially integrating the ZIL.
As Roch says it really
On Thu, Jun 22, 2006 at 01:01:38AM +0200, [EMAIL PROTECTED] wrote:
I'm not sure if I like the name, then; nor the emphasis on the
euid/egid (as those terms are not commonly used in the kernel;
there's a reason why the effective uid was cr->cr_uid and not cr_euid).
In other words, what you are
On Thu, Jun 22, 2006 at 02:45:50AM +0200, Nicolai Johannes wrote:
So as I have understood you, explaining the new privileges with the
term anonymous user would be better? I actually thought about that
idea, but there is a subtle difference:
Hmmm, no I have no good name for it.
Concerning
On Wed, Jun 07, 2006 at 11:15:43AM -0700, Philip Brown wrote:
Also, why shouldn't lofs grow similar support?
aha!
This to me sounds much, much better. Put all the funky, potentially
disastrous code in lofs, not in zfs, please :-) plus that way any
filesystem will potentially get the
On Wed, Jun 07, 2006 at 06:48:01PM -0400, Bill Sommerfeld wrote:
On Wed, 2006-06-07 at 17:31, Nicolas Williams wrote:
Views would be faster, initially (they could be O(1) to create),
if you're not incrementally maintaining indexes, and you want O(1)
creation and O(D) readdir (where D
On Tue, May 30, 2006 at 06:19:16AM +0200, [EMAIL PROTECTED] wrote:
The requirement is not that inodes and data are separate; the requirement
is a specific upper bound on disk transactions. The question therefore
is not when will ZFS be able to separate inodes and data; the question
is when ZFS
On Sat, May 27, 2006 at 08:29:05AM +1000, grant beattie wrote:
is raidz double parity optional or mandatory?
Backwards compatibility dictates that it will be optional.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
On Thu, May 18, 2006 at 02:23:55PM -0600, Gregory Shaw wrote:
I'd agree except for backups. If the pools are going to grow beyond a
reasonable-to-backup and reasonable-to-restore threshold (measured by
the backup window), it would be practical to break it into smaller
pools.
Speaking of
On Thu, May 18, 2006 at 03:41:13PM -0700, Erik Trimble wrote:
On the topic of ZFS snapshots:
does the snapshot just capture the changed _blocks_, or does it
effectively copy the entire file if any block has changed?
Incremental sends capture changed blocks.
Snapshots capture all of the FS
On Mon, May 15, 2006 at 07:16:38PM +0200, Franz Haberhauer wrote:
Nicolas Williams wrote:
Yes, but remember, DB vendors have adopted new features before -- they
want to have the fastest DB. Same with open source web servers. So I'm
a bit optimistic.
Yes, but they usually adopt it only
On Mon, May 15, 2006 at 11:17:17AM -0700, Bart Smaalders wrote:
Perhaps an fadvise call is in order?
We already have directio(3C).
(That was a surprise for me also.)
On Sat, May 13, 2006 at 08:23:55AM +0200, Franz Haberhauer wrote:
Given that ISV apps can be only changed by the ISV who may or may not be
willing to
use such a new interface, having a no cache property for the file - or
given that filesystems
are now really cheap with ZFS - for the
On Fri, May 12, 2006 at 05:23:53PM +0200, Roch Bourbonnais - Performance
Engineering wrote:
For read it is an interesting concept. Since
Reading into cache
Then copy into user space
then keep data around but never use it
is not optimal.
So 2 issues, there is the cost
On Fri, May 12, 2006 at 06:33:00PM +0200, Roch Bourbonnais - Performance
Engineering wrote:
Directio is non-POSIX anyway, and given that people have been
trained to inform the system that the cache won't be useful,
and that it's a hard problem to detect automatically, let's
avoid the copy and save
The ZFS discuss list is re-delivering old messages.
Have the problems with the archives been fixed?
Nico
On Fri, May 12, 2006 at 09:59:56AM -0700, Richard Elling wrote:
On Fri, 2006-05-12 at 10:42 -0500, Anton Rang wrote:
Now latency-wise, the cost of copy is small compared to the
I/O, right? So it now turns into an issue of saving some
CPU cycles.
CPU cycles and memory bandwidth
6370738 zfs diffs filesystems
On Thu, May 11, 2006 at 03:38:59PM +0100, Darren J Moffat wrote:
What would the output of zfs diffs be ?
My original conception was:
- dnode # + changed blocks
- + some naming hints so that one could quickly find changed dnodes in
clones
I talked about this with Bill Moore and he came up
On Thu, May 11, 2006 at 11:15:12AM -0400, Bill Sommerfeld wrote:
This situation is analogous to the merge with common ancestor
operations performed on source code by most SCM systems; with a named
snapshot as the clone base, the ancestor is preserved and can easily be
retrieved.
Yes, and in
On Mon, May 08, 2006 at 10:23:44PM +0100, Tim Foster wrote:
This could be easily implemented via a set of SMF instances which
create/destroy cron jobs which would themselves call a simple script
responsible for taking the snapshots.
There was talk a while back about an extended cron service
On Fri, May 05, 2006 at 09:43:05AM -0700, Marion Hakanson wrote:
Interesting discussion. I've often been impressed at how NetApp-like
the overall ZFS feature-set is (implies that I like NetApp's). Is it
verboten to compare ZFS to NetApp? I hope not
It's a public list, you can do the
On Thu, May 04, 2006 at 12:39:59AM -0700, Jeff Bonwick wrote:
Why not use the Solaris audit facility?
Several reasons:
(1) We want the history to follow the data, not the host. If you
export the pool from one host and import it on another, we want
the command history to move
On Wed, May 03, 2006 at 03:34:56PM -0700, Ed Gould wrote:
I think this might be a case where a structured record (like the
compact XML suggestion made earlier) would help. At least having
distinguished start and end markers (whether they be one byte each,
or XML constructs) for a record