On Mon, Dec 04, 2006 at 04:10:07PM -0800, Mark Fasheh wrote:
Hi Steve,
On Mon, Dec 04, 2006 at 10:54:53AM +, Steven Whitehouse wrote:
In the future, I'd like to see a relative atime mode, which functions
in the manner described by Valerie Henson at:
http://lkml.org/lkml/2006/8
On Mon, Dec 04, 2006 at 04:36:20PM -0800, Valerie Henson wrote:
On Mon, Dec 04, 2006 at 04:10:07PM -0800, Mark Fasheh wrote:
Hi Steve,
On Mon, Dec 04, 2006 at 10:54:53AM +, Steven Whitehouse wrote:
In the future, I'd like to see a relative atime mode, which functions
On Tue, Dec 05, 2006 at 08:58:02PM -0800, Andrew Morton wrote:
On Mon, 4 Dec 2006 16:36:20 -0800 Valerie Henson [EMAIL PROTECTED] wrote:
Add relatime (relative atime) support. Relative atime only updates
the atime if the previous atime is older than the mtime or ctime.
Like noatime
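The relatime rule quoted above can be sketched as a small predicate (a hypothetical helper for illustration, not the actual kernel patch):

```python
def relatime_needs_update(atime, mtime, ctime):
    """Relative atime: write a new atime only when the stored atime is
    older than the mtime or the ctime.  Tools that compare atime against
    mtime (e.g. mail-notification) keep working, while the vast majority
    of atime writes are skipped, as with noatime."""
    return atime < mtime or atime < ctime

# mtime newer than the stored atime: the atime must be refreshed
print(relatime_needs_update(100, 200, 150))   # True
# atime already newest of the three: skip the update
print(relatime_needs_update(300, 200, 150))   # False
```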
Just some quick notes on possible ways to fix the ext2 fsync bug that
eXplode found. Whether or not anyone will bother to implement it is
another matter.
Background: The eXplode file system checker found a bug in ext2 fsync
behavior. Do the following: truncate file A, create file B which
On Thu, Feb 15, 2007 at 09:20:21AM -0500, Theodore Tso wrote:
It's actually not the case that fsck will complete the truncate for
file A. The problem is that while e2fsck is processing indirect
blocks in pass 1, the block which is marked as file A's indirect block
(but which actually
Hi folks,
I'm chairing a workshop on storage security and survivability this
fall. I'd really like to have participation from the Linux file
systems and storage community. It's not much work; a good 2-6 page
paper or a really killer abstract is enough to get you into the
workshop. This is also
On Tue, Apr 24, 2007 at 11:34:48PM +0400, Nikita Danilov wrote:
Maybe I failed to describe the problem precisely.
Suppose that all chunks have been checked. After that, for every inode
I0 having continuations I1, I2, ... In, one has to check that every
logical block is present in at most
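The per-file check Nikita describes (every logical block claimed by at most one of I0, I1, ... In) can be sketched with sets; names are illustrative, and this assumes each continuation inode's logical block numbers have already been read:

```python
def blocks_unique(inodes):
    """inodes: iterable of sets of logical block numbers, one set per
    inode in the chain I0, I1, ... In.  Return True if every logical
    block is claimed by at most one inode in the chain."""
    seen = set()
    for blocks in inodes:
        if seen & blocks:     # a block already claimed earlier in the chain
            return False
        seen |= blocks
    return True
```

The cost Nikita is pointing at is exactly the `seen` set: it must span all continuation inodes of the file, so the check cannot be confined to one chunk.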
On Wed, Apr 25, 2007 at 03:34:03PM +0400, Nikita Danilov wrote:
What is more important, design puts (as far as I can see) no upper limit
on the number of continuation inodes, and hence, even if _average_ fsck
time is greatly reduced, occasionally it can take more time than ext2 of
the same
On Wed, Apr 25, 2007 at 08:54:34PM +1000, David Chinner wrote:
On Tue, Apr 24, 2007 at 04:53:11PM -0500, Amit Gud wrote:
The structure looks like this:
 ---------    ---------
| cnode 0 |--| cnode 0 |-- to another cnode or NULL
 ---------    ---------
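The diagram above is a singly linked chain of continuation nodes; a minimal sketch of that structure (hypothetical names, in-memory only, no on-disk format implied):

```python
class CNode:
    """One continuation node.  Each cnode lives in some chunk and points
    at the next cnode in the chain, or None at the end of the chain."""
    def __init__(self, chunk, next_cnode=None):
        self.chunk = chunk
        self.next = next_cnode

def chain_chunks(head):
    """Walk the chain from its head and list the chunks it crosses."""
    chunks = []
    node = head
    while node is not None:
        chunks.append(node.chunk)
        node = node.next
    return chunks
```

Walking the chain is how a checker discovers which chunks a file touches, which is why later replies worry about what happens when a link in the chain is corrupted.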
On Wed, Apr 25, 2007 at 05:38:34AM -0600, Andreas Dilger wrote:
The case where only a fsck of the corrupt chunk is done would not find the
cnode references. Maybe there needs to be per-chunk info which contains
a list/bitmap of other chunks that have cnodes shared with each chunk?
Yes,
On Thu, Apr 26, 2007 at 12:05:04PM -0400, Jeff Dike wrote:
No, I'm referring to a different file. The scenario is that you have
a growing file in a nearly full disk with files being deleted (and
thus space being freed) such that allocations for the growing file
bounce back and forth between
On Thu, Apr 26, 2007 at 10:47:38AM +0200, Jan Kara wrote:
Do I get it right that you just have in each cnode a pointer to the
previous next cnode? But then if two consecutive cnodes get corrupted,
you have no way to connect the chain, do you? If each cnode contained
some unique identifier
On Fri, Apr 27, 2007 at 11:06:47AM -0400, Jeff Dike wrote:
On Thu, Apr 26, 2007 at 09:58:25PM -0700, Valerie Henson wrote:
Here's an example, spelled out:
Allocate file 1 in chunk A.
Grow file 1.
Chunk A fills up.
Allocate continuation inode for file 1 in chunk B.
Chunk A gets some
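The ping-pong scenario spelled out above can be sketched as a toy allocator (all names hypothetical) that starts a continuation segment whenever the tail chunk fills; it assumes some chunk always has free space:

```python
def grow_file(chunks, file_chain, blocks_needed):
    """chunks: dict mapping chunk name -> free block count.
    file_chain: list of (chunk_name, nblocks) segments; the last entry
    is the tail being appended to.  When the tail's chunk fills, start
    a continuation segment in the chunk with the most free space."""
    for _ in range(blocks_needed):
        chunk, _n = file_chain[-1]
        if chunks[chunk] == 0:
            # tail chunk is full: continuation inode in a new chunk
            chunk = max(chunks, key=chunks.get)
            file_chain.append((chunk, 0))
        file_chain[-1] = (chunk, file_chain[-1][1] + 1)
        chunks[chunk] -= 1
    return file_chain
```

If deletions later free space in chunk A, the next fill of chunk B sends the file back to A, and each bounce costs another continuation inode, which is the pathology Jeff is describing.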
On Mon, Apr 23, 2007 at 02:53:33PM -0600, Andreas Dilger wrote:
Also, is it considered a cross-chunk reference if a directory entry is
referencing an inode in another group? Should there be a continuation
inode in the local group, or is the directory entry itself enough?
(Sorry for the
On Mon, Apr 23, 2007 at 02:05:47AM +0530, Karuna sagar K wrote:
Hi,
The attached code contains program to estimate the cross-chunk
references for ChunkFS file system (idea from Valh). Below are the
results:
Nice work! Thank you very much for doing this!
-VAL
On Mon, Apr 23, 2007 at 08:13:06PM -0400, Theodore Tso wrote:
There may also be special things we will need to do to handle
scenarios such as BackupPC, where if it looks like a directory
contains a huge number of hard links to a particular chunk, we'll need
to make sure that directory is
On Sun, Apr 29, 2007 at 02:21:13PM +0200, Jörn Engel wrote:
On Sat, 28 April 2007 17:05:22 -0500, Matt Mackall wrote:
This is a relatively simple scheme for making a filesystem with
incremental online consistency checks of both data and metadata.
Overhead can be well under 1% disk space
On Sun, Apr 29, 2007 at 08:40:42PM -0500, Matt Mackall wrote:
This does mean that our time to make progress on a check is bounded at
the top by the size of our largest file. If we have a degenerate
filesystem filled with a single file, this will in fact take as long
as a conventional fsck.
On Wed, May 09, 2007 at 03:16:41PM +0400, Nikita Danilov wrote:
I guess I miss something. If chunkfs maintains at most one continuation
per chunk invariant, then continuation inode might end up with multiple
byte ranges, and to check that they do not overlap one has to read
indirect blocks
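The overlap test on a continuation inode's byte ranges can be sketched by sorting the ranges and comparing neighbours (illustrative names; ranges are half-open `(start, end)` pairs):

```python
def ranges_overlap(ranges):
    """ranges: list of (start, end) byte ranges, end exclusive.
    Return True if any two ranges overlap.  After sorting by start,
    only adjacent pairs need to be compared."""
    ordered = sorted(ranges)
    for (_s1, e1), (s2, _e2) in zip(ordered, ordered[1:]):
        if s2 < e1:
            return True
    return False
```

The check itself is cheap; Nikita's objection is that gathering the ranges means reading the indirect blocks of every continuation inode in the chain.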
On Wed, May 09, 2007 at 12:06:52PM -0500, Matt Mackall wrote:
On Wed, May 09, 2007 at 12:56:39AM -0700, Valerie Henson wrote:
On Sun, Apr 29, 2007 at 08:40:42PM -0500, Matt Mackall wrote:
This does mean that our time to make progress on a check is bounded at
the top by the size of our
On Sun, Apr 29, 2007 at 08:40:42PM -0500, Matt Mackall wrote:
On Sun, Apr 29, 2007 at 07:23:49PM -0400, Theodore Tso wrote:
There are a number of filesystem corruptions this algorithm won't
catch. The most obvious is one where the directory tree isn't really
a tree, but a cyclic graph.
On Wed, May 09, 2007 at 02:51:41PM -0500, Matt Mackall wrote:
We will, unfortunately, need to be able to check an entire directory
at once. There's no other efficient way to assure that there are no
duplicate names in a directory, for instance.
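A minimal sketch of why the whole directory must be scanned at once (hypothetical helper; entries are directory entry names in on-disk order):

```python
def has_duplicate_names(entries):
    """Duplicate entries can be arbitrarily far apart in a large
    directory, so no per-chunk pass over a slice of the entries can
    rule them out -- the seen-set must cover the entire directory."""
    seen = set()
    for name in entries:
        if name in seen:
            return True
        seen.add(name)
    return False
```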
I don't see that being a major problem for the
Hey all,
I altered Karuna's cref tool to print the number of seconds it would
take to check the cross-references for a chunk. The results look good
for chunkfs: on my laptop /home file system and a 1 GB chunk size, the
per-chunk cross-reference check time would be an average of 5 seconds
and a
The below patch is a proof of concept that e2fsck can get a
performance improvement on file systems with more than one disk
underneath. On my test case, a 500GB file system with 150GB in use
and 10+1 RAID underneath, elapsed time is reduced by 40-50%. I see no
performance improvement in the
On Jan 8, 2008 8:40 PM, Al Boldi [EMAIL PROTECTED] wrote:
Rik van Riel wrote:
Al Boldi [EMAIL PROTECTED] wrote:
Has there been some thought about an incremental fsck?
You know, somehow fencing a sub-dir to do an online fsck?
Search for chunkfs
Sure, and there is TileFS too.
But
On Jan 16, 2008 3:49 AM, Pavel Machek [EMAIL PROTECTED] wrote:
ext3's lets fsck on every 20 mounts is good idea, but it can be
annoying when developing. Having option to fsck while filesystem is
online takes that annoyance away.
I'm sure everyone on cc: knows this, but for the record you can
On Jan 17, 2008 5:15 PM, David Chinner [EMAIL PROTECTED] wrote:
On Wed, Jan 16, 2008 at 01:30:43PM -0800, Valerie Henson wrote:
Hi y'all,
This is a request for comments on the rewrite of the e2fsck IO
parallelization patches I sent out a few months ago. The mechanism is
totally