blk_recalc_rq_segments calls blk_recount_segments on each bio,
then does some extra calculations to handle segments that overlap
two bios.
If we merge the code from blk_recount_segments into
blk_recalc_rq_segments, we can process the whole request one bio_vec
at a time, and not need the messy cro
Some requests signal partial completion. We currently record this
by updating bi_idx, bv_len, and bv_offset.
This is bad if the bi_io_vec is to be shared.
So instead keep in "struct request" the amount of the first bio
that has completed. This is "first_offset" (i.e. offset into the
first bio). U
Currently, bi_end_io can be called multiple times as sub-requests complete.
However, no ->bi_end_io function wants to know about that, so
only call it when the bio is complete.
Note that bi_sector and bi_size are now not updated when subrequests
complete. This does not appear to be a problem as they a
As bi_end_io is only called once when the request is complete,
the 'size' argument is now redundant. Remove it.
Now there is no need for bio_endio to subtract the size completed
from bi_size. So don't do that either.
While we are at it, change bi_end_io to return void.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
This count is currently only used by raid5 (which used to use bi_phys_segments),
but it will be used more widely in future.
generic_make_request sets the count to 1, and bio_endio decrements it and
calls bi_end_io only when it hits zero. A make_request_fn can do whatever
it likes if it doesn't c
When a read request that bypassed the cache needs to be retried
(due to device failure) we need to process it one stripe_head at a time,
and record where we were up to.
We were recording this in bi_hw_segments. But as there is only
ever one such request that is being resubmitted, this info can
be
ll_back_merge_fn is currently exported to SCSI where it is used,
together with blk_rq_bio_prep, in exactly the same way these
functions are used in __blk_rq_map_user.
So move the common code into a new function (blk_rq_append_bio), and
don't export ll_back_merge_fn any longer.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
blk_rq_bio_prep is exported for use in exactly
one place. That place can benefit from using
the new blk_rq_append_bio instead.
So
- change dm-emc to call blk_rq_append_bio
- stop exporting blk_rq_bio_prep, and
- initialise rq_disk in blk_rq_bio_prep,
as dm-emc needs it.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
These have very similar functions and should share code where
possible.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./block/ll_rw_blk.c | 11 ++-
1 file changed, 2 insertions(+), 9 deletions(-)
diff .prev/block/ll_rw_blk.c ./block/ll_rw_blk.c
--- .prev/block/ll_
ll_merge_requests_fn can update bi_hw_*_size in one case where we end
up not merging. This is wrong.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./block/ll_rw_blk.c |4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff .prev/block/ll_rw_blk.c ./block/ll_rw_blk
These functions are always passed the last bio of one request
and the first of the next. So it can work to just pass the
two requests and let them pick off the bios. This makes life
easier for a future patch.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./block/ll_rw_blk.
Drivers that define their own make_request_fn have no need of
bi_hw_back_size and bi_hw_front_size, and the code that does
use it is only ever interested in bi_hw_back_size for
rq->bio and bi_hw_front_size for rq->biotail
So move these fields from the bio into the request.
This involves passing
Every error return from ll_{front,back}_merge_fn sets
REQ_NOMERGE. So move this to after the call to these functions.
This is only a small saving here, but will help a future patch.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./block/ll_rw_blk.c | 43 +++
umem.c:
- advances bi_idx and bi_sector to track where it is up to.
  But it is only ever doing this on one bio, so the updated
  fields can easily be kept elsewhere (current_*).
- updates bi_size, but never uses the updated values, so
  this isn't needed.
- reuses bi_phys_segments to count how
It is almost always set to zero. The one case where it isn't
is in dm.c when splitting a bio. In this case we can simply offset
bi_io_vec rather than storing the offset in bi_idx.
bio_to_phys, bio_iovec, bio_page, bio_offset, bio_segments all depend
on bi_idx, so they go too.
Also __bio_for_ea
i.e. instead of providing a pointer to each bio_vec, it provides
a copy of each bio_vec.
This allows a future patch to cause bio_for_each_segment to
provide bio_vecs that are not in the bi_io_vec list, thus allowing
for offsets and length restrictions.
We consequently remove the only call for b
To allow bi_io_vec sharing, a bio now can reference just part of the
io_vec. In particular, the first bi_offset bytes are not included,
and exactly bi_size bytes are included, even if the bi_io_vec goes
beyond there.
bi_offset must be less than bv_len of the first bvec.
This patch only handles
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./drivers/block/umem.c | 16 +---
1 file changed, 13 insertions(+), 3 deletions(-)
diff .prev/drivers/block/umem.c ./drivers/block/umem.c
--- .prev/drivers/block/umem.c 2007-07-31 11:21:03.0 +1000
+++ ./dr
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./drivers/md/dm-crypt.c | 16 ++--
1 file changed, 14 insertions(+), 2 deletions(-)
diff .prev/drivers/md/dm-crypt.c ./drivers/md/dm-crypt.c
--- .prev/drivers/md/dm-crypt.c 2007-07-31 11:21:03.0 +1000
+++ .
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./drivers/block/pktcdvd.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff .prev/drivers/block/pktcdvd.c ./drivers/block/pktcdvd.c
--- .prev/drivers/block/pktcdvd.c 2007-07-31 11:21:03.0 +1000
+++ ./dr
If we are going to share a bio between requests, then the
last bio in a list may not point to NULL, but may point to
the next bio in a different list.
So instead of testing if ->bi_next is NULL, test if the bio
matches rq->biotail.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat outpu
For a request to be able to refer to part of a bio, we need to be able
to impose a size limit at the request level. So allow hard_nr_sectors
to be less than the size of the bios (and bio_vecs) and interpret it
such that anything in the last bio beyond that limit is ignored.
As some bios can be l
Now that bi_io_vec and bio can be shared, we can handle arbitrarily
large bios in __make_request by splitting them over multiple
requests.
If we do split a request, we mark both halves as "REQ_NOMERGE".
It is only really necessary to mark the first part as
NO_BACK_MERGE
and the second part as
NO
As bi_io_vec is now never modified, bio_clone does not need to
copy it any more.
Make a new bio_multi_split function which can be used to split a single
bio into multiple other bios dependent on the one parent.
Use that in raid0 and linear to handle any arbitrary bios,
and remove mergeable_bvec
.. using the new bio_multi_split.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./drivers/md/raid10.c | 70 ++
1 file changed, 14 insertions(+), 56 deletions(-)
diff .prev/drivers/md/raid10.c ./drivers/md/raid10.c
--- .prev/
We only need to split bios if we want to read around the cache,
as when we go through the cache, the sharing is already done.
So use bio_multi_split to split up read requests, and get rid of
raid5_mergeable_bvec as it is no longer needed.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffst
pktcdvd now accepts arbitrarily large bios and will split as necessary.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./drivers/block/pktcdvd.c | 44
1 file changed, 12 insertions(+), 32 deletions(-)
diff .prev/drivers/block/pk
No driver calls blk_queue_merge_bvec or bio_split any more,
so they can go.
Also, several places test if merge_bvec_fn is set or not. As it is
now never set (it doesn't even exist) they can be cleaned up too.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./block/ll_rw_blk.
Stacking device drivers (dm/md) no longer need to worry about
most queue limits as they are handled at a lower level. The
only limit of any interest at the top level now is the hard
sector size.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./block/ll_rw_blk.c |
__bio_add_page no longer needs 'max_sectors' and can now
only fail when the bio is full.
So raid1/raid10 do not need to cope with unpredictable failure of
bio_add_page, and can be simplified. In fact they get simplified so
much that they don't use bio_add_page at all (they were only using it
before to
Use the new bio_multi_split to simplify dm bio splitting.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./drivers/md/dm.c | 166 ++
1 file changed, 20 insertions(+), 146 deletions(-)
diff .prev/drivers/md/dm.c ./drivers/m
__make_request now handles bios with too many segments, and it tracks
segment counts in 'struct request' so we no longer need to track
the counts in each bio, or to check the counts when adding a page
to a bio.
So bi_phys_segments, bi_hw_segments, blk_recount_segments(),
BIO_SEG_VALID, bio_phys_se
Following 2 patches contain bugfixes for md. Both apply to earlier
kernels, but probably aren't significant enough for -stable (no oops,
no data corruption, no security hole).
They should go in 2.6.23 though.
Thanks,
NeilBrown
[PATCH 001 of 2] md: Make sure a re-add after a restart ho
Commit 1757128438d41670ded8bc3bc735325cc07dc8f9 was slightly bad. If
an array has a write-intent bitmap, and you remove a drive, then readd
it, only the changed parts should be resynced. However after the
above commit, this only works if the array has not been shut down and
restarted.
This is b
When a raid1 array is reshaped (number of drives changed),
the list of devices is compacted, so that slots for missing
devices are filled with working devices from later slots.
This requires the "rd%d" symlinks in sysfs to be updated.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat o
/built-in.o of 96 bytes, though there is some growth out in
drivers, making the over-all decrease only 48 bytes.
Thanks,
NeilBrown
[PATCH 001 of 5] Don't update bi_hw_*_size if we aren't going to merge.
[PATCH 002 of 5] Replace bio_data with blk_rq_data
[PATCH 003 of
Almost every call to bio_data is for the first bio
in a request. A future patch will add some accounting
information to 'struct request' which will need to be
used to find the start of the request in the bio.
So replace bio_data with blk_rq_data, which takes a 'struct request *'.
The one exception
All calls to bio_cur_sectors are for the first bio in a 'struct request'.
A future patch will make the discovery of this number dependant on
information in the request. So change the function to take a
'struct request *' instead of a 'struct bio *', and make it a real
function as more code will
(almost) every usage of rq_for_each_bio wraps a usage of
bio_for_each_segment, so these can be combined into
rq_for_each_segment.
We get it to fill in a bio_vec structure rather than provide a
pointer, as future changes to make bi_io_vec immutable will require
that.
The one place where rq_for_ea
other user of bvec_kmap_irq is ide-floppy.c. If that does
need to disable interrupts, and ps3disk doesn't, maybe the disabling of
interrupts should be separated from the kmapping?
Thanks,
NeilBrown
[PATCH 001 of 6] Merge blk_recount_segments into blk_recalc_rq_segments
[PATCH 002 of 6] Intr
Every usage of rq_for_each_bio wraps a usage of
bio_for_each_segment, so these can be combined into
rq_for_each_segment.
We define "struct req_iterator" to hold the 'bio' and 'index' that
are needed for the double iteration.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./D
These have very similar functions and should share code where
possible.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./block/ll_rw_blk.c |8 +---
1 file changed, 1 insertion(+), 7 deletions(-)
diff .prev/block/ll_rw_blk.c ./block/ll_rw_blk.c
--- .prev/block/ll_rw_b
It appears that a couple of bugs slipped in to md for 2.6.23.
These two patches fix them and are appropriate for 2.6.23.y as well
as 2.6.24-rcX
Thanks,
NeilBrown
[PATCH 001 of 2] md: Fix an unsigned compare to allow creation of bitmaps with
v1.0 metadata.
[PATCH 002 of 2] md: raid5: fix
As page->index is unsigned, this all becomes an unsigned comparison, which
almost always returns an error.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
Cc: Stable <[EMAIL PROTECTED]>
### Diffstat output
./drivers/md/bitmap.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff .prev
From: Dan Williams <[EMAIL PROTECTED]>
ops_complete_biofill() runs outside of spin_lock(&sh->lock) and clears the
'pending' and 'ack' bits. Since the test_and_ack_op() macro only checks
against 'complete' it can get an inconsistent snapshot of pending work.
Move the clearing of these bits to ha
Following are 5 minor patches for md in current -mm.
The first 4 are suitable to flow into 2.6.24.
The last fixes a small bug in Dan Williams' patches currently in -mm,
which are not scheduled for 2.6.24.
Thanks,
NeilBrown
[PATCH 001 of 5] md: Fix a bug in some never-used code.
[PATC
http://bugzilla.kernel.org/show_bug.cgi?id=3277
There is a seq_printf here that isn't being passed a 'seq'.
However, as the code is inside #ifdef MD_DEBUG, nobody noticed.
Also remove some extra spaces.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./drivers/md/raid0.c | 1
When an array is started read-only, MD_RECOVERY_NEEDED can be set but
no recovery will be running. This causes 'sync_action' to report the
wrong value.
We could remove the test for MD_RECOVERY_NEEDED, but doing so would
leave a small gap after requesting a sync action, where 'sync_action'
would
From: Iustin Pop <[EMAIL PROTECTED]>
The 'degraded' attribute is useful to quickly determine if the array is
degraded, instead of parsing 'mdadm -D' output or relying on the other
techniques (number of working devices against number of defined devices, etc.).
The md code already keeps track of th
Whenever a read error is found, we should attempt to overwrite with
correct data to 'fix' it.
However when doing a 'check' pass (which compares data blocks that are
successfully read, but doesn't normally overwrite) we don't do that.
We should.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Dif
This kmem_cache_create is creating a cache that already exists. We
could use the alternate name, just like we do a few lines up.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
Cc: "Dan Williams" <[EMAIL PROTECTED]>
### Diffstat output
./drivers/md/raid5.c |2 +-
1 file changed, 1 insertion(+
commit 4ae3f847e49e3787eca91bced31f8fd328d50496 did not get applied
correctly, presumably due to substantial similarities between
handle_stripe5 and handle_stripe6.
This patch (with lots of context) moves the chunk of new code from
handle_stripe6 (where it isn't needed (yet)) to handle_stripe5.
Following are 5 patches for kNFSd suitable for inclusion in 2.6.20.
A couple should go to 2.6.19-stable, but I'll send those separately.
NeilBrown
[PATCH 001 of 5] knfsd: Update email address and status for NFSD in
MAINTAINERS.
[PATCH 002 of 5] knfsd: Fix setting of ACL server ver
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./MAINTAINERS |5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff .prev/MAINTAINERS ./MAINTAINERS
--- .prev/MAINTAINERS 2007-01-23 11:13:46.0 +1100
+++ ./MAINTAINERS 2007-01-23 11:14:14.0 +
Due to silly typos, if the nfs versions are explicitly set,
no NFSACL versions get enabled.
Also improve an error message that would have made this bug
a little easier to find.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./fs/nfsd/nfssvc.c |8
./net/sunrpc/sv
A couple of the warnings will be followed by an Oops if they ever fire,
so may as well be BUG_ON. Another isn't obviously fatal but has never
been known to fire, so make it a WARN_ON.
Cc: Adrian Bunk <[EMAIL PROTECTED]>
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./include
NFSd assumes that the largest number of pages that will be needed
for a request+response is 2+N where N pages is the size of the largest
permitted read/write request. The '2' are 1 for the non-data part of
the request, and 1 for the non-data part of the reply.
However, when a read request is not pag
From: Peter Staubach <[EMAIL PROTECTED]>
NFS V3 (and V4) support exclusive create by passing a 'cookie' which
can get stored with the file. If the file exists but has exactly the
right cookie stored, then we assume this is a retransmit and the
exclusive create was successful.
The cookie is 64bi
Following are 4 patches suitable for inclusion in 2.6.20.
Thanks,
NeilBrown
[PATCH 001 of 4] md: Update email address and status for MD in MAINTAINERS.
[PATCH 002 of 4] md: Make 'repair' actually work for raid1.
[PATCH 003 of 4] md: Make sure the events count in an md array never r
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./MAINTAINERS |4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff .prev/MAINTAINERS ./MAINTAINERS
--- .prev/MAINTAINERS 2007-01-23 11:14:14.0 +1100
+++ ./MAINTAINERS 2007-01-23 11:23:03.0 +1
In most cases we check the size of the bitmap file before
reading data from it. However when reading the superblock,
we always read the first PAGE_SIZE bytes, which might not
always be appropriate. So limit that read to the size of the
file if appropriate.
Also, we get the count of available b
Now that we sometimes step the array events count backwards
(when transitioning dirty->clean where nothing else interesting
has happened - so that we don't need to write to spares all the time),
it is possible for the event count to return to zero, which is
potentially confusing and triggers an M
When 'repair' finds a block that is different on the various
parts of the mirror, it is meant to write a chosen good version
to the others. However it currently writes out the original data
to each. The memcpy to make all the data the same is missing.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
Another nfsd patch suitable for 2.6.20...
Thanks,
NeilBrown
### Comments for Changeset
nfsd defines a type 'encode_dent_fn' which is much like 'filldir_t'
except that the first pointer is 'struct readdir_cd *' rather than
'void *'. It then casts encode
Another md patch suitable for 2.6.20.
Thanks,
NeilBrown
### Comments for Changeset
If a GFP_KERNEL allocation is attempted in md while the mddev_lock is
held, it is possible for a deadlock to eventuate.
This happens if the array was marked 'clean', and the memalloc triggers
a write-
One more... (sorry about the dribs-and-drabs approach)
NeilBrown
### Comments for Changeset
raid5_mergeable_bvec tries to ensure that raid5 never sees a read
request that does not fit within just one chunk. However as we
must always accept a single-page read, that is not always possible.
So
Following patch is suitable for 2.6.20. It fixes some minor bugs that
need to be fixed in order to use new functionality in mdadm-2.6.
Thanks,
NeilBrown
### Comments for Changeset
While developing more functionality in mdadm I found some bugs in md...
- When we remove a device from an inactive
correctly with a little bit of
fuzz to 2.6.19 and 2.6.20. It should be considered at least for 2.6.20.1.
Thanks,
NeilBrown
### Comments for Changeset
If you lose this race, it can iput a socket inode twice and you
get a BUG in fs/inode.c
When I added the option for user-space to close a socket,
I
Most files in the 'nfsd' filesystem are transactional.
When you write, a reply is generated that can be read back
only on the same 'file'.
If the reply has zero length, the 'write' will incorrectly
return a value of '0' instead of the length that was
written. This causes 'rpc.nfsd' to give an an
Add support for using a filesystem UUID to identify an
export point in the filehandle.
For NFSv2, this UUID is xor-ed down to 4 or 8 bytes so
that it doesn't take up too much room. For NFSv3+, we
use the full 16 bytes, and possibly also a 64bit inode number
for exports beneath the root of a file
Following are 4 patches from knfsd suitable for 2.6.21.
Numbers 3 and 4 provide new usability features that require a new
nfs-utils to make full use of (all nfs-utils will of course continue to
work providing the functionality it always provided).
(3) allows a 16 byte uuid to be used to identify th
If we are using the same version/fsid as a current filehandle, then
there is no need to verify that the numbers are valid for this
export, and they must be (we used them to find this export).
This allows us to simplify the fsid selection code.
Also change "ref_fh_version" and "ref_fh_fsid_type" t
AUTH_UNIX authentication (the standard with NFS) has a limit of 16
groups ids. This causes problems for people in more than 16
groups.
So allow the server to map a uid into a list of group ids based on
local knowledge rather than depending on the (possibly truncated) list
from the client.
If there i
Following are 9 patches from Bruce Fields for the NFSv4 server,
mostly ACL related.
Suitable for 2.6.21.
Thanks,
NeilBrown
[PATCH 001 of 9] knfsd: nfsd4: fix non-terminated string
[PATCH 002 of 9] knfsd: nfsd4: relax checking of ACL inheritance bits
[PATCH 003 of 9] knfsd: nfsd4: simplify
From: J. Bruce Fields <[EMAIL PROTECTED]>
The server name is expected to be a null-terminated string, so we can't
pass in the raw client identifier.
What's more, the client identifier is just a binary, not necessarily
printable, blob. Let's just use the ip address instead. The server
name appea
From: J. Bruce Fields <[EMAIL PROTECTED]>
The rfc allows us to be more permissive about the ACL inheritance bits we
accept:
"If the server supports a single "inherit ACE" flag that applies to
both files and directories, the server may reject the request
(i.e., requiring th
From: J. Bruce Fields <[EMAIL PROTECTED]>
The wrong pointer is being kfree'd in savemem() when defer_free
returns with an error.
Signed-off-by: Benny Halevy <[EMAIL PROTECTED]>
Signed-off-by: J. Bruce Fields <[EMAIL PROTECTED]>
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
.
From: J. Bruce Fields <[EMAIL PROTECTED]>
Simplify the memory management and code a bit by representing acls with an
array instead of a linked list.
Signed-off-by: J. Bruce Fields <[EMAIL PROTECTED]>
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./fs/nfsd/nfs4acl.c|
From: J. Bruce Fields <[EMAIL PROTECTED]>
We should be returning ATTRNOTSUPP, not NOTSUPP, when acls are unsupported.
Also fix a comment.
Signed-off-by: "J. Bruce Fields" <[EMAIL PROTECTED]>
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./fs/nfsd/nfs4xdr.c |2 +-
./fs/n
From: J. Bruce Fields <[EMAIL PROTECTED]>
The code that splits an incoming nfsv4 ACL into inheritable and effective parts
can be combined with the code that translates each to a posix acl,
resulting in simpler code that requires one less pass through the ACL.
Signed-off-by: "J. Bruce Fields"
From: J. Bruce Fields <[EMAIL PROTECTED]>
When setting an ACL that lacks inheritable ACEs on a directory, we
should set a default ACL of zero length, not a default ACL with all bits
denied.
Signed-off-by: "J. Bruce Fields" <[EMAIL PROTECTED]>
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Dif
From: J. Bruce Fields <[EMAIL PROTECTED]>
Return just the effective permissions, and forget about the mask. It isn't
worth the complexity.
WARNING: This breaks backwards compatibility with overly-picky nfsv4->posix acl
translation, as may have been included in some patched versions of libacl. To
From: J. Bruce Fields <[EMAIL PROTECTED]>
We're inserting deny's between some ACEs in order to enforce posix draft
acl semantics which prevent permissions from accumulating across entries
in an acl.
That's fine, but we're doing that by inserting a deny after *every* allow,
which is overkill. We
Another nfsd patch for 2.6.21...
### Comments for Changeset
When NFSD receives a write request, the data is typically in a number
of 1448 byte segments and writev is used to collect them together.
Unfortunately, generic_file_buffered_write passes these to the filesystem
one at a time, so an e.g.
hardware-xor patches - one line
of context is different.
Patch 1 should probably go in -stable - the bug could cause data
corruption in a fairly uncommon raid10 configuration, so that one and
this intro are Cc:ed to [EMAIL PROTECTED]
Thanks,
NeilBrown
[PATCH 001 of 6] md: Fix raid10 recovery problem
There are two errors that can lead to recovery problems with raid10
when used in 'far' mode (not the default).
Due to a '>' instead of '>=' the wrong block is located, which would
result in garbage being written to some random location, quite
possibly outside the range of the device, causing the n
md tries to warn the user if they e.g. create a raid1 using two partitions
of the same device, as this does not provide true redundancy.
However it also warns if a raid0 is created like this, and there is
nothing wrong with that.
At the place where the warning is currently printed, we don't nece
From: "H. Peter Anvin" <[EMAIL PROTECTED]>
- Use kernel_fpu_begin() and kernel_fpu_end()
- Use boot_cpu_has() for feature testing even in userspace
Signed-off-by: H. Peter Anvin <[EMAIL PROTECTED]>
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./drivers/md/raid6mmx.c | 1
An error always aborts any resync/recovery/reshape on the understanding
that it will immediately be restarted if that still makes sense.
However a reshape currently doesn't get restarted. With this patch
it does.
To avoid restarting when it is not possible to do work, we call
into the personalit
i.e. one or more drives can be added and the array will re-stripe
while on-line.
Most of the interesting work was already done for raid5.
This just extends it to raid6.
mdadm newer than 2.6 is needed for complete safety, however any
version of mdadm which supports raid5 reshape will do a good enou
The mddev and queue might be used for another array which does not
set these, so they need to be cleared.
Signed-off-by: NeilBrown <[EMAIL PROTECTED]>
### Diffstat output
./drivers/md/md.c |3 +++
1 file changed, 3 insertions(+)
diff .prev/drivers/md/md.c ./drivers/md/md.c
---
Another nfsd patch suitable for 2.6.20, though it could wait for .21
if we feel it is time to be more cautious.
Thanks,
NeilBrown
### Comments for Changeset
Also remove {NFSD,RPC}_PARANOIA as having the defines doesn't
really add anything.
The printks covered by RPC_PARANOIA were trigger
-sockaddr-struct-length.patch
and include the following 14 patches instead?
They are mostly from Chuck Lever and prepare for IPv6 support
in the NFS server.
They are *not* for 2.6.20, but should be ok for .21.
Thanks,
NeilBrown
[PATCH 001 of 14] knfsd: SUNRPC: update internal API: separate
From: Chuck Lever <[EMAIL PROTECTED]>
Currently in the RPC server, registering with the local portmapper and
creating "permanent" sockets are tied together. Expand the internal APIs
to allow these two socket characteristics to be separately specified.
This will be externalized in the next patch.