Sorry ... hand and head do not coordinate well. Somehow I wrote "spin
lock" but it is actually a semaphore. -- Wendy
Bob Peterson wrote:
Part of the problem was that inodes were being recycled
before their buffers were flushed to the journal logs.
Setting that aside: after this patch, the problem goes
Bob Peterson wrote:
On Thu, 2007-08-09 at 09:46 -0400, Wendy Cheng wrote:
Setting that aside: after this patch, the problem goes away ...
I haven't checked the previous three patches yet, so I may not have the
overall picture ... but why would adding the journal flush spin lock here
prevent
path. Since setattr is a system
call, timestamp updates are still required.
Signed-off-by: S. Wendy Cheng [EMAIL PROTECTED]
bmap.c | 32 +++-
1 files changed, 31 insertions(+), 1 deletion(-)
--- gfs2-2.6-nmw/fs/gfs2/bmap.c 2007-08-11 19:06:12.0 -0400
to handle the new request,
instead of erroneously setting gl_demote_state to a different state.
Signed-off-by: S. Wendy Cheng [EMAIL PROTECTED]
glock.c | 13 -
incore.h |2 ++
2 files changed, 14 insertions(+), 1 deletion(-)
--- e48-brew/fs/gfs2/incore.h 2007-09-20 17:29
() will loop again to handle the new request,
instead of erroneously setting gl_demote_state to a different state.
Signed-off-by: S. Wendy Cheng [EMAIL PROTECTED]
glock.c | 15 ++-
incore.h |2 ++
2 files changed, 16 insertions(+), 1 deletion(-)
--- e48-brew/fs/gfs2/incore.h 2007-09
Steven Whitehouse wrote:
From aa974896eb5c1a70d4be6df1e3f9f5e12b8887f9 Mon Sep 17 00:00:00 2001
From: Steven Whitehouse [EMAIL PROTECTED]
Date: Mon, 15 Oct 2007 16:29:05 +0100
Subject: [PATCH] [GFS2] Remove useless i_cache from inodes
The i_cache was designed to keep references to the
Fabio Massimo Di Nitto wrote:
Hi guys,
this is purely cosmetic and I didn't prepare a patch but see this:
given a 4GB block device (just as an example):
/dev/nbd2 3,9G 518M 3,4G 14% /mnt/gfs2
/dev/nbd1 3,1G 20K 3,1G 1% /mnt/gfs
you can see that gfs1 masks the
Fabio M. Di Nitto wrote:
Yes .. but gfs(1)-kernel HEAD is currently broken ..
I am adding Phillip in CC.
So far he has been doing some work on gfs1 for Ubuntu that includes a
bunch of fixes. I was not able (mainly because I ran out of time) to push
them all.
We have a git import based on a
/unlock_filesystem
They are intended to allow an admin or a user-mode script to release NLM locks
based on either a path name or a server in-bound IP address (IPv4 for now),
as:
shell> echo 10.1.1.2 > /proc/fs/nfsd/unlock_ip
shell> echo /mnt/sfs1 > /proc/fs/nfsd/unlock_filesystem
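A script would typically want to validate the dotted-quad address before echoing it into unlock_ip. A minimal userspace sketch, assuming a hypothetical helper name (valid_unlock_ip); inet_pton() merely stands in for whatever parsing the kernel side does:

```c
#include <arpa/inet.h>
#include <assert.h>

/* Return 1 if s is a well-formed dotted-quad IPv4 address (the only
 * form unlock_ip accepts per the description above), 0 otherwise. */
static int valid_unlock_ip(const char *s)
{
	struct in_addr addr;

	return inet_pton(AF_INET, s, &addr) == 1;
}
```

A caller would check this before writing the string to /proc/fs/nfsd/unlock_ip.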
Signed-off-by: S. Wendy Cheng [EMAIL
locks_remove_flock() (fs/locks.c:2034) as part of
the fclose call due to NFS-NLM locks still hanging on the inode->i_flock list.
Signed-off-by: S. Wendy Cheng [EMAIL PROTECTED]
svcsubs.c |3 +--
1 files changed, 1 insertion(+), 2 deletions(-)
--- linux-nlm-1/fs/lockd/svcsubs.c 2008-01-06 18:23:20.0 -0500
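The locks in question hang off a singly linked chain (inode->i_flock in the kernel), so unhooking the NLM-owned entries before fclose amounts to the classic pointer-to-pointer list walk. A simplified userspace sketch with made-up names (fl_is_nlm is a stand-in for the real "owned by lockd" test):

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-in for struct file_lock chained on inode->i_flock. */
struct file_lock {
	struct file_lock *fl_next;
	int fl_is_nlm;	/* stand-in for "this lock belongs to lockd" */
};

/* Unlink and free every NLM-owned lock, leaving the rest intact. */
static void remove_nlm_locks(struct file_lock **head)
{
	struct file_lock **fl = head;

	while (*fl) {
		if ((*fl)->fl_is_nlm) {
			struct file_lock *dead = *fl;

			*fl = dead->fl_next;	/* splice the entry out */
			free(dead);
		} else {
			fl = &(*fl)->fl_next;
		}
	}
}

static int count_locks(const struct file_lock *fl)
{
	int n = 0;

	for (; fl; fl = fl->fl_next)
		n++;
	return n;
}
```

The pointer-to-pointer form avoids special-casing the list head, which is why it shows up so often in fs/locks.c-style code.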
Steven Whitehouse wrote:
--- a/fs/gfs2/ops_inode.c
+++ b/fs/gfs2/ops_inode.c
@@ -113,8 +113,18 @@ static struct dentry *gfs2_lookup(struct inode *dir,
struct dentry *dentry,
if (inode && IS_ERR(inode))
return ERR_PTR(PTR_ERR(inode));
- if (inode)
+ if (inode) {
+
Christoph Hellwig wrote:
Ok, I played around with this and cleaned up the ip/path code paths to
be entirely separate, which helped the code quite a bit. Also a few other
Thanks for doing this :) . I'm in the middle of running it with our cluster
test - if it passes, I will repost it. Get your signed-off
Neil Brown wrote:
On Monday January 7, [EMAIL PROTECTED] wrote:
We've implemented two new NFSD procfs files:
o /proc/fs/nfsd/unlock_ip
o /proc/fs/nfsd/unlock_filesystem
They are intended to allow an admin or a user-mode script to release NLM
locks based on either a path name or a server
Neil Brown wrote:
If I'm reading this correctly, this bug is introduced by your previous
patch.
It depends on how you see the issue. From my end, I view it this way: the
existing code has a trap and I fell into it. This is probably a chance
to clean up this logic.
The important difference
Christoph Hellwig wrote:
+/* cluster failover support */
+
+typedef struct {
+	int cmd;
+	int stat;
+	int gp;
+	void *datap;
+} nlm_fo_cmd;
please don't introduce typedefs for struct types.
I don't do much work on the community version of the Linux code, so its
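For reference, the kernel-preferred form keeps the struct tag visible at every use site instead of hiding it behind a typedef. A minimal sketch reusing the field names from the posted hunk (the initializer values here are made up for illustration):

```c
#include <assert.h>
#include <stddef.h>

/* Same fields as the posted nlm_fo_cmd, but declared with a plain
 * struct tag, per kernel coding style: no typedef for struct types. */
struct nlm_fo_cmd {
	int cmd;
	int stat;
	int gp;
	void *datap;
};
```

Every declaration then reads `struct nlm_fo_cmd cmd;`, which makes it obvious at the use site that an aggregate is involved.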
Wendy Cheng wrote:
Neil Brown wrote:
Some options:
Have an initial patch which removes all references to f_locks and
includes the change in this patch. With that in place your main
patch won't introduce a bug. If you do this, you should attempt to
understand and justify the performance
Wendy Cheng wrote:
Wendy Cheng wrote:
Neil Brown wrote:
Some options:
Have an initial patch which removes all references to f_locks and
includes the change in this patch. With that in place your main
patch won't introduce a bug. If you do this, you should attempt to
understand and justify
Wendy Cheng wrote:
The point here is that with this patch, f_locks is not used at all any
more. Note that we have a nice inline function, nlm_file_inuse; why
should we use f_locks (which I assume people agree is awkward)?
Could we simply drop f_locks altogether in this section of code
Neil Brown wrote:
On Saturday January 12, [EMAIL PROTECTED] wrote:
This is a combined patch that has:
* changes made by Christoph Hellwig
* code segment that handles f_locks so we would not walk the inode->i_flock
list twice.
.
if (unlikely(failover) && !failover(data, file))
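The quoted condition is the usual optional-callback filter: skip an entry only when a predicate is installed and rejects it. A self-contained userspace sketch of the pattern (walk_files, nlm_file, and match_inuse are made-up names, and unlikely() is dropped since it is only a branch-prediction hint):

```c
#include <assert.h>
#include <stddef.h>

struct nlm_file {
	int inuse;	/* stand-in for "file still holds locks" */
};

typedef int (*fo_match_t)(void *data, struct nlm_file *file);

/* Visit every file; when a failover predicate is supplied, skip the
 * files it rejects.  Returns the number of files visited. */
static int walk_files(struct nlm_file *files, int n,
		      fo_match_t failover, void *data)
{
	int hits = 0;

	for (int i = 0; i < n; i++) {
		struct nlm_file *file = &files[i];

		if (failover && !failover(data, file))
			continue;	/* predicate set, file rejected */
		hits++;
	}
	return hits;
}

static int match_inuse(void *data, struct nlm_file *file)
{
	(void)data;
	return file->inuse;
}
```

Passing NULL for the predicate visits everything, which is what lets one walker serve both the normal cleanup path and the failover path.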
Neil Brown wrote:
On Tuesday January 15, [EMAIL PROTECTED] wrote:
I don't feel comfortable changing the existing code structure,
especially a BUG() statement. It would be better to separate the lock
failover function from the lockd code clean-up. This is to make it
easier for problem
shell> echo /mnt/sfs1 > /proc/fs/nfsd/unlock_filesystem
Signed-off-by: S. Wendy Cheng [EMAIL PROTECTED]
Signed-off-by: Lon Hohberger [EMAIL PROTECTED]
Signed-off-by: Christoph Hellwig [EMAIL PROTECTED]
fs/lockd/svcsubs.c | 66 +++-
fs/nfsd
J. Bruce Fields wrote:
Yeah, sounds good. Maybe under Documentation/filesystems? And it might
also be helpful to leave a reference to it in the code, e.g., in
nfsctl.c:
/*
* The following are used for failover; see
* Documentation/filesystems/nfsd-failover.txt for
Add a more detailed description to the top of the patch itself. I'm
working on the resume patch now - it will include an overall write-up in
the Documentation directory.
-- Wendy
J. Bruce Fields wrote:
On Thu, Jan 17, 2008 at 10:48:56AM -0500, Wendy Cheng wrote:
J. Bruce Fields wrote:
Remind me: why do we need both per-ip and per-filesystem methods? In
practice, I assume that we'll always do *both*?
Failover normally is done via virtual IP address
J. Bruce Fields wrote:
On Thu, Jan 17, 2008 at 11:31:22AM -0500, Wendy Cheng wrote:
J. Bruce Fields wrote:
On Thu, Jan 17, 2008 at 10:48:56AM -0500, Wendy Cheng wrote:
J. Bruce Fields wrote:
Remind me: why do we need both per-ip and per-filesystem methods
Frank Filz wrote:
I assume the intent here with this implementation is that the node
taking over will start lock recovery for the IP address? From that
perspective, I guess it would be important that each file system only be
accessed via a single IP address; otherwise lock recovery will not
Frank van Maarseveen wrote:
shell> echo 10.1.1.2 > /proc/fs/nfsd/unlock_ip
shell> echo /mnt/sfs1 > /proc/fs/nfsd/unlock_filesystem
The expected sequence of events can be:
1. Tear down the IP address
You might consider using iptables at this point for dropping outgoing
packets with that
J. Bruce Fields wrote:
On Thu, Jan 17, 2008 at 03:23:42PM -0500, J. Bruce Fields wrote:
To summarize a phone conversation from today:
On Thu, Jan 17, 2008 at 01:07:02PM -0500, Wendy Cheng wrote:
J. Bruce Fields wrote:
Would there be any advantage to enforcing that requirement
J. Bruce Fields wrote:
On Thu, Jan 24, 2008 at 04:06:49PM -0500, Wendy Cheng wrote:
J. Bruce Fields wrote:
On Thu, Jan 24, 2008 at 02:45:37PM -0500, Wendy Cheng wrote:
J. Bruce Fields wrote:
In practice, it seems that both the unlock_ip and unlock_pathname
Felix Blyakher wrote:
(I think Wendy's pretty close to that api already after adding the
second method to start grace?)
For the grace-period reclaim issues, maybe we should move to the
https://www.redhat.com/archives/cluster-devel/2008-January/msg00340.html
thread?
I view this (unlock) patch
Chuck Lever wrote:
On Jan 28, 2008, at 9:56 PM, J. Bruce Fields wrote:
On Fri, Jan 25, 2008 at 12:17:30AM -0500, Wendy Cheng wrote:
The logic is implemented on top of the Linux nfsd procfs, with core
functions residing in the lockd kernel module. The entry function is
nlmsvc_resume_ip(), where it stores