Hi Phil, others,
I think I am seeing the same problem (or some variant of it) as the
NFSv4 bug that I reported a few days ago.
Here is a very simple way that I am able to reproduce it reliably:
Mount a pvfs2 volume and run a few instances of
dd if=/dev/zero of=/mnt/pvfs2/file bs=2M count=100
hey,
I know we're trying to keep the # of DBs down, but would it really hurt
that much to just use a separate DB for this data rather than having to
play funny games with the key strings?
Also, it seems a little wacky that we have to pass a flag to tell trove
when to count and when not to count.
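For context, the "funny games" look something like the sketch below: a
record-type tag packed into the key string so that different kinds of
records can share one DB. The tag and helper names are made up for
illustration; this is not the actual trove code.

    #include <stdio.h>
    #include <string.h>

    /* hypothetical key layout: "<tag>:<name>" lets unrelated record
     * types (dirents, counts, metadata) share a single DB */
    static int build_key(char *buf, size_t len,
                         const char *tag, const char *name)
    {
        int n = snprintf(buf, len, "%s:%s", tag, name);
        return (n < 0 || (size_t)n >= len) ? -1 : 0; /* -1 on overflow */
    }

    int main(void)
    {
        char key[64];
        if (build_key(key, sizeof(key), "de", "foo") == 0)
            printf("composed key: %s\n", key);   /* prints "de:foo" */
        return 0;
    }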
I went back and added some much more specific debugging messages and put
some special prefixes on flow, bmi, and request processor messages so I
could group them a little more easily, and got rid of the extra mutexes.
After running a few more tests and double-checking the logs, this is
what I am seeing.
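For readers following along, the prefixing described above amounts to
tagging each log line with its subsystem so grep can split the log back
apart. A self-contained sketch (the dbg() helper here stands in for
PVFS2's gossip logging; the FLOW/BMI prefixes are just grouping
conventions, not anything built in):

    #include <stdarg.h>
    #include <stdio.h>

    /* stand-in for a mask-based debug logger; real code would pass a
     * debug mask rather than a prefix string */
    static void dbg(const char *prefix, const char *fmt, ...)
    {
        va_list ap;
        fprintf(stderr, "%s: ", prefix);   /* e.g. "FLOW: ..." */
        va_start(ap, fmt);
        vfprintf(stderr, fmt, ap);
        va_end(ap);
    }

    int main(void)
    {
        dbg("FLOW", "posting recv, size %d\n", 65536);
        dbg("BMI",  "unexpected message from %s\n", "tcp://node5:3334");
        return 0;
    }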
On Jun 12, 2006, at 3:55 PM, Bradley W Settlemyer wrote:
Is there a strong reason not to #define these strings somewhere?
I don't think so. Making them #defines would mean either replacing
the strings in the array with the defs, or creating the structs
dynamically. I think sizeof on an array still gets us the lengths at
compile time.
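The sizeof point is standard C: applied to a char array or string
literal, sizeof yields the length (including the terminating NUL) at
compile time, but that information is lost once the string decays to a
pointer. A standalone illustration (the key name is an arbitrary
example):

    #include <stdio.h>
    #include <string.h>

    static const char keystr[] = "dir_ent";   /* array, not pointer */

    int main(void)
    {
        const char *p = keystr;
        printf("sizeof keystr = %zu\n", sizeof(keystr)); /* 8: incl. NUL */
        printf("strlen keystr = %zu\n", strlen(keystr)); /* 7 */
        printf("sizeof p      = %zu\n", sizeof(p));      /* pointer size */
        return 0;
    }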
Yep, my stuff is working again.
Cheers,
Brad
Sam Lang wrote:
I missed a keyval string in mkspace.c. I've committed a fix which
seems to allow ls to work properly now.
-sam
On Jun 12, 2006, at 3:26 PM, Sam Lang wrote:
Hmm...doesn't work for me either. I'll keep debugging.
-sam
On Jun 12, 2006, at 3:19 PM, Bradley W Settlemyer wrote:
It no longer core dumps when I do a pvfs2-ls on a freshly created
file system, but I still cannot get pvfs2-ls to function on a
just-created file system.
Is it me or you?
Cheers,
Brad
Sam Lang wrote:
Brad,
I think this was caused by a commit that I made last week to
shorten the keyval strings. I had the wrong values for the lengths.
I've committed a fix, can you update and try again?
-sam
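As an aside, this is a classic bug class: hand-maintained length fields
drift out of sync with their strings. Deriving the length from the
literal with sizeof sidesteps wrong-length values entirely. A sketch
with hypothetical key names (not the actual PVFS2 keyval strings):

    #include <stdio.h>

    struct keyval_def {
        const char *key;
        int len;                 /* stored length, including the NUL */
    };

    /* deriving len from the literal keeps it correct if the string
     * ever changes */
    #define KEYDEF(s) { s, (int)sizeof(s) }

    static struct keyval_def keys[] = {
        KEYDEF("de"),            /* hypothetical shortened key names */
        KEYDEF("md"),
        KEYDEF("root_handle"),
    };

    int main(void)
    {
        for (size_t i = 0; i < sizeof(keys) / sizeof(keys[0]); i++)
            printf("%-12s len=%d\n", keys[i].key, keys[i].len);
        return 0;
    }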
On Jun 12, 2006, at 2:38 PM, Bradley W Settlemyer wrote:
I'm not certain whether it's something I've done or what, but it's pretty
screwed up. I just did a pvfs2-ls, and got this:
software/pvfs2/bin/pvfs2-ls /parl/bradles/software/pvfs2/mnt/
PVFS_sys_readdir: Invalid argument
My attributes are totally hosed, here is the server:
[D 06/12 15:34] (0x92
Are you using a previously created storage space? When was the last
time you did a cvs update? The storage format has changed, so if it's
been a while since you updated, you may need to run the migration
tool or re-create your storage spaces.
I can't think of anything else that would bite you.
I just did a cvs update this morning, not having much luck with the
filesystem.
Is pvfs2-cp or pvfs2-touch working for anyone else at the moment?
Create is returning an "invalid argument" error.
[D 14:58:55.314670] Handle created: 4294967297
[D 14:58:55.314828] (0x86a2f10) create (FR sm) s
Hi Sam,
I may be wrong, but so far it looks like the problem is a little
different this time. In the scenario that I am seeing now it doesn't
look like the flow has finished everything and then just failed to mark
completion; it looks like one side actually posted more BMI messages
than the other side.
Phil, that last log is definitely munged - and yeah it does look like
two overlapping logs.
It's probably worth looking at Sam's suggestion first, as that has been
at the center of a few similar bugs. If that's not it, I'm ready to help
debug the request code if you can get a clean log.
This patch is a reimplementation of the name cache based off tcache, and
it now bears a striking resemblance to the attribute cache. There was a
significant API change, but its only effect was fixing the calls already
in place. There are no changes to the wire protocol or the storage
format.
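For readers unfamiliar with it, tcache is PVFS2's generic timeout-based
cache layer (the attribute cache is built on it as well). The sketch
below shows the general shape of a timeout-expiring name-to-handle
entry; all names here are hypothetical, not the actual tcache API.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    /* hypothetical entry in a timeout-based name -> handle cache */
    struct ncache_entry {
        char    name[256];
        int64_t handle;
        time_t  expires;           /* absolute expiry time */
    };

    /* an entry is usable only until its timeout passes */
    static int entry_is_valid(const struct ncache_entry *e)
    {
        return time(NULL) < e->expires;
    }

    int main(void)
    {
        struct ncache_entry e;
        strncpy(e.name, "datafile0", sizeof(e.name) - 1);
        e.name[sizeof(e.name) - 1] = '\0';
        e.handle  = 1048577;           /* some handle value */
        e.expires = time(NULL) + 5;    /* e.g. a 5 second timeout */

        printf("%s -> %lld (valid=%d)\n",
               e.name, (long long)e.handle, entry_is_valid(&e));
        return 0;
    }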
Hi all,
I am looking at an I/O problem that I don't completely understand. The
setup is that there are 15 servers and 20 clients (all RHEL3 SMP). The
clients are running a proprietary application. At the end of the run
they each write their share of a data set into a 36 GB file. So each
client's share is roughly 1.8 GB (36 GB split across 20 clients).