Sure, here they are.
/etc/pvfs2/pvfs-fs.conf
(the same file was copied to the other server as well):
<Defaults>
UnexpectedRequests 50
LogFile /tmp/pvfs2-server.log
EventLogging none
LogStamp datetime
BMIModules bmi_tcp
FlowModules flowproto_multiqueue
PerfUpdateInterval 1000
ServerJobBMITimeoutSecs 30
ServerJobFlowTimeoutSecs 30
ClientJobBMITimeoutSecs 300
ClientJobFlowTimeoutSecs 300
ClientRetryLimit 5
ClientRetryDelayMilliSecs 2000
</Defaults>
<Aliases>
Alias lustre1 tcp://lustre1:3334
Alias lustre2 tcp://lustre2:3334
</Aliases>
<Filesystem>
Name pvfs2-fs
ID 1180916662
RootHandle 1048576
<MetaHandleRanges>
Range lustre1 4-2147483650
</MetaHandleRanges>
<DataHandleRanges>
Range lustre1 2147483651-4294967297
Range lustre2 4294967298-6442450944
</DataHandleRanges>
<StorageHints>
TroveSyncMeta yes
TroveSyncData no
</StorageHints>
</Filesystem>
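For reference, the handle ranges above can be sanity-checked in a few lines of Python (values copied verbatim from the config; this is just a quick check, not part of PVFS2 itself):

```python
# Handle ranges copied from the <Filesystem> section of pvfs-fs.conf.
ranges = {
    "meta lustre1": (4, 2147483650),
    "data lustre1": (2147483651, 4294967297),
    "data lustre2": (4294967298, 6442450944),
}
root_handle = 1048576  # RootHandle from the config

# The ranges must be disjoint: sort by start and compare neighbours.
spans = sorted(ranges.values())
for (_, end), (start, _) in zip(spans, spans[1:]):
    assert end < start, "overlapping handle ranges"

# The root handle must fall inside exactly one server's meta range.
owners = [name for name, (lo, hi) in ranges.items()
          if lo <= root_handle <= hi]
print(owners)  # -> ['meta lustre1']
```

So the ranges themselves are disjoint and the root handle is inside lustre1's meta range, as expected.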
*pvfs2-server.conf-lustre1*
StorageSpace /pvfs2-storage-space
HostID "tcp://lustre1:3334"
LogFile /tmp/pvfs2-server.log
*pvfs2-server.conf-lustre2*
StorageSpace /pvfs2-storage-space
HostID "tcp://lustre1:3334"
LogFile /tmp/pvfs2-server.log
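(For anyone comparing the two files: as pasted above, lustre2's HostID points at lustre1. If that is not just a paste slip in this mail, the second server's config would normally name its own Alias endpoint from fs.conf, i.e. something like:)

```
StorageSpace /pvfs2-storage-space
HostID "tcp://lustre2:3334"
LogFile /tmp/pvfs2-server.log
```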
MDS + I/O server log:
[D 05/11 23:13] PVFS2 Server version 1.5.1 starting.
2nd I/O server log:
[D 05/11 23:14] PVFS2 Server version 1.5.1 starting.
[E 05/11 23:15] TROVE:DBPF:Berkeley DB: DB_THREAD mandates memory allocation
flag on key DBT
[E 05/11 23:15] TROVE:DBPF:Berkeley DB: DB_THREAD mandates memory allocation
flag on key DBT
Berkeley DB version on lustre1:
Name : db4-devel
Arch : x86_64
Version : 4.3.29
Release : 10.el5
and on lustre2:
ii libdb-dev 4.7.25.3
Berkeley Database Libraries [development]
ii libdb4.7 4.7.25-7ubuntu2
Berkeley v4.7 Database Libraries [runtime]
ii libdb4.7-dev 4.7.25-7ubuntu2
Berkeley v4.7
Thanks a lot,
Andriy
On Tue, May 11, 2010 at 8:47 PM, Kevin Harms <[email protected]> wrote:
>
> Can you supply your fs.conf file? And perhaps the pvfs2-server.log from
> both nodes?
>
> kevin
>
> On May 11, 2010, at 10:45 AM, Andriy Chut wrote:
>
> > Hello everyone,
> >
> > Is there any way to deal with the problem of root handle ownership?
> > My installation:
> > 1) MDS + I/O server: SL 5.0 with 2.6.17-CITI_NFS4_ALL-pnfs-1 from
> > http://www.citi.umich.edu/projects/asci/pnfs/linux/ and PVFS2 1.5.1
> > 2) I/O server: Ubuntu Server 9.04 with the 2.6.31-14-server kernel
> > and PVFS2 1.5.1 (the same version)
> >
> > An installation with 1 MDS + 1 I/O server works just fine, but as
> > soon as I add a second node (regenerating the configuration, deleting
> > and even re-formatting the storage), I hit an issue when running
> > pvfs2-ping:
> >
> > (7) Verifying that root handle is owned by one server...
> >
> > Root handle: 1048576
> > Failure: check root handle failed
> > PVFS_mgmt_setparam_all: Detailed per-server errors are available
> >
> > Per-server errors:
> > Server: tcp://lustre1:3334: Reports ownership of root handle
> >
> >
> > # 2 handles for server 0
> > # 0 handles for server 1
> > File: <Root>
> > handle = 1048576, type = Directory, server = 0
> > remaining handles:
> >
> > Is there any way to get a clue about where the problem might be?
> > (I have already checked that only one server instance is running on
> > each machine.)
> >
> > I know this is irrelevant for PVFS2 2.8.1 at least, but unfortunately
> > 1.5.1 seems to be the last version (for now) that supports PVFS-pNFS,
> > so I have no option but to stick with it.
> > Any suggestions?
> > Thanks,
> > Andriy
> > _______________________________________________
> > Pvfs2-users mailing list
> > [email protected]
> > http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
>
>