Sure. The pvfs2-ping output is:

[EMAIL PROTECTED] src]# pvfs2-ping -m /mnt/pvfs2/

(1) Parsing tab file...
(2) Initializing system interface...
(3) Initializing each file system found in tab file: /etc/pvfs2tab...
   /mnt/pvfs2: Ok
(4) Searching for /mnt/pvfs2/ in pvfstab...
   PVFS2 servers: tcp://corona:3334
   Storage name: pvfs2-fs
   Local mount point: /mnt/pvfs2
   meta servers:
   tcp://corona1:3334
   data servers:
   tcp://corona1:3334
   tcp://corona2:3334
   tcp://corona3:3334
   tcp://corona:3334
(5) Verifying that all servers are responding...
   meta servers:
   tcp://corona1:3334 Ok
   data servers:
   tcp://corona1:3334 Ok
   tcp://corona2:3334 Ok
   tcp://corona3:3334 Ok
   tcp://corona:3334 Ok
(6) Verifying that fsid 1299783161 is acceptable to all servers...
   Ok; all servers understand fs_id 1299783161
(7) Verifying that root handle is owned by one server...
   Root handle: 1048576
   Ok; root handle is owned by exactly one server.

=============================================================

The PVFS filesystem at /mnt/pvfs2/ appears to be correctly configured.

(Hmmm! I'm sure I set the Meta Server as corona, the head node.)

And the contents of the pvfs2-fs.conf are:

[EMAIL PROTECTED] src]# cat /etc/pvfs2-fs.conf

<Defaults>
	UnexpectedRequests 50
	LogFile /var/log/pvfs2.log
	EventLogging storage,network,server
	LogStamp usec
	BMIModules bmi_tcp
	FlowModules flowproto_multiqueue
	PerfUpdateInterval 1000
	ServerJobBMITimeoutSecs 30
	ServerJobFlowTimeoutSecs 30
	ClientJobBMITimeoutSecs 300
	ClientJobFlowTimeoutSecs 300
	ClientRetryLimit 5
	ClientRetryDelayMilliSecs 2000
</Defaults>

<Aliases>
	Alias corona tcp://corona1:3334
	Alias corona1 tcp://corona2:3334
	Alias corona2 tcp://corona3:3334
	Alias corona3 tcp://corona:3334
</Aliases>

<Filesystem>
	Name pvfs2-fs
	ID 1299783161
	RootHandle 1048576
	<MetaHandleRanges>
		Range corona 4-858993461
	</MetaHandleRanges>
	<DataHandleRanges>
		Range corona 858993462-1717986919
		Range corona1 1717986920-2576980377
		Range corona2 2576980378-3435973835
		Range corona3 3435973836-4294967293
	</DataHandleRanges>
	<StorageHints>
		TroveSyncMeta yes
		TroveSyncData no
		AttrCacheKeywords datafile_handles,metafile_dist
		AttrCacheKeywords dir_ent, symlink_target
		AttrCacheSize 4093
		AttrCacheMaxNumElems 32768
	</StorageHints>
</Filesystem>
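Looking at the <Aliases> block in that file, the names appear rotated by one relative to the addresses they point at: `corona` is aliased to tcp://corona1:3334, `corona1` to tcp://corona2:3334, and so on. If that rotation is the genconfig sorting bug Rob suspects, it would explain the pvfs2-ping output above: the meta range `Range corona 4-858993461` resolves through the alias to tcp://corona1:3334, so corona1 shows up as the meta server even though "corona" was chosen. A sketch of what a straight one-to-one mapping would presumably look like (assuming each hostname is meant to alias its own address):

```
<Aliases>
	Alias corona tcp://corona:3334
	Alias corona1 tcp://corona1:3334
	Alias corona2 tcp://corona2:3334
	Alias corona3 tcp://corona3:3334
</Aliases>
```

With a mapping like that, the existing `Range corona` entry under <MetaHandleRanges> would land on the head node, as intended.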

Thanks for your help!

On Wednesday 17 May 2006 15:36, Robert Latham wrote:
> On Wed, May 17, 2006 at 11:17:20AM +0100, Patrick Tuite wrote:
> > My scheme is that the server corona is the head node and the nodes
> > corona1, corona2 and corona3 are its slaves. As such I would want
> > the head node to be the client, and I thought to also have it as the
> > Meta Server. I started from scratch configuring this, thinking that
> > I might have misconfigured it during pvfs2-genconfig, but the same
> > configuration happened again.
> >
> > I changed my pvfs2tab file from:
> >
> > tcp://corona:3334/pvfs2-fs /mnt/pvfs2 pvfs2 default,noauto 0 0
> > to
> > tcp://corona1:3334/pvfs2-fs /mnt/pvfs2 pvfs2 default,noauto 0 0
> >
> > but it still produced the same error output when attempting a
> > pvfs2-ls /mnt/pvfs2
>
> This isn't terribly well documented, but you can make your pvfs2tab
> point to any PVFS2 server. Clients query the meta server address from
> whatever server is listed in the tab file.
>
> > What am I doing wrong?
>
> This might be a bug in the way genconfig sorts 'corona' and 'corona1'.
> Can you send the full output of pvfs2-ping and your fs.conf?
>
> Thanks
> ==rob

--
Patrick Tuite
Research IT Support
UCD Computing Services
Ext: 2037

_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
