Hi 

I installed pvfs2 on a small cluster, initially just on the head node, with it 
acting as the Meta Server, I/O node and client, which worked as documented. I 
then installed it on 4 nodes, after removing the initial configuration using 
pvfs2-server <global-config> <server-config> -r. The configuration chosen was: 
the head node is the Meta Server, an I/O node and the client, and the three 
slave/compute nodes are I/O nodes. For this I used pvfs2-genconfig and set the 
servers accordingly.
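
For reference, I answered the pvfs2-genconfig prompts roughly as follows (the 
config file names are simply the ones I chose, and the prompt wording below is 
paraphrased from memory; port 3334 as seen in the statfs output further down):

pvfs2-genconfig /etc/pvfs2/fs.conf /etc/pvfs2/server.conf
    protocol    : tcp
    port        : 3334
    Meta Server : corona
    I/O servers : corona, corona1, corona2, corona3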

I created the server storage space on each server, started the pvfs2-server 
process, and created the pvfs2tab file and a mount point /mnt/pvfs2 on the 
head node, which will be the client (the exact commands are sketched after the 
ping output below). When I run pvfs2-ping -m /mnt/pvfs2, it seems to return 
the required output successfully, i.e. everything down to
......
Root handle: 1048576
   Ok; root handle is owned by exactly one server.
=============================================================
The PVFS filesystem at /mnt/pvfs2/ appears to be correctly configured.
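
In case it matters, the setup commands were roughly the following (shown for 
the head node corona; on the other nodes I used the matching 
server.conf-<hostname> file generated by pvfs2-genconfig):

# on each server node: create the storage space, then start the daemon
pvfs2-server /etc/pvfs2/fs.conf /etc/pvfs2/server.conf-corona -f
pvfs2-server /etc/pvfs2/fs.conf /etc/pvfs2/server.conf-corona

# on the head node (the client): mount point and pvfs2tab entry
mkdir /mnt/pvfs2
echo "tcp://corona:3334/pvfs2-fs /mnt/pvfs2 pvfs2 default,noauto 0 0" > /etc/pvfs2tab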

However, when I try to run any other directory-manipulating command, e.g. 
pvfs2-cp or pvfs2-ls, on /mnt/pvfs2, it returns the following output:

[EMAIL PROTECTED] etc]# pvfs2-ls /mnt/pvfs2/
[E 10:45:34.610987] Object Type mismatch error: Not a directory
[E 10:45:34.611195] getattr_object_getattr_failure : Not a directory
PVFS_sys_readdir: Not a directory
[EMAIL PROTECTED] etc]#

I don't know whether this is symptomatic of the problem outlined above, but 
when I run pvfs2-statfs -m /mnt/pvfs2, it bizarrely reports that one of the 
slave nodes is the Meta Server, when it should be, and has been configured to 
be, the head node, i.e.:

meta server statistics:
---------------------------------------

server: tcp://corona1:3334
        RAM bytes total  : 4156985344
        RAM bytes free   : 3743469568
        uptime (seconds) : 760959
        load averages    : 0 0 0
        handles available: 858993458
        handles total    : 858993458
        bytes available  : 73597988864
        bytes total      : 74626777088
        mode: serving both metadata and I/O data


I/O server statistics:
---------------------------------------

server: tcp://corona1:3334
        RAM bytes total  : 4156985344
        RAM bytes free   : 3743469568
        uptime (seconds) : 760959
        load averages    : 0 0 0
        handles available: 858993458
        handles total    : 858993458
        bytes available  : 73597988864
        bytes total      : 74626777088
        mode: serving both metadata and I/O data

server: tcp://corona2:3334
        RAM bytes total  : 4156985344
        RAM bytes free   : 3737341952
        uptime (seconds) : 760955
        load averages    : 0 0 0
        handles available: 858993458
        handles total    : 858993458
        bytes available  : 73597956096
        bytes total      : 74626777088
        mode: serving only I/O data

server: tcp://corona3:3334
        RAM bytes total  : 1327017984
        RAM bytes free   : 351223808
        uptime (seconds) : 157917
        load averages    : 0 0 0
        handles available: 858993458
        handles total    : 858993458
        bytes available  : 7525617664
        bytes total      : 9797459968
        mode: serving only I/O data

server: tcp://corona:3334
        RAM bytes total  : 4156825600
        RAM bytes free   : 3274174464
        uptime (seconds) : 59815
        load averages    : 1632 1824 0
        handles available: 1717986912
        handles total    : 1717986916
        bytes available  : 846307328
        bytes total      : 9798094848
        mode: serving only I/O data

My scheme is that the server corona is the head node and the nodes corona1, 
corona2 and corona3 are its slaves. As such I want the head node to be the 
client, and I intended it to also be the Meta Server. I reconfigured this from 
scratch, thinking that I might have made a mistake during pvfs2-genconfig, but 
the same configuration appeared again.

I changed my pvfs2tab file from:

tcp://corona:3334/pvfs2-fs /mnt/pvfs2 pvfs2 default,noauto 0 0
to
tcp://corona1:3334/pvfs2-fs /mnt/pvfs2 pvfs2 default,noauto 0 0

but it still produced the same error output when attempting 
pvfs2-ls /mnt/pvfs2.

What am I doing wrong?

Cheers
-- 
Patrick Tuite
Research IT Support
UCD Computing Services
Ext: 2037