Dear All,
This is my first experience with parallel filesystems. I compiled PVFS2
1.3.2 on a Rocks 4.1 cluster (CentOS 4.2) following the quick start guide.
No errors were encountered and all tests passed. My cluster is 1 frontend + 7
compute nodes. I configured it so that the frontend acts as a metadata
server as well as a client, and all 7 compute nodes act as I/O servers
and clients.
When I tried to test the kernel interface by creating a directory (e.g., mkdir
/pvfs2/user), it took around 5 minutes and the desktop panicked. The same
thing happens when I ssh to one of the compute nodes and run any command
on the mount (e.g., ls, mkdir, ...).
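As a first check (just a sketch, nothing PVFS2-specific), timing the same metadata operation on a local filesystem gives a baseline to compare the slow /pvfs2 path against; the /tmp path below is an assumption for illustration:

```shell
#!/bin/sh
# Hedged diagnostic sketch: report how long a command takes, to quantify
# the slowdown. /tmp serves as a local-filesystem baseline; the /pvfs2
# line is commented out so the snippet is safe to run anywhere.
timeit() {
    start=$(date +%s)
    "$@" >/dev/null 2>&1
    end=$(date +%s)
    echo "$1 elapsed: $((end - start))s"
}
timeit mkdir -p /tmp/pvfs2-baseline   # local baseline: should be ~0s
# timeit mkdir /pvfs2/user            # run on a client with /pvfs2 mounted
```

If the local baseline is instant while the /pvfs2 path takes minutes, the delay is in the client/server path rather than the VFS layer itself; /tmp/pvfs2-server.log on the servers (the log location chosen in genconfig below) is the next place to look.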
Is there any way to improve the performance? I'm attaching below the steps
and configuration I followed in case I missed something. Apologies for the
long email.
PS: I chose PVFS2 1.3.2 because installing 1.5.1 failed with a db error
(invalid argument). No changes were made to the configuration files
generated by pvfs2-genconfig.
Many thanks in advance,
nasr
------------------------------------------------------------------------------------------------------------------------------
# mkdir /opt/pvfs2
# cd /root/Desktop/cluster/pvfs2
# cp pvfs2-1.3.2.tar.gz /usr/local/src
# cd /usr/local/src
# tar -xzf pvfs2-1.3.2.tar.gz
# ln -s pvfs2-1.3.2 pvfs2
# cd pvfs2
# ./configure --prefix=/opt/pvfs2
--with-kernel=/usr/src/kernels/2.6.9-22.EL-i686
# make
# make install
# make kmod
# make kmod_install
# cd /opt
# chmod -R 700 pvfs2
# chown -R nobody.nobody pvfs2
# ssh-agent $SHELL
# ssh-add
# cluster-fork scp -r <frontend>:/opt/pvfs2 /opt/pvfs2
# cluster-fork scp -r nasrcluster:/lib/modules/2.6.9-22.EL/kernel/fs/pvfs2
/lib/modules/2.6.9-22.EL/kernel/fs/pvfs2
# /opt/pvfs2/bin/pvfs2-genconfig /etc/pvfs2-fs.conf /etc/pvfs2-server.conf
**********************************************************************
Welcome to the PVFS2 Configuration Generator:
This interactive script will generate configuration files suitable
for use with a new PVFS2 file system. Please see the PVFS2 quickstart
guide for details.
**********************************************************************
You must first select the network protocol that your file system will use.
The only currently supported options are "tcp", "gm", and "ib".
(For multi-homed configurations, use e.g. "ib,tcp".)
* Enter protocol type [Default is tcp]:
Choose a TCP/IP port for the servers to listen on. Note that this
script assumes that all servers will use the same port number.
* Enter port number [Default is 3334]:
Next you must list the hostnames of the machines that will act as
I/O servers. Acceptable syntax is "node1, node2, ..." or "node{#-#,#,#}".
* Enter hostnames [Default is localhost]: compute-0-{0-6}
Now list the hostnames of the machines that will act as Metadata
servers. This list may or may not overlap with the I/O server list.
* Enter hostnames [Default is localhost]: <frontend>
Configured a total of 8 servers:
7 of them are I/O servers.
1 of them are Metadata servers.
* Would you like to verify server list (y/n) [Default is n]? y
****** I/O servers:
tcp://compute-0-0:3334
tcp://compute-0-1:3334
tcp://compute-0-2:3334
tcp://compute-0-3:3334
tcp://compute-0-4:3334
tcp://compute-0-5:3334
tcp://compute-0-6:3334
****** Metadata servers:
tcp://<frontend>:3334
* Does this look ok (y/n) [Default is y]? y
Choose a file for each server to write log messages to.
* Enter log file location [Default is /tmp/pvfs2-server.log]:
Choose a directory for each server to store data in.
* Enter directory name: [Default is /pvfs2-storage-space]:
Writing fs config file... Done.
Writing 8 server config file(s)... Done.
# ls /etc/pvfs2-*
/etc/pvfs2-fs.conf /etc/pvfs2-server.conf-compute-0-4
/etc/pvfs2-server.conf-compute-0-0 /etc/pvfs2-server.conf-compute-0-5
/etc/pvfs2-server.conf-compute-0-1 /etc/pvfs2-server.conf-compute-0-6
/etc/pvfs2-server.conf-compute-0-2 /etc/pvfs2-server.conf-<frontend>
/etc/pvfs2-server.conf-compute-0-3
# cluster-fork scp <frontend>:/etc/pvfs2-* /etc/
# cp /usr/local/src/pvfs2/examples/pvfs2-server.rc
/etc/rc.d/init.d/pvfs2-server
# chmod a+x /etc/rc.d/init.d/pvfs2-server
# /sbin/chkconfig pvfs2-server on
# cluster-fork scp <frontend>:/etc/rc.d/init.d/pvfs2-server
/etc/rc.d/init.d/pvfs2-server
# cluster-fork /sbin/chkconfig pvfs2-server on
# /opt/pvfs2/sbin/pvfs2-server /etc/pvfs2-fs.conf
/etc/pvfs2-server.conf-<frontend> -f
[D 01:57:06.387921] PVFS2 Server version 1.3.2 starting.
# chmod 700 /pvfs2-storage-space
# chown nobody.nobody /pvfs2-storage-space
# ssh c0-0 '/opt/pvfs2/sbin/pvfs2-server /etc/pvfs2-fs.conf
/etc/pvfs2-server.conf-compute-0-0 -f'
........... repeated for each compute node, changing only the config file name
# cluster-fork chmod 700 /pvfs2-storage-space
# cluster-fork chown nobody.nobody /pvfs2-storage-space
# /opt/pvfs2/sbin/pvfs2-server /etc/pvfs2-fs.conf
/etc/pvfs2-server.conf-nasrcluster
[D 02:21:45.672665] PVFS2 Server version 1.3.2 starting.
# ssh c0-0 '/opt/pvfs2/sbin/pvfs2-server /etc/pvfs2-fs.conf
/etc/pvfs2-server.conf-compute-0-0'
[D 02:22:21.886236] PVFS2 Server version 1.3.2 starting.
.................. repeated for each compute node, changing only the config file name
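The repeated per-node ssh invocations above can be scripted. This dry-run sketch only prints the commands (hostnames c0-0..c0-6 and config file names are taken from the session above):

```shell
#!/bin/sh
# Dry-run sketch: start one I/O server per compute node instead of typing
# each ssh line by hand. Hostnames (c0-0..c0-6) and config file names are
# as used above; "echo" makes this safe to run anywhere.
start_servers() {
    for i in 0 1 2 3 4 5 6; do
        echo "ssh c0-$i '/opt/pvfs2/sbin/pvfs2-server /etc/pvfs2-fs.conf /etc/pvfs2-server.conf-compute-0-$i'"
    done
}
start_servers   # remove the echo inside the loop to actually start the servers
```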
# mkdir /pvfs2
# chmod 700 /pvfs2
# chown nobody.nobody /pvfs2
# cluster-fork mkdir /pvfs2
# cluster-fork chmod 700 /pvfs2
# cluster-fork chown nobody.nobody /pvfs2
# vi /etc/pvfs2tab
On the frontend, the following line was added:
tcp://compute-0-0:3334/pvfs2-fs /pvfs2 pvfs2 default,noauto 0 0
# chmod a+r /etc/pvfs2tab
# touch /etc/pvfs2tab
# cluster-fork scp <frontend>:/etc/pvfs2tab /etc/pvfs2tab
For the compute nodes the same file was copied, but with compute-0-0 changed
to each node's own I/O server. Every compute node uses its local server.
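The per-node pvfs2tab files can be generated in one loop. A sketch (writing to a temp directory here as a safety assumption; on the cluster each file would be pushed to the matching node as /etc/pvfs2tab):

```shell
#!/bin/sh
# Sketch: generate one pvfs2tab per compute node, each pointing at that
# node's own I/O server. Output goes to a temp directory (an assumption,
# to keep the snippet harmless); scp each file to its node afterwards.
dir=$(mktemp -d)
for i in 0 1 2 3 4 5 6; do
    echo "tcp://compute-0-$i:3334/pvfs2-fs /pvfs2 pvfs2 default,noauto 0 0" \
        > "$dir/pvfs2tab-compute-0-$i"
done
ls "$dir"
```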
TESTING
# cd /opt/pvfs2/bin
# ./pvfs2-ping -m /pvfs2
(1) Parsing tab file...
(2) Initializing system interface...
(3) Initializing each file system found in tab file: /etc/pvfs2tab...
/pvfs2: Ok
(4) Searching for /pvfs2 in pvfstab...
PVFS2 servers: tcp://nasrcluster:3334
Storage name: pvfs2-fs
Local mount point: /pvfs2
meta servers:
tcp://nasrcluster:3334
data servers:
tcp://compute-0-0:3334
tcp://compute-0-1:3334
tcp://compute-0-2:3334
tcp://compute-0-3:3334
tcp://compute-0-4:3334
tcp://compute-0-5:3334
tcp://compute-0-6:3334
(5) Verifying that all servers are responding...
meta servers:
tcp://nasrcluster:3334 Ok
data servers:
tcp://compute-0-0:3334 Ok
tcp://compute-0-1:3334 Ok
tcp://compute-0-2:3334 Ok
tcp://compute-0-3:3334 Ok
tcp://compute-0-4:3334 Ok
tcp://compute-0-5:3334 Ok
tcp://compute-0-6:3334 Ok
(6) Verifying that fsid 762700499 is acceptable to all servers...
Ok; all servers understand fs_id 762700499
(7) Verifying that root handle is owned by one server...
Root handle: 1048576
Ok; root handle is owned by exactly one server.
=============================================================
The PVFS filesystem at /pvfs2 appears to be correctly configured.
# ./pvfs2-ls /pvfs2/
lost+found
# ./pvfs2-cp -t /usr/testpvfs2 /pvfs2/testing
Wrote 22 bytes in 0.001024 seconds. 0.020491 MB/seconds
# ./pvfs2-ls /pvfs2/
lost+found
testing
# ./pvfs2-ls -alh /pvfs2/
drwxrwxrwx 1 root root 4.0K 2006-09-16 03:15 .
drwxrwxrwx 1 root root 4.0K 2006-09-16 03:15 .. (faked)
drwxrwxrwx 1 root root 4.0K 2006-09-16 01:57 lost+found
-rw-r--r-- 1 root root 22 2006-09-16 03:15 testing
# ./pvfs2-cp -t /pvfs2/testing /tmp/testing-out
Wrote 22 bytes in 0.044148 seconds. 0.000475 MB/seconds
# insmod /lib/modules/2.6.9-22.EL/kernel/fs/pvfs2/pvfs2.ko
# lsmod | grep pvfs2
pvfs2 110948 0
# cluster-fork 'insmod /lib/modules/2.6.9-22.EL/kernel/fs/pvfs2/pvfs2.ko'
# cluster-fork 'lsmod | grep pvfs2'
compute-0-0:
pvfs2 110948 0
... (same output on the remaining nodes)
# /sbin/chkconfig --list | grep pvfs2
pvfs2-server 0:off 1:off 2:on 3:on 4:on 5:on 6:off
# cluster-fork "/sbin/chkconfig --list | grep pvfs2"
compute-0-0:
pvfs2-server 0:off 1:off 2:on 3:on 4:on 5:on 6:off
... (same output on the remaining nodes)
# /opt/pvfs2/sbin/pvfs2-client -p /opt/pvfs2/sbin/pvfs2-client-core
# cluster-fork "/opt/pvfs2/sbin/pvfs2-client -p
/opt/pvfs2/sbin/pvfs2-client-core"
# mount -t pvfs2 tcp://compute-0-0:3334/pvfs2-fs /pvfs2
# mount | grep pvfs2
tcp://compute-0-0:3334/pvfs2-fs on /pvfs2 type pvfs2 (rw)
# ssh c0-0 "mount -t pvfs2 tcp://compute-0-0:3334/pvfs2-fs /pvfs2"
# ssh c0-1 "mount -t pvfs2 tcp://compute-0-1:3334/pvfs2-fs /pvfs2"
# ssh c0-2 "mount -t pvfs2 tcp://compute-0-2:3334/pvfs2-fs /pvfs2"
# ssh c0-3 "mount -t pvfs2 tcp://compute-0-3:3334/pvfs2-fs /pvfs2"
# ssh c0-4 "mount -t pvfs2 tcp://compute-0-4:3334/pvfs2-fs /pvfs2"
# ssh c0-5 "mount -t pvfs2 tcp://compute-0-5:3334/pvfs2-fs /pvfs2"
# ssh c0-6 "mount -t pvfs2 tcp://compute-0-6:3334/pvfs2-fs /pvfs2"
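The per-node mounts above follow the same pattern and could also be looped; this dry-run sketch just prints each mount command (remove the echo to really mount):

```shell
#!/bin/sh
# Dry-run sketch of the per-node mounts above: each compute node mounts
# /pvfs2 through its own local I/O server. Prints the commands instead of
# executing them.
mount_all() {
    for i in 0 1 2 3 4 5 6; do
        echo "ssh c0-$i 'mount -t pvfs2 tcp://compute-0-$i:3334/pvfs2-fs /pvfs2'"
    done
}
mount_all
```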
-------------------------------------------------------------------------------------------------------------------------
Sorry for the long description. The cluster-fork command is used to push
files to all the compute nodes.
_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users