My system consists of:
3 pvfs2-io servers, each with:
quad-core 1.6GHz Dell PowerEdge 1950s with PERC 6/E cards
an MD1000 enclosure with 16 x 750GB SATA hard drives
The storage is in a hardware RAID-6 configuration, about 9.5TB per server.
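(A quick sanity check on that figure, arithmetic mine: RAID-6 gives up two drives' worth of capacity to parity, and drive vendors quote decimal gigabytes while the OS reports binary units.)

```python
# RAID-6 capacity check for the 16-drive MD1000 arrays above.
# RAID-6 stores two drives' worth of parity, so 16 drives yield
# 14 drives of usable space. Vendors market 750GB as 750 * 10^9
# bytes; the OS reports terabytes as 2^40 bytes.
DRIVES = 16
PARITY = 2           # RAID-6 uses two parity stripes per row
DRIVE_BYTES = 750e9  # 750 GB, decimal, as marketed

usable_bytes = (DRIVES - PARITY) * DRIVE_BYTES
usable_tib = usable_bytes / 2**40
print(f"usable: {usable_tib:.1f} TiB")  # ~9.5, matching the figure above
```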
PVFS config files are:
[r...@pvfs2-io-0-0 ~]# cat /etc/pvfs2-fs.conf
<Defaults>
UnexpectedRequests 50
EventLogging none
LogStamp datetime
BMIModules bmi_tcp
FlowModules flowproto_multiqueue
PerfUpdateInterval 1000
ServerJobBMITimeoutSecs 30
ServerJobFlowTimeoutSecs 30
ClientJobBMITimeoutSecs 300
ClientJobFlowTimeoutSecs 300
ClientRetryLimit 5
ClientRetryDelayMilliSecs 2000
StorageSpace /mnt/pvfs2
LogFile /var/log/pvfs2-server.log
</Defaults>
<Aliases>
Alias pvfs2-io-0-0 tcp://pvfs2-io-0-0:3334
Alias pvfs2-io-0-1 tcp://pvfs2-io-0-1:3334
Alias pvfs2-io-0-2 tcp://pvfs2-io-0-2:3334
</Aliases>
<Filesystem>
Name pvfs2-fs
ID 62659950
RootHandle 1048576
<MetaHandleRanges>
Range pvfs2-io-0-0 4-715827885
Range pvfs2-io-0-1 715827886-1431655767
Range pvfs2-io-0-2 1431655768-2147483649
</MetaHandleRanges>
<DataHandleRanges>
Range pvfs2-io-0-0 2147483650-2863311531
Range pvfs2-io-0-1 2863311532-3579139413
Range pvfs2-io-0-2 3579139414-4294967295
</DataHandleRanges>
<StorageHints>
TroveSyncMeta yes
TroveSyncData no
</StorageHints>
</Filesystem>
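(A sanity check of my own, not part of PVFS2 itself: the Meta/Data handle ranges above should tile the 32-bit handle space with no gaps or overlaps, since every object handle must map to exactly one server.)

```python
# Verify the handle ranges from the pvfs2-fs.conf above are contiguous
# and end at the top of the 32-bit handle space.
ranges = [
    (4, 715827885),            # pvfs2-io-0-0 meta
    (715827886, 1431655767),   # pvfs2-io-0-1 meta
    (1431655768, 2147483649),  # pvfs2-io-0-2 meta
    (2147483650, 2863311531),  # pvfs2-io-0-0 data
    (2863311532, 3579139413),  # pvfs2-io-0-1 data
    (3579139414, 4294967295),  # pvfs2-io-0-2 data
]
for (_, hi), (lo, _) in zip(ranges, ranges[1:]):
    assert hi + 1 == lo, "gap or overlap between adjacent ranges"
assert ranges[-1][1] == 2**32 - 1  # top of the 32-bit handle space
print("handle ranges are contiguous")
```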
[r...@pvfs2-io-0-0 ~]# cat pvfs2-server.conf-pvfs2-io-0-0
StorageSpace /mnt/pvfs2
HostID "tcp://pvfs2-io-0-0:3334"
LogFile /var/log/pvfs2-server.log
(Note: the /mnt/pvfs2 above is NOT a pvfs2-mounted filesystem, but an
XFS-formatted 8.9TB RAID volume.)
Each of these storage nodes has dual Gig-E into the Gig-E switch for
the cluster. They're running ALB (adaptive load balancing) Ethernet
bonding, providing about 2Gb/s of potential throughput each into the
switch.
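For the curious, balance-alb bonding on CentOS 5 is typically enabled with something like the following (the file path and interface names here are generic assumptions, not copied from my nodes):

```
# /etc/modprobe.conf -- load the bonding driver in adaptive load
# balancing (balance-alb) mode; miimon=100 polls link state every 100ms
alias bond0 bonding
options bond0 mode=balance-alb miimon=100
```

with eth0/eth1 then enslaved to bond0 via the usual ifcfg-* scripts.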
The compute nodes for the cluster consist of 24 PowerEdge 1950s with
dual quad-core 2.3GHz Intel processors and 8GB of RAM each. They each
have a single Gig-E connection to the switch.
My head node is a PowerEdge 2950 with an identical motherboard/RAM to
the compute nodes, 1 Gig-E into the Ethernet switch and 1 Gig-E to the
outside world. It has a PERC 5/i and a 1.5TB usable RAID-5 array for
/home, etc.
The storage nodes are running a more or less pristine CentOS 5
install; the cluster is running ROCKS 5.1. I am NOT using ANYTHING
for pvfs2 provided by ROCKS; I've compiled my own pvfs2 clients and
done my own pvfs2 config on the cluster.
--Jim
On Tue, Jul 28, 2009 at 9:15 AM, Kyle Schochenmaier<[email protected]> wrote:
> Do you have anything blocked on IO?
> While I was working on the infiniband stuff, I often ran into problems
> where things could get blocked on IO/network-traffic and cause servers
> to spin (polling for new data which never comes).
> There were also some issues that I came across where the kernel module
> would hang because an MD server was hung/spinning/dead.
> Usually restarting the server processes cleaned this up on the client-side.
>
> If you cannot kill it with a -9 then, IMO, you are blocked on IO or
> stuck inside kernel land, which implies a bug at some level.
>
> I'm not sure if you've described it in detail already - I haven't been
> in #pvfs2 for a while - but can you describe your setup in gross
> verbosity?
> Also, untarring on-the-fly on pvfs2 can be pretty brutal, and I seem
> to recall we highly recommended not running binaries from the pvfs2
> filesystem - not sure if that has changed?
>
> ~Kyle
>
> Kyle Schochenmaier
>
>
>
> On Tue, Jul 28, 2009 at 11:08 AM, Emmanuel Florac<[email protected]>
> wrote:
>> On Tue, 28 Jul 2009 08:58:09 -0700,
>> Jim Kusznir <[email protected]> wrote:
>>
>>> How do I fix this problem without replacing pvfs2?
>>
>> How do you access the cluster? Are you using the kernel module, the
>> FUSE module, or something else? Do you have any RAM usage problems?
>>
>> --
>> ----------------------------------------
>> Emmanuel Florac | Intellique
>> ----------------------------------------
>>
>>
>> _______________________________________________
>> Pvfs2-users mailing list
>> [email protected]
>> http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
>>
>