Here is my conf file:
<Defaults>
UnexpectedRequests 50
EventLogging none
EnableTracing no
LogStamp datetime
BMIModules bmi_tcp
FlowModules flowproto_multiqueue
PerfUpdateInterval 1000
ServerJobBMITimeoutSecs 30
ServerJobFlowTimeoutSecs 30
ClientJobBMITimeoutSecs 300
ClientJobFlowTimeoutSecs 300
ClientRetryLimit 5
ClientRetryDelayMilliSecs 2000
PrecreateBatchSize 512
PrecreateLowThreshold 256
StorageSpace /mnt/pvfs0/vfs/
LogType syslog
TCPBindSpecific yes
</Defaults>
<Aliases>
Alias crystal0 tcp://crystal0:51886
Alias crystal1 tcp://crystal1:51886
Alias meth0 tcp://meth0:51886
Alias meth1 tcp://meth1:51886
Alias pot tcp://pot:51886
</Aliases>
<ServerOptions>
Server meth1
StorageSpace /mnt/pvfs1/vfs/
</ServerOptions>
<ServerOptions>
Server crystal1
StorageSpace /mnt/pvfs1/vfs/
</ServerOptions>
<ServerOptions>
Server pot
StorageSpace /mnt/metadata
</ServerOptions>
<Filesystem>
Name pvfs2-fs
ID 995821984
RootHandle 1048576
FileStuffing no
<MetaHandleRanges>
Range pot 3-1844674407370955162
</MetaHandleRanges>
<DataHandleRanges>
Range crystal0 1844674407370955163-3689348814741910322
Range crystal1 3689348814741910323-5534023222112865482
Range meth0 5534023222112865483-7378697629483820642
Range meth1 7378697629483820643-9223372036854775802
</DataHandleRanges>
<StorageHints>
TroveSyncMeta yes
TroveSyncData no
TroveMethod alt-aio
</StorageHints>
</Filesystem>
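
For reference, each box runs one pvfs2-server process per alias against this
same config file, started with the -a/--alias switch (the conf path below is
just where I keep it locally); e.g. on meth:

    pvfs2-server -a meth0 /etc/pvfs2/fs.conf   # instance for the first gigabit interface
    pvfs2-server -a meth1 /etc/pvfs2/fs.conf   # instance for the second gigabit interface

crystal is started the same way with crystal0/crystal1, and pot runs a single
instance.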
On Fri, Jun 4, 2010 at 7:45 AM, Kevin Harms <[email protected]> wrote:
> James,
>
> can you provide the .conf file?
>
> kevin
>
> On Jun 4, 2010, at 2:47 AM, James Gao wrote:
>
> > I'm having an extremely puzzling problem with my attempted PVFS setup
> right now. Here's the intended setup: three computers run the pvfs cluster,
> named pot, crystal, and meth. Pot holds the metadata, while crystal and meth
> hold the data and each has two gigabit ethernet ports. In effect,
> the network sees five computers: pot, meth0, meth1, crystal0, crystal1. I'm
> running two pvfs2-server instances each on meth and crystal, one for each
> gigabit port.
> >
> > In my first attempt, I just assigned meth0 and meth1 two different ports.
> > PVFS came up and I could write files to it. However, when I transferred a
> large file, it was clear from /proc/net/dev that only a single port each on
> crystal and meth was carrying traffic.
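> >
> > In that first attempt the Aliases section looked roughly like this (the
> > exact second port is from memory, only the two-ports-per-host idea matters):
> >
> >     # illustrative -- one distinct port per interface
> >     Alias meth0 tcp://meth0:51886
> >     Alias meth1 tcp://meth1:51887
> >
> > and likewise for crystal0/crystal1.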
> >
> > For my second attempt, I enabled TCPBindSpecific and kept the same port
> for every alias. As I understand it, this makes each pvfs2-server bind only
> to the address it is assigned, so its traffic stays on its own interface.
> Now, the really strange problem: meth1,
> crystal0, and crystal1 are all fine and accessible. meth0 is not. I've
> double-checked that all the config files are the same. I've tried launching
> the meth0 server instance first after a reboot, but meth0 still refuses
> connections. All firewalls are off, and the logs on meth show nothing. The
> logs on pot are filled with "Warning: msgpair failed to tcp://meth0:3334,
> will retry: Connection refused".
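> >
> > (The next thing I plan to try on meth itself is checking what each instance
> > actually bound to, with something along the lines of
> >
> >     netstat -tlnp | grep pvfs2-server   # which local address does each instance listen on?
> >
> > in case the meth0 instance ended up bound to the wrong interface.)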
> >
> > My ultimate goal is to be able to use the full 4 Gbps of bandwidth
> connected to the file servers.
> > Any ideas? Thanks for your time!
> >
> > -James
_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users