You are correct. As long as the "hostname" command returns server1 and server2 on the respective machines, you don't have to specify the alias.

Please try pvfs2-viewdist -f on each file from each machine and send me the output.
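For example, a rough, untested sketch along these lines (it assumes you can ssh between the two machines, and it uses the file names under /mnt/pvfs2/myfiles from your earlier message purely as placeholders) would confirm the hostname/alias match and collect the pvfs2-viewdist output from both sides in one pass:

    #!/bin/sh
    # Sketch only, adjust hosts and paths to your setup.
    for host in server1 server2; do
        echo "=== $host ==="
        # The alias in the <Aliases> section must match what `hostname` prints here.
        ssh "$host" hostname
        for f in /mnt/pvfs2/myfiles/file1 /mnt/pvfs2/myfiles/file2 /mnt/pvfs2/myfiles/file3; do
            # Show the distribution (which servers hold the datafiles) for each problematic file.
            ssh "$host" "pvfs2-viewdist -f $f"
        done
    done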
Becky

On Mon, Oct 3, 2011 at 11:16 AM, Miltiadis Koutsokeras
<[email protected]> wrote:
> Hi Becky,
>
> On 10/03/2011 03:57 PM, Becky Ligon wrote:
> Did you create the storage space on each system:
> server1:pvfs2-server -f fs.conf -a server1
>
> server2:pvfs2-server -f fs.conf -a servers
>
> Do you mean
>
> server2:pvfs2-server -f fs.conf -a server2
>
> in the second command?
>
> I have created the initial filesystem with the command
>
> pvfs2-server -f fs.conf
>
> without the -a for alias on both servers. Have you seen the
> configuration I've posted? Is the -a option required in my case?
> I would try it if my users had not populated the filesystem already.
>
> What happens if you issue pvfs2-ping from either server?
> server1:pvfs2-ping -m /mnt/pvfs2
>
> user@server1:~# pvfs2-ping -m /mnt/pvfs2
>
> (1) Parsing tab file...
>
> (2) Initializing system interface...
>
> (3) Initializing each file system found in tab file: /etc/mtab...
>
> PVFS2 servers: tcp://server1:3334
> Storage name: pvfs2-fs
> Local mount point: /mnt/pvfs2
> /mnt/pvfs2: Ok
>
> (4) Searching for /mnt/pvfs2 in pvfstab...
>
> PVFS2 servers: tcp://server1:3334
> Storage name: pvfs2-fs
> Local mount point: /mnt/pvfs2
>
> meta servers:
> tcp://server1:3334
> tcp://server2:3334
>
> data servers:
> tcp://server1:3334
> tcp://server2:3334
>
> (5) Verifying that all servers are responding...
>
> meta servers:
> tcp://server1:3334 Ok
> tcp://server2:3334 Ok
>
> data servers:
> tcp://server1:3334 Ok
> tcp://server2:3334 Ok
>
> (6) Verifying that fsid 1034394795 is acceptable to all servers...
>
> Ok; all servers understand fs_id 1034394795
>
> (7) Verifying that root handle is owned by one server...
>
> Root handle: 1048576
> Ok; root handle is owned by exactly one server.
>
> =============================================================
>
> The PVFS2 filesystem at /mnt/pvfs2 appears to be correctly configured.
>
> server2:pvfs2-ping -m /mnt/pvfs2
>
> user@server2:~# pvfs2-ping -m /mnt/pvfs2
>
> (1) Parsing tab file...
>
> (2) Initializing system interface...
>
> (3) Initializing each file system found in tab file: /etc/mtab...
>
> PVFS2 servers: tcp://server2:3334
> Storage name: pvfs2-fs
> Local mount point: /mnt/pvfs2
> /mnt/pvfs2: Ok
>
> (4) Searching for /mnt/pvfs2 in pvfstab...
>
> PVFS2 servers: tcp://server2:3334
> Storage name: pvfs2-fs
> Local mount point: /mnt/pvfs2
>
> meta servers:
> tcp://server1:3334
> tcp://server2:3334
>
> data servers:
> tcp://server1:3334
> tcp://server2:3334
>
> (5) Verifying that all servers are responding...
>
> meta servers:
> tcp://server1:3334 Ok
> tcp://server2:3334 Ok
>
> data servers:
> tcp://server1:3334 Ok
> tcp://server2:3334 Ok
>
> (6) Verifying that fsid 1034394795 is acceptable to all servers...
>
> Ok; all servers understand fs_id 1034394795
>
> (7) Verifying that root handle is owned by one server...
>
> Root handle: 1048576
> Ok; root handle is owned by exactly one server.
>
> =============================================================
>
> The PVFS2 filesystem at /mnt/pvfs2 appears to be correctly configured.
>
> Becky
>
> On Mon, Oct 3, 2011 at 7:59 AM, Miltiadis Koutsokeras
> <[email protected]> wrote:
>
>> Hello everyone,
>>
>> I have a setup of 2 servers sharing their available free space through
>> the OrangeFS filesystem. Both servers have the filesystem mounted for
>> client use.
>> A user noticed today that, sometimes but not always, files created on
>> one server are not accessible from the mount point on the other. The
>> servers are set up so that users have the same uid and gid on both. The
>> problematic files are perfectly accessible from the mount point on the
>> local host. The strange thing is that the pvfs2-server and pvfs2-client
>> logs are clear of any errors on both servers.
>>
>> My setup in more detail:
>>
>> 2 x86_64 servers with Debian GNU/Linux unstable (sid), kernel
>> 3.0.0-1-amd64
>>
>> OrangeFS 2.8.4 built from CVS repository version 141223, using the
>> kernel module for client access
>> pvfs2-server --version: 2.8.4-orangefs-2011-09-08-141223 (mode: aio-threaded)
>> pvfs2-client --version: 2.8.4-orangefs-2011-09-08-141223
>>
>> Mount command:
>> mount -t pvfs2 tcp://`hostname`:3334/pvfs2-fs /mnt/pvfs2
>>
>> Servers configuration file:
>>
>> <Defaults>
>>     UnexpectedRequests 50
>>     EventLogging none
>>     EnableTracing no
>>     LogStamp datetime
>>     BMIModules bmi_tcp
>>     FlowModules flowproto_multiqueue
>>     PerfUpdateInterval 1000
>>     ServerJobBMITimeoutSecs 30
>>     ServerJobFlowTimeoutSecs 30
>>     ClientJobBMITimeoutSecs 300
>>     ClientJobFlowTimeoutSecs 300
>>     ClientRetryLimit 5
>>     ClientRetryDelayMilliSecs 2000
>>     PrecreateBatchSize 0,32,512,32,32,32,0
>>     PrecreateLowThreshold 0,16,256,16,16,16,0
>>
>>     DataStorageSpace /pvfs2-storage-space
>>     MetadataStorageSpace /pvfs2-storage-space
>>
>>     LogFile /var/log/pvfs2-server.log
>> </Defaults>
>>
>> <Aliases>
>>     Alias server1 tcp://server1:3334
>>     Alias server2 tcp://server2:3334
>> </Aliases>
>>
>> <Filesystem>
>>     Name pvfs2-fs
>>     ID 1034394795
>>     RootHandle 1048576
>>     FileStuffing yes
>>     <MetaHandleRanges>
>>         Range server1 3-2305843009213693953
>>         Range server2 2305843009213693954-4611686018427387904
>>     </MetaHandleRanges>
>>     <DataHandleRanges>
>>         Range server1 4611686018427387905-6917529027641081855
>>         Range server2 6917529027641081856-9223372036854775806
>>     </DataHandleRanges>
>>     <StorageHints>
>>         TroveSyncMeta yes
>>         TroveSyncData no
>>         TroveMethod alt-aio
>>     </StorageHints>
>> </Filesystem>
>>
>> The problem, explained with example command line prompts:
>>
>> user@server1:~# some_program_creating_files
>> user@server1:~# ls -l /mnt/pvfs2/myfiles
>> -rw-r--r-- 1 user group 441975 Oct 3 12:50 file1
>> -rw-r--r-- 1 user group 441975 Oct 3 12:51 file2
>> -rw-r--r-- 1 user group 400873 Oct 3 12:52 file3
>>
>> user@server2:~# ls -l /mnt/pvfs2/myfiles
>> ls: cannot access /mnt/pvfs2/myfiles/file1: Input/output error
>> ls: cannot access /mnt/pvfs2/myfiles/file3: Input/output error
>> ?????????? ? ? ? ? ? file1
>> -rw-r--r-- 1 user group 441975 Oct 3 12:51 file2
>> ?????????? ? ? ? ? ? file3
>>
>> Thank you in advance for any replies.
>>
>> --
>> Koutsokeras Miltiadis M.Sc.
>> Software Engineer
>> Biovista Inc.
>> www.biovista.com
>
> --
> Becky Ligon
> OrangeFS Support and Development
> Omnibond Systems
> Anderson, South Carolina
>
> --
> Koutsokeras Miltiadis M.Sc.
> Software Engineer
> Biovista Inc.
> www.biovista.com

--
Becky Ligon
OrangeFS Support and Development
Omnibond Systems
Anderson, South Carolina
_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
