On 03/16/2015 02:10 PM, Wei-keng Liao wrote:
I am not sure what file that is?

when you (or your cluster admin) built pvfs2/orangefs, 'configure' produced a config.log

pvfs, more than most programs, will run many tests against a linux kernel to see what features are and are not available, and it's possible that on your system one of the tests gave a false positive or negative.

config.log will have all the details.
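
a quick way to scan it for the feature tests that came back negative or errored out (the exact log wording varies a bit across autoconf versions):

grep -n 'result: no' config.log
grep -n 'error:' config.log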

==rob


Wei-keng

On Mar 16, 2015, at 2:04 PM, Becky Ligon wrote:

Now send me a copy of your server config file.



On Mon, Mar 16, 2015 at 2:37 PM, Wei-keng Liao <[email protected]> 
wrote:
The two requested files can be found at the following URLs.

http://www.ece.northwestern.edu/~wkliao/config.log
http://www.ece.northwestern.edu/~wkliao/pvfs2-server

Wei-keng

On Mar 16, 2015, at 1:29 PM, Becky Ligon wrote:

Can you also send me your config.log file from when you compiled the source?

Becky

On Mon, Mar 16, 2015 at 2:28 PM, Becky Ligon <[email protected]> wrote:
If you are experimenting with OrangeFS, then having one metadata and 4 data 
servers is fine.

Can you send me your pvfs2-server init file, the one used with the 
/sbin/service command?

Becky

On Mon, Mar 16, 2015 at 1:27 PM, Wei-keng Liao <[email protected]> 
wrote:
On Mar 16, 2015, at 12:02 PM, Becky Ligon wrote:

Wei-keng:

Did you umount and mount the filesystem?  If not, umount the filesystem, 
restart the client core, and then mount the filesystem again.
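
Roughly like this; the fs name and mount point below are placeholders, so check /etc/pvfs2tab or your existing mount entry for the real values:

umount /orangefs
killall pvfs2-client          # or stop it via your init script
./pvfs2-client -a 0 -n 0
mount -t pvfs2 tcp://bigdata:3334/orangefs /orangefs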

Yes. My restart command ran "/sbin/service pvfs2-server restart".
The script contains both client and server commands, including the client's umount
and mount.


I also suggest that you define your environment so that each of your machines
(bigdata, bigdata1, bigdata2, bigdata3) has its pvfs server configured to
handle both I/O and metadata. To do this, you will have to recreate the
filesystem.

My OrangeFS is newly created, and the test program is the first one run in
parallel on it.
Isn't my configuration (one metadata server and 4 data servers) a legitimate
orangefs setup?

Again, my MPI test program ran 2 processes locally on the metadata server, which
is also a data server and a client.

Wei-keng


Becky

On Mon, Mar 16, 2015 at 11:30 AM, Wei-keng Liao <[email protected]> 
wrote:
Hi, Becky

I tried the command options "-a 0 -n 0" and restarted the client/server, but the
same issue persists.
pvfs2-ping command shows one metadata server and 4 data servers.
I ran my test program on the metadata server.
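
For reference, the invocation was along these lines (using the mount point from my earlier mail):

    pvfs2-ping -m /orangefs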

    meta servers:
    tcp://bigdata:3334 Ok

    data servers:
    tcp://bigdata:3334 Ok
    tcp://bigdata1:3334 Ok
    tcp://bigdata2:3334 Ok
    tcp://bigdata3:3334 Ok


Wei-keng

On Mar 16, 2015, at 8:26 AM, Becky Ligon wrote:

Caching is still an issue if you have servers on more than one machine and 
those servers provide metadata.  Even in a one-server environment, it could 
make a difference.

The "ls" command uses the kernel module and client core, which in turn use the 
caches, while the pvfs2-ls command does not.
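
So a quick way to separate a stale cache from a real server-side problem is to run both against the same directory and compare:

ls -l /orangefs/wkliao/        # goes through the kernel module and the caches
pvfs2-ls -l /orangefs/wkliao/  # talks to the servers directly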

If you don't have the appropriate sudo permissions to modify the /proc 
filesystem, then you can start the client with the caches turned off.

Example:

./pvfs2-client -a 0  -n 0


If you execute pvfs2-client --help, you will see these options.


Becky

Sent from my iPhone

On Mar 15, 2015, at 5:05 PM, Wei-keng Liao <[email protected]> wrote:

I assume that after 60 seconds the client will flush the cache.
Please note I am running the orangefs client and server on the same machine.
In this case, should caching still be an issue?

Long after 60 seconds past the file's creation, the ls command still could not find
the file.

I got "permission denied" when running the two echo commands you suggested,
and I DO have sudo permission. I also tried editing those files with vi, but got the error
"/proc/sys/pvfs2/acache/timeout-msecs" E667: Fsync failed

Also, how do I set this automatically after system reboot?

Wei-keng

On Mar 15, 2015, at 2:16 PM, Becky Ligon wrote:

Wei-keng:

This is most likely a caching issue with the client.  By default, we set the
cache to time out after 60 seconds, which may be too long in your environment.
Or you may have deleted and recreated a file with the same name outside of the
client where you are seeing the question marks, in which case the cache would
be stale for that file.

To verify, turn off caching to see if this resolves your problem:

As root on your client machine:

echo "0" > /proc/sys/pvfs2/acache/timeout-msecs
echo "0" > /proc/sys/pvfs2/ncache/timeout-msecs

If this change fixes your problem, try setting the timeout-msecs to something 
more appropriate for your environment.
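
To make the settings survive a reboot, one simple approach is to reapply them at boot, for example from /etc/rc.local, assuming your system runs it and the pvfs2 client is already up at that point (the 1000 ms values below are only placeholders):

echo "1000" > /proc/sys/pvfs2/acache/timeout-msecs
echo "1000" > /proc/sys/pvfs2/ncache/timeout-msecs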

Becky

On Sun, Mar 15, 2015 at 11:43 AM, Wei-keng Liao <[email protected]> 
wrote:
Hi

I am having problems with OrangeFS 2.9.1 and MPICH 3.1.4.

Here are my system settings:
Linux Kernel 2.6.32
Berkeley DB version 6.1.19

A simple MPI test program that calls MPI_File_open and MPI_File_write_all
is used, run with two processes on the same host.

The MPI commands I used, with file-name prefixes to force specific ADIO drivers:
mpiexec -n 2 coll_write /orangefs/wkliao/testfile
mpiexec -n 2 coll_write pvfs2:/orangefs/wkliao/testfile.pvfs2
mpiexec -n 2 coll_write ufs:/orangefs/wkliao/testfile.ufs
The first two will use the pvfs2 driver and the third the ufs driver.

Here is what I see when running the "ls -l" and "pvfs2-ls -l" commands.

% ls -l /orangefs/wkliao/
ls: cannot access /orangefs/wkliao/testfile: No such file or directory
ls: cannot access /orangefs/wkliao/testfile.pvfs2: No such file or directory
total 31252
?????????? ? ?      ?            ?            ? testfile
?????????? ? ?      ?            ?            ? testfile.pvfs2
-rw------- 1 wkliao users 32000000 Mar 13 18:55 testfile.ufs

% pvfs2-ls -l /orangefs/wkliao/
-rw-r--r--    1 wkliao   users       31000000 2015-03-13 18:55 testfile
-rw-------    1 wkliao   users       32000000 2015-03-13 18:55 testfile.ufs
-rw-r--r--    1 wkliao   users       31000000 2015-03-13 18:55 testfile.pvfs2

My config.log file for building orangefs can be found at this URL:
http://www.ece.northwestern.edu/~wkliao/config.log

Wei-keng

--
Becky Ligon
Research Associate
Clemson University
Clemson, SC




--
Rob Latham
Mathematics and Computer Science Division
Argonne National Lab, IL USA
_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
