Hamza Kaya wrote:
Hi Murali,

I found these functions in a previous post on this mailing list [pvfs2 native api]. I will attach the code too. As you guessed, they use the system interface to handle file operations.

Yeah, we don't support that code.  Use at your own risk :).

I used the Unix calls (fopen, etc.) instead of the pvfs_* calls, and that works correctly. Another thing I tried was using fork() instead of pthreads. The pseudo code is as follows:

main:
        for 1 to n
        {
              fork ();
              child {
                    PVFS_util_init_defaults ();
                    thread_func ();
                    PVFS_sys_finalize ();
              }
        }
        parent {
              wait for children;
        }

thread_func:
        filesize = 1GB
        offset = rand (0 - 1GB)
        for 1 to m
        {
              PVFS_sys_read (100KB, offset);
        }

This code works correctly.

When I used fork and the Unix calls I got approximately 3 thread_func calls per second. However, with fork and the system interface I got approximately 400 thread_func calls per second. What might cause this difference? In the first case I observed that the client machine always contacted the server from the same port number; in other words, the client used a single port to connect to the server's port 3334. In the second case the port number changed: I observed many different client ports connecting to the pvfs2 servers' port. [I used raw sockets to listen on the network.] Maybe that is the reason.

The pvfs2-client uses a single TCP connection to communicate with the server. Multiple processes started in the way you describe will use multiple connections (because they have no way to share a single one). That might be one reason for the difference in performance -- with only one connection to the server, there is more potential for serialization.

Given the errors that you're getting, it's quite possible that the system interface version just isn't working right. Have you checked to see if you are getting correct results?

We haven't spent a lot of time tuning multi-process-per-node performance yet.

Thanks,

Rob

_______________________________________________
PVFS2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
