I got the point. All three nodes are both data servers and clients, so
the results in the "Group 6" case are expected too. Thank you very much.

--
Hamza

On 11/23/05, Rob Ross <[EMAIL PROTECTED]> wrote:
> Hi Hamza,
>
> Running extra I/O applications on the same node is going to result in
> decreased apparent performance for each process: you're doing N times as
> much work with the same client and network connection.  The CPU
> scheduler gives each one a fair share of time, so they all end up
> getting the processor 1/Nth of the time and so take roughly N times as long.
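Rob's fair-share argument can be captured in a toy model (my illustration, not from the thread): with N competing processes on one node, each gets roughly 1/N of the CPU and network, so each run takes about N times the single-run time.

```python
def expected_runtime(single_run_secs, n_procs):
    """Toy fair-share model: n_procs processes sharing one node's CPU and
    network each get ~1/n of the resources, so each run takes ~n times as
    long as it would alone."""
    return single_run_secs * n_procs

# Hamza's single copy took ~9.76 s; four concurrent copies each took ~36 s,
# reasonably close to the 4 * 9.76 = 39.04 s this crude model predicts.
print(expected_runtime(9.76, 4))
```

The observed ~36 s is a bit below the model's 39 s, which is consistent with some overlap of CPU and network work between processes.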
>
> For the multi-node cases, you did get a performance increase overall in
> the "Group 2" case.  Three processes wrote data in about the time it
> would have taken two to write if they had done so on one processor.
> Likewise for reading in the "Group 6" case.
>
> Are you using nodes as both servers and clients?
>
> Thanks,
>
> Rob
>
> Hamza KAYA wrote:
> > When I tried them on different machines I got better results, but still
> > worse than a single execution. Results are given below. So file access
> > performance decreases roughly linearly with the number of concurrent
> > accesses. Is there a configuration that will make pvfs2 scale better
> > for concurrent accesses? Why does pvfs2 behave like this?
> >
> > -- GROUP - 1---
> > time -p ./write /pvfs2/t1 100 => 26.44 [executed on master]
> > time -p ./write /pvfs2/t2 100 => 26.72 [executed on master]
> > time -p ./write /pvfs2/t3 100 => 26.57 [executed on master]
> >
> > --- GROUP - 2---
> > time -p ./write /pvfs2/t4 100 => 17.69 [executed on node1]
> > time -p ./write /pvfs2/t5 100 => 17.55 [executed on node2]
> > time -p ./write /pvfs2/t6 100 => 16.91 [executed on node3]
> >
> > --- GROUP - 3---
> > time -p ./write /pvfs2/t7 100 => 9.42 [executed on master]
> >
> > --- GROUP - 4---
> > time -p ./write /pvfs2/t8 100 => 9.12 [executed on node1]
> >
> > --- GROUP - 5---
> > time -p ./read /pvfs2/t4 ./t4 => 26.17 [executed on master]
> > time -p ./read /pvfs2/t5 ./t5 => 27.00 [executed on master]
> > time -p ./read /pvfs2/t6 ./t6 => 26.92 [executed on master]
> >
> > --- GROUP - 6---
> > time -p ./read /pvfs2/t4 ./t4 => 14.64 [executed on node1]
> > time -p ./read /pvfs2/t5 ./t5 => 16.62 [executed on node2]
> > time -p ./read /pvfs2/t6 ./t6 => 15.55 [executed on node3]
> >
> > --- GROUP - 7---
> > time -p ./read /pvfs2/t4 ./t4 => 9.89 [executed on master]
> >
> > --- GROUP - 8---
> > time -p ./read /pvfs2/t4 ./t4 => 10.02 [executed on node1]
> >
> >
> > On 11/22/05, Rob Ross <[EMAIL PROTECTED]> wrote:
> >
> >>The numbers make absolute sense for four executions on the same machine.
> >>
> >>Rob
> >>
> >>Hamza KAYA wrote:
> >>
> >>>Yes. I'll try them on different machines too. I'll inform you as soon
> >>>as possible.
> >>>Thanks very much.
> >>>
> >>>--
> >>>Hamza
> >>>
> >>>On 11/21/05, Rob Ross <[EMAIL PROTECTED]> wrote:
> >>>
> >>>
> >>>>Hi,
> >>>>
> >>>>Are you running all those processes on the same machine?
> >>>>
> >>>>Rob
> >>>>
> >>>>Hamza KAYA wrote:
> >>>>
> >>>>
> >>>>>Thanks very much. Another problem I observed is about simultaneous
> >>>>>accesses. Normally, copying a 100MB file to pvfs takes approximately
> >>>>>10 secs. on my system.
> >>>>>
> >>>>>time -p ./copy testfile /pvfs2/testfile -> 9.76sec.
> >>>>>
> >>>>>However copying 4 files simultaneously gives the following results:
> >>>>>time -p ./copy testfile1 /pvfs2/testfile1 -> approx. 36sec.
> >>>>>time -p ./copy testfile2 /pvfs2/testfile2 -> approx. 36sec.
> >>>>>time -p ./copy testfile3 /pvfs2/testfile3 -> approx. 36sec.
> >>>>>time -p ./copy testfile4 /pvfs2/testfile4 -> approx. 36sec.
> >>>>>
> >>>>>[here 'copy' is a program that uses the system calls directly. However,
> >>>>>the same result occurs when using coreutils 'cp' and a simple program
> >>>>>that makes consecutive fread and fwrite calls.]
> >>>>>
> >>>>>All of the files used are 100MB, and most of the operations
> >>>>>overlapped. Another point is the CPU consumption of pvfs2-client-core:
> >>>>>when multiple accesses to one file occur, it consumes approx. 40% of
> >>>>>the CPU.
> >>>>>e.g.
> >>>>>      cp /pvfs2/test test1
> >>>>>      cp /pvfs2/test test2
> >>>>>      cp /pvfs2/test test3
> >>>>>      cp /pvfs2/test test4
> >>>>>
> >>>>>What may be the problem? Or is this situation a problem at all?
> >>>>>Thanks,
> >>>>>
> >>>>>--
> >>>>>Hamza
> >>>>>
> >>>>>On 11/18/05, Robert Latham <[EMAIL PROTECTED]> wrote:
> >>>>>
> >>>>>
> >>>>>
> >>>>>>On Thu, Nov 17, 2005 at 05:08:45PM +0000, Number Cruncher wrote:
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>>>I noticed Rob's post at
> >>>>>>>http://lists.gnu.org/archive/html/bug-coreutils/2005-11/msg00068.html
> >>>>>>>which discusses a patch to cp. Has this been accepted? Where can I
> >>>>>>>get a copy (excuse the pun!)?
> >>>>>>
> >>>>>>The attached patch to coreutils CVS, based largely on earlier efforts
> >>>>>>by Neill Miller, will make copy behave better with pvfs2.
> >>>>>>Unfortunately, this patch also modifies lib/Makefile.am, so you'll
> >>>>>>need fairly recent versions of autotools/automake/autowhatever.  It
> >>>>>>should apply OK against coreutils-5.92.
> >>>>>>
> >>>>>>I don't know if this will make it into coreutils-6.0, but i'll keep
> >>>>>>bugging the maintainers...
> >>>>>>
> >>>>>>==rob
> >>>>>>
> >>>>>>--
> >>>>>>Rob Latham
> >>>>>>Mathematics and Computer Science Division    A215 0178 EA2D B059 8CDF
> >>>>>>Argonne National Labs, IL USA                B29D F333 664A 4280 315B
> >>>>>>
> >>>>>>
> >>>>>>_______________________________________________
> >>>>>>PVFS2-users mailing list
> >>>>>>[email protected]
> >>>>>>http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>
> >
>

