Hi Rob
Well, after Sam's answer I ran a new iozone test over my PVFS2
cluster, this time following Sam's advice to run only the read tests
without the -U option (which mounts/unmounts the volume). The results
were satisfactory; the test ended successfully and I was able to finish
the benchmarking of pvfs2. Below is the iozone command, including
the options used:

# iozone -Rab /home/pvfs2/Desktop/salida-redhat04-test05.xls -g 4G -i 1 -i 2
-i 3 -i 5 -i 7 -i 10 -i 12

  -R : generate an Excel-style report.
  -a : full automatic mode: run tests over a wide range of record and
file sizes.
  -b : write the results to a binary, Excel-compatible output file.
  -g : set the maximum file size for -a mode; here, 4 GB.
  -i : specify which tests to run; here, only the read-type tests.
    1 : read/reread.
    2 : random read/write.
    3 : read-backwards.
    5 : stride-read.
    7 : fread/re-fread.
    10 : pread/re-pread.
    12 : preadv/re-preadv.
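For convenience, the same read-only run can be kept in a small shell
sketch. This is just an illustration: the output path is the one from
this thread, and the wrapper itself (variable names, the commented-out
eval) is hypothetical, not part of iozone.

```shell
# Sketch: compose the read-only iozone command line (no -U, per Sam's
# advice, so the volume is not mounted/unmounted between tests).
OUT=/home/pvfs2/Desktop/salida-redhat04-test05.xls
READ_TESTS="-i 1 -i 2 -i 3 -i 5 -i 7 -i 10 -i 12"

CMD="iozone -Rab $OUT -g 4G $READ_TESTS"
echo "$CMD"      # inspect the command line before running it
# eval "$CMD"    # uncomment to actually run the benchmark
```

Echoing the command first makes it easy to double-check the test list
before committing to a multi-hour run.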

I haven't analyzed the results in detail yet, but they are within the
expected range.
Thanks to Sam Lang and Rob Ross for their help, and to the entire pvfs2
user community.
Kind Regards

Gonzalo


On Nov 5, 2007 10:56 AM, Rob Ross <[EMAIL PROTECTED]> wrote:

> Thanks Gonzalo. While we're gathering this info, are you running GigE?
> We'll want to know, once things are working correctly, whether
> performance is in line.
>
> Rob
>
> Gonzalo Soto Subiabre wrote:
> > Hello.
> >
> > I'm using:
> > - pvfs-2.6.3.
> > - kernel 2.6.9-5.EL.
> > - i686 CPU architecture and
> > - Ethernet TCP/IP.
> >
> >
> >
> > On Nov 4, 2007 7:46 PM, Rob Ross <[EMAIL PROTECTED]> wrote:
> >
> >     Hi,
> >
> >     Can you tell us what version of PVFS, what kernel, what CPU
> >     architecture, what network you're using?
> >
> >     Thanks,
> >
> >     Rob
> >
> >     Gonzalo Soto Subiabre wrote:
> >      > Hi to everyone.
> >      >
> >      > This is my first post and, regretfully, it is to ask for help.
> >      > I'm doing a performance evaluation of distributed parallel
> >      > filesystems using iozone. The filesystems tested are Lustre, PVFS2,
> >      > and GlusterFS. This evaluation has an aim: to choose a parallel
> >      > filesystem for a video streaming system's storage server.
> >      > But I have a problem: when the iozone test runs over pvfs2, it
> >      > fails at the point where the file size is 2 GB and the record
> >      > length is 64 KB.
> >      >
> >      >
> >      > My testbed is this:
> >      >
> >      > [EMAIL PROTECTED] ~]# pvfs2-statfs -h -m /mnt/pvfs2/
> >      > aggregate statistics:
> >      > ---------------------------------------
> >      >
> >      >         fs_id: 1965017224
> >      >         total number of servers (meta and I/O): 4
> >      >         handles available (meta and I/O):       4294967286
> >      >         handles total (meta and I/O):           4294967290
> >      >         bytes available:                        111.3G
> >      >         bytes total:                            111.3G
> >      >
> >      > NOTE: The aggregate total and available statistics are calculated
> >      > based on an algorithm that assumes data will be distributed evenly;
> >      > thus the free space is equal to the smallest I/O server capacity
> >      > multiplied by the number of I/O servers.  If this number seems
> >      > unusually small, then check the individual server statistics below
> >      > to look for problematic servers.
> >      >
> >      > meta server statistics:
> >      > ---------------------------------------
> >      >
> >      > server: tcp://redhat02:3334
> >      >         RAM total        : 756.9M
> >      >         RAM free         : 372.1M
> >      >         uptime           : 2 hours, 12 minutes
> >      >         load averages    : 0 0 0
> >      >         handles available: 1717986912
> >      >         handles total    : 1717986916
> >      >         bytes available  : 28.2G
> >      >         bytes total      : 34.3G
> >      >         mode: serving both metadata and I/O data
> >      >
> >      >
> >      > I/O server statistics:
> >      > ---------------------------------------
> >      >
> >      > server: tcp://redhat02:3334
> >      >         RAM total        : 756.9M
> >      >         RAM free         : 372.1M
> >      >         uptime           : 2 hours, 12 minutes
> >      >         load averages    : 0 0 0
> >      >         handles available: 1717986912
> >      >         handles total    : 1717986916
> >      >         bytes available  : 28.2G
> >      >         bytes total      : 34.3G
> >      >         mode: serving both metadata and I/O data
> >      >
> >      > server: tcp://redhat03:3334
> >      >         RAM total        : 488.2M
> >      >         RAM free         : 10.4M
> >      >         uptime           : 2 hours, 09 minutes
> >      >         load averages    : 2784 10592 5536
> >      >         handles available: 858993458
> >      >         handles total    : 858993458
> >      >         bytes available  : 27.8G
> >      >         bytes total      : 33.9G
> >      >         mode: serving only I/O data
> >      >
> >      > server: tcp://redhat04:3334
> >      >         RAM total        : 1003.6M
> >      >         RAM free         : 605.3M
> >      >         uptime           : 2 hours, 07 minutes
> >      >         load averages    : 4096 1312 224
> >      >         handles available: 858993458
> >      >         handles total    : 858993458
> >      >         bytes available  : 64.3G
> >      >         bytes total      : 70.4G
> >      >         mode: serving only I/O data
> >      >
> >      > server: tcp://redhat05:3334
> >      >         RAM total        : 488.2M
> >      >         RAM free         : 96.7M
> >      >         uptime           : 2 hours, 05 minutes
> >      >         load averages    : 0 0 0
> >      >         handles available: 858993458
> >      >         handles total    : 858993458
> >      >         bytes available  : 27.8G
> >      >         bytes total      : 33.9G
> >      >         mode: serving only I/O data
> >      >
> >      >
> >
> ###############################################################################
> >      >
> >      > Therefore my cluster has 1 metadata server, 4 I/O servers, and 1
> >      > client.
> >      >
> >      > The iozone test command is this:
> >      > [EMAIL PROTECTED] ~]# iozone -Rab
> >      > /home/pvfs2/Desktop/salida-redhat02-test03.xls -g 4G -f
> >      > /mnt/pvfs2/iozone-file.tmp -U /mnt/pvfs2
> >      >
> >      > and the error message is this:
> >      > fsync: Bad file descriptor
> >      >
> >      >
> >
> ###############################################################################
>
> >      >
> >      >
> >      > Why did I choose IOzone? Because of its variety of tests and its
> >      > Excel output options.
> >      > If anyone can help me understand why I get this error message and
> >      > how I can resolve it, that would be great.
> >      > Thanks a lot, cheers.
> >      >
> >      >
> >      > --
> >      > Gonzalo Soto Subiabre
> >      > Computer Engineer. FVT Chile Ltda.
> >      > [EMAIL PROTECTED]
> >      > msn: [EMAIL PROTECTED]
> >      >
> >      >
> >      >
> >
> ------------------------------------------------------------------------
> >
> >      >
> >      > _______________________________________________
> >      > Pvfs2-users mailing list
> >      > [email protected]
> >      > http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
> >
> >
> >
> >
> > --
> > Gonzalo Soto Subiabre
> > [EMAIL PROTECTED]
> > msn: [EMAIL PROTECTED]
>
_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
