Gonzalo,

I haven't been able to reproduce your results. iozone runs to completion for me with a similar setup and iozone parameters.

One thing about the iozone command line you're using, though: the -U option will remount the filesystem after every test run, which isn't necessary for PVFS, and I doubt it matches any of the workloads you intend. I'm not sure that's the cause of the problem you're seeing, but could you try it without the -U (see the example below)?
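
That would just be your original command with the -U argument dropped and everything else unchanged:

iozone -Rab /home/pvfs2/Desktop/salida-redhat02-test03.xls -g 4G -f /mnt/pvfs2/iozone-file.tmp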

On the workload topic, it sounds like you're going to be doing a lot of read-heavy I/O (video streaming is pretty much read-only, with large block sizes). Keep in mind that while iozone is a good overall filesystem benchmark, it won't be representative of your large, read-heavy workload unless you restrict it to tests that do reads, such as 1, 7, or 10 (see the sketch below).
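
As a rough sketch, individual tests can be selected with iozone's -i flag; a read-oriented run against your file might look something like the line below. Test 0 (write) is included only because iozone needs it to create the file before the read tests (1 = read/re-read, 7 = fread/re-fread, 10 = pread/re-pread) can run, and the output path is simply the one from your original command:

iozone -Rab /home/pvfs2/Desktop/salida-redhat02-test03.xls -g 4G -i 0 -i 1 -i 7 -i 10 -f /mnt/pvfs2/iozone-file.tmp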

Also, while PVFS doesn't normally do client-side caching (Lustre does), our upcoming release will have a flag that can be set on a file to make it immutable. This might be especially useful for you: you could set the immutable flag once a video has been uploaded, allowing caching to take place on the clients. I can keep you apprised of the status of that release and of how to set the immutable flag if you're interested.

-sam


On Nov 2, 2007, at 6:43 PM, Gonzalo Soto Subiabre wrote:

Hi everyone,

This is my first post and, regretfully, it is to ask for help.
I'm doing a performance evaluation of distributed parallel filesystems using iozone. The filesystems tested are Lustre, PVFS2, and GlusterFS. The aim of this evaluation is to choose a parallel filesystem for the storage server of a video streaming system. But I have a problem: when the iozone test runs over PVFS2, it fails at the point where the file size is 2 GB and the record length is 64 KB.


My testbed is this:

[EMAIL PROTECTED] ~]# pvfs2-statfs -h -m /mnt/pvfs2/
aggregate statistics:
---------------------------------------

        fs_id: 1965017224
        total number of servers (meta and I/O): 4
        handles available (meta and I/O):       4294967286
        handles total (meta and I/O):           4294967290
        bytes available:                        111.3G
        bytes total:                            111.3G

NOTE: The aggregate total and available statistics are calculated based
on an algorithm that assumes data will be distributed evenly; thus
the free space is equal to the smallest I/O server capacity
multiplied by the number of I/O servers.  If this number seems
unusually small, then check the individual server statistics below
to look for problematic servers.

meta server statistics:
---------------------------------------

server: tcp://redhat02:3334
        RAM total        : 756.9M
        RAM free         : 372.1M
        uptime           : 2 hours, 12 minutes
        load averages    : 0 0 0
        handles available: 1717986912
        handles total    : 1717986916
        bytes available  : 28.2G
        bytes total      : 34.3G
        mode: serving both metadata and I/O data


I/O server statistics:
---------------------------------------

server: tcp://redhat02:3334
        RAM total        : 756.9M
        RAM free         : 372.1M
        uptime           : 2 hours, 12 minutes
        load averages    : 0 0 0
        handles available: 1717986912
        handles total    : 1717986916
        bytes available  : 28.2G
        bytes total      : 34.3G
        mode: serving both metadata and I/O data

server: tcp://redhat03:3334
        RAM total        : 488.2M
        RAM free         : 10.4M
        uptime           : 2 hours, 09 minutes
        load averages    : 2784 10592 5536
        handles available: 858993458
        handles total    : 858993458
        bytes available  : 27.8G
        bytes total      : 33.9G
        mode: serving only I/O data

server: tcp://redhat04:3334
        RAM total        : 1003.6M
        RAM free         : 605.3M
        uptime           : 2 hours, 07 minutes
        load averages    : 4096 1312 224
        handles available: 858993458
        handles total    : 858993458
        bytes available  : 64.3G
        bytes total      : 70.4G
        mode: serving only I/O data

server: tcp://redhat05:3334
        RAM total        : 488.2M
        RAM free         : 96.7M
        uptime           : 2 hours, 05 minutes
        load averages    : 0 0 0
        handles available: 858993458
        handles total    : 858993458
        bytes available  : 27.8G
        bytes total      : 33.9G
        mode: serving only I/O data

################################################################################

Therefore my cluster has 1 metadata server, 4 I/O servers, and 1 client.

The iozone test command is this:
[EMAIL PROTECTED] ~]# iozone -Rab /home/pvfs2/Desktop/salida-redhat02-test03.xls -g 4G -f /mnt/pvfs2/iozone-file.tmp -U /mnt/pvfs2

and the error message is this:
fsync: Bad file descriptor

################################################################################

Why did I choose IOzone? Because of its variety of tests and the Excel output options. If anyone can help me understand why I get this error message and how I can resolve it, that would be great.
Thanks a lot, cheers.


--
Gonzalo Soto Subiabre
Computer Engineer. FVT Chile Ltda.
[EMAIL PROTECTED]
msn: [EMAIL PROTECTED]

_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
