John:

Standard Linux utilities like find and ls do not necessarily perform well
with OrangeFS either.  We have custom programs that bypass the kernel
module, and Walt has converted some of these utilities to use the OrangeFS
library, e.g., ofs_rm and ofs_ls.
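As a rough illustration of the kind of search the benchmark asks for (not
one of Walt's converted utilities), a file search can be parallelized even
at the shell level by walking independent subtrees concurrently.  Everything
below -- the demo tree, the paths, and the '*.dat' predicate -- is
hypothetical:

```shell
# Build a tiny demo tree (hypothetical paths; in practice this would be
# the directory tree the benchmark populated)
ROOT=/tmp/pfind-demo
RESULTS=/tmp/pfind-results.txt
rm -rf "$ROOT"
mkdir -p "$ROOT/sub1" "$ROOT/sub2"
touch "$ROOT/sub1/a.dat" "$ROOT/sub2/b.dat" "$ROOT/sub2/c.txt"

# Naive parallelization: one background find per top-level subdirectory,
# so independent subtrees are searched concurrently; wait joins them all
: > "$RESULTS"
for d in "$ROOT"/*/; do
    find "$d" -type f -name '*.dat' >> "$RESULTS" &
done
wait
sort "$RESULTS"
```

This only balances well when the top-level subdirectories are of similar
size; a real replacement would distribute directories dynamically (or, for
OrangeFS, talk to the metadata servers through the library directly).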

Will let you know how it goes with the benchmark.  We just completed a set
of tests using NVMe drives.  As you already discovered, with OrangeFS the
best configuration was one server per NVMe drive.  We didn't have time to
try tuning the filesystems on these drives.  We discovered that just
running two dd commands at the same time against two different NVMe-backed
filesystems didn't increase the throughput!  We must be doing something
wrong.  Ugh!
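For reference, the concurrent dd test described above can be sketched as
follows.  The mount points are hypothetical stand-ins (in the real runs
they would be the two NVMe-backed filesystems), and conv=fsync is used
here so the sketch runs anywhere:

```shell
# Hypothetical stand-ins for two separate NVMe-backed mount points
# (e.g. /mnt/nvme0 and /mnt/nvme1 in the real test)
FS1=/tmp/nvme-demo0
FS2=/tmp/nvme-demo1
mkdir -p "$FS1" "$FS2"

# Write 64 MiB to each filesystem concurrently; conv=fsync flushes data
# to the backing store before dd exits (on real hardware, add
# oflag=direct to bypass the page cache so the reported throughput
# reflects the devices rather than memory)
dd if=/dev/zero of="$FS1/test.dat" bs=1M count=64 conv=fsync 2>/dev/null &
dd if=/dev/zero of="$FS2/test.dat" bs=1M count=64 conv=fsync 2>/dev/null &
wait
ls -l "$FS1/test.dat" "$FS2/test.dat"
```

If two such writers don't scale, the bottleneck is usually something
shared above the devices (a single server thread pool, network link, or
PCIe lane allocation) rather than the drives themselves.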

Thanks for thinking of us!
Becky

On Sat, Oct 28, 2017 at 11:52 AM, John Bent <[email protected]> wrote:

> Thanks Becky.  Excited that you are looking into this.  Nice hearing from
> you and looking forward to seeing Walt.
>
> If you look into it and find the benchmark too hard to get running, please
> do me a favor and let us know what is difficult.  :)
>
> We have been working hard to make it easier to run.  Currently we are
> discovering that the "find" command can be intractably slow.  On Lustre
> anyway.  Maybe OrangeFS won't have that problem.  :)  So we have been
> working today to add a -stonewall feature to the parallel find that we are
> providing.  Remember however that you can use whatever custom program you
> want to satisfy the "find" requirements.
>
> Thanks,
>
> John
>
> On Sat, Oct 28, 2017 at 9:44 AM, Rebecca Ligon <[email protected]>
> wrote:
>
>> Thanks, John!
>>
>> We will look into doing this.  Walt will be the only one from the OrangeFS
>> team at SC; hopefully, you two will have a chance to talk.
>>
>> Hope you are doing well!
>>
>> Becky Ligon
>>
>> Sent from my iPad
>>
>> On Oct 28, 2017, at 12:26 AM, John Bent <[email protected]> wrote:
>>
>> Hello OrangeFS community,
>>
>> After BoFs at last year's SC and the last two ISCs, the IO-500 has been
>> formalized and is now accepting submissions in preparation for our first
>> IO-500 list at this year's SC BoF:
>> http://sc17.supercomputing.org/presentation/?id=bof108&sess=sess319
>>
>> The goal of the IO-500 is simple: to improve parallel file systems by
>> ensuring that sites publish results of both "hero" and "anti-hero" runs and
>> by sharing the tuning and configuration they applied to achieve those
>> results.
>>
>> After receiving feedback from a few trial users, the framework is
>> significantly improved:
>> > git clone https://github.com/VI4IO/io-500-dev
>> > cd io-500-dev
>> > ./utilities/prepare.sh
>> > ./io500.sh
>> > # tune and rerun
>> > # email results to [email protected]
>>
>> This, perhaps with a bit of tweaking (please consult our 'doc' directory
>> for troubleshooting), should get a very small toy problem up and running
>> quickly.  It then becomes a bit challenging to tune the problem size as
>> well as the underlying file system configuration (e.g. striping
>> parameters) to get a valid, and impressive, result.
>>
>> The basic format of the benchmark is to run both a "hero" and "antihero"
>> IOR test as well as a "hero" and "antihero" mdtest.  The write/create phase
>> of these tests must last for at least five minutes to ensure that the test
>> is not measuring cache speeds.
>>
>> One of the more challenging aspects is that there is a requirement to
>> search through the metadata of the files that this benchmark creates.
>> Currently we provide a simple serial version of this test (i.e. the GNU
>> find command) as well as a simple python MPI parallel tree walking
>> program.  Even with the MPI program, the find can take an extremely long
>> time to finish.  You are encouraged to replace these provided tools with
>> anything of your own devising that satisfies the required
>> functionality.  This is one area where we particularly hope to foster
>> innovation as we have heard from many file system admins that metadata
>> search in current parallel file systems can be painfully slow.
>>
>> Now is your chance to show the community just how awesome we all know
>> OrangeFS to be.  We are excited to introduce this benchmark and foster this
>> community.  We hope you give the benchmark a try and join our community if
>> you haven't already.  Please let us know right away in any of our various
>> communications channels (as described in our documentation) if you
>> encounter any problems with the benchmark or have questions about tuning or
>> have suggestions for others.
>>
>> We hope to see your results in email and to see you in person at the SC
>> BoF.
>>
>> Thanks,
>>
>> IO 500 Committee
>> John Bent, Julian Kunkel, Jay Lofstead
>>
>> _______________________________________________
>> Pvfs2-users mailing list
>> [email protected]
>> http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
>>
>>
>