[ceph-users] Call for Submission for the IO500 List

2019-09-12 Thread John Bent
Call for Submission

Deadline: 10 November 2019 AoE

The IO500 is now accepting and encouraging submissions for the upcoming 5th
IO500 list, to be revealed at SC19 in Denver, Colorado. Once again, we are
also accepting submissions to the 10 Node I/O Challenge to encourage
submission of small-scale results. The new ranked lists will be announced
at our SC19 BoF [2]. We hope to see you, and your results, there. We have
updated our submission rules [3]. This year, we will also have a new list
for the Student Cluster Competition, as the IO500 is used for extra points
during this competition.

The benchmark suite is designed to be easy to run and the community has
multiple active support channels to help with any questions. Please submit
and we look forward to seeing many of you at SC19! Please note that
submissions of all sizes are welcome; the site has customizable sorting, so
it is possible to submit on a small system and still get a very good
per-client score, for example. Additionally, the list is about much more
than just the raw rank; all submissions help the community by collecting
and publishing a wider corpus of data. More details below.

Following the success of the Top500 in collecting and analyzing historical
trends in supercomputer technology and evolution, the IO500 was created in
2017, published its first list at SC17, and has grown exponentially since
then. The need for such an initiative has long been known within
High-Performance Computing; however, defining appropriate benchmarks had
long been challenging. Despite this challenge, the community, after long
and spirited discussion, finally reached consensus on a suite of benchmarks
and a metric for resolving the scores into a single ranking.

The multi-fold goals of the benchmark suite are as follows:

   1. Maximizing simplicity in running the benchmark suite
   2. Encouraging complexity in tuning for performance
   3. Allowing submitters to highlight their “hero run” performance numbers
   4. Forcing submitters to simultaneously report performance for
   challenging IO patterns.

Specifically, the benchmark suite includes a hero-run of both IOR and
mdtest configured however possible to maximize performance and establish an
upper-bound for performance. It also includes an IOR and mdtest run with
highly prescribed parameters in an attempt to determine a lower-bound.
Finally, it includes a namespace search, as this has been determined to be
a highly sought-after feature in HPC storage systems that has historically
not been well-measured. Submitters are encouraged to share their tuning
insights for publication.

The goals of the community are also multi-fold:

   1. Gather historical data for the sake of analysis and to aid
   predictions of storage futures
   2. Collect tuning information to share valuable performance
   optimizations across the community
   3. Encourage vendors and designers to optimize for workloads beyond
   “hero runs”
   4. Establish bounded expectations for users, procurers, and
   administrators

10 Node I/O Challenge

At SC, we will continue the 10 Node Challenge. This challenge is conducted
using the regular IO500 benchmark, however, with the rule that exactly 10
compute nodes must be used to run the benchmark (one exception is the find,
which may use 1 node). You may use any shared storage with, e.g., any
number of servers. We will announce the result in a separate derived list
and in the full list, but not on the ranked IO500 list at io500.org.

Birds-of-a-feather

Once again, we encourage you to submit [1], to join our community, and to
attend our BoF “The IO500 and the Virtual Institute of I/O” at SC19,
November 19th, 12:15-1:15pm, room 205-207, where we will announce the new
IO500 list, the 10 Node Challenge list, and the Student Cluster Competition
list. We look forward to answering any questions or concerns you might
have.

   - [1] http://io500.org/submission
   - [2] https://www.vi4io.org/io500/bofs/sc19/start
   - [3] https://www.vi4io.org/io500/rules/submission

The IO500 Committee
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] IO500 @ ISC19

2019-04-18 Thread John Bent
Call for Submission

*Deadline*: 10 June 2019 AoE

The IO500 is now accepting and encouraging submissions for the upcoming 4th
IO500 list to be revealed at ISC-HPC 2019 in Frankfurt, Germany. Once
again, we are also accepting submissions to the 10 node I/O challenge to
encourage submission of small scale results. The new ranked lists will be
announced at our ISC19 BoF [2]. We hope to see you, and your results, there.

The benchmark suite is designed to be easy to run and the community has
multiple active support channels to help with any questions. Please submit
and we look forward to seeing many of you at ISC 2019! Please note that
submissions of all sizes are welcome; the site has customizable sorting, so
it is possible to submit on a small system and still get a very good
per-client score, for example. Additionally, the list is about much more
than just the raw rank; all submissions help the community by collecting
and publishing a wider corpus of data. More details below.

Following the success of the Top500 in collecting and analyzing historical
trends in supercomputer technology and evolution, the IO500 was created in
2017, published its first list at SC17, and has grown exponentially since
then. The need for such an initiative has long been known within
High-Performance Computing; however, defining appropriate benchmarks had
long been challenging. Despite this challenge, the community, after long
and spirited discussion, finally reached consensus on a suite of benchmarks
and a metric for resolving the scores into a single ranking.

The multi-fold goals of the benchmark suite are as follows:

   1. Maximizing simplicity in running the benchmark suite
   2. Encouraging complexity in tuning for performance
   3. Allowing submitters to highlight their “hero run” performance numbers
   4. Forcing submitters to simultaneously report performance for
   challenging IO patterns.

Specifically, the benchmark suite includes a hero-run of both IOR and
mdtest configured however possible to maximize performance and establish an
upper-bound for performance. It also includes an IOR and mdtest run with
highly prescribed parameters in an attempt to determine a lower-bound.
Finally, it includes a namespace search as this has been determined to be a
highly sought-after feature in HPC storage systems that has historically
not been well-measured. Submitters are encouraged to share their tuning
insights for publication.
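
To make the hero/prescribed distinction concrete, here is a rough sketch of
what the two styles of invocation can look like when driven by hand; the
process counts, paths, transfer sizes, and file counts below are
illustrative placeholders rather than the official io500.sh settings (the
harness constructs the real command lines for you):

    # "Hero" IOR: free to tune transfer size, block size, file-per-process, etc.
    mpirun -np 160 ior -w -r -F -e -t 2m -b 8g \
        -o /mnt/pfs/io500/ior_easy/testfile

    # Prescribed ("hard") IOR: small, awkward transfers to a single shared file.
    mpirun -np 160 ior -w -r -t 47008 -b 47008 -s 100000 \
        -o /mnt/pfs/io500/ior_hard/testfile

    # "Hero" mdtest: many small files in per-process directories.
    mpirun -np 160 mdtest -n 10000 -u -d /mnt/pfs/io500/mdtest_easy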

The goals of the community are also multi-fold:

   1. Gather historical data for the sake of analysis and to aid
   predictions of storage futures
   2. Collect tuning information to share valuable performance
   optimizations across the community
   3. Encourage vendors and designers to optimize for workloads beyond
   “hero runs”
   4. Establish bounded expectations for users, procurers, and
   administrators

10 Node I/O Challenge

At ISC, we will announce our second IO-500 award for the 10 Node Challenge.
This challenge is conducted using the regular IO-500 benchmark, however,
with the rule that exactly *10 compute nodes* must be used to run the
benchmark (one exception is find, which may use 1 node). You may use any
shared storage with, e.g., any number of servers. When submitting for the
IO-500 list, you can opt in for “Participate in the 10 compute node
challenge only”, in which case we won't include the results in the ranked
list. Other 10 compute node submissions will be included in the full list
and in the ranked list. We will announce the result in a separate derived
list and in the full list but not on the ranked IO-500 list at io500.org.
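
For illustration, one hypothetical way to satisfy the node-count rule is to
constrain the MPI launch (or the batch allocation) to exactly ten compute
nodes; the host lists, process counts, and launcher syntax below are
placeholders and will vary by site:

    # Slurm: request exactly ten nodes, then launch the benchmark inside
    # that allocation.
    salloc --nodes=10 --ntasks-per-node=16

    # Plain MPI hostfile: hosts.10 lists exactly ten compute nodes
    # (Open MPI syntax shown).
    mpirun -np 160 -npernode 16 -hostfile ./hosts.10 <benchmark command>
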
Birds-of-a-feather

Once again, we encourage you to submit [1], to join our community, and to
attend our BoF “The IO-500 and the Virtual Institute of I/O” at ISC 2019
[2] where we will announce the fourth IO500 list and second 10 node
challenge list. The current list includes results from BeeGFS, DataWarp,
IME, Lustre, Spectrum Scale, and WekaIO. We hope that the next list has
even more.

We look forward to answering any questions or concerns you might have.

   - [1] http://io500.org/submission
   - [2] The BoF schedule will be announced soon

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] IO500 CFS for SC18

2018-10-25 Thread John Bent
Dear all,

The IO500 is now accepting and encouraging submissions for the upcoming
IO500 list revealed at Supercomputing 2018 in Dallas, Texas.  We also
announce the 10 compute node I/O challenge to encourage submission of
small-scale results. The new ranked lists will be announced at our SC18 BOF
on Wednesday, November 14th at 5:15pm. We hope to see you, and your
results, there.

Deadline: 10 November 2018 AoE

The benchmark suite is designed to be easy to run and the community has
multiple active support channels to help with any questions.  Please submit
and we look forward to seeing many of you at SC 2018!  Please note that
submissions of all sizes are welcome; the site has customizable sorting, so
it is possible to submit on a small system and still get a very good
per-client score, for example. Additionally, the list is about much more
than just the raw rank; all submissions help the community by collecting
and publishing a wider corpus of data.  More details below.

Following the success of the Top500 in collecting and analyzing historical
trends in supercomputer technology and evolution, the IO500 was created in
2017 and published its first list at SC17. The need for such an initiative
has long been known within High-Performance Computing; however, defining
appropriate benchmarks had long been challenging. Despite this challenge,
the community, after long and spirited discussion, finally reached
consensus on a suite of benchmarks and a metric for resolving the scores
into a single ranking.

The multi-fold goals of the benchmark suite are as follows:

* Maximizing simplicity in running the benchmark suite
* Encouraging complexity in tuning for performance
* Allowing submitters to highlight their “hero run” performance numbers
* Forcing submitters to simultaneously report performance for challenging
IO patterns.

Specifically, the benchmark suite includes a hero-run of both IOR and
mdtest configured however possible to maximize performance and establish
an upper-bound for performance. It also includes an IOR and mdtest run with
highly prescribed parameters in an attempt to determine a lower-bound.
Finally, it includes a namespace search as this has been determined to be a
highly sought-after feature in HPC storage systems that have historically
not been well measured.  Submitters are encouraged to share their tuning
insights for publication.

The goals of the community are also multi-fold:

* Gather historical data for the sake of analysis and to aid predictions of
storage futures
* Collect tuning information to share valuable performance optimizations
across the community
* Encourage vendors and designers to optimize for workloads beyond “hero
runs”
* Establish bounded expectations for users, procurers, and administrators

10 Compute Node I/O Challenge

At SC, we will announce another IO-500 award for the "10 Compute Node I/O
Challenge". This challenge is conducted using the regular IO-500 benchmark,
however, with the rule that exactly 10 compute nodes must be used to run
the benchmark (one exception is find, which may use 1 node). You may use
any shared storage with, e.g., any number of servers. When submitting for
the IO-500 list, you can opt in for “Participate in the 10 compute node
challenge only”, in which case we won't include the results in the ranked
list. Other 10 compute node submissions will be included in the full list
and in the ranked list.  We will announce the result in a separate derived
list and in the full list but not on the ranked IO-500 list at io500.org.

Birds-of-a-feather

Once again, we encourage you to submit [1], to join our community, and to
attend our BoF “The IO-500 and the Virtual Institute of I/O” at SC 2018 [2]
where we will announce the second ever IO500 list. The current list
includes results from BeeGFS, DataWarp, IME, Lustre, and Spectrum Scale.
We hope that the next list has even more.

We look forward to answering any questions or concerns you might have.

[1] http://io500.org/submission
[2] https://sc18.supercomputing.org/presentation/?id=bof134&sess=sess390
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] IO500 Call for Submissions for ISC 2018

2018-05-23 Thread John Bent
IO500 Call for Submission
Deadline: 23 June 2018 AoE

The IO500 is now accepting and encouraging submissions for the upcoming
IO500 list revealed at ISC 2018 in Frankfurt, Germany. The benchmark suite
is designed to be easy to run and the community has multiple active support
channels to help with any questions. Please submit and we look forward to
seeing many of you at ISC 2018! Please note that submissions of all sizes
are welcome; the site has customizable sorting, so it is possible to submit
on a small system and still get a very good per-client score, for example.
Additionally, the list is about much more than just the raw rank; all
submissions help the community by collecting and publishing a wider corpus
of data. More details below.

Following the success of the Top500 in collecting and analyzing historical
trends in supercomputer technology and evolution, the IO500 was created in
2017 and published its first list at SC17. The need for such an initiative
has long been known within High Performance Computing; however, defining
appropriate benchmarks had long been challenging. Despite this challenge,
the community, after long and spirited discussion, finally reached
consensus on a suite of benchmarks and a metric for resolving the scores
into a single ranking.

The multi-fold goals of the benchmark suite are as follows:

* Maximizing simplicity in running the benchmark suite
* Encouraging complexity in tuning for performance
* Allowing submitters to highlight their “hero run” performance numbers
* Forcing submitters to simultaneously report performance for challenging
IO patterns.

Specifically, the benchmark suite includes a hero-run of both IOR and
mdtest configured however possible to maximize performance and establish an
upper-bound for performance. It also includes an IOR and mdtest run with
highly prescribed parameters in an attempt to determine a lower-bound.
Finally, it includes a namespace search as this has been determined to be a
highly sought-after feature in HPC storage systems that has historically
not been well-measured. Submitters are encouraged to share their tuning
insights for publication.

The goals of the community are also multi-fold:

* Gather historical data for the sake of analysis and to aid predictions of
storage futures
* Collect tuning information to share valuable performance optimizations
across the community
* Encourage vendors and designers to optimize for workloads beyond “hero
runs”
* Establish bounded expectations for users, procurers, and administrators

Once again, we encourage you to submit (see http://io500.org/submission),
to join our community, and to attend our BoF “The IO-500 and the Virtual
Institute of I/O” at ISC 2018 where we will announce the second ever IO500
list. The current list includes results from BeeGFS, DataWarp, IME,
Lustre, and Spectrum Scale. We hope that the next list has even more!

We look forward to answering any questions or concerns you might have.

Thank you!

IO500 Committee
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] IO-500 now accepting submissions

2017-10-27 Thread John Bent
Hello Ceph community,

After BoFs at last year's SC and the last two ISC's, the IO-500 is
formalized and is now accepting submissions in preparation for our first
IO-500 list at this year's SC BoF:
http://sc17.supercomputing.org/presentation/?id=bof108&sess=sess319

The goal of the IO-500 is simple: to improve parallel file systems by
ensuring that sites publish results of both "hero" and "anti-hero" runs and
by sharing the tuning and configuration they applied to achieve those
results.

After receiving feedback from a few trial users, the framework is
significantly improved:
> git clone https://github.com/VI4IO/io-500-dev
> cd io-500-dev
> ./utilities/prepare.sh
> ./io500.sh
> # tune and rerun
> # email results to sub...@io500.org

This, perhaps with a bit of tweaking (please consult our 'doc' directory
for troubleshooting), should get a very small toy problem up and running
quickly.  It then becomes a bit challenging to tune the problem size as
well as the underlying file system configuration (e.g. striping parameters)
to get a valid, and impressive, result.
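
As one example of the kind of file system tuning meant here, on Lustre the
stripe layout of the benchmark directories is a common knob; the paths
below are placeholders, and other file systems have their own equivalents:

    # Spread the bandwidth-oriented data directory across all OSTs...
    lfs setstripe -c -1 /mnt/lustre/io500/ior_easy
    # ...and keep the metadata-heavy directory on a single stripe.
    lfs setstripe -c 1 /mnt/lustre/io500/mdtest_easy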

The basic format of the benchmark is to run both a "hero" and "antihero"
IOR test as well as a "hero" and "antihero" mdtest.  The write/create phase
of these tests must last for at least five minutes to ensure that the test
is not measuring cache speeds.
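
For the IOR phases, the five-minute requirement can be thought of in terms
of IOR's stonewalling deadline; the sketch below is illustrative only (the
values are placeholders, and the provided scripts configure this for you):

    # -D 300 sets IOR's stonewalling deadline in seconds; the block size is
    # set large enough that the deadline, not the data volume, ends the
    # write phase, so it runs for roughly 300 seconds.
    mpirun -np 160 ior -w -D 300 -t 2m -b 9999g -F \
        -o /mnt/pfs/io500/ior_easy/testfile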

One of the more challenging aspects is that there is a requirement to
search through the metadata of the files that this benchmark creates.
Currently we provide a simple serial version of this test (i.e. the GNU
find command) as well as a simple python MPI parallel tree walking
program.  Even with the MPI program, the find can take an extremely long
time to finish.  You are encouraged to replace these provided
tools with anything of your own devising that satisfies the required
functionality.  This is one area where we particularly hope to foster
innovation as we have heard from many file system admins that metadata
search in current parallel file systems can be painfully slow.
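
As a purely illustrative example of the serial baseline, the search can be
expressed as a single GNU find over the benchmark's output tree; the path,
size, and timestamp predicates below are placeholders for whatever the
harness actually requires:

    # Count files created after a reference timestamp and matching a given size.
    find /mnt/pfs/io500/datafiles -newer ./timestampfile -size 3901c | wc -l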

Now is your chance to show the community just how awesome we all know Ceph
to be.  We are excited to introduce this benchmark and foster this
community.  We hope you give the benchmark a try and join our community if
you haven't already.  Please let us know right away in any of our various
communications channels (as described in our documentation) if you
encounter any problems with the benchmark or have questions about tuning or
have suggestions for others.

We hope to see your results in email and to see you in person at the SC BoF.

Thanks,

IO 500 Committee
John Bent, Julian Kunkel, Jay Lofstead
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com