First of all, thanks for this client-side module, Chen. I believe it
will interest quite a lot of people doing performance testing.

Here are the remarks I gathered while reviewing your code:

On Mon, Sep 21, 2009 at 7:01 AM, Cao, Chen <k...@redhat.com> wrote:
> - Using NPB OpenMP implementation.
>
> Changes from the original patch:
> Remove the variant 'extra_params += smp 2', and add a restriction
> (no autotest.npb) under variant 'up', according to mgoldish's advice.
> Nevertheless, the user can run this test with smp=1 if he/she wants to.
>
> Signed-off-by: Cao, Chen <k...@redhat.com>
> ---
>  client/tests/kvm/kvm_tests.cfg.sample   |    4 +
>  client/tests/npb/control                |   28 ++++
>  client/tests/npb/enable-all-tests.patch |  233 +++++++++++++++++++++++++++++++
>  client/tests/npb/npb.py                 |   95 +++++++++++++
>  4 files changed, 360 insertions(+), 0 deletions(-)
>  create mode 100644 client/tests/npb/control
>  create mode 100644 client/tests/npb/enable-all-tests.patch
>  create mode 100644 client/tests/npb/npb.py
>
> diff --git a/client/tests/kvm/kvm_tests.cfg.sample b/client/tests/kvm/kvm_tests.cfg.sample
> index 285a38f..b03b7cd 100644
> --- a/client/tests/kvm/kvm_tests.cfg.sample
> +++ b/client/tests/kvm/kvm_tests.cfg.sample
> @@ -90,6 +90,9 @@ variants:
>             - disktest:
>                 test_name = disktest
>                 test_control_file = disktest.control
> +            - npb:
> +                test_name = npb
> +                test_control_file = npb.control
>
>     - linux_s3:     install setup
>         type = linux_s3
> @@ -567,6 +570,7 @@ linux_s3:
>
>  variants:
>     - @up:
> +        no autotest.npb
>     - smp2:
>         extra_params += " -smp 2"
>
> diff --git a/client/tests/npb/control b/client/tests/npb/control
> new file mode 100644
> index 0000000..d6c2adb
> --- /dev/null
> +++ b/client/tests/npb/control
> @@ -0,0 +1,28 @@
> +NAME = "NAS Parallel Benchmarks"
> +AUTHOR = "Cao, Chen <k...@redhat.com>"
> +TEST_TYPE = "CLIENT"
> +TEST_CLASS = "HARDWARE"
> +TEST_CATEGORY = "BENCHMARK"
> +TIME = "MEDIUM"
> +DOC = """\
> +Using NPB, OpenMP implementation.
> +
> +See http://www.nas.nasa.gov/Software/NPB/
> +"""
> +
> +# Supported tests (benchmarks):
> +#   bt.A bt.B bt.C bt.D bt.E bt.S bt.W
> +#   cg.A cg.B cg.C cg.S cg.W
> +#   dc.A dc.B dc.S dc.W
> +#   ep.A ep.B ep.C ep.D ep.E ep.S ep.W
> +#   ft.A ft.B ft.S ft.W
> +#   is.A is.B is.C is.S is.W
> +#   lu.A lu.B lu.C lu.S lu.W
> +#   mg.A mg.B mg.S mg.W
> +#   sp.A sp.B sp.C sp.D sp.E sp.S sp.W
> +#   ua.A ua.B ua.C ua.S ua.W
> +#
> +# Please refer to npb.py for more information about
> +# the arguments.
> +job.run_test(url='npb', tests='ep.A ep.B', ratio=0.5)
> +
> diff --git a/client/tests/npb/enable-all-tests.patch b/client/tests/npb/enable-all-tests.patch
> new file mode 100644
> index 0000000..f08a9d3
> --- /dev/null
> +++ b/client/tests/npb/enable-all-tests.patch
> @@ -0,0 +1,233 @@
> +diff --git a/NPB3.3-OMP/config/make.def b/NPB3.3-OMP/config/make.def
> +new file mode 100644
> +index 0000000..afffe7d
> +--- /dev/null
> ++++ b/NPB3.3-OMP/config/make.def
> +@@ -0,0 +1,161 @@
> ++#---------------------------------------------------------------------------
> ++#
> ++#                SITE- AND/OR PLATFORM-SPECIFIC DEFINITIONS.
> ++#
> ++#---------------------------------------------------------------------------
> ++
> ++#---------------------------------------------------------------------------
> ++# Items in this file will need to be changed for each platform.
> ++#---------------------------------------------------------------------------
> ++
> ++#---------------------------------------------------------------------------
> ++# Parallel Fortran:
> ++#
> ++# For CG, EP, FT, MG, LU, SP, BT and UA, which are in Fortran, the following
> ++# must be defined:
> ++#
> ++# F77        - Fortran compiler
> ++# FFLAGS     - Fortran compilation arguments
> ++# F_INC      - any -I arguments required for compiling Fortran
> ++# FLINK      - Fortran linker
> ++# FLINKFLAGS - Fortran linker arguments
> ++# F_LIB      - any -L and -l arguments required for linking Fortran
> ++#
> ++# compilations are done with $(F77) $(F_INC) $(FFLAGS) or
> ++#                            $(F77) $(FFLAGS)
> ++# linking is done with       $(FLINK) $(F_LIB) $(FLINKFLAGS)
> ++#---------------------------------------------------------------------------
> ++
> ++#---------------------------------------------------------------------------
> ++# This is the fortran compiler used for Fortran programs
> ++#---------------------------------------------------------------------------
> ++F77 = gfortran
> ++# This links fortran programs; usually the same as ${F77}
> ++FLINK = $(F77)
> ++
> ++#---------------------------------------------------------------------------
> ++# These macros are passed to the linker
> ++#---------------------------------------------------------------------------
> ++F_LIB  =
> ++
> ++#---------------------------------------------------------------------------
> ++# These macros are passed to the compiler
> ++#---------------------------------------------------------------------------
> ++F_INC =
> ++
> ++#---------------------------------------------------------------------------
> ++# Global *compile time* flags for Fortran programs
> ++#---------------------------------------------------------------------------
> ++FFLAGS        = -O -fopenmp
> ++
> ++#---------------------------------------------------------------------------
> ++# Global *link time* flags. Flags for increasing maximum executable
> ++# size usually go here.
> ++#---------------------------------------------------------------------------
> ++FLINKFLAGS = -O -fopenmp
> ++
> ++
> ++#---------------------------------------------------------------------------
> ++# Parallel C:
> ++#
> ++# For IS and DC, which are in C, the following must be defined:
> ++#
> ++# CC         - C compiler
> ++# CFLAGS     - C compilation arguments
> ++# C_INC      - any -I arguments required for compiling C
> ++# CLINK      - C linker
> ++# CLINKFLAGS - C linker flags
> ++# C_LIB      - any -L and -l arguments required for linking C
> ++#
> ++# compilations are done with $(CC) $(C_INC) $(CFLAGS) or
> ++#                            $(CC) $(CFLAGS)
> ++# linking is done with       $(CLINK) $(C_LIB) $(CLINKFLAGS)
> ++#---------------------------------------------------------------------------
> ++
> ++#---------------------------------------------------------------------------
> ++# This is the C compiler used for C programs
> ++#---------------------------------------------------------------------------
> ++CC = cc
> ++# This links C programs; usually the same as ${CC}
> ++CLINK = $(CC)
> ++
> ++#---------------------------------------------------------------------------
> ++# These macros are passed to the linker
> ++#---------------------------------------------------------------------------
> ++C_LIB  = -lm
> ++
> ++#---------------------------------------------------------------------------
> ++# These macros are passed to the compiler
> ++#---------------------------------------------------------------------------
> ++C_INC =
> ++
> ++#---------------------------------------------------------------------------
> ++# Global *compile time* flags for C programs
> ++# DC inspects the following flags (preceded by "-D"):
> ++#
> ++# IN_CORE - computes all views and checksums in main memory (if there is
> ++# enough memory)
> ++#
> ++# VIEW_FILE_OUTPUT - forces DC to write the generated views to disk
> ++#
> ++# OPTIMIZATION - turns on some nonstandard DC optimizations
> ++#
> ++# _FILE_OFFSET_BITS=64
> ++# _LARGEFILE64_SOURCE - are standard compiler flags which allow working with
> ++# files larger than 2GB.
> ++#---------------------------------------------------------------------------
> ++CFLAGS        = -O
> ++
> ++#---------------------------------------------------------------------------
> ++# Global *link time* flags. Flags for increasing maximum executable
> ++# size usually go here.
> ++#---------------------------------------------------------------------------
> ++CLINKFLAGS = -O
> ++
> ++
> ++#---------------------------------------------------------------------------
> ++# Utilities C:
> ++#
> ++# This is the C compiler used to compile C utilities.  Flags required by
> ++# this compiler go here also; typically there are few flags required; hence
> ++# there are no separate macros provided for such flags.
> ++#---------------------------------------------------------------------------
> ++UCC   = cc
> ++
> ++
> ++#---------------------------------------------------------------------------
> ++# Destination of executables, relative to subdirs of the main directory.
> ++#---------------------------------------------------------------------------
> ++BINDIR        = ../bin
> ++
> ++
> ++#---------------------------------------------------------------------------
> ++# The variable RAND controls which random number generator
> ++# is used. It is described in detail in README.install.
> ++# Use "randi8" unless there is a reason to use another one.
> ++# Other allowed values are "randi8_safe", "randdp" and "randdpvec"
> ++#---------------------------------------------------------------------------
> ++RAND   = randi8
> ++# The following is highly reliable but may be slow:
> ++# RAND   = randdp
> ++
> ++
> ++#---------------------------------------------------------------------------
> ++# The variable WTIME is the name of the wtime source code module in the
> ++# common directory.
> ++# For most machines,       use wtime.c
> ++# For SGI power challenge: use wtime_sgi64.c
> ++#---------------------------------------------------------------------------
> ++WTIME  = wtime.c
> ++
> ++
> ++#---------------------------------------------------------------------------
> ++# Enable if either Cray (not Cray-X1) or IBM:
> ++# (no such flag for most machines: see common/wtime.h)
> ++# This is used by the C compiler to pass the machine name to common/wtime.h,
> ++# where the C/Fortran binding interface format is determined
> ++#---------------------------------------------------------------------------
> ++# MACHINE     =       -DCRAY
> ++# MACHINE     =       -DIBM
> ++
> ++
> +diff --git a/NPB3.3-OMP/config/suite.def b/NPB3.3-OMP/config/suite.def
> +new file mode 100644
> +index 0000000..7342195
> +--- /dev/null
> ++++ b/NPB3.3-OMP/config/suite.def
> +@@ -0,0 +1,60 @@
> ++# config/suite.def
> ++# This file is used to build several benchmarks with a single command.
> ++# Typing "make suite" in the main directory will build all the benchmarks
> ++# specified in this file.
> ++# Each line of this file contains a benchmark name and the class.
> ++# The name is one of "cg", "is", "dc", "ep", "mg", "ft", "sp",
> ++#  "bt", "lu", and "ua".
> ++# The class is one of "S", "W", "A" through "E"
> ++# (except that no classes C,D,E for DC and no class E for IS and UA).
> ++# No blank lines.
> ++# The following example builds sample sizes of all benchmarks.
> ++ft    A
> ++ft    B
> ++ft    S
> ++ft    W
> ++mg    A
> ++mg    B
> ++mg    S
> ++mg    W
> ++sp    A
> ++sp    B
> ++sp    C
> ++sp    S
> ++sp    W
> ++lu    A
> ++lu    B
> ++lu    C
> ++lu    S
> ++lu    W
> ++bt    A
> ++bt    B
> ++bt    C
> ++bt    S
> ++bt    W
> ++is    A
> ++is    B
> ++is    C
> ++is    S
> ++is    W
> ++ep    A
> ++ep    B
> ++ep    C
> ++ep    D
> ++ep    E
> ++ep    S
> ++ep    W
> ++cg    A
> ++cg    B
> ++cg    C
> ++cg    S
> ++cg    W
> ++ua    A
> ++ua    B
> ++ua    C
> ++ua    S
> ++ua    W
> ++dc    A
> ++dc    B
> ++dc    S
> ++dc    W
> diff --git a/client/tests/npb/npb.py b/client/tests/npb/npb.py
> new file mode 100644
> index 0000000..7eb22d0
> --- /dev/null
> +++ b/client/tests/npb/npb.py
> @@ -0,0 +1,95 @@
> +import os, shutil, logging, re
> +from autotest_lib.client.bin import test, utils
> +from autotest_lib.client.common_lib import error
> +
> +class npb(test.test):
> +    """
> +    Run NAS Parallel Benchmarks as client test.
> +
> +    NOTE: Since we use gfortran to compile these benchmarks,
> +    this test might not be able to run on old OSes.
> +    """
> +    version = 1
> +
> +    # http://www.nas.nasa.gov/Resources/Software/npb.html

The above URL could be moved to a @see section in the class docstring.
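For instance (just a sketch of what it could look like):

class npb(test.test):
    """
    Run NAS Parallel Benchmarks as a client test.

    NOTE: Since we use gfortran to compile these benchmarks,
    this test might not be able to run on old OSes.

    @see: http://www.nas.nasa.gov/Resources/Software/npb.html
    """
    version = 1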

> +    def setup(self, tarball='NPB3.3.tar.gz', tests='', ratio=0.5):
> +        # get the parameters for run_once()
> +        self.tests = tests
> +        self.ratio = ratio

Since ratio can be evaluated automatically (1/n_cpus), we can remove
it from the test parameters. Also, it's better to move the class
attribute definitions to the initialize() method.
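Something along these lines (just a sketch; it derives the tolerance
from utils.count_cpus(), which the client test API provides):

    def initialize(self, tests=''):
        # Get the test parameters here rather than in setup().
        self.tests = tests
        # Auto evaluate the tolerance ratio from the CPU count
        # instead of taking it as a test parameter.
        self.ratio = 1.0 / utils.count_cpus()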

> +        tarball = utils.unmap_url(self.bindir, tarball, self.tmpdir)
> +        utils.extract_tarball_to_dir(tarball, self.srcdir)
> +        os.chdir(self.srcdir)
> +
> +        # prepare the makefile and benchmarks to generate.
> +        utils.system('patch -p1 < ../enable-all-tests.patch')
> +        utils.system('cd NPB3.3-OMP && make suite', timeout=1800)

I generally dislike placing timeouts on those commands, because they
make the client test less generic:

 * It's hard to predict how long a given system will take to execute
this particular command (remember, users other than KVM users may
want to run it).
 * This is the sort of command that you just don't expect to hang
during execution.

I propose removing pretty much all the timeouts, for the sake of
making the module a generic client-side test.
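That is, simply:

        utils.system('patch -p1 < ../enable-all-tests.patch')
        utils.system('cd NPB3.3-OMP && make suite')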

> +
> +    def run_once(self):
> +        """
> +        Run each benchmark twice, with different number of threads, and
> +        compare the durations.
> +        """
> +        os.chdir(self.srcdir)
> +
> +        # get the tests to run
> +        tests_list = self.tests.split()
> +        if len(tests_list) == 0:
> +            logging.warn('No tests (benchmarks) provided. exit.')
> +            return
> +
> +        # run all tests
> +        #
> +        #   first with the default OMP_NUM_THREADS = cpu_nr of target machine.
> +        #   then, with OMP_NUM_THREADS = 1
> +        #
> +        #   comparing the duration (Time in seconds) of the two runs.
> +        #   if duration(cpu_nr) / duration(1) < cpu_nr * (1 - ratio) \
> +        #           or duration(cpu_nr) / duration(1) > cpu_nr * (1 + ratio):
> +        #       raise error

Move the above comment to the docstring of the method.
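For example (a sketch that just restates the comment):

    def run_once(self):
        """
        Run each benchmark twice, with different number of threads.

        First with the default OMP_NUM_THREADS (the cpu_nr of the
        target machine), then with OMP_NUM_THREADS=1, comparing the
        duration (Time in seconds) of the two runs. If

            duration(cpu_nr) / duration(1) < cpu_nr * (1 - ratio)

        or

            duration(cpu_nr) / duration(1) > cpu_nr * (1 + ratio)

        an error is raised.
        """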

> +        for itest in tests_list:
> +            itest_cmd = os.path.join('NPB3.3-OMP/bin/', itest)
> +            result = utils.run(itest_cmd, timeout=1800, ignore_status=True)

* A better flow for executing those commands would be to remove
ignore_status and wrap the command executions in a try...except
block.
* Also, since this is a generic client side *benchmark*, it'd be
tremendously useful to have the main performance figures written to a
keyval file.
* For benchmarks that involve several different sorts of tests, it's
better to hold the exception throwing until the end of the test:
introduce a failure counter and fail the test only after all
benchmarks have been attempted, if any failure was recorded.
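Putting the three points together, a rough sketch (names here are
illustrative; utils.run() raises error.CmdError when the command
fails and ignore_status is not set, and write_perf_keyval() comes
from the base test class):

        failures = []
        for itest in tests_list:
            itest_cmd = os.path.join('NPB3.3-OMP/bin/', itest)
            try:
                result = utils.run(itest_cmd)
            except error.CmdError, e:
                # Count the failure and keep going with the
                # remaining benchmarks.
                failures.append('%s: %s' % (itest_cmd, e))
                continue
            # Record the main performance figure in the keyval file.
            m = re.search('Time in seconds\s*=\s*(.*)\n', result.stdout)
            if m:
                self.write_perf_keyval(
                        {'%s_time_seconds' % itest: float(m.groups()[0])})

        if failures:
            raise error.TestFail('%d benchmark(s) failed: %s' %
                                 (len(failures), ', '.join(failures)))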

> +            if result.exit_status != 0:
> +                raise error.TestFail('%s failed\noutput:\n%s' % \
> +                                                (itest_cmd, result.stdout))
> +            logging.info(result.stdout)
> +
> +            # get cpu number of current machine,
> +            # which is default thread number.
> +            m = re.search('Total threads\s*=\s*(.*)\n', result.stdout)
> +            # We will use this integer with float point vars later.
> +            full_thrds = float(m.groups()[0])

A nice sanity check would be to compare the number of threads
detected by the benchmark with the number of CPUs reported by the
test API utils.count_cpus().
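Something like (a sketch):

            if full_thrds != utils.count_cpus():
                logging.warning('Benchmark detected %d threads, but the '
                                'machine has %d CPUs', int(full_thrds),
                                utils.count_cpus())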

> +            # get duration for full_threads running.
> +            m = re.search('Time in seconds\s*=\s*(.*)\n', result.stdout)
> +            time_full_thrds = float(m.groups()[0])
> +
> +            # repeat the execution with single thread.
> +            itest_cmd = ''.join(['OMP_NUM_THREADS=1 ', itest_cmd])
> +            result = utils.run(itest_cmd, timeout=1800, ignore_status=True)
> +            if result.exit_status != 0:
> +                raise error.TestFail('%s failed\noutput:\n%s' % \
> +                                                (itest_cmd, result.stdout))
> +            logging.info(result.stdout)
> +
> +            m = re.search('Time in seconds\s*=\s*(.*)\n', result.stdout)
> +            time_one_thrd = float(m.groups()[0])
> +
> +            # check durations
> +            ratio = self.ratio
> +            if time_one_thrd / time_full_thrds > full_thrds * (1 + ratio) or \
> +                time_one_thrd / time_full_thrds < full_thrds * (1 - ratio):
> +                raise error.TestFail('NPB failed, durations violates threads')

The above calculation can be broken down into smaller pieces with
meaningful names, and the intermediate values can be printed as debug
statements, to make it easier to figure out exactly what happened in
case of a failure.
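For example (a sketch, with illustrative variable names):

            speedup = time_one_thrd / time_full_thrds
            expected_speedup = full_thrds
            upper_bound = expected_speedup * (1 + self.ratio)
            lower_bound = expected_speedup * (1 - self.ratio)
            logging.debug('%s: speedup %.2f, expected bounds [%.2f, %.2f]',
                          itest, speedup, lower_bound, upper_bound)
            if not lower_bound <= speedup <= upper_bound:
                raise error.TestFail('%s: speedup %.2f out of the expected '
                                     'bounds [%.2f, %.2f]' %
                                     (itest, speedup, lower_bound,
                                      upper_bound))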

> +            logging.info('%s succeeded.' % itest)
> +
> +
> +    def cleanup(self):
> +        """
> +        Cleans up source directory.
> +        """
> +        if os.path.isdir(self.srcdir):
> +            shutil.rmtree(self.srcdir)

By default the framework already cleans up the source directory, so
this cleanup() method is not needed. If for some reason you want to
*preserve* the source dir, you can put:

preserve_srcdir = True

at the beginning of the test class.
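That is:

class npb(test.test):
    version = 1
    preserve_srcdir = True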

I've posted an updated patch with my comments applied. Please check
it and see whether you agree!

-- 
Lucas