All those changes sound good to me. I just took the original unixbench
control file and cleaned it up a bit. Thanks for taking care of this and
I'll remember the binary rule in the future, so as not to clog up the
mailing list. :)

- dale

On Sun, Jan 16, 2011 at 3:24 PM, Lucas Meneghel Rodrigues
<[email protected]> wrote:
> On Fri, 2011-01-14 at 19:25 -0800, Dale Curtis wrote:
>> Apologies, I sent the wrong patch earlier. v2 below.
>
> Hi Dale, thanks for your patch. Some comments below:
>
>> Signed-off-by: Dale Curtis <[email protected]>
>> ---
>>  client/tests/unixbench5/Makefile.patch      |   11 ++
>>  client/tests/unixbench5/control             |   26 +++
>>  client/tests/unixbench5/unixbench-5.1.3.tgz |  Bin 0 -> 140695 bytes
>>  client/tests/unixbench5/unixbench5.py       |  226 +++++++++++++++++++++++++++
>>  4 files changed, 263 insertions(+), 0 deletions(-)
>>  create mode 100644 client/tests/unixbench5/Makefile.patch
>>  create mode 100644 client/tests/unixbench5/control
>>  create mode 100644 client/tests/unixbench5/unixbench-5.1.3.tgz
>>  create mode 100644 client/tests/unixbench5/unixbench5.py
>>
>> diff --git a/client/tests/unixbench5/Makefile.patch b/client/tests/unixbench5/Makefile.patch
>> new file mode 100644
>> index 0000000..f38438c
>> --- /dev/null
>> +++ b/client/tests/unixbench5/Makefile.patch
>> @@ -0,0 +1,11 @@
>> +--- Makefile.bak     2011-01-14 10:45:12.000000000 -0800
>> ++++ Makefile 2011-01-14 10:46:54.000000000 -0800
>> +@@ -52,7 +52,7 @@
>> + # COMPILER CONFIGURATION: Set "CC" to the name of the compiler to use
>> + # to build the binary benchmarks.  You should also set "$cCompiler" in the
>> + # Run script to the name of the compiler you want to test.
>> +-CC=gcc
>> ++CC?=gcc
>> +
>> + # OPTIMISATION SETTINGS:
>> +
>> diff --git a/client/tests/unixbench5/control b/client/tests/unixbench5/control
>> new file mode 100644
>> index 0000000..862a521
>> --- /dev/null
>> +++ b/client/tests/unixbench5/control
>> @@ -0,0 +1,26 @@
>> +NAME = 'Unix Bench 5'
>> +AUTHOR = '[email protected]'
>> +TIME = 'MEDIUM'
>> +PURPOSE = 'Measure system level performance.'
>> +CRITERIA = 'This test is a benchmark.'
>> +TEST_CLASS = 'Kernel'
>> +TEST_CATEGORY = 'Benchmark'
>> +TEST_TYPE = 'client'
>> +DOC = """
>> +This test measures system-wide performance by running the following tests:
>> +  - Dhrystone - focuses on string handling.
>> +  - Whetstone - measures floating point operations.
>> +  - Execl Throughput - measures the number of execl calls per second.
>> +  - File Copy
>> +  - Pipe throughput
>> +  - Pipe-based context switching
>> +  - Process creation - number of times a process can fork and reap
>> +  - Shell Scripts - number of times a process can start and reap a script
>> +  - System Call Overhead - estimates the cost of entering and leaving the
>> +    kernel.
>> +
>> +For more information visit:
>> +http://code.google.com/p/byte-unixbench/
>> +"""
>> +
>
> About the tarball, you don't need to include this particular one because
> I can download it from the google code page. Also, git seemed unable to
> reconstruct the tarball from the binary diff somehow.
>
>>
>> diff --git a/client/tests/unixbench5/unixbench5.py b/client/tests/unixbench5/unixbench5.py
>> new file mode 100644
>> index 0000000..95f1057
>> --- /dev/null
>> +++ b/client/tests/unixbench5/unixbench5.py
>> @@ -0,0 +1,226 @@
>> +import os, re
>> +from autotest_lib.client.bin import test, utils
>> +from autotest_lib.client.common_lib import error
>> +
>> +
>> +class unixbench5(test.test):
>> +    version = 1
>> +
>> +    def initialize(self):
>> +        self.job.require_gcc()
>> +        self.err = None
>
> ^ Isn't it better to initialize self.err with an empty string? This way
> you can simplify the code that looks for errors and appends them to
> self.err.
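A minimal sketch of the suggested pattern (the `ErrorCollector` class and the
matched string here are illustrative only, not the actual unixbench5.py
parser):

```python
# With self.err initialized to '' instead of None, failing lines can be
# appended directly -- no "if self.err is None" special case is needed.

class ErrorCollector:
    def __init__(self):
        self.err = ''  # empty string instead of None

    def check_line(self, line):
        # Append any line that looks like a failed benchmark result
        # (the matched string is a made-up example).
        if 'no measured results' in line:
            self.err += line + '\n'


collector = ErrorCollector()
for line in ['Dhrystone 2: ok', 'Whetstone: no measured results']:
    collector.check_line(line)
if collector.err:
    print('Errors:\n' + collector.err)
```
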
>
>> +
>> +
>> +    # http://code.google.com/p/byte-unixbench/downloads/detail?name=unixbench-5.1.3.tgz&can=2&q=
>
> ^ http://byte-unixbench.googlecode.com/files/unixbench-5.1.3.tgz
>
>> +    def setup(self, tarball = 'unixbench-5.1.3.tgz'):
>> +        tarball = utils.unmap_url(self.bindir, tarball, self.tmpdir)
>> +        utils.extract_tarball_to_dir(tarball, self.srcdir)
>> +        os.chdir(self.srcdir)
>> +
>> +        utils.system('patch -p0 < ../Makefile.patch')
>> +        utils.make()
>> +
>> +
>> +    def run_once(self, args=''):
>> +        vars = 'UB_TMPDIR="%s" UB_RESULTDIR="%s"' % (self.tmpdir,
>> +                                                     self.resultsdir)
>> +        os.chdir(self.srcdir)
>> +        self.report_data = utils.system_output(vars + ' ./Run ' + args)
>
> ^ People might want to use the raw output produced by the benchmark to
> run their own analysis/postprocess scripts, so I'd write a
> raw_output_[iteration] file on the results dir like this:
>
>        self.results_path = os.path.join(self.resultsdir,
>                                         'raw_output_%s' % self.iteration)
>        utils.open_write_close(self.results_path, self.report_data)
>
> I thought we could also pass retain_output=True to the
> utils.system_output() call, because that'd allow the results to be
> written to the test DEBUG logs, but on second thought, it's better to
> make the test less space-intensive.
>
> If you think it's OK to change the URL and write the raw results to a
> file, please let me know and I can make the changes and commit unixbench5.
>
> Cheers,
>
> Lucas
>
>
_______________________________________________
Autotest mailing list
[email protected]
http://test.kernel.org/cgi-bin/mailman/listinfo/autotest
