On 10/15/2016 09:14 PM, Menaka Mohan wrote:
I have been able to successfully run the IOzone bench test on the
Gluster setup. Kindly find the results here .
But for the smallfile distributed I/O benchmark, the test fails. Since
we are taking 5 samples in each operation, "--operation create" fails
from the second sample onwards, reporting that the files already exist:
[1462.758118, None, None, None, None]. Necessary changes are to be made
such that --operation create happens only once while the others
(--operation read and --operation ls-l) run as usual. Kindly correct me
if I am wrong.
I am wondering why the cleanup step is not getting triggered and
removing the files in your test run. It seems to be doing the job in my
setup. Possibly something to check/debug when you get a chance.
Assuming that is the reason, I have made the necessary changes to the
code [sample_size for the create operation is 1] and ran the bench test.
Kindly find the results here .
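The adjustment described above can be sketched as a small sampling-plan
helper (an illustrative sketch with hypothetical names, not the actual
GlusterBench code):

```python
# Illustrative sketch of the modified sampling logic (hypothetical names,
# not the actual GlusterBench code): "create" populates the data set once,
# while read-only operations are sampled repeatedly against the same files.
DEFAULT_SAMPLES = 5

def samples_for(operation):
    # "create" would fail on samples 2..5 because the files already exist,
    # so it runs only once; "read" and "ls-l" keep all samples.
    return 1 if operation == "create" else DEFAULT_SAMPLES

def plan(operations):
    # Map each operation to the number of samples it will be run for.
    return {op: samples_for(op) for op in operations}
```

For example, plan(["create", "read", "ls-l"]) yields one create sample
and five samples each for read and ls-l.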
This should be fine for the exercise. The interest in running a test
more than once is to ensure that noise from a single run does not give
us misleading numbers. So for now, running this once is fine, unless you
get some time to debug as above.
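To illustrate why multiple samples matter, aggregation across runs could
look something like this (an illustrative sketch, not the actual bench
code; failed samples are recorded as None, as in the create run above):

```python
import statistics

def summarize(samples):
    # Drop failed samples (recorded as None) and report mean and sample
    # standard deviation across the rest. A single sample gives no spread
    # estimate, which is why running a test more than once is preferred.
    good = [s for s in samples if s is not None]
    if len(good) < 2:
        return (good[0] if good else None, None)
    return (statistics.mean(good), statistics.stdev(good))
```

With only the first create sample succeeding, summarize returns that one
value and no deviation, so nothing can be said about run-to-run noise.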
I will further work on it as a single script and post the output as
mentioned in the GitHub page. Since this is a very time-consuming
process, I will simultaneously work on the next task, which is to
investigate the tools to use for analyzing system resource usage
across the setup.
I would encourage looking at the next part of this task (i.e. tools to
analyze the system), as that can provide some more meaningful data for
analysis.
GlusterBench line that should clean up small files:
Gluster-devel mailing list