I just remembered something: the Windows implementation of the ANSI / POSIX timing routines is especially poor, e.g.

  https://stackoverflow.com/questions/18346879/timer-accuracy-c-clock-vs-winapis-qpc-or-timegettime

So unfortunately, if you want to measure time accurately on Windows, you might have to do something different from plain ANSI C.  If you search for "poor timer on Windows" or just "highres timer os" on most search engines, you will find various discussions about it.
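For illustration, one common workaround is to use QueryPerformanceCounter on Windows and keep clock() everywhere else.  A minimal sketch, not taken from the actual benchmark code (the `get_time_us' name and the `_WIN32' guard are just placeholders):

  #ifdef _WIN32
  #include <windows.h>
  #else
  #include <time.h>
  #endif

  /* Return a timestamp in microseconds.  On Windows, clock() often ticks
     only every ~1-16 ms, so QueryPerformanceCounter gives much finer
     resolution; elsewhere, fall back to ANSI clock().  Note that clock()
     measures processor time while QPC measures elapsed wall-clock time,
     so the two are not strictly equivalent.  */
  static double
  get_time_us( void )
  {
  #ifdef _WIN32
    LARGE_INTEGER  freq, now;

    QueryPerformanceFrequency( &freq );  /* ticks per second    */
    QueryPerformanceCounter( &now );     /* current tick count  */
    return 1E6 * (double)now.QuadPart / (double)freq.QuadPart;
  #else
    return 1E6 * (double)clock() / (double)CLOCKS_PER_SEC;
  #endif
  }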
On Saturday, 16 September 2023 at 17:21:49 BST, Ahmet Göksu <ah...@goksu.in> wrote:

  Hello,

  - I have changed the * and the sentence
  - changed the links to relative

  I have already changed the way the timing works: I only start the
  benchmark timer at the beginning and stop it at the end.  That is, it
  times chunks, not single iterations; the timer starts at the beginning
  of a chunk and stops at the end, and the result is then divided by the
  size of the chunk.  Because it does not time single iterations, it is
  already a bulk test.

  > BTW, I suggest that you add another sentence, explaining *why* there
  > are two values at all.

  Actually, I don't understand the reason well myself, but the two values
  may differ even with the same flags.  I need help with this.

  As I said before, I run the benchmark on a Mac, so it uses this branch
  of the `if' clause:

    return 1E6 * (double)clock() / (double)CLOCKS_PER_SEC;

  The code seems to produce more accurate results after splitting the run
  into chunks.  Do the results look satisfactory on your machine?

  Best,
  Goksu
  goksu.in

  On 12 Sep 2023 18:17 +0300, Werner LEMBERG <w...@gnu.org>, wrote:

  > If a value in the 'Iterations' column is given as '*x* | *y*', values
  > *x* and *y* give the number of iterations in the baseline and the
  > benchmark test, respectively.
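For reference, the chunk-based measurement described in the quoted message amounts to roughly the following sketch (`CHUNK_SIZE', `run_one_iteration', and `time_chunk' are placeholders, not names from the actual benchmark):

  #include <time.h>

  #define CHUNK_SIZE  100          /* placeholder value                  */

  static void
  run_one_iteration( void )        /* stand-in for the benchmarked call  */
  {
    /* ... */
  }

  /* One start/stop pair around a whole chunk of iterations, so the
     coarse timer resolution is amortized over CHUNK_SIZE calls.  */
  static double
  time_chunk( void )
  {
    double  start, end;
    int     i;

    start = 1E6 * (double)clock() / (double)CLOCKS_PER_SEC;
    for ( i = 0; i < CHUNK_SIZE; i++ )
      run_one_iteration();
    end = 1E6 * (double)clock() / (double)CLOCKS_PER_SEC;

    return ( end - start ) / CHUNK_SIZE;   /* average per iteration (us) */
  }

This is only meant to illustrate why the chunked measurement hides the coarse timer granularity; the poor Windows resolution of clock() itself is a separate issue, as noted above.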