On 23.01.2014 21:15, Martin Frb wrote:
On 23/01/2014 20:04, Florian Klämpfl wrote:
On 23.01.2014 20:52, Martin Frb wrote:
On 23/01/2014 19:35, Florian Klämpfl wrote:

I think this is hard to achieve as well.

Why?
I consider it complicated, and it only covers cases one can foresee.
Some statistical analysis of benchmark timings and procedure sizes is
imo much more general.


Ok, so we were talking about 2 different targets.

You were talking (if I understand correctly) about a general test for any
and all forms of regressions (with regard to speed or size, not with
regard to functionality) in code generation.

Yes.

This is indeed hard to test.
Size may be doable by comparing against a known size that was once
achieved. Size may increase or decrease, but then the tests need
updating (a decrease must be recorded as the new baseline, so that an
increase from the new optimum will then be detected).
Speed is indeed very hard, since even a benchmark's results may vary.
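The size side of this could be sketched as a baseline comparison: fail when a test's generated size grows past the best size ever recorded, and demand a baseline update when it shrinks, so later growth from the new optimum is still caught. The function and status strings below are hypothetical, not part of any existing FPC test harness:

```python
def check_size(baseline, test_name, current_size):
    """Compare a test's generated-code size against its best known size.

    baseline: dict mapping test name -> smallest size ever achieved.
    Returns one of four (made-up) status strings.
    """
    best = baseline.get(test_name)
    if best is None:
        baseline[test_name] = current_size
        return "recorded"          # first run: remember this size
    if current_size > best:
        return "regression"        # grew past the known optimum
    if current_size < best:
        return "update-baseline"   # improved: lower the baseline, or
                                   # future growth from here goes unseen
    return "ok"
```

The point of the "update-baseline" status is exactly the parenthesis above: if a decrease is not folded into the baseline, a later increase back to the old size would pass silently.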

Yes, but running each benchmark n=5 times and benchmarking daily should enable one to identify trends.
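One way such trend identification could look: take the median of the n runs per day (to damp run-to-run benchmark noise) and fit a least-squares slope over the day index. This is only an illustrative sketch; the function names are made up:

```python
import statistics

def daily_medians(runs_per_day):
    """runs_per_day: one list of n timings per day (e.g. n=5).
    The median damps outliers from a noisy benchmark run."""
    return [statistics.median(day) for day in runs_per_day]

def trend_slope(values):
    """Least-squares slope of values against day index
    (seconds per day; positive means the benchmark is getting slower)."""
    n = len(values)
    mean_x = (n - 1) / 2
    mean_y = statistics.fmean(values)
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(values))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den
```

A persistently positive slope across many days then flags a gradual speed regression that no single noisy run would reveal.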



I was talking about checking for specific/known code snippets that are
known to be inefficient (so anything that the peephole generator
can/could detect). This is only a small subset of the possible
speed/size regressions, but it is at least something.
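Such a check could be sketched as pattern matching over the generated assembler. The two patterns below are hypothetical stand-ins; real peephole rules are target-specific and far more numerous:

```python
import re

# Hypothetical examples of sequences a peephole optimizer should have
# removed; real rules depend on the target CPU.
BAD_PATTERNS = [
    (r"mov\s+(\w+),\s*(\w+)\s*\n\s*mov\s+\2,\s*\1", "redundant mov pair"),
    (r"add\s+\w+,\s*0\b", "add of zero"),
]

def find_inefficiencies(asm_text):
    """Return the names of known-inefficient patterns found in asm_text."""
    return [name for pattern, name in BAD_PATTERNS
            if re.search(pattern, asm_text)]
```

This only catches the snippets someone thought to encode, which is the "cases one can foresee" limitation raised earlier in the thread.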

Yes and no. It is extra code, and extra code is always bad ;) and it requires a separate compiler run. I wouldn't waste effort on it.
_______________________________________________
fpc-devel maillist  -  fpc-devel@lists.freepascal.org
http://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-devel