Hi, Egor!

On Tue, Dec 2, 2008 at 5:16 PM, Egor Pasko <[EMAIL PROTECTED]> wrote:
> Aleksey, is there a combined solution, where I push the red button,
> which makes the silver bullet to fire? :)

If the gun's license was granted to Schrödinger... :)
> My concern is: is it effective to look through configurations one by
> one to find issues in this compiler?

Of course not! My original thought was as follows: automate the checking process first, then:

a. Slide to the next bug.
b. Reproduce it:
   c1. Is this a real bug? Mark it with an assert() or something similar.
   c2. Is this crash "normal"? Throw some meaningful message.
d. Re-run all tests, see which of them fall into the new category, and mark all emconfs within this category as faulty.
e. Exclude the new category's bugs from the pool and go back to (a).

It's more like the Sieve of Eratosthenes :) (a rough sketch of the loop is at the end of this mail)

> However, there is one idea .. why are you classifying the
> configurations based only by end result status? clusters are obviously
> too big. I would also take the configurations as a parameter for
> clustering failures. Yes, you'll need a fair amount of machine
> learning efforts to cluster them. But that may pay off really
> well.

Yeah! Glad we are thinking in the same direction -- trying to cluster the bugs. I was solving a similar problem in my school project with Kohonen maps; hopefully I'll revive that project some day for this occasion. We could also look into Matlab ;) (a toy Kohonen-map sketch is at the end of this mail, too)

Anyway, this sounds like a good "just-for-fun" student project.

Thanks,
Aleksey.
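
P.S. A minimal sketch of the "sieve" loop above, in Python. run_with_emconf(), the Failure object, and the signature() heuristic are all hypothetical glue standing in for whatever harness actually drives the JIT with a given emconf:

def signature(failure):
    # Collapse a failure into a category key, e.g. its kind plus the
    # first line of the assert/exit message. Crude, but enough for bucketing.
    first_line = failure.message.splitlines()[0] if failure.message else ""
    return (failure.kind, first_line)

def sieve_triage(emconfs, run_with_emconf):
    # Initial automated run: keep only the configurations that failed.
    # run_with_emconf() is assumed to return None on success and a
    # Failure(kind, message) object otherwise.
    pool = {}
    for conf in emconfs:
        result = run_with_emconf(conf)
        if result is not None:
            pool[conf] = result

    known = {}  # signature -> list of emconfs marked faulty for it
    while pool:
        # (a) slide to the next bug
        conf, failure = next(iter(pool.items()))
        # (b, c1/c2) reproduce and classify it; in the real tool this is
        # where the assert() or the meaningful message would be added
        sig = signature(failure)
        # (d) see which of the remaining failures fall into the new
        # category and mark those emconfs as faulty within it
        same_bucket = [c for c, r in pool.items() if signature(r) == sig]
        known[sig] = same_bucket
        # (e) exclude the new category from the pool and go back to (a)
        for c in same_bucket:
            del pool[c]
    return known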
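
And, just to show the shape of the clustering idea, a toy 1-D Kohonen (self-organizing) map in Python/NumPy. The feature encoding -- emconf options turned into numbers, plus whatever we derive from the failure itself -- is entirely made up here; picking good features is exactly the "machine learning effort" mentioned above:

import numpy as np

def train_kohonen(vectors, units=8, epochs=200, lr0=0.5, radius0=2.0, seed=0):
    # vectors: one row per failure, columns are the encoded emconf options
    # plus any failure features we decide on
    rng = np.random.default_rng(seed)
    X = np.asarray(vectors, dtype=float)
    W = rng.random((units, X.shape[1]))          # weight vector per map unit
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)            # decaying learning rate
        radius = max(radius0 * (1.0 - t / epochs), 0.5)
        for x in X[rng.permutation(len(X))]:
            bmu = int(np.argmin(np.linalg.norm(W - x, axis=1)))  # best matching unit
            for u in range(units):
                # pull the BMU and its neighbours on the 1-D map toward x
                h = np.exp(-((u - bmu) ** 2) / (2.0 * radius ** 2))
                W[u] += lr * h * (x - W[u])
    return W

def cluster_of(x, W):
    # the unit whose weights are closest to x is the failure's cluster
    return int(np.argmin(np.linalg.norm(W - np.asarray(x, dtype=float), axis=1)))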
