On Feb 18, 2008 2:52 PM, Jason Warner <[EMAIL PROTECTED]> wrote:
>
> On Feb 18, 2008 2:46 PM, Viet Nguyen <[EMAIL PROTECTED]> wrote:
> > Yo Joe,
> >
> > This is a really subjective problem that we're trying to tackle.
> > There's no barometer that can tell us how well the Diagnostic
> > Utility is working until we have significant participation from
> > end-users.
> >
> > I think a good way to implement something like this is to have
> > broad categories for different types of issues. There was a mention
> > of using an MD5 hash to map one person's problem to a wiki page. I
> > don't think that is the best solution, because a hash is too
> > issue-specific: two reports of the same underlying problem that
> > differ by even one character hash to different pages.
> >
> > If we imagine each wiki page as a "bucket" and each reported
> > problem as an element belonging to one or more buckets, then with
> > 1000 buckets of 1 element each it will be really hard to diagnose
> > anything. However, with only 100 buckets (meaning broader
> > categories) of 10 elements each, a common problem will (hopefully)
> > be easier to detect and fix.
> >
> Use exception package names to create a reporting structure?
That, but the method calls in the stack trace should also be somewhat
similar (e.g. similar method calls from the same classes). I think it
should be a mixture of signals, not just one. I've put a rough sketch
of what I mean at the bottom of this mail.

> >
> > Maybe someone with more data mining experience can help out
> > (because I have none), but I think we should break a reported
> > issue down into keywords and use those as search criteria for
> > the wiki.
> >
> > I think I got off topic... but I like Jason's answer.
> >
> > Thanks,
> > Viet
> >
> > Also, I think this post is related to the topic here too:
> > http://www.nabble.com/exception-handling---first-failure-diagnostic-capture-to15505337.html#a15505337
> >
> --
> ~Jason Warner
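To make "a mixture of signals" concrete, here is a rough sketch in
Java of the two signals above: a coarse bucket key taken from the
exception's package name, and a crude similarity score over the method
calls in two stack traces. This is not actual Diagnostic Utility code;
every name in it is made up for illustration.

    import java.util.HashSet;
    import java.util.Set;

    public class TraceBuckets {

        // Coarse bucket key: the exception's package plus its class
        // name, e.g. "org.apache.geronimo.kernel/SomeException".
        // Messages and line numbers are deliberately ignored so that
        // minor variations land in the same bucket (wiki page).
        static String bucketKey(Throwable t) {
            String cls = t.getClass().getName();
            int dot = cls.lastIndexOf('.');
            String pkg = (dot > 0) ? cls.substring(0, dot) : "(default)";
            return pkg + "/" + cls.substring(dot + 1);
        }

        // Crude similarity between two traces: the overlap of their
        // "Class.method" frames (a Jaccard index). Two reports of the
        // same underlying problem should share most frames even when
        // their messages differ.
        static double traceSimilarity(Throwable a, Throwable b) {
            Set<String> fa = frames(a);
            Set<String> fb = frames(b);
            Set<String> union = new HashSet<String>(fa);
            union.addAll(fb);
            if (union.isEmpty()) {
                return 0.0;
            }
            Set<String> common = new HashSet<String>(fa);
            common.retainAll(fb);
            return (double) common.size() / union.size();
        }

        private static Set<String> frames(Throwable t) {
            Set<String> out = new HashSet<String>();
            for (StackTraceElement e : t.getStackTrace()) {
                out.add(e.getClassName() + "." + e.getMethodName());
            }
            return out;
        }
    }

A reporting structure could file a report under bucketKey() first, and
fall back on traceSimilarity() only to split or merge buckets that
grow too large or too small. That keeps us closer to the "100 buckets
of 10" shape than the "1000 buckets of 1" one.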

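And for the keyword idea, an equally rough sketch: pull candidate wiki
search terms out of a raw report by keeping only the distinctive
tokens and dropping the words that appear in nearly every trace.
Again, purely hypothetical code, just to show the shape of it.

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.LinkedHashSet;
    import java.util.Set;

    public class ReportKeywords {

        // Words that show up in nearly every report and so carry no
        // search value.
        private static final Set<String> STOP = new HashSet<String>(
                Arrays.asList("exception", "error", "java", "caused",
                              "thread", "main"));

        // Break raw report text (message plus printed trace) into
        // candidate search terms: for dotted names keep only the last
        // segment, then drop short tokens and stop words.
        static Set<String> keywords(String reportText) {
            Set<String> words = new LinkedHashSet<String>();
            for (String tok : reportText.split("[^A-Za-z0-9_.]+")) {
                String last = tok.substring(tok.lastIndexOf('.') + 1);
                if (last.length() < 4) {
                    continue; // too short to be a useful term
                }
                if (!STOP.contains(last.toLowerCase())) {
                    words.add(last);
                }
            }
            return words;
        }
    }

Feeding the result of keywords() into the wiki search would be the
cheap first pass; the bucketing above would be the structure behind
it.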