Yes, I hoped I had made it clear that this wasn't a real-world test: it was a stress test designed to exaggerate the inefficiencies, which makes them easier to measure. Stress tests have their place. While my code would never see a case quite like this, it will see something similar: when many users are online, many threads will be sorting many smaller datasets. In that case, I still want each sort to run as efficiently as possible. That is why I was measuring the efficiency of the PropertyResolver, and why I still feel there's a faster way to do this.
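For what it's worth, the kind of measurement I mean can be sketched roughly like this (illustrative code only, not my actual benchmark; the class and method names here are made up for the example):

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.Random;

// Sketch of a comparator stress test: sort a large array while
// counting compare() calls, then report the average cost per call.
public class ComparatorStressTest {

    // Sorts the array in place and returns how many times
    // compare() was invoked by Arrays.sort().
    static long sortCounting(Integer[] data) {
        final long[] calls = {0};
        Comparator<Integer> counting = (a, b) -> {
            calls[0]++;
            return a.compareTo(b);
        };
        Arrays.sort(data, counting);
        return calls[0];
    }

    public static void main(String[] args) {
        Integer[] data = new Integer[100_000];
        Random rnd = new Random(42);
        for (int i = 0; i < data.length; i++) {
            data[i] = rnd.nextInt();
        }

        long start = System.nanoTime();
        long calls = sortCounting(data);
        long elapsed = System.nanoTime() - start;

        System.out.println("compare() calls: " + calls);
        System.out.println("avg ns per call: " + elapsed / calls);
    }
}
```

Per-call times from a loop like this are noisy (JIT warm-up, GC), so the averages are only useful for comparing comparators against each other, which is all the stress test needs.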
That said, I have accepted the Wicket team's claim that we shouldn't use PropertyResolver in our own code. I have developed my own resolver, which, as I expected, runs much more efficiently. (We tried OGNL and MVEL, but neither was really suitable.)

Eelco Hillenius wrote:
>
>> The dataset I was sorting on was generated specifically to test the
>> speed of different comparators, and had a dataset of 100,000 elements,
>> and I was sorting using the standard sort method from the collections
>> package, accessed from Arrays.sort(). However, I now have data that
>> times individual calls of the Comparator's compare() method:
>
> You would never load a data set of 100,000 elements (or even a 1,000
> for that matter) in Wicket models, simply because you wouldn't display
> that many components (your browser probably wouldn't be able to handle
> that for instance). Furthermore, even if you would find the need to do
> such sorting in-memory using Java, you would do that upfront, before
> arriving at the user interface layer.
>
> Eelco

-----
There are 10 kinds of people: Those who know binary and those who don't.
--
View this message in context: http://www.nabble.com/PropertyResolver-redesign-tp16495644p17067504.html
Sent from the Wicket - Dev mailing list archive at Nabble.com.
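P.S. I can't post our resolver here, but the basic idea behind the speed-up is the usual one: cache the reflective getter per (class, property) so that the lookup cost is paid once rather than on every compare() call. A minimal sketch of that idea (this is an assumption about the approach, not our actual code, and the class name is invented):

```java
import java.lang.reflect.Method;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only: a property resolver that caches the
// reflective getter Method per (class, property) pair, so repeated
// lookups of the same property skip the getMethod() search.
public class SimplePropertyResolver {

    private static final Map<String, Method> CACHE = new ConcurrentHashMap<>();

    public static Object getValue(Object target, String property) {
        String key = target.getClass().getName() + "#" + property;
        Method getter = CACHE.computeIfAbsent(key, k -> {
            // Derive the JavaBean getter name, e.g. "time" -> "getTime".
            String name = "get"
                    + Character.toUpperCase(property.charAt(0))
                    + property.substring(1);
            try {
                return target.getClass().getMethod(name);
            } catch (NoSuchMethodException e) {
                throw new IllegalArgumentException(
                        "No getter for property '" + property + "'", e);
            }
        });
        try {
            return getter.invoke(target);
        } catch (Exception e) {
            throw new RuntimeException(
                    "Failed reading property '" + property + "'", e);
        }
    }
}
```

This handles only simple top-level bean properties; a real resolver also needs nested paths, indexed access, and so on, but the caching step is where most of the per-call savings come from.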
