Hi Curtis, thanks for the offer. Setting up a performance test framework would be fantastic. We may not be ready right now, but I am sure that if someone builds it, they will come. -- Matthias
On Jan 22, 2013, at 12:25 AM, Curtis Dutton wrote:

> I've been using Racket for about 4 years now. I use it for everything
> that I can, and I love it. It is really an awesome system, and I just can't
> say "THANKS" enough to all of you for Racket.
>
> That being said, I'd like to become more active with the development process.
> In a past life, I worked for Microsoft as a development tools engineer. Most
> of what I did was develop and operate large-scale automated testing systems
> for very large teams. I once had a room full of 500 or so machines at my
> disposal, which I tried hard to keep busy. I've maintained rolling build
> systems that included acceptance tests, a performance testing system, a
> stress testing system, and a security fuzzing system.
>
> I'm not sure how people feel about automated systems like this; part of this
> email is just to see what people think. But used in the right way, they can
> shape and control the direction a project evolves in.
>
> An example of the type of system I'd like to see for Racket would be a
> performance measuring system that would work in principle like so.
>
> Here is an example I'll use: I'm concerned about racket/openssl transfer
> speeds.
>
> The test (a rough sketch in code follows after the quoted message):
> • Create 2 places: one with a client, one with a server.
> • Establish an SSL session.
> • Output a "start time" event.
> • Transfer 1MB of random data.
> • Output an "end time" event.
>
> Once I write that test and commit it, the performance system picks it up
> from the repository and runs it for every commit made thereafter. That
> establishes a baseline for the performance of that test. If a commit is made
> and suddenly that test takes longer, the system generates an alert. At that
> point, we either investigate to find out why the test slowed down and fix it,
> or, due to circumstances we can't control (which does happen), we tell the
> system that it's acceptable and to accept it as the new baseline (a second
> sketch of that bookkeeping also follows below). Of course, if there is a
> marked improvement, we send out a pat on the back too!
>
> As a user of this system, I can monitor the performance characteristics
> of Racket that I care about. People can write "tests" just to track Racket's
> performance over time and catch unexpected regressions. They can also add
> these tests before they begin a campaign of improving their pet
> measurements.
>
> That is the gist of the type of system I wish I had with Racket.
>
> I can go more into how a stress test works, and perhaps fuzzing tests, etc.
>
> Now I'm willing to build it, and I'm willing to host it with a number of
> machines. I have pieces and parts of code lying around, and I already have a
> decent harness implementation that collects statistics about a Racket process
> as it runs.
>
> What do you think? If we could have something like this, would you want it?
> (Does something like this exist already?) What would it look like? How would
> it work, etc.?
>
> I'd like to collect a list of desired "tests" that this system would monitor
> for us. If you already have code that you run on your own, even better!
> Detailed examples would be welcome, as I need to gather some ideas about what
> people would want to do with this thing.
>
> Racket is so awesome! I'd like to help improve it, and I think this is
> something that I can offer to help get us there.
> Thanks,
> Curt
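For concreteness, here is roughly what that first test could look like. This is only a minimal sketch: it assumes a certificate/key pair at the hypothetical paths "cert.pem"/"key.pem", a free local port 4433, and a constant 1MB payload standing in for random data; a real harness would emit the start/end events to its own log rather than printing.

#lang racket
;; Sketch of the proposed racket/openssl transfer-speed test.
;; Assumes a cert/key pair at "cert.pem"/"key.pem" (hypothetical paths)
;; and that local port 4433 is free.
(require racket/place racket/port openssl)

(define PORT 4433)
(define PAYLOAD-BYTES (* 1024 1024)) ; 1MB

;; Server place: accept one SSL connection, drain the client's data,
;; and report how many bytes arrived.
(define (start-server)
  (place ch
    (define ctx (ssl-make-server-context))
    (ssl-load-certificate-chain! ctx "cert.pem") ; hypothetical path
    (ssl-load-private-key! ctx "key.pem")        ; hypothetical path
    (define listener (ssl-listen PORT 5 #t #f ctx))
    (place-channel-put ch 'listening)
    (define-values (in out) (ssl-accept listener))
    (place-channel-put ch (bytes-length (port->bytes in)))
    (close-input-port in)
    (close-output-port out)))

;; Client place: connect, send the payload, and report elapsed milliseconds.
(define (start-client)
  (place ch
    (define payload (make-bytes PAYLOAD-BYTES 42)) ; constant filler, not random
    (define-values (in out) (ssl-connect "localhost" PORT))
    (define start (current-inexact-milliseconds))  ; "start time" event
    (write-bytes payload out)
    (flush-output out)
    (close-output-port out)                        ; signals end of data
    (define end (current-inexact-milliseconds))    ; "end time" event
    (close-input-port in)
    (place-channel-put ch (- end start))))

;; Driver lives in a submodule so it does not re-run when the enclosing
;; module is instantiated inside each new place.
(module+ main
  (define server (start-server))
  (place-channel-get server)               ; wait for the server to listen
  (define client (start-client))
  (printf "sent 1MB over SSL in ~a ms (~a bytes received)\n"
          (place-channel-get client)
          (place-channel-get server)))

A real test would probably wait for an acknowledgement from the server before taking the end-time reading, since flushing only guarantees the data has left the client's buffers.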
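And here is a sketch of the baseline bookkeeping described in the quoted message, assuming one number (milliseconds) per test per run is enough and that an "alert" is just a printout. The names and the 10% threshold are made up for illustration; a real system would persist baselines between runs and notify people.

#lang racket
;; Sketch of the per-test baseline/alert logic.

(define baselines (make-hash)) ; test name -> baseline milliseconds

(define REGRESSION-FACTOR 1.10) ; 10% slower counts as a regression (arbitrary)

(define (record-run! test-name commit elapsed-ms)
  (define baseline (hash-ref baselines test-name #f))
  (cond
    [(not baseline)
     ;; First run: this measurement becomes the baseline.
     (hash-set! baselines test-name elapsed-ms)]
    [(> elapsed-ms (* baseline REGRESSION-FACTOR))
     (printf "ALERT: ~a slowed from ~a ms to ~a ms at commit ~a\n"
             test-name baseline elapsed-ms commit)]
    [(< elapsed-ms baseline)
     ;; Marked improvement: pat on the back, and tighten the baseline.
     (printf "Improvement: ~a now ~a ms at commit ~a\n"
             test-name elapsed-ms commit)
     (hash-set! baselines test-name elapsed-ms)]))

;; The "we tell the system it's acceptable" step: adopt a new baseline
;; for a slowdown we have decided to live with.
(define (accept-baseline! test-name elapsed-ms)
  (hash-set! baselines test-name elapsed-ms))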

