>> Under the looser Wikipedia definition, the current mustella test suite (note
>> that I have not called it a unit testing framework on this mailing list) has
>> tests for the major classes in the framework. When you finally see the
>> code, there is a class called UnitTester that contains a set of tests. Mike
>> will probably rename that class shortly after I check it in :-)
Just to clarify, I think mustella is a good thing. Having these tests is absolutely critical, and they do a good job. I want to see them continue to exist and evolve; however, unit tests are also important to me. Not only do they allow a precise understanding of where code breaks after a change, they also serve as a guide to where our code is far too coupled. Also, imagine how nice it would be if people could propose a patch along with a unit test that shows it works.

The mustella tests actually need to restart the VM a number of times in the course of their run because of state. I am not saying that makes them bad, just that it doesn't make them unit tests. To me they sit somewhere between big integration tests and functional tests.

>> The SDK code is the way it is because it started out in AS2, in a VM where
>> object creation was very expensive. We got our performance improvements by
>> having large monolithic classes. When we ported to AS3, we could not be
>> sure of its performance characteristics, so we did a straight port and kept
>> those large classes. AS3 was much faster, but as I said, in my attempt to do
>> a major refactoring, there was a performance hit for small apps, and with
>> mobile being an important target, we opted not to pursue a refactoring.

I think all of that is fair, and I also think we just need to figure out where we can and cannot compromise.

>> I'm not a big fan of doing things to the byte code such that you lose the
>> ability to map the code back to the source code, but I still believe we can
>> make the right trade-offs and get a better framework. I am still planning
>> on giving up on strict backward compatibility and starting over.

I understand your position; however, I really think that some of the byte code approaches could get us very maintainable code and fast execution. That's why I am so interested in this approach.
Honestly, with a really good optimizing compiler it is difficult to trace the final byte code back to the original source anyway. When we see Falcon someday, my goal would be to slowly add those kinds of optimizations to it in either case, as I think there are huge gains to be made at that level. So, since I am all for screwing up your source-to-byte-code relationship in the name of those optimizations, I am just asking that we consider screwing it up even earlier, in the name of maintainable code and efficiency. Frankly, I think we could keep most backward compatibility in this model and still achieve our goals.