> Gotcha.  But if you don't compile multiple times, how do you prevent
> optimizations that occur in one test method from impacting ones that occur
> in another test method?

I can't. Instead, I limit the scope the optimization should work on
(see MethodInlineTest)...

> By the way, I've already got a big patch out that Bob is reviewing that is a
> huge refactor to JavaToJavaScriptCompiler, and we can continue to refactor
> further to support this use case in the best way possible.  I can't think of
> a fundamental reason that compiles should be slow for small test cases.

I'll be happy to take a look at JTJSC and tests as soon as you finish
the refactoring.

>> You mean storing diffs instead of expected data?
>
> Something like that.

On second thought, I don't feel that will help much. If you change the
compiler in a way that changes the "interesting" code, you'll be in the
same position. And it's much more difficult to read diffs than
expected text.

Perhaps we could limit the expected test scope, say to one or more
specified methods?
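To make the idea concrete, here is a rough sketch of what a method-scoped
check might look like. The extractMethod helper and the "function name(...)
{...}" output shape are assumptions for illustration, not the real GWT test
API:

```java
// Hypothetical sketch: compare optimized output for a single method only,
// so unrelated compiler changes elsewhere don't break the test.
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MethodScopedCheck {
  /** Returns the source text of the named top-level function, or null. */
  static String extractMethod(String program, String name) {
    Pattern p = Pattern.compile(
        "function\\s+" + Pattern.quote(name) + "\\s*\\([^)]*\\)\\s*\\{[^}]*\\}");
    Matcher m = p.matcher(program);
    return m.find() ? m.group() : null;
  }

  public static void main(String[] args) {
    String output = "function foo(){return 1;}\nfunction bar(){return 2;}";
    String expected = "function foo(){return 1;}";
    String actual = extractMethod(output, "foo");
    // Changes to bar() or anything else in the program are ignored here.
    if (!expected.equals(actual)) {
      throw new AssertionError("foo changed: " + actual);
    }
    System.out.println("ok");
  }
}
```

The point is just that the expected text covers one method, so the test
stays stable when unrelated parts of the output move around.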

>
>>
>> >> > - Textual comparison seems like a great first step, but a bit brittle
>> >> > going
>> >> > forward.  I'd worry that changes in the compiler totally unrelated to
>> >> > an
>> >> > optimization pass would tend to unnecessarily break tests.
>> >>
>> >> That's a valid point, and I saw this in the past. I don't know a
>> >> solution for this. Writing manual tree-based checks is even more
>> >> fragile. However, I do believe this is better than nothing.
>> >
>> > I know.  But I would like to think of a way forward.  Maybe we could
>> > take a
>> > "before" and "after" string snapshot of the program, and validate only
>> > the
>> > sections of the code that are different.
>>
> Scott
>
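For what it's worth, the before/after idea above could be sketched roughly
like this. The line-based "diff" here is a deliberately crude placeholder,
just to show asserting on the changed sections rather than the whole
program text:

```java
// Hypothetical sketch of "validate only what the pass changed": snapshot
// the program before and after the optimization, keep only the lines that
// differ, and assert on that small diff instead of the full output.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SnapshotDiff {
  /** Lines present in 'after' but not in 'before' (a crude one-sided diff). */
  static List<String> changedLines(String before, String after) {
    List<String> beforeLines = Arrays.asList(before.split("\n"));
    List<String> diff = new ArrayList<>();
    for (String line : after.split("\n")) {
      if (!beforeLines.contains(line)) {
        diff.add(line);
      }
    }
    return diff;
  }

  public static void main(String[] args) {
    String before = "a();\nb();\nc();";
    String after  = "a();\ninlined_b();\nc();";
    // Only the line the pass touched shows up in the diff.
    System.out.println(changedLines(before, after)); // prints [inlined_b();]
  }
}
```

Of course this still has the readability problem I mentioned: a real test
would want the diff anchored to something meaningful, not raw lines.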



-- 
Regards,
Mike

http://groups.google.com/group/Google-Web-Toolkit-Contributors