If the implementation of automated testing is broken, why not fix it or work 
around the issues rather than abandon the whole idea?  Isn't it useful to 
have all non-interactive tests run automatically on dozens of machines?  If 
it isn't useful, then I agree we should stop the automated testing.

Chris


On Tuesday 07 December 2004 5:25 am, Ferenc Wagner wrote:
> Jakob Eriksson <[EMAIL PROTECTED]> writes:
> > Do you agree, should we stop using winrash?
> >
> > "Dmitry Timoshkov" <[EMAIL PROTECTED]> wrote:
> >> "Jakob Eriksson" <[EMAIL PROTECTED]> wrote:
> >>> Well, I tried now marking the service as interactive,
> >>> but that didn't make any difference.
> >>>
> >>> So, what follows, deprecate Winrash?
> >>
> >> If it really breaks the tests then definitely yes.
>
> I think it provides a lot of useful information which would
> be largely lost if we resorted to manual testing.  What's
> more, winetest is not really up to that, as it doesn't ask
> for a tag but relies on a command-line option which people
> tend to forget.  What I propose: make winetest detect
> whether it's running on an interactive desktop or not, and
> include this info in the header just like bRunningUnderWine.
> Meanwhile add the tag dialog to winetest and separate or
> mark the different reports on the webpage for easier
> reference.  That would bring us the best of both worlds.
>
> Or possibly tweak the sensitive tests (how many are there?)
> to make it clear in the output that they were not run for
> this reason...  That would probably require a new field in
> the final report (i.e. success-failure-todo-skipped or
> similar).  This could also mark WINETEST_INTERACTIVE tests
> so that they aren't forgotten about...
>
> That said, I don't know much about this desktop business, so
> I'm not sure how to put the detection logic in.  The rest
> should not be hard; I hope I'll find the time for it if we
> decide to go this way.
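
For what it's worth, the detection Ferenc is unsure about might be as
simple as querying the process window station and checking whether it is
visible.  A rough, untested sketch follows; the helper name and the
fallback behaviour are my own guesses, not anything winetest already has:

    #include <windows.h>

    /* Sketch: report whether the process window station is visible,
     * i.e. whether the tests are running on an interactive desktop.
     * If the query fails we conservatively claim "not interactive". */
    static BOOL running_on_visible_desktop(void)
    {
        USEROBJECTFLAGS flags;
        DWORD needed;
        HWINSTA station = GetProcessWindowStation();

        if (station && GetUserObjectInformation(station, UOI_FLAGS,
                                                &flags, sizeof(flags),
                                                &needed))
            return (flags.dwFlags & WSF_VISIBLE) != 0;

        return FALSE;
    }

The result could then be written into the report header next to
bRunningUnderWine.  Platforms without window stations would need a
fallback, and I haven't checked how winrash sets up the service desktop,
so treat this as a starting point only.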
