On 10/28/2012 08:25 PM, Ami Fischman wrote:

    We can live in one of two worlds:
    1) LayoutTests that concern themselves with specific
    network/loading concerns need to use unique URLs to refer to
    static data; or
    2) DRT clears JS-visible state between tests.
    The pros/cons seem clear to me:
    Pro#1: loading/caching code is coincidentally tested by (unknown)
    tests that reuse URLs among themselves.
    Con#1: requires additional cognitive load for all webkit
    developers; the only way to write a test that won't be affected
    by future addition of unrelated tests is to use unique URLs
    Pro#2: principle of least-surprise is maintained; understanding
    DRT & reading a test (and not every other test) is enough to
    understand its behavior
    Con#2: loading/caching code needs to be tested explicitly.
    IMO (Pro#2 + -Con#1) >> (Pro#1 + -Con#2).
    Are you saying you believe the inequality goes a different way,
    or am I missing some other feature of your thesis?
    Yes, this is a fair description.


I'm going to assume you mean that yes, you believe the inequality goes the other way: (Pro#2 + -Con#1) << (Pro#1 + -Con#2).
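To make Con#1 concrete, here is the kind of boilerplate every test author would have to carry in world (1). This is purely my own sketch (the helper name, the query-string scheme, and the resource path are all illustrative, not any real DRT or LayoutTest API):

```javascript
// Sketch of a cache-busting helper a LayoutTest would need in world (1):
// every fetch of a shared static resource gets a unique URL, so cache
// state left behind by unrelated tests cannot affect this test.
let uniqueCounter = 0;
function uniqueURL(path) {
  // Append a query string that is unique per load in this run.
  uniqueCounter += 1;
  return path + "?nocache=" + Date.now() + "-" + uniqueCounter;
}

const a = uniqueURL("resources/static-data.txt");
const b = uniqueURL("resources/static-data.txt");
// a !== b, so the second load cannot be satisfied by a cache entry
// created by the first load -- or by any earlier test that happened
// to reference the same resource.
```

The point of Con#1 is exactly this: the helper is trivial, but remembering that it is *required* for any test touching shared resources is cognitive load imposed on every WebKit developer.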

    This accidental testing is not something to be neglected


I'm not neglecting it, I'm evaluating its benefit to be less than its cost.

To make concrete the cost/benefit tradeoff, would you add a random sleep() into DRT execution to detect timing-related bugs? It seems like a crazy thing to do, to me, but it would certainly catch timing-related bugs quite effectively. If you don't think we should do that, can you describe how you're evaluating cost/benefit in each of the cases and why you arrive at different conclusions?

(of course, adding such random sleeps under default-disabled flag control for bug investigation could make a lot of sense; but here I'm talking about what we do on the bots & by default)

    It's not humanly possible to have tests for everything in advance.


Of course. But we should at least make it humanly possible to understand our tests as written :) Making our tests impossible to understand isn't the way to compensate for the impossibility of testing everything in every way. It just means we push off knowing how much coverage we really have, and derive a false sense of security from the fact that bugs have been found in the past.

    I completely agree with Maciej's idea that we should think about
    ways to make non-deterministic failures easier to work with, so
    that they would lead to discovering the root cause more directly,
    and without the costs currently associated with it.


I have no problem with that, but I'm not sure how it relates to this thread unless one takes an XOR approach, in which case I guess I have low faith that the bigger problem Maciej highlights will be solved in a reasonable timeframe (weeks/months).

    Memory allocator state. Computer's real time clock. Hard drive's
    head position if you have a spinning hard drive, or SSD
    controller state if you have an SSD. HTTP cookies. Should I
    continue the list?
    These things are all outside of webkit.
    Yes, they are outside WebKit, but not outside WebKit control, if
    needed.
    Did you intend that to be an objection?


I imagine Balazs was pointing out that you included items that are not JS-visible in an answer to my question about things that are JS-visible. But that was part of an earlier fork of this thread that went nowhere, so let's let it go.

I just meant that it is not feasible to force every external dependency to reset its state, nor do we want to; we simply trust them. But the cache is in WebKit, and we can reset its state. So resetting the cache is either a good or a bad idea on its own merits; I think it has nothing to do with the fact that we cannot "reset" the OS and the hardware (and external libraries, of course).

_______________________________________________
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo/webkit-dev
