Re: [webkit-dev] Some stderr output missing when using run-webkit-tests
That would still be a bug, and still new to me :)

-- Dirk

On Mon, Oct 29, 2012 at 4:04 PM, Dana Jansens wrote:
> On Mon, Oct 29, 2012 at 6:59 PM, Dirk Pranke wrote:
>> If that's the case, it's a bug, and new to me.
>
> The output was present on the results page, but it would only include
> the first, maybe, 60 lines or so.
>
> - Dana
>
>> -- Dirk
>>
>> On Mon, Oct 29, 2012 at 3:42 PM, Terry Anderson wrote:
>>> I was actually noticing that some of the stderr output was missing from a
>>> failing test, not a passing one.
>>>
>>> Terry
>>>
>>> On Sun, Oct 28, 2012 at 8:42 PM, Dirk Pranke wrote:
>>>> As Balazs said, we don't save the stderr output from tests that pass.
>>>> So, you don't have to crash, but your tests have to at least fail. It
>>>> wouldn't be hard to change that somehow ...
>>>>
>>>> -- Dirk
>>>>
>>>> On Sun, Oct 28, 2012 at 4:29 PM, Terry Anderson wrote:
>>>>> Hi webkit-dev,
>>>>>
>>>>> When I include fprintf(stderr, ...) statements in WebKit code that I
>>>>> expect to be executed when running a set of layout tests, the summary
>>>>> page of run-webkit-tests will sometimes only show a subset of these
>>>>> statements. However, when I add a CRASH() somewhere in the code, the
>>>>> "missing" stderr output will appear on the summary page. Has anyone
>>>>> else experienced this issue? Is there a way to force run-webkit-tests
>>>>> to display all stderr output without needing to force a crash at a
>>>>> particular point in the code?
>>>>>
>>>>> Terry

_______________________________________________
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo/webkit-dev
Re: [webkit-dev] Some stderr output missing when using run-webkit-tests
On Mon, Oct 29, 2012 at 6:59 PM, Dirk Pranke wrote:
> If that's the case, it's a bug, and new to me.

The output was present on the results page, but it would only include
the first, maybe, 60 lines or so.

- Dana

> -- Dirk
>
> On Mon, Oct 29, 2012 at 3:42 PM, Terry Anderson wrote:
>> I was actually noticing that some of the stderr output was missing from a
>> failing test, not a passing one.
>>
>> Terry
>>
>> On Sun, Oct 28, 2012 at 8:42 PM, Dirk Pranke wrote:
>>> As Balazs said, we don't save the stderr output from tests that pass.
>>> So, you don't have to crash, but your tests have to at least fail. It
>>> wouldn't be hard to change that somehow ...
>>>
>>> -- Dirk
>>>
>>> On Sun, Oct 28, 2012 at 4:29 PM, Terry Anderson wrote:
>>>> Hi webkit-dev,
>>>>
>>>> When I include fprintf(stderr, ...) statements in WebKit code that I
>>>> expect to be executed when running a set of layout tests, the summary
>>>> page of run-webkit-tests will sometimes only show a subset of these
>>>> statements. However, when I add a CRASH() somewhere in the code, the
>>>> "missing" stderr output will appear on the summary page. Has anyone
>>>> else experienced this issue? Is there a way to force run-webkit-tests
>>>> to display all stderr output without needing to force a crash at a
>>>> particular point in the code?
>>>>
>>>> Terry
Re: [webkit-dev] Performance Tests
Thanks a lot for your work :) This is a huge improvement to our perf test infrastructure.

On Mon, Oct 29, 2012 at 3:01 PM, Zoltan Horvath wrote:
> Hi there,
>
> In the past few weeks I did some refactoring work on the PageLoad tests in
> the PerformanceTests, so for your information: what was accessible under
> PerformanceTests/PageLoad/* is now accessible only under
> PerformanceTests/SVG. We now have a platform-independent solution for the
> 'PageLoad' tests to measure the JSHeap and (Fast)Malloc memory usage from
> JS without DRT modifications. Cool!
>
> You can check the results on our WebKitPerformance site:
> http://webkit-perf.appspot.com
>
> I made some modifications to the Performance Tests wiki page as well; you
> can find it at its new location or from the trac main page:
> http://trac.webkit.org/wiki/Performance%20Tests
>
> Have fun,
Re: [webkit-dev] Some stderr output missing when using run-webkit-tests
If that's the case, it's a bug, and new to me.

-- Dirk

On Mon, Oct 29, 2012 at 3:42 PM, Terry Anderson wrote:
> I was actually noticing that some of the stderr output was missing from a
> failing test, not a passing one.
>
> Terry
>
> On Sun, Oct 28, 2012 at 8:42 PM, Dirk Pranke wrote:
>> As Balazs said, we don't save the stderr output from tests that pass.
>> So, you don't have to crash, but your tests have to at least fail. It
>> wouldn't be hard to change that somehow ...
>>
>> -- Dirk
>>
>> On Sun, Oct 28, 2012 at 4:29 PM, Terry Anderson wrote:
>>> Hi webkit-dev,
>>>
>>> When I include fprintf(stderr, ...) statements in WebKit code that I
>>> expect to be executed when running a set of layout tests, the summary
>>> page of run-webkit-tests will sometimes only show a subset of these
>>> statements. However, when I add a CRASH() somewhere in the code, the
>>> "missing" stderr output will appear on the summary page. Has anyone
>>> else experienced this issue? Is there a way to force run-webkit-tests
>>> to display all stderr output without needing to force a crash at a
>>> particular point in the code?
>>>
>>> Terry
Re: [webkit-dev] Some stderr output missing when using run-webkit-tests
I was actually noticing that some of the stderr output was missing from a
failing test, not a passing one.

Terry

On Sun, Oct 28, 2012 at 8:42 PM, Dirk Pranke wrote:
> As Balazs said, we don't save the stderr output from tests that pass.
> So, you don't have to crash, but your tests have to at least fail. It
> wouldn't be hard to change that somehow ...
>
> -- Dirk
>
> On Sun, Oct 28, 2012 at 4:29 PM, Terry Anderson wrote:
>> Hi webkit-dev,
>>
>> When I include fprintf(stderr, ...) statements in WebKit code that I
>> expect to be executed when running a set of layout tests, the summary
>> page of run-webkit-tests will sometimes only show a subset of these
>> statements. However, when I add a CRASH() somewhere in the code, the
>> "missing" stderr output will appear on the summary page. Has anyone else
>> experienced this issue? Is there a way to force run-webkit-tests to
>> display all stderr output without needing to force a crash at a
>> particular point in the code?
>>
>> Terry
[webkit-dev] Performance Tests
Hi there,

In the past few weeks I did some refactoring work on the PageLoad tests in
the PerformanceTests, so for your information: what was accessible under
PerformanceTests/PageLoad/* is now accessible only under
PerformanceTests/SVG. We now have a platform-independent solution for the
'PageLoad' tests to measure the JSHeap and (Fast)Malloc memory usage from
JS without DRT modifications. Cool!

You can check the results on our WebKitPerformance site:
http://webkit-perf.appspot.com

I made some modifications to the Performance Tests wiki page as well; you
can find it at its new location or from the trac main page:
http://trac.webkit.org/wiki/Performance%20Tests

Have fun,
Re: [webkit-dev] DRT/WTR should clear the cache at the beginning of each test?
On Sun, Oct 28, 2012 at 8:26 PM, Alexey Proskuryakov wrote:
> when you run the same set of tests over and over, you get the same
> behavior.

Not the way n-w-r-t does it; in particular the time each test takes to
finish affects which process it gets run in, which gets you flakiness.
Which is what started me down this path in the first place.

> I think that you are hugely overstating this. Adding a random query to a
> URL does not make a test incomprehensible.

Of course not. The problem is a test that does *not* randomize its URLs may
pass for a long time, and appear completely comprehensible, until an
entirely unrelated test is added that happens to use the same URL, and then
makes the first test flaky. My point is that any state that is shared
between tests needs to be known and understood by all test writers.

> A good test is usable in the aforementioned scenarios, and thus does not
> need special tricks in WebKitTestRunner.

...and yet we expect a repro'ing browser to not be running
extensions/plugins/etc. that might mess with the repro case. Clearing
cookies/cache is probably the most common instruction included with repro
requests, followed by trying the case in a new/clean profile/installation.

>>> I completely agree with Maciej's idea that we should think about ways to
>>> make non-deterministic failures easier to work with, so that they would
>>> lead to discovering the root cause more directly, and without the costs
>>> currently associated with it.
>>
>> I have no problem with that, but I'm not sure how it relates to this
>> thread unless one takes an XOR approach, in which case I guess I have low
>> faith that the bigger problem Maciej highlights will be solved in a
>> reasonable timeframe (weeks/months).
>
> We have all the time in the world. There is no pressing problem that must
> be solved in months.

My attention span is not nearly as long as yours apparently is.

Gating fixing what appears to be an obvious bug to me (and many others) on
fundamental improvements to testing methodology that might take years is
madness.

Cheers,
-a
Re: [webkit-dev] DRT/WTR should clear the cache at the beginning of each test?
On Mon, Oct 29, 2012 at 4:17 AM, Maciej Stachowiak wrote:
>
> On Oct 28, 2012, at 3:30 PM, Antti Koivisto wrote:
>
>> We could clear the cache between tests but run each test twice in a row.
>> The second run will then happen with a deterministically pre-populated
>> cache. That would both make things more predictable and improve our test
>> coverage for cached cases. Unfortunately it would also slow down testing
>> significantly, though less than 2x.
>
> I actually really like this idea. Doing it this way would effectively run
> each test both completely uncached and fully cached, which would be better
> test coverage than our current approach. Can we get an estimate on what
> this would cost if applied to our whole test suite? Could we do it for
> just a subset of the tests?
>
> (BTW I think this is better than the "virtual test suite" approach
> suggested by Dirk; running the test with all its resources cached from
> having loaded it immediately before is more reliable and better test
> coverage than running it later as part of some sequence that doesn't
> clear the cache.)
>
> Does anyone strongly object to this approach? It seems way better to me
> than other options discussed on this thread.

I would like to understand the proposal and the thinking behind it a little
better.

First, I'm not quite sure what the intent is here ... are you thinking that
maybe this would help ensure tests get more isolated environments and yet
we still exercise the cache all the time?

Second, today you could get close by saying NRWT --repeat-each=2
--batch-size=2; that would run each test twice in succession and then
restart DRT. Are you suggesting that we'd change something such that we'd
clear the cache rather than restarting DRT?

Third, it seems like it would probably introduce a *lot* of extra redundant
test-running. I would imagine the very large majority of our tests are not
sensitive to the cache contents at all, and so running them twice doesn't
buy you anything.

An advantage of the virtual test suite approach is that you can specify
which subsets of tests you'd like to run; perhaps you could combine that
with running them twice?

Lastly, it seems like this is focusing on the wrong thing; wouldn't it be
better to try to write specific tests that provoke different configurations
and code paths through the cache? Are you suggesting we'd run all the tests
twice as an interim measure until we had better specific test suites?

-- Dirk

> Regards,
> Maciej
Re: [webkit-dev] DRT/WTR should clear the cache at the beginning of each test?
On Mon, Oct 29, 2012 at 5:48 AM, Maciej Stachowiak wrote:
>
> On Oct 28, 2012, at 10:09 PM, Dirk Pranke wrote:
>
>> On Sun, Oct 28, 2012 at 6:32 AM, Maciej Stachowiak wrote:
>>>
>>> I think the nature of loader and cache code is that it's very hard to
>>> make tests which always fail deterministically when regressions are
>>> introduced, as opposed to randomly. The reason for this is that bugs in
>>> these areas are often timing-dependent. I think it's likely this
>>> tendency to fail randomly will be the case whether or not the tests are
>>> trying to explicitly test the cache or are just incidentally doing so
>>> in the course of other things.
>>
>> I am not familiar with the loader and caching code in WebKit, but I know
>> enough about similar problem spaces to be puzzled by why it's impossible
>> to write tests that can adequately test the code.
>
> Has anyone claimed that? I think "impossible to write tests that can
> adequately test the code" is not a position that anyone in this thread
> has taken, certainly not me above.
>
> My claim is only that many classes of loader and cache bugs, when first
> introduced, are likely to cause nondeterministic test failures. And
> further, this is likely to be the case even if tests are written to
> target that subsystem. That's not the same as saying adequate tests are
> impossible.

I'm sorry, I didn't mean "impossible" literally. Please strike that, as it
sounds like it has just made a confusing situation worse.

But you did claim that it would be "very hard to make tests that always
fail deterministically", and I don't see why that's true? Testing things
that are timing-dependent only requires that you be able to control or
simulate time. It may be that this is hard to do with layout tests, but
it's pretty straightforward with unit tests that allow you to control the
layers above and below the cache.

> It just means to have good testing of some areas of the code, we need a
> good way of dealing with nondeterministic failures.

This is backwards. If you *don't* have good testing, more of your failures
are likely to show up sporadically, which leads you to want to build tools
for them. Randomized testing is a helpful tool to use *alongside* focused
testing to ensure coverage, but should not be used as a replacement.

>>> What I personally would most wish for is good tools to catch when a
>>> test starts failing nondeterministically, and to identify the revision
>>> where the failures began. The reason we hate random failures is that
>>> they are hard to track down and diagnose. But some types of bugs are
>>> unlikely to manifest in a purely deterministic way. It would be good if
>>> we had a reliable and useful way to catch those types of bugs.
>>
>> This is a fine idea -- and I'm always happy to talk about ways we can
>> improve our test tooling, please feel free to start a separate thread on
>> these issues -- but I don't want to lose sight of the main issue here.
>
> I think the problem I identified -- that it's overly hard to track down
> and diagnose regressions that cause tests to fail only part of the time
> -- is more important and more fundamental than any of the three problems
> that you cite below. Our test infrastructure ultimately exists to help us
> notice and promptly fix regressions, and for some types of regressions,
> namely those that do not manifest 100% of the time, it is not working so
> well. The problems you mention are all secondary consequences of that
> fundamental problem, in my opinion.

First of all, this isn't an either/or situation. We should be capable of
addressing all of these issues in parallel.

Second, I don't see how the existence of bugs in the code, the lack of test
isolation, or the lack of good test coverage for certain layers of the code
follows from not having good tools to triage intermittent failures? That
seems like putting the cart before the horse.

Third, are you familiar with the flakiness dashboard?
http://test-results.appspot.com/dashboards/flakiness_dashboard.html#group=%40ToT%20-%20webkit.org&builder=Apple%20Lion%20Debug%20WK1%20(Tests)

Does it not do exactly what you're describing? Are there things that you
would like added? If it would be helpful for us to have a meeting or
something to help explain how this works, I'm sure we could set one up.

> - Maciej
>
>> It sounds like we've identified three existing problems - please correct
>> me if I'm misstating them:
>>
>> 1. There appears to be a bug in the caching code that is causing tests
>> for other parts of the system to fail randomly.
>>
>> 2. DRT and WTR on some ports are implemented in a way that is causing
>> the system to be more fragile than some of us would like it to be, and
>> there doesn't seem to be an a priori need for this to be the case;
>> indeed some ports already don't do this.
>>
>> 3. We don't apparently have dedicated test coverage for caching and the
>> loader that people think is good enough, and getting such tests might be
>> "hard".
Re: [webkit-dev] question about jsc
Michael,

Thanks for your reply, now I understand it. :~)

On 2012-10-29 23:16, Michael Saboff wrote:
> The output of "undefined" is normal. It is the result of the expression
> you entered. jsc is basically returning the result of the expressions you
> enter. Both var and print themselves evaluate to "undefined". If you try
> "x = 1;" you'll get 1, as that expression returns 1.
>
> Concerning testing, if you have successfully built testapi, from the
> WebKit top directory you can run Tools/Scripts/run-javascriptcore-tests.
> This script will first run testapi, which tests the JavaScript APIs, and
> then run a collection of JavaScript tests. If you have successfully built
> testRegExp, you can run run-regexp-tests, which will test the regular
> expression engine. Both testapi and testRegExp should be built along with
> jsc when you run the script Tools/Scripts/build-jsc.
>
> - Michael
>
> On Oct 29, 2012, at 2:39 AM, yuqing cai wrote:
>> hi, all, I try to port WebKit to a new platform (the platform is Linux
>> based, running with glibc & glib, but not GTK). Now I have built the jsc
>> project successfully, but when I run the jsc program, something happens,
>> as shown below:
>>
>> qing@HAHA:/data/project/webOS/WebKit/Source/JavaScriptCore/JavaScriptCore.catwalk$ ./jsc
>> > var string="hello world :)";
>> undefined
>> > print(string);
>> hello world :)
>> undefined
>>
>> Now I have 2 questions:
>> 1. Why does the word "undefined" come up?
>> 2. How do I test the jsc program?
>>
>> qing
>> 2012-10-29
Re: [webkit-dev] question about jsc
The output of "undefined" is normal. It is the result of the expression you
entered. jsc is basically returning the result of the expressions you
enter. Both var and print themselves evaluate to "undefined". If you try
"x = 1;" you'll get 1, as that expression returns 1.

Concerning testing, if you have successfully built testapi, from the WebKit
top directory you can run Tools/Scripts/run-javascriptcore-tests. This
script will first run testapi, which tests the JavaScript APIs, and then
run a collection of JavaScript tests. If you have successfully built
testRegExp, you can run run-regexp-tests, which will test the regular
expression engine. Both testapi and testRegExp should be built along with
jsc when you run the script Tools/Scripts/build-jsc.

- Michael

On Oct 29, 2012, at 2:39 AM, yuqing cai wrote:
> hi, all, I try to port WebKit to a new platform (the platform is Linux
> based, running with glibc & glib, but not GTK). Now I have built the jsc
> project successfully, but when I run the jsc program, something happens,
> as shown below:
>
> qing@HAHA:/data/project/webOS/WebKit/Source/JavaScriptCore/JavaScriptCore.catwalk$ ./jsc
> > var string="hello world :)";
> undefined
> > print(string);
> hello world :)
> undefined
>
> Now I have 2 questions:
> 1. Why does the word "undefined" come up?
> 2. How do I test the jsc program?
>
> qing
> 2012-10-29
Re: [webkit-dev] On returning mutable pointers from const methods
On Oct 29, 2012, at 3:47 PM, Antti Koivisto wrote:
> I don't think the original proposal was meant to apply to the basic
> container types. Would this be a sensible rule to adopt for WebCore only,
> for example?
>
> Like all our "blanket rules", this one should be ignored when it doesn't
> make sense. If those kinds of cases are expected to be very rare, then
> their existence shouldn't be a show-stopper for adopting the rule.

At the moment, I can't think of any obvious counter-examples to the rule
other than basic container types. I don't have a problem with the rule in
general as long as we acknowledge the exceptions. If we wanted to enforce
the rule mechanically, then we could just whitelist the relevant basic data
structure types.

The same rule should probably also apply to references (and references to
pointers).

I think when describing the rule, we should also identify the underlying
motivation ("don't expose mutable state from a const member function") in
addition to the concrete method used to achieve that goal. That would help
avoid misunderstanding over time about the purpose of the rule.

Cheers,
Maciej
Re: [webkit-dev] On returning mutable pointers from const methods
I don't think the original proposal was meant to apply to the basic
container types. Would this be a sensible rule to adopt for WebCore only,
for example?

Like all our "blanket rules", this one should be ignored when it doesn't
make sense. If those kinds of cases are expected to be very rare, then
their existence shouldn't be a show-stopper for adopting the rule.

antti
[webkit-dev] How to modify JsHTMLMediaElement.cpp to add STOP functionality
Hi All,

I'm trying to add stop functionality in my application, which is using
WebKit on the WIN port. Currently Play and Pause functions are available
but no STOP function is present, so in JsHTMLMediaElement.cpp I have added
a new function:

jsHTMLMediaElementPrototypeFunctionStop(ExecState* exec)

After adding this function and a couple of changes in my application, I'm
able to perform the STOP functionality. But I guess I cannot modify this
file. So can someone please let me know how to add this function?

Regards,
Ankit
Re: [webkit-dev] DRT/WTR should clear the cache at the beginning of each test?
On Oct 28, 2012, at 10:09 PM, Dirk Pranke wrote:
>
> On Sun, Oct 28, 2012 at 6:32 AM, Maciej Stachowiak wrote:
>>
>> I think the nature of loader and cache code is that it's very hard to
>> make tests which always fail deterministically when regressions are
>> introduced, as opposed to randomly. The reason for this is that bugs in
>> these areas are often timing-dependent. I think it's likely this
>> tendency to fail randomly will be the case whether or not the tests are
>> trying to explicitly test the cache or are just incidentally doing so in
>> the course of other things.
>
> I am not familiar with the loader and caching code in WebKit, but I know
> enough about similar problem spaces to be puzzled by why it's impossible
> to write tests that can adequately test the code.

Has anyone claimed that? I think "impossible to write tests that can
adequately test the code" is not a position that anyone in this thread has
taken, certainly not me above.

My claim is only that many classes of loader and cache bugs, when first
introduced, are likely to cause nondeterministic test failures. And
further, this is likely to be the case even if tests are written to target
that subsystem. That's not the same as saying adequate tests are
impossible. It just means to have good testing of some areas of the code,
we need a good way of dealing with nondeterministic failures.

> >> What I personally would most wish for is good tools to catch when a
>> test starts failing nondeterministically, and to identify the revision
>> where the failures began. The reason we hate random failures is that
>> they are hard to track down and diagnose. But some types of bugs are
>> unlikely to manifest in a purely deterministic way. It would be good if
>> we had a reliable and useful way to catch those types of bugs.
>
> This is a fine idea -- and I'm always happy to talk about ways we can
> improve our test tooling, please feel free to start a separate thread on
> these issues -- but I don't want to lose sight of the main issue here.

I think the problem I identified -- that it's overly hard to track down and
diagnose regressions that cause tests to fail only part of the time -- is
more important and more fundamental than any of the three problems that you
cite below. Our test infrastructure ultimately exists to help us notice and
promptly fix regressions, and for some types of regressions, namely those
that do not manifest 100% of the time, it is not working so well. The
problems you mention are all secondary consequences of that fundamental
problem, in my opinion.

- Maciej

> It sounds like we've identified three existing problems - please correct
> me if I'm misstating them:
>
> 1. There appears to be a bug in the caching code that is causing tests
> for other parts of the system to fail randomly.
>
> 2. DRT and WTR on some ports are implemented in a way that is causing the
> system to be more fragile than some of us would like it to be, and there
> doesn't seem to be an a priori need for this to be the case; indeed some
> ports already don't do this.
>
> 3. We don't apparently have dedicated test coverage for caching and the
> loader that people think is good enough, and getting such tests might be
> "hard".

P.S. I do think your problem statements are somewhat tendentious and not
really supported by evidence provided in the thread. But even granting them
as written, I don't think any of these is the "main issue".
Re: [webkit-dev] DRT/WTR should clear the cache at the beginning of each test?
On Oct 28, 2012, at 3:30 PM, Antti Koivisto wrote:
> We could clear the cache between tests but run each test twice in a row.
> The second run will then happen with a deterministically pre-populated
> cache. That would both make things more predictable and improve our test
> coverage for cached cases. Unfortunately it would also slow down testing
> significantly, though less than 2x.

I actually really like this idea. Doing it this way would effectively run
each test both completely uncached and fully cached, which would be better
test coverage than our current approach. Can we get an estimate on what
this would cost if applied to our whole test suite? Could we do it for just
a subset of the tests?

(BTW I think this is better than the "virtual test suite" approach
suggested by Dirk; running the test with all its resources cached from
having loaded it immediately before is more reliable and better test
coverage than running it later as part of some sequence that doesn't clear
the cache.)

Does anyone strongly object to this approach? It seems way better to me
than other options discussed on this thread.

Regards,
Maciej
[webkit-dev] question about jsc
hi, all,

I am trying to port WebKit to a new platform (the platform is Linux based,
running with glibc & glib, but not GTK). Now I have built the jsc project
successfully, but when I run the jsc program, something happens, as shown
below:

qing@HAHA:/data/project/webOS/WebKit/Source/JavaScriptCore/JavaScriptCore.catwalk$ ./jsc
> var string="hello world :)";
undefined
> print(string);
hello world :)
undefined
>

Now I have 2 questions:
1. Why does the word "undefined" come up?
2. How do I test the jsc program?

qing
2012-10-29
Re: [webkit-dev] On returning mutable pointers from const methods
On Sun, Oct 28, 2012 at 11:16 PM, Maciej Stachowiak wrote:
> On Oct 28, 2012, at 10:09 PM, Peter Kasting wrote:
>
>> On Sun, Oct 28, 2012 at 6:12 AM, Maciej Stachowiak wrote:
>>
>>> I am not sure a blanket rule is correct. If the Foo* is logically
>>> related to the object with the foo() method and effectively would give
>>> access to mutable internal state, then what you say is definitely
>>> correct. But if the const object is a mere container that happens to
>>> hold pointers, there's no particular reason it should turn them into
>>> const pointers. For example, it is fine for const methods of HashSet or
>>> Vector to return non-const pointers if that happens to be the template
>>> parameter type. In such cases, constness of the container should only
>>> prevent you from mutating the container itself, not from mutating
>>> anything held in the container, or else const containers of non-const
>>> pointers (or non-const types in general) would be useless.
>>
>> IMO const containers that vend non-const pointers _are_ nearly useless.
>>
>> I consider logical constness to include not only "this statement has no
>> observable side effect" but also "this statement does not allow me to
>> issue a subsequent statement with observable side effects".
>
> Surely that's not quite correct as a definition of logical constness. You
> can always pass a const reference to another object's setter to cause an
> observable side effect. The scope of side effects under consideration has
> to be limited to the object itself plus anything else that could be
> considered part of its state. In brief, a const method on object O should
> neither mutate O nor expose the possibility of mutating the state of O,
> but it has no responsibility for its involvement in mutation of objects

Sorry, I left out words. Consider the phrase "on the state of the object"
to be appended to both of those quoted phrases I originally said.

I still think that it's extremely difficult to avoid "exposing the
possibility of mutating the state of O" in most cases. The mechanism may be
obscure, but it is frequently present -- frequently enough to make me
advocate for hard-and-fast rules.

> Consider the following use case:
>
> - I have collected an ordered list of pointers to objects of type T in a
> Vector<T*>.
> - I'd like to call a function that will operate on this list - it's
> allowed to do anything it wants to the Ts, including mutating them, but
> it can't alter the order or contents of the Vector. (For example, I may
> want to pass the same list to another function.)
>
> Currently one would express this by passing a const Vector<T*>&. I don't
> see a good approach to this that strictly follows the suggested rule.
> You'd have to either copy the Vector to a temporary just for the call, or
> abandon const-correctness.

I think by "abandon const-correctness" you mean "pass a Vector<T*>*", which
is indeed the route I'd go. I don't consider that an abandonment of
const-correctness, in that you are indeed not violating logical constness.
But yes, you lose the ability to convey the idea that "this function, in
and of itself, doesn't modify the number or order of elements in the
vector". On the other hand, you don't put yourself in a position where a
caller of the function could then immediately use his returned T* to mutate
the vector -- which is often a real possibility in real-world systems.

There aren't any magic bullets here. Given that the compiler can very
rarely use "const" to optimize anything anyway, const is effectively
whatever we want it to be. I prefer hard guarantees that occur less
frequently to less-ironclad guarantees that are more common. Reasonable
people can disagree.

I think your position -- that the hard guarantee usually makes sense for
non-simple-containers but you'd prefer to allow a less strict usage for
simple containers -- is one that has some appeal even if it's not what I'd
personally choose (and is more complex to explain/enforce). I certainly
think we'd be in a better world if the codebase followed this policy,
compared to today.

PK