On Thu, Feb 17, 2011 at 12:57, Kevin Millikin <[email protected]> wrote:

> On Thu, Feb 17, 2011 at 12:44 PM, <[email protected]> wrote:
>
>> With the current preparsing there are syntax errors that we cannot catch
>> at preparse-time (e.g., "continue x" where x isn't the label of a loop,
>> because we record neither labels nor loops during preparsing). We can't
>> catch those until the function is fully parsed for the first time, and if
>> it's lazy, that might not happen until we attempt to inline it.
>>
>>
> Yeah, that's why I made the tongue-in-cheek suggestion to replace the lazy
> (re)compile stub with a "throw syntax error" compilation stub.
>

It's not a bad idea.
The real problem is that the only proper solution is to extend the preparser
to catch all statically detectable syntax errors. As it is now, whether we get
a syntax error from an uncalled function depends on whether it was compiled
lazily, which in turn depends on whether the file it's in is longer than 1024
characters. That's just ugly.
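
To make the discrepancy concrete, here's the kind of function I mean (the
function itself is made up; 'continue' with a label that doesn't name an
enclosing loop is one of the errors the preparser misses, since it records
neither labels nor loops):

  function neverCalled() {
    block: {               // labels a block, not a loop
      continue block;      // SyntaxError, but only the full parser sees it
    }
  }

In a small file the function is parsed eagerly and the error is reported when
the script is compiled; in a large file it is only preparsed, so the error
surfaces whenever the function is first parsed properly.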

We will have to do that eventually.


> The simplest thing is probably to disable optimization on the
> SharedFunctionInfo for those functions.  That should render it
> non-inlineable as well.  Can you think of a reason this wouldn't work?
>

I tried that, but the only code I found that looked like it was disabling
optimization just set a bit on the Code object. At that point the function's
Code object wasn't actual function code (as we assert it should be), but the
lazy-compilation stub.
If there is another way to mark an uncompiled function as unoptimizable, that
should work fine.


>
>
>> Another option is to make uncompiled functions not optimizable.
>> Currently when we check if we can optimize a function, if it's
>> uncompiled we optimistically assume that it is.
>>
>
> We could also try that.  In that case we definitely do not have type
> feedback for them, so it might be reasonable from a performance standpoint
> to require them to go through the baseline compiler first.
>

Yes, the only problem I see is that when we later call the function (and it
turns out to be perfectly fine, optimizable and inlinable), we won't go back
and recompile the calling function to get the benefit of inlining.
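
For instance (a made-up sketch of the scenario, function names invented, not
anything I've measured):

  // The worry: if 'helper' still hasn't run by the time 'hot' is optimized,
  // 'hot' is compiled without inlining it, and we don't revisit that
  // decision once 'helper' finally runs and turns out to be fine.
  function helper(x) { return x + 1; }

  function hot(useHelper, n) {
    var sum = 0;
    for (var i = 0; i < n; i++) {
      sum += useHelper ? helper(i) : i;
    }
    return sum;
  }

  hot(false, 1000000);  // 'hot' becomes hot while 'helper' is still uncompiled
  hot(true, 1000000);   // 'helper' now compiles fine, but 'hot' isn't redone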

Inlining an uncompiled function is probably rare to begin with, so I don't
think this will affect the performance of anything important (but it's worth
checking).

/L
-- 
Lasse R.H. Nielsen
[email protected]
'Faith without judgement merely degrades the spirit divine'
Google Denmark ApS - Frederiksborggade 20B, 1 sal - 1360 København K -
Denmark - CVR nr. 28 86 69 84
