I've read your post quite a few times and have some further questions.

On Saturday, April 8, 2017 at 12:41:13 AM UTC+8, Daniel Vogelheim wrote:
>
> On Thu, Apr 6, 2017 at 4:38 AM, 阿炳 <[email protected]> wrote:
>
>> OK, I got it. 
>> Thank you for explaining so many details to me. 
>> Code Cache V2 is exactly what I want; does this feature have a 
>> schedule? I think spending 1/3 of WebApp startup time on JS is quite a 
>> lot.
>>
>
> Unfortunately, I don't really have a schedule yet. My original plan was to 
> start working on it now, but we've decided to handle other items first 
> (around streaming parsing). That'll also help page startup speed 
> (by 'hiding' more of the parse time in idle time). But it's a different 
> mechanism....
>
>
> On Fri, Apr 7, 2017 at 5:26 PM, 阿炳 <[email protected]> wrote:
>
>> Daniel, I've read your design doc several times, have a few more 
>> questions.
>> 1. You said the bytecode generated by Ignition never changes; is that 
>> right?
>>
>
> Yes, that's correct.
>  
>
>> 2. Does the bytecode remain in the SFI after TurboFan has generated 
>> optimized code? Or, if we want to retain the bytecode after JIT, how 
>> difficult would that be?
>>
>
> The bytecode will remain. (TurboFan needs it, because if the jit-ed code 
> needs to be "de-optimized" it will jump back to bytecode.)
>  
>
>> 3. Is there any content in the bytecode that points to the JIT result 
>> after some functions have been JIT-compiled?
>>
>
> Not quite sure what you mean. There are several things:
>
> In addition to the bytecode, there's the constant pool, which is referenced 
> from the bytecode: e.g., any number literal, string, or function reference 
> that's referenced in the source file. These may reference pretty much 
> anything that can reside on the heap. Some of these have additional 
> requirements (e.g. some strings need to be internalized, i.e. made globally 
> unique), which requires some sort of fix-up pass after de-serialization.
>
> Additionally, there's e.g. the feedback vectors which gather up 
> information ahead of time. These may reference an object's maps (i.e. the 
> description of the actual structure of the object at runtime), which are 
> context dependent (and therefore must not be serialized).
>
> Finally, there's a good bit of stuff 'outside' of the SharedFunctionInfos 
> that the code cache needs to reconstruct (e.g. the Script object). There are 
> also references between those (e.g. a function's constant pool referencing 
> other functions, or the Script object maintaining a list of all known 
> SharedFunctionInfos with (stable) numeric ids).
>
>
> The plan is, roughly, to first serialize a 'skeleton', which contains 
> everything referenced from the Script object, but with "empty" 
> SharedFunctionInfos. Also, this must introduce some logic to remove all 
> context-dependent objects. That would be a modification of the existing 
> (de-)serializer.
>
> Then, per function, we'd store the bytecode + a serialization of the 
> constant pool. (The first part is easy, but the second part is (another) 
> modification of the existing (de-)serializer.)
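To make sure I understand the plan above, here's a toy sketch in plain JavaScript (it has nothing to do with V8's actual serializer; the record layout and names are my own invention): a 'skeleton' with stable function ids, per-function records of bytecode + constant pool where heap references are stored as ids, and a fix-up pass after deserialization that resolves cross-function references back to objects.

```javascript
// Toy model of the described Code Cache V2 scheme. Purely illustrative;
// not V8's actual data structures.

function serializeScript(script) {
  return JSON.stringify({
    // Skeleton: script-level info, with functions reduced to stable ids.
    skeleton: { name: script.name, functionIds: script.functions.map(f => f.id) },
    // Per-function: bytecode + constant pool; references to other
    // functions are stored as ids, never as object pointers.
    functions: script.functions.map(f => ({
      id: f.id,
      bytecode: f.bytecode,
      constantPool: f.constantPool,
    })),
  });
}

function deserializeScript(blob) {
  const data = JSON.parse(blob);
  const byId = new Map(data.functions.map(f => [f.id, f]));
  // Fix-up pass: resolve 'funcref' constants back to function records,
  // analogous to re-internalizing strings / patching heap references.
  for (const f of data.functions) {
    for (const c of f.constantPool) {
      if (c.kind === 'funcref') c.target = byId.get(c.id);
    }
  }
  return { name: data.skeleton.name, functions: data.functions };
}

// Round-trip a tiny two-function "script".
const demo = {
  name: 'app.js',
  functions: [
    { id: 1, bytecode: [0x0b, 0x03], constantPool: [{ kind: 'funcref', id: 2 }] },
    { id: 2, bytecode: [0x0e], constantPool: [{ kind: 'number', value: 42 }] },
  ],
};
const restored = deserializeScript(serializeScript(demo));
console.log(restored.functions[0].constantPool[0].target.id); // 2
```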
>
I'm a little confused about the difference between code caching V1 and V2. 
Since Ignition ships behind flags, if I turn #enable-v8-future on, will the 
JS code cache then contain Ignition's bytecode? 
If that is right, does code cache V1 already implement bytecode + constant 
pool serialization?
 

>
> Then, we need some sort of 'fixup' after de-serializing a function. I 
> don't think that'll be hard, but I'm not quite sure yet what exactly we 
> need.
>
>

-- 
v8-dev mailing list
[email protected]
http://groups.google.com/group/v8-dev