On Jan 9, 2013, at 10:04 PM, Ian Hickson <i...@hixie.ch> wrote:

> On Wed, 9 Jan 2013, Eric Seidel wrote:
>> 
>> The core goal is to reduce latency -- to free up the main thread for 
>> JavaScript and UI interaction -- which as you correctly note, cannot be 
>> moved off of the main thread due to the "single thread of execution" 
>> model of the web.
> 
> Parsing and (maybe to a lesser extent) compiling JS can be moved off the 
> main thread, though, right? That's probably worth examining too, if it 
> hasn't already been done.

100% agree.

However, the same problem I brought up about tokenization applies here: a lot
of JS functions are already super cheap to parse and compile, and the latency
of doing that work on the main thread is likely to be lower than the latency of
chatting with another core.  I suspect this could be alleviated by (1)
aggressively pipelining the work, so that during page load or during heavy JS
use the compilation thread always has a non-empty queue of work to do; this
means the latency of communication is paid only when the first compilation
occurs; and (2) allowing the main thread to steal work from the compilation
queue.  I'm not sure how to make (2) work well.  Parsing is actually the harder
case because we rely heavily on the lazy parsing optimization: code is only
parsed once we need it *right now* to run a function.  Compilation is somewhat
easier: the most expensive compilation step is the third-tier optimizing JIT,
which we can delay as long as we want, though the longer we delay it, the
longer we spend running slower code.
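To make (1) and (2) a bit more concrete, here is a rough sketch of the kind of
queue I have in mind (pseudo-C++, not actual JSC code; the CompilationTask type
and all the names are invented for illustration).  The worker thread drains the
queue; the main thread keeps it topped up and can pull a task back out if it
suddenly needs that function compiled immediately:

#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>

// Hypothetical unit of work: compile (or parse) one function.
struct CompilationTask {
    std::function<void()> compile;
};

class CompilationQueue {
public:
    // Main thread: keep the queue non-empty during page load or heavy JS
    // use, so the communication latency is only paid for the first task.
    void enqueue(CompilationTask task)
    {
        {
            std::lock_guard<std::mutex> locker(m_lock);
            m_tasks.push_back(std::move(task));
        }
        m_condition.notify_one();
    }

    // Work stealing: if the main thread needs a function *right now*, it
    // pulls the task back rather than waiting for the worker to get to it.
    bool steal(CompilationTask& task)
    {
        std::lock_guard<std::mutex> locker(m_lock);
        if (m_tasks.empty())
            return false;
        task = std::move(m_tasks.front());
        m_tasks.pop_front();
        return true;
    }

    // Worker thread: drain the queue, sleeping when it is empty.
    void runWorker()
    {
        for (;;) {
            CompilationTask task;
            {
                std::unique_lock<std::mutex> locker(m_lock);
                m_condition.wait(locker, [this] { return !m_tasks.empty(); });
                task = std::move(m_tasks.front());
                m_tasks.pop_front();
            }
            task.compile();
        }
    }

private:
    std::mutex m_lock;
    std::condition_variable m_condition;
    std::deque<CompilationTask> m_tasks;
};

The hard part remains deciding when the main thread should call steal() versus
wait for the worker; the sketch doesn't answer that.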

Hence, to make parsing concurrent, the main problem is figuring out how to do
predictive parsing: have a concurrent thread start parsing something just
before we need it.  Without such prediction, moving parsing off the main thread
would be a guaranteed loss, since the main thread would just sit there waiting
for the parsing thread to finish.
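In other words, something along these lines (again pseudo-C++ with invented
names; parseFunction() stands in for the real parser, and a real version would
want the fallback path to steal the in-flight parse rather than block on it):

#include <future>
#include <string>
#include <unordered_map>
#include <utility>

struct ParsedFunction { /* AST, scope info, etc. */ };

// Stand-in for the real parser.
static ParsedFunction* parseFunction(const std::string& source)
{
    (void)source;
    return new ParsedFunction;
}

class PredictiveParser {
public:
    // Called when a heuristic predicts that a function will run soon:
    // kick off the parse on a background thread before it is needed.
    void predict(const std::string& name, std::string source)
    {
        m_pending[name] = std::async(std::launch::async,
            [source = std::move(source)] { return parseFunction(source); });
    }

    // Called when the main thread needs the function *right now*.  If the
    // prediction missed, fall back to parsing synchronously, which is no
    // worse than today's lazy parsing.  If the prediction fired but the
    // parse is still in flight, get() blocks, which is exactly the
    // guaranteed-loss case described above.
    ParsedFunction* ensureParsed(const std::string& name, const std::string& source)
    {
        auto it = m_pending.find(name);
        if (it == m_pending.end())
            return parseFunction(source);
        ParsedFunction* result = it->second.get();
        m_pending.erase(it);
        return result;
    }

private:
    std::unordered_map<std::string, std::future<ParsedFunction*>> m_pending;
};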

To make optimized compiles concurrent without a regression, the main problem
is ensuring that in those cases where we believe the time taken to compile the
function will be smaller than the time taken to wake the concurrent thread, we
instead just compile it on the main thread right away.  Though, if we could
predict that a function was going to get hot in the future, we could
speculatively tell a concurrent thread to compile it, knowing that the thread
won't wake up and do so until exactly when we would have otherwise invoked the
compiler on the main thread (that is, it will wake up and start compiling once
the main thread has executed the function enough times to gather good
profiling data).
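The decision itself could be as dumb as the cost model below (pseudo-C++; the
constants and the FunctionProfile fields are made up, and a real heuristic
would be tuned from measurement rather than guessed):

#include <chrono>
#include <cstddef>

// Invented profiling data; a real engine tracks much more than this.
struct FunctionProfile {
    unsigned executionCount { 0 };
    std::size_t bytecodeSize { 0 };
};

// Invented cost model: a per-bytecode compile cost and a fixed cost for
// waking the concurrent compiler thread.
static constexpr std::chrono::microseconds costPerBytecode { 1 };
static constexpr std::chrono::microseconds threadWakeupCost { 100 };
static constexpr unsigned hotThreshold { 1000 };

enum class CompileDecision { NotYet, OnMainThread, OnConcurrentThread };

CompileDecision shouldOptimize(const FunctionProfile& profile)
{
    // Wait for enough profiling data to make the optimized compile
    // worthwhile; this is also the point at which a speculatively
    // scheduled concurrent compile would be told to wake up.
    if (profile.executionCount < hotThreshold)
        return CompileDecision::NotYet;

    // If compiling is cheaper than talking to another core, do it on the
    // main thread right away rather than paying the communication latency.
    auto estimatedCompileTime =
        costPerBytecode * static_cast<long long>(profile.bytecodeSize);
    if (estimatedCompileTime < threadWakeupCost)
        return CompileDecision::OnMainThread;

    return CompileDecision::OnConcurrentThread;
}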

Anyway, you're absolutely right that this is an area that should be explored.

-F


> 
> -- 
> Ian Hickson               U+1047E                )\._.,--....,'``.    fL
> http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
> Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'

_______________________________________________
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo/webkit-dev
