Yes, as in

main()
event_loop()
draw_event()
calculate_pixel()

If draw_event() makes a synchronous call, then main() and event_loop() must be
interpreted (as must draw_event() itself), so that we can pause and resume them
all later. But calculate_pixel(), which draw_event() calls, has no chance of
doing a sync call itself or of calling something that might, so it can run at
full normal speed.
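To make the split concrete, here is a minimal sketch in plain JS, using generators as a stand-in for the interpreted layer. The function names mirror the stack above; the arithmetic in calculate_pixel and the sleep-like pause are invented purely for illustration, not emterpreter behavior:

```javascript
// calculate_pixel() never does a sync call, so it stays a plain function
// and runs at full native-JS speed.
function calculate_pixel(x, y) {
  return (x * 31 + y * 17) & 0xff;
}

// draw_event() contains the sync call (simulated here by yielding a
// promise), so it must be pausable.
function* draw_event() {
  const p = calculate_pixel(3, 4);   // full-speed call, no pausing needed
  yield Promise.resolve();           // the "synchronous" pause point
  return p;
}

// event_loop() and main() call draw_event(), so they must be pausable too.
function* event_loop() {
  return yield* draw_event();
}
function* main() {
  return yield* event_loop();
}

// A tiny driver that suspends at each yield and resumes afterwards; the
// paused stack is preserved inside the generator objects themselves.
async function run(gen) {
  let step = gen.next();
  while (!step.done) {
    await step.value;                // suspended here until the event completes
    step = gen.next();
  }
  return step.value;
}

run(main()).then((v) => console.log("pixel:", v)); // prints "pixel: 161"
```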

- Alon



On Tue, Dec 9, 2014 at 7:12 PM, Soeren Balko <[email protected]> wrote:

> Above means "before", no?
>
> On Wednesday, December 10, 2014 11:46:08 AM UTC+10, Alon Zakai wrote:
>>
>> I don't think a rollback like that is feasible. However, the reason
>> asyncify causes huge code size increases does not apply to the
>> emterpreter. Asyncify needs to add a lot of code to handle all the
>> possible paths back to a sync call. In the emterpreter, on the other hand,
>> the code is literally a stream of binary data - we can jump right back in
>> wherever we want. There is no code size increase; in fact, the bytecode
>> would be smaller than the equivalent JS.
>>
>> The interpreter can split up a codebase. Everything below a sync call on
>> the stack would need to be interpreted, while everything above could
>> either be interpreted or run at full speed. Hopefully a codebase would
>> have its event loop management below and, above it, performance-sensitive
>> things like pure computation, which are never done in an async manner and
>> can run at full speed. In that case, perf should be very good.
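The "jump right back in" point can be illustrated with a toy interpreter. The opcodes and encoding below are invented, not real emterpreter bytecode; the idea is that the whole execution state is just a program counter plus a value stack, so pausing is returning and resuming is calling the loop again with the saved state:

```javascript
// Invented opcodes for a toy stack machine.
const PUSH = 0, ADD = 1, SYNC = 2, PRINT = 3;

function interpret(code, state) {
  while (state.pc < code.length) {
    switch (code[state.pc++]) {
      case PUSH: state.stack.push(code[state.pc++]); break;
      case ADD: { const b = state.stack.pop();
                  state.stack.push(state.stack.pop() + b); break; }
      case SYNC: return "paused";   // a sync call: just stop; state holds everything
      case PRINT: console.log(state.stack.pop()); break;
    }
  }
  return "done";
}

const code = [PUSH, 2, PUSH, 3, SYNC, ADD, PRINT];
const state = { pc: 0, stack: [] };
console.log(interpret(code, state)); // pauses at SYNC, prints "paused"
// ... later, when the async event completes:
console.log(interpret(code, state)); // resumes in place, prints 5 then "done"
```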
>>
>> - Alon
>>
>>
>> On Tue, Dec 9, 2014 at 4:15 PM, Soeren Balko <[email protected]> wrote:
>>
>>> Alon,
>>>
>>> In the emterpreter approach you propose, would the entire stack need to
>>> be run by emterpreter? Or could the code be split somehow such that a major
>>> (performance-critical) chunk still runs "natively"? What I mean is this:
>>> what IMHO breaks the current asyncify approach for large, convoluted code
>>> bases is the fact that all code paths to an asynchronous function need to
>>> be split, causing the massive code bloat I saw. In my case, I've been
>>> trying to implement an asynchronous file i/o capability where every so
>>> often, we need to asynchronously fetch a chunk of data and mimic a blocking
>>> behavior to do so. When we asynchronously satisfy that file read, we
>>> speculatively cache data for subsequent reads, aiming to keep the number of
>>> asynchronous "break-outs" low. In effect, we rarely ever hit the
>>> asynchronous code path and can mostly satisfy file operations from the
>>> cached data. Adding to this, a lot of different code paths lead to the
>>> asynchronous function, contributing to the aforementioned code bloat.
>>>
>>> What I wonder is this: could the emterpreter be used in some sort of
>>> speculative execution where we (1) first try to run the code natively
>>> (i.e., on the browser's JS runtime), and only if we (2) hit an
>>> asynchronous function do we "roll back" the state and (3) re-run that
>>> code path using the emterpreter, which may then mimic the sync-async
>>> bridge? I am not entirely sure if (and how) the rollback behavior could
>>> be accomplished. Perhaps by immediately exiting the module (i.e.,
>>> throwing a JS exception) when we hit an asynchronous function and then
>>> invoking the emterpreter with the (preserved) global state (HEAP8 and
>>> friends) from there. Again, I don't know how hard that would be, as it
>>> would essentially require native JS execution and the emterpreter to
>>> share the same module (i.e., global state). Generators would definitely
>>> be the much cleaner option...
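A rough sketch of the speculative pattern described above. All the names here (WouldBlock, readNative, readInterpreted) are invented for illustration, and it sidesteps the hard part of rolling back partial heap writes by having the native path throw before mutating anything:

```javascript
class WouldBlock extends Error {}

const cache = { 0: "header" };     // speculatively cached file data

// Fast path: runs natively; throws (before any side effects) if it
// would need to block on an async fetch.
function readNative(offset) {
  if (offset in cache) return cache[offset];
  throw new WouldBlock();
}

// Slow path stand-in for the interpreted re-run, which is allowed to
// pause; here the "fetch" is simulated with an async function.
async function readInterpreted(offset) {
  cache[offset] = "chunk@" + offset; // pretend we fetched it asynchronously
  return cache[offset];
}

async function read(offset) {
  try {
    return readNative(offset);       // (1) try native first
  } catch (e) {
    if (!(e instanceof WouldBlock)) throw e;
    return readInterpreted(offset);  // (2)+(3) fall back to the pausable path
  }
}

read(0).then((v) => console.log(v));   // cache hit: stays native
read(64).then((v) => console.log(v));  // miss: re-run on the interpreted path
```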
>>>
>>> On Tuesday, December 9, 2014 10:13:26 AM UTC+10, Alon Zakai wrote:
>>>>
>>>> Asyncify does seem limited in use on large codebases, due to the issues
>>>> you found. On small stuff it is good. So the asyncify experiment succeeded
>>>> at least for small things, and I agree we should continue to look for
>>>> options here.
>>>>
>>>> Generators could be done, if anyone is interested to try that out. It
>>>> wouldn't work with asm.js currently, but otherwise seems very feasible.
>>>>
>>>> The emterpreter is also designed to help with this. It compiles
>>>> emscripten output to a bytecode. It runs significantly slower, but does
>>>> allow in principle for sync/async stuff. It also reduces code size as a
>>>> side effect. Some more work would be needed on the emterpreter to get to
>>>> that point, but it can already compile and execute everything we have in
>>>> the test suite - just the sync support would need to be added.
>>>>
>>>> I encourage people to experiment with either or both of those options.
>>>>
>>>> - Alon
>>>>
>>>>
>>>> On Sat, Dec 6, 2014 at 7:42 PM, Soeren Balko <[email protected]> wrote:
>>>>
>>>>> Folks,
>>>>>
>>>>> I would like to share some (not necessarily representative)
>>>>> experiences with the experimental "asyncify" feature. First of all: I
>>>>> really do like the ingenious idea behind the current implementation.
>>>>> Introducing a sync-async bridge is dearly needed for many inherently
>>>>> asynchronous features such as file i/o. Nonetheless, after experimenting
>>>>> with asyncify for a while, my enthusiasm has somewhat cooled down, for
>>>>> two major reasons:
>>>>>
>>>>> For one, the compile time for the (admittedly very large and complex)
>>>>> legacy code base I used has multiplied, and compilation requires a lot
>>>>> more memory than it used to. I actually had to provision some hefty 12 GB
>>>>> to the Ubuntu VM that I run emscripten in, in order to let it complete
>>>>> successfully. And secondly, the resulting code size is absurdly large
>>>>> (despite using -Oz), making asyncify impractical for me. In fact, I
>>>>> resorted to using the "main loop" functionality, despite the needed code
>>>>> refactoring.
>>>>>
>>>>> I saw that using JavaScript generators was contemplated, but
>>>>> considered too slow. Has the recent Mozilla announcement changed anything
>>>>> in this assessment? AFAICS, Microsoft is also working on generator
>>>>> support in IE, and I would expect Google (and eventually also Apple) to
>>>>> follow suit. Obviously, the "yield" functionality of generators would not
>>>>> suffer from the code bloat and compile time increases of the current
>>>>> asyncify approach of forcefully unwinding the call stack and
>>>>> reconstructing it upon resume. Any thoughts?
>>>>>
>>>>> Soeren
>>>>>
>>>>>
>>>>>  --
>>>>> You received this message because you are subscribed to the Google
>>>>> Groups "emscripten-discuss" group.
>>>>> To unsubscribe from this group and stop receiving emails from it, send
>>>>> an email to [email protected].
>>>>> For more options, visit https://groups.google.com/d/optout.
>>>>>
>>>>
>>
>
