Re: [webkit-dev] Request for position on Atomics.waitAsync

2020-08-28 Thread Filip Pizlo
I think it’s a good idea. It seems to be a decent fit for how WK handles this 
already internally. 

-Filip

> On Aug 28, 2020, at 3:52 AM, Marja Hölttä  wrote:
> 
> 
> Hi Webkit-Dev,
> 
> I would like to get an official position from WebKit on the Atomics.waitAsync 
> JavaScript feature (TC39 Stage 3). Chrome is looking into shipping it in the 
> near future.
> 
> - Specification: https://tc39.es/proposal-atomics-wait-async/
> - ChromeStatus: https://chromestatus.com/feature/6243382101803008
> 
> Regards,
> 
> Marja Hölttä / V8
> 
> -- 
> 
> Google Germany GmbH
> Erika-Mann-Straße 33
> 80636 München
> 
> Geschäftsführer: Paul Manicle, Halimah DeLaine Prado
> Registergericht und -nummer: Hamburg, HRB 86891
> Sitz der Gesellschaft: Hamburg
> 
> This e-mail is confidential. If you received this communication by mistake, 
> please don't forward it to anyone else, please erase all copies and 
> attachments, and please let me know that it has gone to the wrong person.
> 
> ___
> webkit-dev mailing list
> webkit-dev@lists.webkit.org
> https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Accidental binary bloating via C/C++ class/struct + Objective-C

2020-01-13 Thread Filip Pizlo
Wow, that sounds like an awesome find!

-Filip

> On Jan 13, 2020, at 5:53 PM, Yusuke Suzuki  wrote:
> 
> Hello WebKittens,
> 
> I recently stripped 830KB of binary size from WebKit just by using a workaround.
> This email describes what happened, to prevent it from happening again.
> 
> ## Problem
> 
> When a C/C++ struct/class is included in field types or method types in 
> Objective-C, the Objective-C compiler emits a type-encoding string which 
> gathers type information one level deep for the C/C++ struct/class if
> 
> 1. The type is a pointer to a C/C++ struct/class
> 2. The type is a value of a C/C++ struct/class
> 3. The type is a reference to a C/C++ struct/class
> 
> However, our WebKit C/C++ structs/classes are typically very complex types 
> using a lot of templates. Unfortunately, the Objective-C compiler includes the 
> expanded template definition as a string and adds it as a type-encoding string 
> to the release binary!
> 
> For example, https://trac.webkit.org/changeset/254152/webkit is removing 
> JSC::VM& from Objective-C signature, and it reduces 200KB binary size!
> Another example is https://trac.webkit.org/changeset/254241/webkit, which 
> removes a lot of WebCore::WebView* etc. from Objective-C method signature, 
> and reduces 630KB binary.
> 
> ## Solution for now
> 
> We can purge the type-encoding string if we use the Objective-C NS_DIRECT 
> feature (which gives an Objective-C method a C function calling convention, 
> removing the metadata).
> However, this does not work universally: with NS_DIRECT, Objective-C method 
> overriding does not work. This means we need to be extra careful when using it.
> 
> So, as a simple but effective workaround, in the above patches, we introduced 
> NakedRef<T> / NakedPtr<T>. This is basically a raw reference / raw pointer to 
> T, with a wrapper class.
> This leverages the Objective-C compiler’s “one-level deep type information 
> collection” behavior. Since NakedRef<T> / NakedPtr<T> introduces a 
> one-level-deep field, the Objective-C compiler does not collect the type 
> information of T if NakedPtr<T> appears in fields / signatures, while it 
> does collect that information when T* is used.
> 
> So, if you are using a T& / T* to a C/C++ struct/class in Objective-C, 
> convert it to NakedRef<T> / NakedPtr<T>. You can save a lot of binary size 
> immediately without causing any performance problem.
> 
> ## Future work
> 
> We would like to avoid including such types accidentally in Objective-C. We 
> should introduce a build-time hook script which detects such cases.
> I uploaded a PoC script to https://bugs.webkit.org/show_bug.cgi?id=205968, 
> and I’m personally planning to make such a hook part of the build 
> process.
> 
> 
> -Yusuke


Re: [webkit-dev] Tadeu and Robin are now WebKit reviewers

2019-06-10 Thread Filip Pizlo
Congrats!

-Filip

> On Jun 10, 2019, at 6:10 PM, Yusuke Suzuki  wrote:
> 
> Congrats! :D
> 
>> On Jun 10, 2019, at 3:49 PM, Saam Barati  wrote:
>> 
>> Hi folks,
>> 
>> Tadeu and Robin are both now WebKit reviewers. Join me in congratulating 
>> them. Please ask them to review your code! They both have a focus in JSC.
>> 
>> - Saam


Re: [webkit-dev] Concurrent JS and 32bit platforms

2019-04-11 Thread Filip Pizlo


> On Apr 11, 2019, at 7:03 AM, Xan  wrote:
> 
> Hi all,
> 
> as part of our work on improving 32bit support in JSC we at Igalia are 
> planning to have a look at enabling concurrent js

We don’t support concurrent JavaScript. 

We do have concurrent JITs and concurrent GC. 

> for these platforms. Before we dive in, though, we thought it would be better 
> to ask some preliminary questions:
> 
> - Was this feature only implemented for 64bit because that was the focus of 
> the implementors moving forward? Or is there any fundamental difficulty in 
> the current architecture? In particular we have seen some comments about 
> atomic updates of JSValues that suggest it could be hard (or impossible) to 
> get this done on 32bit with the current approach.

Can’t do atomic access to JSValues on 32-bit. That’s a showstopper. 

> 
> - Assuming this is doable right now, we'll get on with it. Assuming it's not: 
> would you be open to making the necessary changes to JSC to make concurrent 
> js an option on 32bits?

No. 

-Filip

> 
> Thanks,
> 
> Xan


Re: [webkit-dev] Style guideline on initializing non-POD types via member initialization

2019-03-14 Thread Filip Pizlo
I like to draw this distinction: is the initializer a constant?

It’s not a constant if it relies on arguments to the constructor. “This” is an 
argument to the constructor. 

It’s also not a constant if it involves reading the heap. 

So, like you, I would want to see this code done in the constructor. But I’m 
not sure that my general rule is the same as everyone’s. 

-Filip

> On Mar 14, 2019, at 12:59 PM, Simon Fraser  wrote:
> 
> I've seen some code recently that initializes non-POD members via 
> initializers. For example, SVGAElement has:
> 
>AttributeOwnerProxy m_attributeOwnerProxy { *this };
> 
> I find this a little disorientating, and would normally expect to see this in 
> the constructor as m_attributeOwnerProxy(*this), as it makes it easier to 
> find places to set breakpoints, and the ordering of initialization is easier 
> to see.
> 
> Are people OK with this pattern, or should we discourage it via the style 
> guidelines (and style checker)?
> 
> Simon
> 


Re: [webkit-dev] Intent to land PlayStation port

2018-10-29 Thread Filip Pizlo

> On Oct 29, 2018, at 2:32 PM, don.olmst...@sony.com wrote:
> 
> The PlayStation is x64. No JIT in the initial implementation.

Music to my ears!

-Filip


>  
> From: fpi...@apple.com  
> Sent: Monday, October 29, 2018 2:29 PM
> To: Olmstead, Don 
> Cc: WebKit Development 
> Subject: Re: [webkit-dev] Intent to land PlayStation port
>  
>  
> On Oct 29, 2018, at 2:19 PM, don.olmst...@sony.com 
>  wrote:
>  
> Hello WebKittens,
>  
> We've been working hard on our WinCairo port of WebKit and would like to 
> begin landing our PlayStation port to trunk. We have opened a meta bug at 
> https://bugs.webkit.org/show_bug.cgi?id=191038 for the work.
>  
> We would like to land patches individually for each component of WebKit 
> starting with JavaScriptCore and making our way through to WebKit and 
> LayoutTest support for the platform. Once we start landing we will be working 
> on BuildBot support for PlayStation internally with the intent of creating 
> public BuildBots and then look into EWS support. Until that time we will not 
> have an expectation that anyone's patches should keep the PlayStation green 
> and will fix any build errors ourselves. This would be a similar situation 
> that WinCairo had before we started adding build and test infrastructure.
>  
> We do not plan to abandon WinCairo after we land a PlayStation port. We plan 
> on continuing development on WinCairo to support our developers who use our 
> PlayStation WebKit port. For example a remote inspector implementation is 
> forthcoming from us and this would be used to communicate with the device.
>  
> I should have a patch for JSC this week and am able to address any concerns 
> before we land that patch.
>  
> 32-bit or 64-bit?
>  
> JIT or no JIT?
>  
> -Filip



Re: [webkit-dev] Intent to land PlayStation port

2018-10-29 Thread Filip Pizlo

> On Oct 29, 2018, at 2:19 PM, don.olmst...@sony.com wrote:
> 
> Hello WebKittens,
>  
> We've been working hard on our WinCairo port of WebKit and would like to 
> begin landing our PlayStation port to trunk. We have opened a meta bug at 
> https://bugs.webkit.org/show_bug.cgi?id=191038 for the work.
>  
> We would like to land patches individually for each component of WebKit 
> starting with JavaScriptCore and making our way through to WebKit and 
> LayoutTest support for the platform. Once we start landing we will be working 
> on BuildBot support for PlayStation internally with the intent of creating 
> public BuildBots and then look into EWS support. Until that time we will not 
> have an expectation that anyone's patches should keep the PlayStation green 
> and will fix any build errors ourselves. This would be a similar situation 
> that WinCairo had before we started adding build and test infrastructure.
>  
> We do not plan to abandon WinCairo after we land a PlayStation port. We plan 
> on continuing development on WinCairo to support our developers who use our 
> PlayStation WebKit port. For example a remote inspector implementation is 
> forthcoming from us and this would be used to communicate with the device.
>  
> I should have a patch for JSC this week and am able to address any concerns 
> before we land that patch.

32-bit or 64-bit?

JIT or no JIT?

-Filip




Re: [webkit-dev] [jsc-dev] Proposal: Using LLInt Asm in major architectures even if JIT is disabled

2018-09-21 Thread Filip Pizlo


> On Sep 21, 2018, at 9:44 AM, Guillaume Emont  wrote:
> 
> Quoting Yusuke Suzuki (2018-09-21 10:10:59)
>> Yeah, I'm not planning to enable LLInt ASM interpreter on 32bit architectures
>> since no buildbot exists for this configuration.
> 
> I'm confused. Do you mean you don't want to enable LLint instead of
> CLoop, for the case when JIT is disabled on 32-bit architectures?

If you guys want to take responsibility for 32-bit then you can enable whatever 
LLInt config you want on 32-bit.

> FTR, the configuration LLInt(with offlineasm)+jit+dfg is tested in
> 32-bit testbots for at least mips, armv7 and x86.
> 
> 
>> And we should make 32bit architectures JSVALUE64, so LLInt JSVALUE32_64 
>> should
>> be removed in the future.
> 
> See what Filip and Michael were saying. We believe that we need
> JSVALUE32_64, and we are willing to maintain it, as the performance gap
> between LLInt or CLoop and JIT+DFG on 32-bit architectures is
> significant.

I’m saying we should remove JSVALUE32_64. That is my preference. I’m letting it 
stay in tree so long as someone maintains it, but honestly I’d prefer it if it 
wasn’t maintained and if we could let it die. 

I’d like to see the majority of JSC development move to 64-bit. I’d prefer if 
new features or enhancements were 64-bit only, since that means that it will 
take less time to develop and test them. I think that folks doing JSC 
development should be encouraged to land changes only for 64-bit since that’s 
our focus as a project.

-Filip

> 
> Guillaume
> 
>> 
>> On Fri, Sep 21, 2018 at 2:33 AM Michael Catanzaro 
>> wrote:
>> 
>>>On Thu, Sep 20, 2018 at 12:02 PM, Filip Pizlo  wrote:
>>> - Enable cloop/JSVALUE64 to work on 32-bit.  I don’t think it does
>>> right now, but that’s probably trivial to fix.
>>> - Switch Darwin ports to that configuration for 32-bit.
>>> - When changes land to support new features, make it mandatory to
>>> support JSVALUE64 and optional to support JSVALUE32_64.  Such changes
>>> should include whoever volunteers to maintain JSVALUE32_64 in CC.
>>> 
>>> If you guys consider JSVALUE32_64 to be critical, then you can go
>>> ahead and maintain it.  We’ll let JSVALUE32_64 stay in the tree so
>>> long as someone is maintaining it.
>> 
>>Yes that's fine with us. I think that's the previous agreement, anyway.
>>:)
>> 
>>Michael
>> 
>> 
>> 
>> 
>> --
>> Best regards,
>> Yusuke Suzuki


Re: [webkit-dev] [jsc-dev] Proposal: Using LLInt Asm in major architectures even if JIT is disabled

2018-09-20 Thread Filip Pizlo
I think that we should move to removing JSVALUE32_64, since it doesn’t get 
significant testing or maintenance anymore. I’d love it if 32-bit targets used 
the cloop with JSVALUE64, so that we can rip out the 32-bit jit and offlineasm 
backends, and remove the 32-bit representation code from the runtime. 

I’m fine with using asm llint on 64-bit platforms, but using it on 32-bit 
platforms seems like it’ll be short lived. 

-Filip

> On Sep 20, 2018, at 12:00 AM, Yusuke Suzuki  
> wrote:
> 
> I've just set up MacBook Pro to measure the effect on macOS.
> 
> The results are the followings.
> 
> VMs tested:
> "baseline" at /Users/yusukesuzuki/dev/WebKit/WebKitBuild/nojit/Release/jsc
> "patched" at 
> /Users/yusukesuzuki/dev/WebKit/WebKitBuild/nojit-llint/Release/jsc
> 
> Collected 2 samples per benchmark/VM, with 2 VM invocations per benchmark. 
> Emitted a call to gc() between sample
> measurements. Used 1 benchmark iteration per VM invocation for warm-up. Used 
> the jsc-specific preciseTime()
> function to get microsecond-level timing. Reporting benchmark execution times 
> with 95% confidence intervals in
> milliseconds.
> 
>                                       baseline              patched
> 
> ai-astar                          1738.056+-49.666 ^    1568.904+-44.535 ^    definitely 1.1078x faster
> audio-beat-detection              1127.677+-15.749 ^     972.323+-23.908 ^    definitely 1.1598x faster
> audio-dft                          942.952+-107.209      919.933+-310.247     might be 1.0250x faster
> audio-fft                          985.489+-47.414 ^     796.955+-25.476 ^    definitely 1.2366x faster
> audio-oscillator                   967.891+-34.854 ^     801.778+-18.226 ^    definitely 1.2072x faster
> imaging-darkroom                  1265.340+-114.464 ^   1099.233+-2.372 ^     definitely 1.1511x faster
> imaging-desaturate                1737.826+-40.791 ?    1749.010+-167.969 ?
> imaging-gaussian-blur             7846.369+-52.165 ^    6392.379+-1025.168 ^  definitely 1.2275x faster
> json-parse-financial                33.141+-0.473         33.054+-1.058
> json-stringify-tinderbox            20.803+-0.901         20.664+-0.717
> stanford-crypto-aes                401.589+-39.750       376.622+-12.111      might be 1.0663x faster
> stanford-crypto-ccm                245.629+-45.322       228.013+-8.976       might be 1.0773x faster
> stanford-crypto-pbkdf2             941.178+-28.744       864.462+-60.083      might be 1.0887x faster
> stanford-crypto-sha256-iterative   299.988+-47.729       270.849+-32.356      might be 1.1076x faster
> 
> <geometric mean>                  1325.281+-2.613 ^     1149.584+-75.875 ^    definitely 1.1528x faster
> 
> Interestingly, the improvement is not so large. On my Linux box, it was 2x, but 
> on macOS it is 15%.
> But I think it is very nice if we can get a 15% boost without any drawbacks.
> 
>> On Thu, Sep 20, 2018 at 3:08 PM Saam Barati  wrote:
>> Interesting! I must have not run this experiment correctly when I did it.
>> 
>> - Saam
>> 
>>> On Sep 19, 2018, at 7:31 PM, Yusuke Suzuki  
>>> wrote:
>>> 
 On Thu, Sep 20, 2018 at 12:54 AM Saam Barati  wrote:
 To elaborate: I ran this same experiment before. And I forgot to turn off 
 the RegExp JIT and got results similar to what you got. Once I turned off 
 the RegExp JIT, I saw no perf difference.
>>> 
>>> Yeah, I disabled JIT and RegExpJIT explicitly by using
>>> 
>>> export JSC_useJIT=false
>>> export JSC_useRegExpJIT=false
>>> 
>>> and I checked no JIT code is generated by running dumpDisassembly. And I 
>>> also put `CRASH()` in ExecutableAllocator::singleton() to ensure no 
>>> executable memory is allocated.
>>> The result is the same. I think `useJIT=false` disables RegExp JIT too.
>>> 
>>>                       baseline              patched
>>> 
>>> ai-astar              3499.046+-14.772 ^    1897.624+-234.517 ^   definitely 1.8439x faster
>>> audio-beat-detection  1803.466+-491.965      970.636+-428.051     might be 1.8580x faster
>>> audio-dft             1756.985+-68.710 ^     954.312+-528.406 ^   definitely 1.8411x faster
>>> audio-fft             1637.969+-458.129      850.083+-449.228     might be 1.9268x faster
>>> audio-oscillator      1866.006+-569.581 ^    967.194+-82.521 ^    definitely 1.9293x faster
>>> imaging-darkroom      2156.526+-591.042 ^   1231.318+-187.297 ^   definitely 1.7514x faster
>>> imaging-desaturate    3059.335+-284.740 ^

Re: [webkit-dev] Offline Assembler build step always computes hashes even when nothing changes

2018-09-17 Thread Filip Pizlo
Sorry, I should have asked: does it even rebuild when you change nothing?

That llint step really does depend on most headers in WTF and JSC, so if you 
change any of them then I would expect a rebuild of that file. It may be that 
the right solution is to make that step faster and to make it possible to run 
it in parallel to other steps. 

-Filip

> On Sep 17, 2018, at 10:01 AM, Darin Adler  wrote:
> 
> I don’t know. 
> 
> Sent from my iPhone
> 
>> On Sep 17, 2018, at 7:49 AM, Filip Pizlo  wrote:
>> 
>> 
>> 
>>> On Sep 16, 2018, at 8:48 PM, Darin Adler  wrote:
>>> 
>>>> On Sep 16, 2018, at 5:59 PM, Filip Pizlo  wrote:
>>>> 
>>>> Which offline assembler build step are you referring to?
>>> 
>>> The one that is the “Offline Assembler” target in Xcode, which runs this 
>>> command:
>>> 
>>> ruby JavaScriptCore/offlineasm/asm.rb 
>>> JavaScriptCore/llint/LowLevelInterpreter.asm 
>>> "${BUILT_PRODUCTS_DIR}/JSCLLIntOffsetsExtractor" LLIntAssembly.h
>>> 
>>> For a “nothing rebuild” of all of WebKit and all of Safari for iOS on my 
>>> iMac, it takes about 10 seconds out of a 30 second total “build” time.
>>> 
>>> Looking more carefully at the build log now, it seems that recompiling 
>>> LLIntOffsetExtractor.cpp is also taking multiple seconds. Not executing 
>>> generate_offset_extractor.rb, but compiling the output.
>> 
>> Does every build that you do rebuild LLIntOffsetExtractor.cpp?  Including a 
>> clean build?
>> 
>> -Filip
>> 
>>> 
>>> — Darin


Re: [webkit-dev] Offline Assembler build step always computes hashes even when nothing changes

2018-09-17 Thread Filip Pizlo


> On Sep 16, 2018, at 8:48 PM, Darin Adler  wrote:
> 
>> On Sep 16, 2018, at 5:59 PM, Filip Pizlo  wrote:
>> 
>> Which offline assembler build step are you referring to?
> 
> The one that is the “Offline Assembler” target in Xcode, which runs this 
> command:
> 
> ruby JavaScriptCore/offlineasm/asm.rb 
> JavaScriptCore/llint/LowLevelInterpreter.asm 
> "${BUILT_PRODUCTS_DIR}/JSCLLIntOffsetsExtractor" LLIntAssembly.h
> 
> For a “nothing rebuild” of all of WebKit and all of Safari for iOS on my 
> iMac, it takes about 10 seconds out of a 30 second total “build” time.
> 
> Looking more carefully at the build log now, it seems that recompiling 
> LLIntOffsetExtractor.cpp is also taking multiple seconds. Not executing 
> generate_offset_extractor.rb, but compiling the output.

Does every build that you do rebuild LLIntOffsetExtractor.cpp?  Including a 
clean build?

-Filip

> 
> — Darin


Re: [webkit-dev] Offline Assembler build step always computes hashes even when nothing changes

2018-09-16 Thread Filip Pizlo


> On Sep 16, 2018, at 4:03 PM, Darin Adler  wrote:
> 
> I noticed that the “Offline Assembler” build step was taking between 5 and 30 
> seconds every time I build. Really stands out in incremental builds. I 
> realized that this step does not do any dependency analysis. Every time, it 
> builds a hash of the input to see if it needs to recompute the assembly.

Yup, that’s quite intentional. 

> 
> That’s probably not the best pattern; normally we like to use file 
> modification dates to avoid doing any work when files haven’t changed.

I don’t totally remember the details, but it’s not that simple. I vaguely 
recall a previous attempt to fix this that had to be reverted because it 
resulted in too many broken builds.

Which offline assembler build step are you referring to?  There is more than 
one. I think it’s the step that generates the LLInt using the offset extractor 
binary as input.

Note that one problem is that this step is slower than it could be. We 
sometimes regress it a lot and then make it faster again. Maybe we regressed it 
recently. 

> 
> Is there someone who can help me fix this so we get faster incremental builds?

Sounds like Michael volunteered.

I would favor a sure-to-be-sound fix of just making that phase run faster, 
possibly by reducing the number of options that the llint uses. That might get 
you the outcome you want (faster builds) without the risk of bad builds. 

-Filip

> 
> — Darin


Re: [webkit-dev] node-jsc: A node.js port to the JavaScriptCore engine and iOS

2018-09-16 Thread Filip Pizlo


> On Sep 16, 2018, at 2:09 AM, Koby Boyango  wrote:
> 
> Thanks for taking the time to look into the project :)
> 
> Filip - I would love to. Should I create one bug for all of the patches, or a 
> bug for each patch? 
> Also, there is an existing bug that I've reported a while ago, but worked 
> around it for now: https://bugs.webkit.org/show_bug.cgi?id=184232. It isn't 
> relevant in newer versions of node (it came from node's Buffer constructor, 
> which have changed since), but I'll still be happy to send a patch if needed.

I think that you want a parent bug that’s just an umbrella and then have bugs 
that block it for each patch.

-Filip

> 
> Yusuke - It's interesting to compare, especially on an iOS device. I will 
> also try to do some measurements :) Do you have a benchmark you recommend?
> But assuming it is worth it, enabling LLInt ASM without the JIT would be 
> great as it would probably reduce the binary size and compilation time by 
> quite a bit.  
> NativeScript is also using it without the JIT (and they link to an article 
> containing some benchmarks), so they would profit from this too.
> https://github.com/NativeScript/ios-runtime/commit/1528ed50f85998147b190c22a390b5eca36c5acb
> 
> Koby
> 
>> On Sat, Sep 15, 2018 at 2:51 AM Yusuke Suzuki  
>> wrote:
>> Really great!
>> 
>> node-jsc sounds very exciting to me. From the users' view, t is nice if we 
>> run app constructed in node.js manner in iOS devices.
>> In addition, from the JSC developers' view, it is also awesome. It allows us 
>> to easily run node.js libraries / benchmarks / tests on JSC, which is really 
>> great since,
>> 
>> 1. We can run tests designed for node.js, it makes our JSC implementation 
>> more solid.
>> 2. We can run benchmarks designed for node.js including JS libraries. JS 
>> libraries distributed in npm are more and more used in both node.js and 
>> browser world.
>> If we can have a way to run benchmarks in popular libraries on JSC easily, 
>> that offers great opportunities to optimize JSC on them.
>> 
>>> On Sat, Sep 15, 2018 at 5:20 AM Filip Pizlo  wrote:
>>> Wow!  That’s pretty cool!
>>> 
>>> I think that it would be great for this to be upstreamed. Can you create a 
>>> bug on bugs.webkit.org and post your patches for review?
>>> 
>>> -Filip
>>> 
>>>> On Sep 13, 2018, at 4:02 PM, Koby Boyango  wrote:
>>>> 
>>>> Hi,
>>>> 
>>>> I'm Koby Boyango, a senior researcher and developer at mce, and I've 
>>>> created node-jsc, an experimental port of node.js to the JavaScriptCore 
>>>> engine and iOS specifically.
>>>> 
>>>> node-jsc's core component, "jscshim" (deps/jscshim), implements (parts of) 
>>>> v8 API on top of JavaScriptCore. It contains a stripped down version of 
>>>> WebKit's source code (mainly JSC and WTF). To build WebKit, I'm using 
>>>> CMake to build the JSCOnly port, with JSC\WTF compiled as static 
>>>> libraries. For iOS I'm using my own build script with a custom toolchain 
>>>> file.
>>>> 
>> I'm really happy to hear that your node-jsc is using JSCOnly ports :)
>>>> The project also includes node-native-script, NativeScript's iOS runtime 
>>>> refactored as node-jsc native module, allowing access to native iOS APIs 
>>>> directly from javascript.
>>>> 
>>>> So first of all, I wanted to share this project with the WebKit developer 
>>>> community.
>>>> It's my first time working with WebKit, and node-jsc has been a great 
>>>> opportunity to experiment with it.
>>>> 
>>>> Second, as I needed to make some minor changes\additions, I'm using my own 
>>>> fork. I would love to discuss some of the changes I've made, and offer 
>>>> some patches if you'll find them useful. 
>>>> "WebKit Fork and Compilation" describes WebKit's usage in node-jsc and the 
>>>> major changes\additions I've made in my fork (node-jsc's README and 
>>>> jschim's documentation contains some more information).
>>>> 
>> Great, it is really nice if you have a patch for upstream :)
>> Looking through the documents, I have one question on LLInt v.s. CLoop.
>> 
>> https://github.com/mceSystems/node-jsc/blob/master/deps/jscshim/docs/webkit_fork_and_compilation.md#webkit-port-and-compilation
>> > Use the optimized assembly version of LLInt (JSC's interpreter), not 
>> > cloop. This requires enabling JIT support, although we won't be using 

Re: [webkit-dev] node-jsc: A node.js port to the JavaScriptCore engine and iOS

2018-09-14 Thread Filip Pizlo
Wow!  That’s pretty cool!

I think that it would be great for this to be upstreamed. Can you create a bug 
on bugs.webkit.org  and post your patches for review?

-Filip

On Sep 13, 2018, at 4:02 PM, Koby Boyango <koby.b@mce.systems> wrote:

> Hi,
> 
> I'm Koby Boyango, a senior researcher and developer at mce, and I've created 
> node-jsc, an experimental port of node.js to the JavaScriptCore engine and 
> iOS specifically.
> 
> node-jsc's core component, "jscshim" (deps/jscshim), implements (parts of) 
> the v8 API on top of JavaScriptCore. It contains a stripped-down version of 
> WebKit's source code (mainly JSC and WTF). To build WebKit, I'm using CMake 
> to build the JSCOnly port, with JSC\WTF compiled as static libraries. For 
> iOS I'm using my own build script with a custom toolchain file.
> 
> The project also includes node-native-script, NativeScript's iOS runtime 
> refactored as a node-jsc native module, allowing access to native iOS APIs 
> directly from javascript.
> 
> So first of all, I wanted to share this project with the WebKit developer 
> community.
> It's my first time working with WebKit, and node-jsc has been a great 
> opportunity to experiment with it.
> 
> Second, as I needed to make some minor changes\additions, I'm using my own 
> fork. I would love to discuss some of the changes I've made, and offer some 
> patches if you'll find them useful. "WebKit Fork and Compilation" describes 
> WebKit's usage in node-jsc and the major changes\additions I've made in my 
> fork (node-jsc's README and jscshim's documentation contain some more 
> information).
> 
> Besides that, I will appreciate any opinions\ideas\insights\suggestions :) 
> 
> Koby
> 


Re: [webkit-dev] Can we drop supporting mixed Wasm::MemoryMode in one process?

2018-08-28 Thread Filip Pizlo

> On Aug 28, 2018, at 12:09 PM, Yusuke Suzuki  
> wrote:
> 
> 
> 
> On Wed, Aug 29, 2018 at 3:58, Filip Pizlo <fpi...@apple.com> wrote:
> 
>> On Aug 28, 2018, at 11:56 AM, Yusuke Suzuki <yusukesuz...@slowstart.org> wrote:
>> 
>> 
>> 
>> On Wed, Aug 29, 2018 at 3:49, Yusuke Suzuki <yusukesuz...@slowstart.org> wrote:
>> 
>> 
>> On Wed, Aug 29, 2018 at 3:27, Filip Pizlo <fpi...@apple.com> wrote:
>> 
>>> On Aug 28, 2018, at 11:25 AM, Yusuke Suzuki <yusukesuz...@slowstart.org> wrote:
>>> 
>>> Thanks!
>>> 
>>> On Wed, Aug 29, 2018 at 3:22, Filip Pizlo <fpi...@apple.com> wrote:
>>> I don’t like this proposal.
>>> 
>>> If we are running low on memory, we should switch to bounds checked memory.
>>> 
>>> How about using bound checking mode exclusively for low environment?
>> 
>> That would mean that, paradoxically, having a machine with a lot of memory 
>> means being able to spawn fewer wasm instances.
>> 
>> We want to support lightweight wasm instances because it wouldn’t be good to 
>> rule that out as a use case.
>> 
>> Hmmm, can we compile the BBQ phase / initial wasm code without knowing the 
>> attached memory’s mode? The current strategy basically defers compilation of 
>> the wasm module until the memory mode is found.
>> Because of this, WebAssembly.compile is not so meaningful in our 
>> implementation right now...
>> And a wasm ES6 module can import memory externally. This means that we cannot 
>> start wasm compilation until the memory mode of the imported memory 
>> (described in the imported module) is downloaded, analyzed and found.
>> 
>> How about always compiling BBQ code with bound checking mode?
>> It should work with signaling memory with small / no tweaks. And OMG code 
>> will be compiled based on the memory mode attached to the module.
>> Since BBQ -> OMG function call should be linked, we need to call appropriate 
>> func for the running memory mode, but it would not introduce significant 
>> complexity.
> 
> What complexity are you trying to fix, specifically?
> 
> What I want is starting compilation before the memory is attached (a.k.a. 
> instantiated).
> 
> 
> I think that what we really want is an interpreter as our baseline.  Then 
> tier-up to BBQ or OMG from there.  In that world, I don’t think any of this 
> matters.
> 
> Does this interpreter execute wasm binary directly? If so, we can skip 
> compiling and all should work well!
> 
> Even if we want some own bytecode (like stack VM to register VM etc.), it is 
> ok if the compilation result is not tied to the memory mode.

I don’t know if it will execute the wasm binary directly, but whatever bytecode 
it runs could be dissociated from memory mode.

-Filip


> 
> If the compilation result is tied to the memory mode, then we still need to 
> defer the compilation until the memory mode is attached.
> 
> 
> -Filip
> 
> 
>> 
>> 
>> 
>> 
>> -Filip
>> 
>> 
>>> 
>>> 
>>> -Filip
>>> 
>>> 
>>> 
>>>> On Aug 28, 2018, at 11:21 AM, Yusuke Suzuki >>> <mailto:yusukesuz...@slowstart.org>> wrote:
>>>> 
>>> 
>>> 
>>>> Posted this mail to webkit-dev mailing list too :)
>>>> 
>>>> On Wed, Aug 29, 2018 at 3:19 AM Yusuke Suzuki >>> <mailto:yusukesuz...@slowstart.org>> wrote:
>>>> Hi JSC folks,
>>>> 
>>>> In a Wasm-supported environment, our MemoryMode is a bit dynamic.
>>>> When we fail to allocate WasmMemory for signaling mode, we fall back to 
>>>> the bound checking memory instead.
>>>> 
>>>> But Wasm code compiled for signaling / bound checking is incompatible. If 
>>>> the code is compiled
>>>> as signaling mode, and if we attach the memory for bound checking, we need 
>>>> to recompile the
>>>> code for bound checking mode. This introduces significant complexity to 
>>>> our wasm compilation.
>>>> And our WebAssembly.compile is basically not compiling: it is just 
>>>> validating.
>>>> Actual compiling needs to be deferred until the memory is attached by 
>>>> instantiating.
>>>> It is not good when we would like to share WasmModule among multiple wasm 
>>>> threads / workers in the future, since the "compiled" Wasm module is not 
>>>> actually compiled.
>>>> 
>>>> So, my proposal is, can we explore the way to exclusively support one of 
>>>> MemoryMode in a certain architecture?
>>>> For example, in x64, enable signaling mode, and we report OOM errors if we 
>>>> fail to allocate WasmMemory with signaling mode.
>>>> 
>>>> Best regards,
>>>> Yusuke Suzuki
>>> 
>>>> ___
>>>> webkit-dev mailing list
>>>> webkit-dev@lists.webkit.org <mailto:webkit-dev@lists.webkit.org>
>>>> https://lists.webkit.org/mailman/listinfo/webkit-dev 
>>>> <https://lists.webkit.org/mailman/listinfo/webkit-dev>

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev
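The scheme the thread converges on — a mode-agnostic baseline artifact, with mode-specific specialization deferred until a memory is attached — can be sketched roughly as follows. This is an illustrative toy, not JSC's actual API; `Module`, `compile`, and `instantiate` are hypothetical names.

```cpp
#include <cstdint>
#include <optional>
#include <vector>

// Hypothetical illustration of the idea discussed above: validate/compile a
// wasm module without committing to a memory mode, and only specialize for a
// mode once a memory is attached at instantiation time.
enum class MemoryMode { BoundsChecking, Signaling };

struct Module {
    // Mode-agnostic "baseline" artifact (think: interpreter bytecode).
    std::vector<uint8_t> bytecode;
    // Mode-specific optimized artifact, produced lazily per memory mode.
    std::optional<MemoryMode> specializedFor;
};

Module compile(const std::vector<uint8_t>& wasmBinary)
{
    // Validation and mode-independent lowering happen eagerly, so a
    // "compiled" module is shareable across threads/workers regardless of
    // which memory it is later instantiated with.
    return Module { wasmBinary, std::nullopt };
}

void instantiate(Module& module, MemoryMode memoryMode)
{
    // Only now is the memory mode known; tier-up compilation (BBQ/OMG in
    // the discussion) can specialize for it.
    module.specializedFor = memoryMode;
}
```

With this split, WebAssembly.compile produces a genuinely shareable artifact, and only the optimizing tiers pay the cost of knowing the mode.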


Re: [webkit-dev] Can we drop supporting mixed Wasm::MemoryMode in one process?

2018-08-28 Thread Filip Pizlo

> On Aug 28, 2018, at 11:25 AM, Yusuke Suzuki  
> wrote:
> 
> Thanks!
> 
> On Wed, Aug 29, 2018 at 3:22 Filip Pizlo  <mailto:fpi...@apple.com>> wrote:
> I don’t like this proposal.
> 
> If we are running low on memory, we should switch to bounds checked memory.
> 
> How about using bound-checking mode exclusively for low-memory environments?

That would mean that, paradoxically, having a machine with a lot of memory 
means being able to spawn fewer wasm instances.

We want to support lightweight wasm instances because it wouldn’t be good to 
rule that out as a use case.

-Filip


> 
> 
> -Filip
> 
> 
> 
>> On Aug 28, 2018, at 11:21 AM, Yusuke Suzuki > <mailto:yusukesuz...@slowstart.org>> wrote:
>> 
> 
> 
>> Posted this mail to webkit-dev mailing list too :)
>> 
>> On Wed, Aug 29, 2018 at 3:19 AM Yusuke Suzuki > <mailto:yusukesuz...@slowstart.org>> wrote:
>> Hi JSC folks,
>> 
>> In a Wasm-supported environment, our MemoryMode is a bit dynamic.
>> When we fail to allocate WasmMemory for signaling mode, we fall back to the 
>> bound checking memory instead.
>> 
>> But Wasm code compiled for signaling / bound checking is incompatible. If 
>> the code is compiled
>> as signaling mode, and if we attach the memory for bound checking, we need 
>> to recompile the
>> code for bound checking mode. This introduces significant complexity to our 
>> wasm compilation.
>> And our WebAssembly.compile is basically not compiling: it is just 
>> validating.
>> Actual compiling needs to be deferred until the memory is attached by 
>> instantiating.
>> It is not good when we would like to share WasmModule among multiple wasm 
>> threads / workers in the future, since the "compiled" Wasm module is not 
>> actually compiled.
>> 
>> So, my proposal is, can we explore the way to exclusively support one of 
>> MemoryMode in a certain architecture?
>> For example, in x64, enable signaling mode, and we report OOM errors if we 
>> fail to allocate WasmMemory with signaling mode.
>> 
>> Best regards,
>> Yusuke Suzuki
> 
>> ___
>> webkit-dev mailing list
>> webkit-dev@lists.webkit.org <mailto:webkit-dev@lists.webkit.org>
>> https://lists.webkit.org/mailman/listinfo/webkit-dev 
>> <https://lists.webkit.org/mailman/listinfo/webkit-dev>
> 

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev
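The fallback behavior described in this thread — try to reserve a large virtual region for signaling mode, and fall back to an exactly-sized, bounds-checked allocation when the reservation fails — might look like the following POSIX sketch. This is not actual JSC code; the function and struct names are invented for illustration, and cleanup via munmap is omitted for brevity.

```cpp
#include <cstddef>
#include <sys/mman.h>

// Illustrative sketch (POSIX, hypothetical names): prefer a large virtual
// reservation whose unmapped tail makes out-of-bounds accesses trap
// (signaling mode), and fall back to an exactly-sized allocation that
// requires explicit bounds checks when the reservation fails.
enum class MemoryMode { BoundsChecking, Signaling };

struct WasmMemoryResult {
    void* base = nullptr;
    MemoryMode mode = MemoryMode::BoundsChecking;
};

WasmMemoryResult allocateWasmMemory(size_t initialBytes)
{
    // Signaling mode: reserve enough address space that any 32-bit wasm
    // offset lands inside it; pages beyond initialBytes stay PROT_NONE.
    const size_t reservationBytes = size_t(1) << 32; // 4 GiB of address space
    void* base = mmap(nullptr, reservationBytes, PROT_NONE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (base != MAP_FAILED) {
        mprotect(base, initialBytes, PROT_READ | PROT_WRITE);
        return { base, MemoryMode::Signaling };
    }
    // Fallback: allocate only what was asked for; every access must then be
    // bounds-checked in generated code.
    base = mmap(nullptr, initialBytes, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED)
        return { nullptr, MemoryMode::BoundsChecking };
    return { base, MemoryMode::BoundsChecking };
}
```

The paradox Filip points out follows directly: a machine with plenty of address space always takes the 4 GiB reservation path, so each instance is heavyweight, while a constrained machine ends up with the cheaper bounds-checked instances.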


Re: [webkit-dev] Can we drop supporting mixed Wasm::MemoryMode in one process?

2018-08-28 Thread Filip Pizlo
I don’t like this proposal.

If we are running low on memory, we should switch to bounds checked memory.

-Filip


> On Aug 28, 2018, at 11:21 AM, Yusuke Suzuki  
> wrote:
> 
> Posted this mail to webkit-dev mailing list too :)
> 
> On Wed, Aug 29, 2018 at 3:19 AM Yusuke Suzuki  > wrote:
> Hi JSC folks,
> 
> In a Wasm-supported environment, our MemoryMode is a bit dynamic.
> When we fail to allocate WasmMemory for signaling mode, we fall back to the 
> bound checking memory instead.
> 
> But Wasm code compiled for signaling / bound checking is incompatible. If the 
> code is compiled
> as signaling mode, and if we attach the memory for bound checking, we need to 
> recompile the
> code for bound checking mode. This introduces significant complexity to our 
> wasm compilation.
> And our WebAssembly.compile is basically not compiling: it is just validating.
> Actual compiling needs to be deferred until the memory is attached by 
> instantiating.
> It is not good when we would like to share WasmModule among multiple wasm 
> threads / workers in the future, since the "compiled" Wasm module is not 
> actually compiled.
> 
> So, my proposal is, can we explore the way to exclusively support one of 
> MemoryMode in a certain architecture?
> For example, in x64, enable signaling mode, and we report OOM errors if we 
> fail to allocate WasmMemory with signaling mode.
> 
> Best regards,
> Yusuke Suzuki
> ___
> webkit-dev mailing list
> webkit-dev@lists.webkit.org
> https://lists.webkit.org/mailman/listinfo/webkit-dev

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev
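For readers unfamiliar with the two memory modes being debated: "bound checking mode" means every generated wasm load/store branches on the memory's current size, whereas signaling mode elides that branch and relies on the OS fault handler when an access lands in the unmapped tail of a large reservation. A minimal sketch of the bounds-checked variant (hypothetical, for illustration only):

```cpp
#include <cstdint>
#include <optional>
#include <vector>

// What "bound checking mode" amounts to in generated code: compare the
// effective offset against the memory's size before every access. Signaling
// mode omits this branch entirely and lets the hardware/OS trap instead.
std::optional<uint8_t> boundsCheckedLoad(const std::vector<uint8_t>& memory,
                                         uint64_t offset)
{
    if (offset >= memory.size())
        return std::nullopt; // would become a wasm trap
    return memory[offset];
}
```

The incompatibility discussed above comes from exactly this difference: code compiled with the explicit branch and code compiled without it cannot run against the same memory layout interchangeably.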


Re: [webkit-dev] JavaScriptCore on Fuchsia

2018-06-26 Thread Filip Pizlo


> On Jun 26, 2018, at 2:28 PM, Adam Barth  wrote:
> 
> On Tue, Jun 26, 2018 at 2:08 PM Filip Pizlo  wrote:
>> This looks super interesting!
> 
> Thanks.  :)
> 
>> I have some questions:
>> 
>> - Does your JSC port enable all of JSC’s optimization features, like the FTL 
>> JIT and concurrent GC?
> 
> Nope.  I believe we just have the interpreter working.
> 
>>- Is it a goal to enable basically everything we enable, long term?
> 
> The Fuchsia project itself doesn't have any particular goals for
> JavaScriptCore.  It's possible that our customers who want to use
> JavaScriptCore on Fuchsia will have goals for additional functionality
> in the future, but assuming we host the code on svn.webkit.org, I
> would imagine recommending that those customers interact directly with
> the WebKit project.

Gotcha.

> 
>> - Is this 32-bit, 64-bit, or both?
> 
> Fuchsia supports only 64-bit architectures.  We have no interest in
> 32-bit support in JavaScriptCore.

Music to my ears.

> 
>> - Is this mainly for ARM, x86, some other CPU, or lots of CPUs?
> 
> Fuchsia supports x86_64 and aarch64.

Lovely.

> 
>> - Do you plan to do significant work on JSC, or do you mainly want to just 
>> stay up to date with what we’re doing?
> 
> I'd like to foster a healthy dynamic between the WebKit project and
> our mutual customers.  It's hard for me to predict to where those
> customers will fall on that spectrum, but I would not anticipate the
> Fuchsia project itself making significant contributions to JSC.
> However, I would anticipate us maintaining the integration between
> JavaScriptCore and Fuchsia.
> 
>> More thoughts inline:
>> 
>>> On Jun 26, 2018, at 2:00 PM, Adam Barth  wrote:
>>> 
>>> As part of developing Fuchsia [1] (a new open-source operating
>>> system), we ported JavaScriptCore to run on Fuchsia [2].  At the time,
>>> our intent was to use this code within the Fuchsia source tree but not
>>> to make it available for developers writing applications for Fuchsia.
>>> However, recently, we've gotten a number of requests from customers
>>> who would like to use JavaScriptCore in their Fuchsia applications.
>>> 
>>> I'd like your advice about the best way to maintain JavaScriptCore
>>> support for Fuchsia.  Here are some possibilities I can imagine, but
>>> we're open to other possibilities as well:
>>> 
>>> 1. Maintain a fork of JavaScriptCore somewhere on googlesource.com
>>> that supports Fuchsia and instruct customers to obtain the Fuchsia
>>> port of JavaScriptCore from googlesource.com.
>> 
>> I’d be OK with this.
>> 
>>> 2. Maintain a branch of JavaScriptCore in svn.webkit.org that supports 
>>> Fuchsia.
>> 
>> In my opinion, SVN branches are not significantly better than (1), and in 
>> many ways they are worse.  I wouldn’t advocate for this.
> 
> Make sense.
> 
>>> 3. Maintain support for JavaScriptCore in the mainline of svn.webkit.org.
>> 
>> I think that I’d be OK with this, too.  This seems better than (1) if you 
>> want to stay up-to-date.
> 
> Thanks!

Based on this, I think that it’s ideal for it to be in svn.webkit.org 
<http://svn.webkit.org/>.

-Filip


> 
> Adam
> 
> 
>>> For reference, here's the patch we applied to WTF and JavaScriptCore
>>> to enable Fuchsia support:
>>> 
>>> https://gist.github.com/abarth/b4fc909d83be5133cd7a5af209757e96
>>> 
>>> This patch is based on webkit@206446, but we'd obviously rebase the
>>> patch onto top-of-tree before contributing it.
>>> 
>>> I'm happy to answer any questions you might have that would help you
>>> provide the best advice.  If you would prefer to communicate off-list,
>>> I'm happy to do that as well.
>>> 
>>> Thanks in advance,
>>> Adam
>>> 
>>> [1] https://fuchsia.googlesource.com/docs/+/master/README.md
>>> [2] Actually, if you look at
>>> https://fuchsia.googlesource.com/third_party/webkit, you'll see that
>>> we've ported WebCore as well, but this email concerns only
>>> JavaScriptCore.
>>> ___
>>> webkit-dev mailing list
>>> webkit-dev@lists.webkit.org
>>> https://lists.webkit.org/mailman/listinfo/webkit-dev
>> 

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] JavaScriptCore on Fuchsia

2018-06-26 Thread Filip Pizlo
Hi Adam,

This looks super interesting!

I have some questions:

- Does your JSC port enable all of JSC’s optimization features, like the FTL 
JIT and concurrent GC?
- Is it a goal to enable basically everything we enable, long term?

- Is this 32-bit, 64-bit, or both?

- Is this mainly for ARM, x86, some other CPU, or lots of CPUs?

- Do you plan to do significant work on JSC, or do you mainly want to just stay 
up to date with what we’re doing?

More thoughts inline:

> On Jun 26, 2018, at 2:00 PM, Adam Barth  wrote:
> 
> As part of developing Fuchsia [1] (a new open-source operating
> system), we ported JavaScriptCore to run on Fuchsia [2].  At the time,
> our intent was to use this code within the Fuchsia source tree but not
> to make it available for developers writing applications for Fuchsia.
> However, recently, we've gotten a number of requests from customers
> who would like to use JavaScriptCore in their Fuchsia applications.
> 
> I'd like your advice about the best way to maintain JavaScriptCore
> support for Fuchsia.  Here are some possibilities I can imagine, but
> we're open to other possibilities as well:
> 
> 1. Maintain a fork of JavaScriptCore somewhere on googlesource.com
> that supports Fuchsia and instruct customers to obtain the Fuchsia
> port of JavaScriptCore from googlesource.com.

I’d be OK with this.

> 
> 2. Maintain a branch of JavaScriptCore in svn.webkit.org that supports 
> Fuchsia.

In my opinion, SVN branches are not significantly better than (1), and in many 
ways they are worse.  I wouldn’t advocate for this.

> 
> 3. Maintain support for JavaScriptCore in the mainline of svn.webkit.org.

I think that I’d be OK with this, too.  This seems better than (1) if you want 
to stay up-to-date.

-Filip

> 
> For reference, here's the patch we applied to WTF and JavaScriptCore
> to enable Fuchsia support:
> 
> https://gist.github.com/abarth/b4fc909d83be5133cd7a5af209757e96
> 
> This patch is based on webkit@206446, but we'd obviously rebase the
> patch onto top-of-tree before contributing it.
> 
> I'm happy to answer any questions you might have that would help you
> provide the best advice.  If you would prefer to communicate off-list,
> I'm happy to do that as well.
> 
> Thanks in advance,
> Adam
> 
> [1] https://fuchsia.googlesource.com/docs/+/master/README.md
> [2] Actually, if you look at
> https://fuchsia.googlesource.com/third_party/webkit, you'll see that
> we've ported WebCore as well, but this email concerns only
> JavaScriptCore.
> ___
> webkit-dev mailing list
> webkit-dev@lists.webkit.org
> https://lists.webkit.org/mailman/listinfo/webkit-dev

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Question about ARMv7 JIT maintenance by Apple

2018-05-04 Thread Filip Pizlo
We aren’t maintaining armv7. 

-Filip

> On May 4, 2018, at 7:01 PM, Yusuke SUZUKI  wrote:
> 
> Hi WebKittens,
> 
> JSC has X86, X86_64, ARM64, ARM, ARMv7, and MIPS JIT architectures.
> While X86, X86_64, and ARM64 is maintained by build bots configured by Apple,
> the other architectures are maintained by the community.
> 
> Recently, I'm about to adopting capstone disassembler for MIPS and ARM.
> And I'm also wondering if ARMv7 can use it instead of our own disassembler.
> 
> So my question is, is ARMv7 JIT maintained by Apple right now? Or is it 
> maintained by the community?
> If Apple does not maintain it right now, I would like to drop ARMv7 
> disassembler and use capstone instead.
> 
> Best regards,
> Yusuke Suzuki
> ___
> webkit-dev mailing list
> webkit-dev@lists.webkit.org
> https://lists.webkit.org/mailman/listinfo/webkit-dev
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] bmalloc design question about relation with std malloc

2018-04-30 Thread Filip Pizlo
Ideally we would use the mmap allocator. But I wouldn’t do that if it causes a 
space usage regression, for example if we allocate a lot of small vectors. 

-Filip

> On Apr 30, 2018, at 3:35 AM, Yusuke SUZUKI  wrote:
> 
> Hi, WebKittens,
> 
> IIRC, bmalloc uses mmap based page allocator for internal memory use. For 
> example, bmalloc::Vector uses it instead of calling malloc.
> But recent changes start using std::vector, which means it uses std malloc 
> under the hood.
> 
> So my question is, if we want some internal memory allocation in bmalloc, 
> shoud we use std::malloc? Or should we use mmap based allocator?
> 
> Best regards,
> Yusuke Suzuki
> -- 
> Regards,
> Yusuke Suzuki
> ___
> webkit-dev mailing list
> webkit-dev@lists.webkit.org
> https://lists.webkit.org/mailman/listinfo/webkit-dev
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev
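The distinction Yusuke asks about — an internal mmap-based page allocator versus std::malloc — matters because a malloc implementation's own bookkeeping should not recurse into, or fragment, the heap it implements. A rough POSIX sketch of an mmap-backed bump arena (not bmalloc's real internals; `PageArena` is a hypothetical name):

```cpp
#include <cstddef>
#include <sys/mman.h>

// Rough sketch of an internal allocator that gets memory straight from the
// kernel via mmap, so structures like bmalloc::Vector never touch malloc.
class PageArena {
public:
    void* allocate(size_t bytes)
    {
        bytes = (bytes + 15) & ~size_t(15); // 16-byte alignment
        if (!m_cursor || m_cursor + bytes > m_end) {
            const size_t chunkSize = size_t(1) << 20; // grab 1 MiB at a time
            void* chunk = mmap(nullptr, chunkSize, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (chunk == MAP_FAILED)
                return nullptr;
            m_cursor = static_cast<char*>(chunk);
            m_end = m_cursor + chunkSize;
        }
        void* result = m_cursor;
        m_cursor += bytes;
        return result;
    }

private:
    char* m_cursor = nullptr;
    char* m_end = nullptr;
};
```

Filip's caveat applies here too: page-granular allocation like this can waste space for many small vectors, which is why falling back to std malloc may be the pragmatic choice in some spots.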


Re: [webkit-dev] Style guidelines for braced initialization

2018-04-26 Thread Filip Pizlo


> On Apr 26, 2018, at 10:35 AM, ross.kirsl...@sony.com wrote:
> 
> Hi everybody,
>  
> A simple question regarding https://webkit.org/code-style-guidelines/:
>  
> I've currently got a clang-format patch up (https://reviews.llvm.org/D46024) 
> to support our space-before-brace style 
> for object initialization (i.e. `Foo foo { bar };` and not `Foo foo{ bar };`).
> Although we’re enforcing this with check-webkit-style, the guidelines page 
> currently makes no mention of braced initialization in particular. Should we 
> add a clause for this?

Yeah, sounds like we should!

-Filip


>  
> Thanks!
> Ross
> ___
> webkit-dev mailing list
> webkit-dev@lists.webkit.org 
> https://lists.webkit.org/mailman/listinfo/webkit-dev 
> 

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev
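For concreteness, the rule under discussion can be shown in a few lines (illustrative example; the types are invented):

```cpp
// WebKit style for braced initialization: a space before the opening brace.
//   Foo foo { bar };   // accepted
//   Foo foo{ bar };    // flagged by check-webkit-style: missing space
struct Foo {
    explicit Foo(int value) : m_value(value) { }
    int m_value;
};

inline Foo makeFoo(int bar)
{
    Foo foo { bar };
    return foo;
}
```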


Re: [webkit-dev] Disabling the 32-bit JITs by default.

2018-02-19 Thread Filip Pizlo
Would this be acceptable:

- 32-bit JIT gets disabled on macOS, iOS, and our Windows port, so those JITs 
get no testing on those platforms.
- code can stay in trunk so long as someone has a bot for it (we won’t).
- someone will have to step up to maintain the 32-bit JITs - like MIPS and 
ARMv6, we won’t keep them building or working.

How does that sound?

-Filip


> On Feb 19, 2018, at 5:12 PM, Guillaume Emont <guijem...@igalia.com> wrote:
> 
> Quoting Filip Pizlo (2018-02-19 14:17:44)
>> 
>>On Feb 19, 2018, at 11:43 AM, Guillaume Emont <guijem...@igalia.com> 
>> wrote:
>> 
>>Quoting Filip Pizlo (2018-02-19 13:05:27)
>> 
>> 
>> 
>>On Feb 19, 2018, at 10:53 AM, Guillaume Emont 
>> <guijem...@igalia.com
>>> wrote:
>> 
>>Hi Keith,
>> 
>>We at Igalia have been trying to provide a better story for 32-bit
>>platforms, in particular for Armv7 and MIPS. These platforms are
>>very
>>important to us, and disabling JIT renders many use cases
>>impossible.
>> 
>> 
>>What use cases?
>> 
>> 
>>I'm not sure of how much I can elaborate here, but in this particular
>>case that was for a set-top-box UI.
>> 
>> 
>> I bet that this doesn’t need a JIT.
> 
> I have measured performances in a set top box UI a few months ago on a
> mips device: in a typical use case provided by our client, I got 24 fps
> with JIT+DFG, vs 6 fps without it. In that use case, having the JIT makes
> the difference between unusable and usable.
> 
>> 
>> If you want me to believe that it does, you need to show perf numbers.
> 
> Apart from the above, I can also run some benchmarks. Are there specific
> ones that you think matter more for this discussion? I have 2 mips
> devices as well as a raspberry pi 2 on which I can run benchmarks.
> 
> In the meantime, I quickly ran v8spider on a ci20 mips board on r228714.
> The NoJIT scenario was run with the same binary with JSC_useJIT=false, I
> guess the difference could be bigger if we were comparing to cloop.
> 
> --- v8-spider on mips ---
>                   NoJIT                       JIT
> 
> crypto         11725.5325+-71.7021   ^    1830.4683+-6.8751    ^ definitely 6.4058x faster
> deltablue      38857.3603+-189.8530  ^    2871.5320+-28.7077   ^ definitely 13.5319x faster
> earley-boyer   13350.8512+-106.0597  ^    1775.7125+-4.0989    ^ definitely 7.5186x faster
> raytrace        7894.6098+-33.3069   ^    2084.4858+-37.9436   ^ definitely 3.7873x faster
> regexp          4055.1053+-120.4037  ^    2273.6319+-44.3103   ^ definitely 1.7835x faster
> richards       36003.5590+-322.8705  ^    2108.9879+-46.2061   ^ definitely 17.0715x faster
> splay           6936.2468+-24.3808   ^    1088.6877+-12.1142   ^ definitely 6.3712x faster
> 
>                12534.9214+-43.8771   ^    1934.9295+-5.2966    ^ definitely 6.4782x faster
> 
> --
> 
> I ran the same on x86_64, and we see that for this benchmark, the
> average JIT speedup is actually less than on mips:
> 
> --- v8-spider on x86_64 ---
>                   NoJIT                      JIT
> 
> crypto          758.9786+-2.7148     ^     83.2100+-17.1005    ^ definitely 9.1212x faster
> deltablue      1330.8087+-149.0367   ^    120.3405+-15.2389    ^ definitely 11.0587x faster
> earley-boyer    575.8073+-44.5565    ^     90.5132+-15.6952    ^ definitely 6.3616x faster
> raytrace        322.8087+-8.4965     ^     58.8531+-2.7943     ^ definitely 5.4850x faster
> regexp          191.0544+-2.1591     ^    131.5052+-35.2153    ^ definitely 1.4528x faster
> richards       1439.3209+-117.4325   ^    100.1944+-4.8524     ^ definitely 14.3653x faster
> splay           279.3245+-7.1500     ^     97.1257+-17.8935    ^ definitely 2.8759x faster
> 
>                 545.4854+-13.2273    ^     94.3486+-5.6174     ^ definitely 5.7816x faster
> 
> --
> 
>> 
>>I realize that having a JIT is good for marketing, but it’s better to
>>have a stable and well-maintained interpreter than a decrepit JIT.
>> Right now the 32-bit JIT is basically unmaintained.
>> 
>> 
>>Indeed these platforms used to be practically abandoned in WebKit. I
>>don't think that is true any more though. We've been working on fixing
>>this and getting mips32 

Re: [webkit-dev] Disabling the 32-bit JITs by default.

2018-02-19 Thread Filip Pizlo

> On Feb 19, 2018, at 1:03 PM, Adrian Perez de Castro <ape...@igalia.com> wrote:
> 
> Hello everybody,
> 
> On Mon, 19 Feb 2018 13:43:38 -0600, Guillaume Emont <guijem...@igalia.com> 
> wrote:
>> Quoting Filip Pizlo (2018-02-19 13:05:27)
>>> 
>>>> On Feb 19, 2018, at 10:53 AM, Guillaume Emont <guijem...@igalia.com> wrote:
>>>> 
>>>> Hi Keith,
>>>> 
>>>> We at Igalia have been trying to provide a better story for 32-bit
>>>> platforms, in particular for Armv7 and MIPS. These platforms are very
>>>> important to us, and disabling JIT renders many use cases impossible.
>>> 
>>> What use cases?
>> 
>> I'm not sure of how much I can elaborate here, but in this particular
>> case that was for a set-top-box UI.
>> 
>>> I realize that having a JIT is good for marketing, but it’s better to have
>>> a stable and well-maintained interpreter than a decrepit JIT. Right now
>>> the 32-bit JIT is basically unmaintained.
> 
> *Was* basically unmaintained a few months ago.
> 
> I agree that in many cases the JIT may be a marketing point, that not every
> application benefits from it, and that some applications may be better off
> using more CPU and saving memory instead by not having the JIT. Yet many
> applications *do* work better with a JIT — otherwise we would not be having
> this discussion, and the proposal would be to completely remove the JIT
> support, for all platforms ;-)
> 
>> Indeed these platforms used to be practically abandoned in WebKit. I
>> don't think that is true any more though. We've been working on fixing
>> this and getting mips32 and armv7+thumb2 to pass all the tests. We have
>> achieved that for mips32[1] and we are almost there for armv7[2]. I
>> would appreciate it if you could acknowledge that effort.
>> 
>> [1] https://build.webkit.org/builders/JSCOnly%20Linux%20MIPS32el%20Release
>> [2] 
>> https://build.webkit.org/builders/JSCOnly%20Linux%20ARMv7%20Thumb2%20Release
> 
> When we took the decision at Igalia of stepping up and work on keeping the
> 32-bit JIT support alive, we had only one person available to work full-time
> with the needed knowledge.
> 
> It was lucky that Guillaume could start right away when this topic was last
> discussed, but finding more developers to work on JSC is not trivial. Some of
> us have been even chipping in now and then to review patches — sometimes in
> our free time, like my informal reviews of patches for MIPS. Now we have a
> second person who can devote time to JSC, so I would expect things to get even
> better.
> 
>>>> We want to continue this effort to support these platforms. We have been
>>>> short on resources for that effort, which is why we did not realize
>>>> early enough that more mitigation was needed for 32-bit platforms. We
>>>> now have grown our team dedicated to this and we are hopeful that we
>>>> will avoid that kind of issue in the future.
>>> 
>>> I feel like I’ve heard this exact story before.  Every time we say that
>>> there isn’t any effort going into 32-bit, y’all say that you’ll put more
>>> effort into it Real Soon Now.  And then nothing happens, and we have the
>>> same conversation in 6 months.
>> 
>> I'm sorry it took us time to grow our team for this purpose, but that is
>> now a reality since the beginning of this month. And beside that, I
>> think you can agree that there has been significant progress on that
>> aspect, we were very far from having a green tree on mips32 about a year
>> ago, when we still had hundreds of test failures.
> 
> Also, we have been playing catch-up to get the 32-bit platforms in shape,
> without working on something more visible. This is the kind of churn that
> often goes unnoticed.
> 
>>>> We are working on a plan to mitigate Spectre on 32-bit platforms. We
>>>> would welcome community feedback on that, as well as what kinds of
>>>> mitigations would be considered sufficient.
> 
> Now that you mention plans: It would be extremely useful for us to have an
> idea of what's the JSC roadmap for the next few months. Being on the same page
> in this regard would allow us all to coordinate better, and better plan the
> focus of work on our side.

Our roadmap is to remove 32-bit JITs and to remove JSVALUE32_64, and then have 
32-bit platforms compile cloop with JSVALUE64.

> 
>>>> Regarding your patch, I think you should note that some specific 32-bit
>>>> CPUs are immune to Spectre (at least the Raspberry Pi[1] and some
>>>> MIPS

Re: [webkit-dev] Disabling the 32-bit JITs by default.

2018-02-19 Thread Filip Pizlo

> On Feb 19, 2018, at 11:43 AM, Guillaume Emont <guijem...@igalia.com> wrote:
> 
> Quoting Filip Pizlo (2018-02-19 13:05:27)
>> 
>>> On Feb 19, 2018, at 10:53 AM, Guillaume Emont <guijem...@igalia.com> wrote:
>>> 
>>> Hi Keith,
>>> 
>>> We at Igalia have been trying to provide a better story for 32-bit
>>> platforms, in particular for Armv7 and MIPS. These platforms are very
>>> important to us, and disabling JIT renders many use cases impossible.
>> 
>> What use cases?
> 
> I'm not sure of how much I can elaborate here, but in this particular
> case that was for a set-top-box UI.

I bet that this doesn’t need a JIT.

If you want me to believe that it does, you need to show perf numbers.

> 
>> 
>> I realize that having a JIT is good for marketing, but it’s better to have a 
>> stable and well-maintained interpreter than a decrepit JIT.  Right now the 
>> 32-bit JIT is basically unmaintained.
> 
> Indeed these platforms used to be practically abandoned in WebKit. I
> don't think that is true any more though. We've been working on fixing
> this and getting mips32 and armv7+thumb2 to pass all the tests. We have
> achieved that for mips32[1] and we are almost there for armv7[2]. I
> would appreciate it if you could acknowledge that effort.
> 
> [1] https://build.webkit.org/builders/JSCOnly%20Linux%20MIPS32el%20Release
> [2] 
> https://build.webkit.org/builders/JSCOnly%20Linux%20ARMv7%20Thumb2%20Release 
> <https://build.webkit.org/builders/JSCOnly%20Linux%20ARMv7%20Thumb2%20Release>

Passing all of the tests does not constitute stability in my book.  You need a 
lot of people using the JIT in anger for a while before you can be sure that 
you did it right.

Also, you need to prove that the JIT is actually a progression.  Last time we 
had such a conversation, you guys had perf regressions from enabling the JIT.  
So, use cases that needed perf would have been better off with the interpreter.

> 
>> 
>>> We
>>> want to continue this effort to support these platforms. We have been
>>> short on resources for that effort, which is why we did not realize
>>> early enough that more mitigation was needed for 32-bit platforms. We
>>> now have grown our team dedicated to this and we are hopeful that we
>>> will avoid that kind of issue in the future.
>> 
>> I feel like I’ve heard this exact story before.  Every time we say that 
>> there isn’t any effort going into 32-bit, y’all say that you’ll put more 
>> effort into it Real Soon Now.  And then nothing happens, and we have the 
>> same conversation in 6 months.
> 
> I'm sorry it took us time to grow our team for this purpose, but that is
> now a reality since the beginning of this month.

I’ve heard this before.

> And beside that, I
> think you can agree that there has been significant progress on that
> aspect, we were very far from having a green tree on mips32 about a year
> ago, when we still had hundreds of test failures.

I don’t agree that there has been significant progress.  The level of 
maintenance going into those JITs is a rounding error compared to how much work 
is done on ARM64 and x86_64.

-Filip


> 
>> 
>>> 
>>> We are working on a plan to mitigate Spectre on 32-bit platforms. We
>>> would welcome community feedback on that, as well as what kinds of
>>> mitigations would be considered sufficient.
>>> 
>>> Regarding your patch, I think you should note that some specific 32-bit
>>> CPUs are immune to Spectre (at least the Raspberry Pi[1] and some
>>> MIPS[2] devices), I think the deactivation should be done at run-time
>>> for CPUs not on a white list.
>> 
>> Keith’s main point is that the presence of 32-bit makes it harder to 
>> implement mitigations for 64-bit.  I don’t think it’s justifiable to hold 
>> back development of 64-bit Spectre mitigations because of a hardly-used and 
>> mostly-broken 32-bit JIT port that will be maintained by someone Real Soon 
>> Now.
> 
> I can't answer to that as I don't know enough what is hindering these
> mitigations exactly.
> 
> 
> Best regards,
> 
> Guillaume

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Disabling the 32-bit JITs by default.

2018-02-19 Thread Filip Pizlo

> On Feb 19, 2018, at 10:53 AM, Guillaume Emont  wrote:
> 
> Hi Keith,
> 
> We at Igalia have been trying to provide a better story for 32-bit
> platforms, in particular for Armv7 and MIPS. These platforms are very
> important to us, and disabling JIT renders many use cases impossible.

What use cases?

I realize that having a JIT is good for marketing, but it’s better to have a 
stable and well-maintained interpreter than a decrepit JIT.  Right now the 
32-bit JIT is basically unmaintained.

> We
> want to continue this effort to support these platforms. We have been
> short on resources for that effort, which is why we did not realize
> early enough that more mitigation was needed for 32-bit platforms. We
> now have grown our team dedicated to this and we are hopeful that we
> will avoid that kind of issue in the future.

I feel like I’ve heard this exact story before.  Every time we say that there 
isn’t any effort going into 32-bit, y’all say that you’ll put more effort into 
it Real Soon Now.  And then nothing happens, and we have the same conversation 
in 6 months.

> 
> We are working on a plan to mitigate Spectre on 32-bit platforms. We
> would welcome community feedback on that, as well as what kinds of
> mitigations would be considered sufficient.
> 
> Regarding your patch, I think you should note that some specific 32-bit
> CPUs are immune to Spectre (at least the Raspberry Pi[1] and some
> MIPS[2] devices), I think the deactivation should be done at run-time
> for CPUs not on a white list.

Keith’s main point is that the presence of 32-bit makes it harder to implement 
mitigations for 64-bit.  I don’t think it’s justifiable to hold back 
development of 64-bit Spectre mitigations because of a hardly-used and 
mostly-broken 32-bit JIT port that will be maintained by someone Real Soon Now.

-Filip


> 
> Best regards,
> 
> Guillaume Emont and the Igalia compilers team
> 
> [1] 
> https://www.raspberrypi.org/blog/why-raspberry-pi-isnt-vulnerable-to-spectre-or-meltdown/
> [2] 
> https://www.mips.com/blog/mips-response-on-speculative-execution-and-side-channel-vulnerabilities/
> 
> Quoting Keith Miller (2018-02-16 16:58:07)
>> I recently created a patch to disable the 32-bit JITs by default. 
>> https://bugs.webkit.org/show_bug.cgi?id=182886. 
>> 
>> The last time this was discussed was before the discovery of Spectre. In the 
>> interim, there have been a number of changes made to JavaScriptCore in an 
>> attempt to mitigate Spectre. Nobody has proposed a mitigation plan for 
>> 32-bit WebKit. For example, pointer poisoning only works for 64-bit 
>> processors as they currently have a number of high bits that will never be 
>> set in a valid pointer. In 32-bit code the full address space is mappable so 
>> pointer poisoning is not guaranteed to be effective.
>> 
>> Given the importance of developing mitigations for Spectre in a timely 
>> manner I think we should disable 32-bit JITs, in the near term, but more 
>> likely permanently.
>> 
>> Thoughts?
>> Keith
>> ___
>> webkit-dev mailing list
>> webkit-dev@lists.webkit.org
>> https://lists.webkit.org/mailman/listinfo/webkit-dev
>> 
> ___
> webkit-dev mailing list
> webkit-dev@lists.webkit.org
> https://lists.webkit.org/mailman/listinfo/webkit-dev

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Meltdown and Spectre attacks

2018-01-05 Thread Filip Pizlo
Here is what else is in trunk:

- index masking
- pointer poisoning

I’m going to write up what our thoughts are shortly. :-)  For now feel free to 
browse the code with those two hints.

-Filip


> On Jan 5, 2018, at 8:31 AM, Konstantin Tokarev  wrote:
> 
> 
> 
>> Hi,
>> 
>> Here's a collection of blog posts from other major browser vendors
>> regarding the Meltdown and Spectre attacks:
>> 
>> https://blogs.windows.com/msedgedev/2018/01/03/speculative-execution-mitigations-microsoft-edge-internet-explorer/
>> 
>> https://blog.mozilla.org/security/2018/01/03/mitigations-landing-new-class-timing-attack/
>> 
>> https://sites.google.com/a/chromium.org/dev/Home/chromium-security/ssca
>> 
>> Notably, Edge and Firefox are reducing the resolution of
>> performance.now(), and all three are disabling SharedArrayBuffer.
>> 
>> This is just a heads-up.
> 
> Seems like both mitigations are already present in trunk
> 
>> 
>> Michael
>> 
>> ___
>> webkit-dev mailing list
>> webkit-dev@lists.webkit.org 
>> https://lists.webkit.org/mailman/listinfo/webkit-dev 
>> 
> -- 
> Regards,
> Konstantin
> ___
> webkit-dev mailing list
> webkit-dev@lists.webkit.org 
> https://lists.webkit.org/mailman/listinfo/webkit-dev 
> 
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Build issues due to anonymous namespace

2017-12-03 Thread Filip Pizlo
That also means not using static, for the same reason. FWIW, I think it’s a 
good idea. 

-Filip

> On Dec 3, 2017, at 12:10 PM, Darin Adler  wrote:
> 
> I think it’s also worthwhile to remove the anonymous namespace wrapping each 
> of these DestroyFunc structures when renaming them. Generally speaking, 
> anonymous namespace doesn’t work when compilation units are arbitrary, since 
> they say “limit this to one compilation unit”. So I’m not sure we should ever 
> use them any more.
> 
> — Darin
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Build issues due to anonymous namespace

2017-12-03 Thread Filip Pizlo
Maybe just give that DestroyFunc a more unique name like 
JSSegmentedVariableObjectDestroyFunc. Cc me on such a patch and I’ll happily 
review it. 

-Filip

> On Dec 3, 2017, at 8:44 AM, Caio Lima  wrote:
> 
> Hi guys. I'm working in a Patch that is adding some files to JSC and I faced 
> following issue:
> 
> ./runtime/JSStringHeapCellType.cpp:36:8: error: redefinition of 'DestroyFunc'
> ...
> In file included from 
> /Users/caiolima/open_projects/WebKit/WebKitBuild/Debug/DerivedSources/JavaScriptCore/unified-sources/UnifiedSource109.cpp:1:
> ./runtime/JSSegmentedVariableObjectHeapCellType.cpp:36:8: note: previous 
> definition is here
> 
> That is my UnifiedSource109.cpp content:
> 
> #include "runtime/JSSegmentedVariableObjectHeapCellType.cpp"
> #include "runtime/JSSet.cpp"
> #include "runtime/JSSetIterator.cpp"
> #include "runtime/JSSourceCode.cpp"
> #include "runtime/JSString.cpp"
> #include "runtime/JSStringHeapCellType.cpp"
> #include "runtime/JSStringIterator.cpp"
> #include "runtime/JSStringJoiner.cpp"
> 
> So, I tried to use "namespace FILENAME" magic, but it doesn't seem working, 
> mainly because there is no other file using it. 
> 
> What are the actions do I need to take in such case to fix it properly?
> 
> Regards,
> Caio.
> ___
> webkit-dev mailing list
> webkit-dev@lists.webkit.org
> https://lists.webkit.org/mailman/listinfo/webkit-dev
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] BigInt implementation

2017-10-19 Thread Filip Pizlo
Seems like V8’s is an ok place to start. Maybe all you’d have to do is remove 
the accurate GC artifacts (Handle and such). 

-Filip

> On Oct 19, 2017, at 2:31 AM, Daniel Ehrenberg <little...@igalia.com> wrote:
> 
> On 2017-10-19 03:18, Filip Pizlo wrote:
>>> On Oct 18, 2017, at 5:50 PM, Caio Lima <ticaiol...@gmail.com> wrote:
>>> 
>>> Hi WebKittens.
>>> 
>>> I’m planning to start implementing the JS BigInt proposal in JSC; however, I
>>> would like to sync with you the way you are planning to implement such
>>> feature.
>>> 
>>> Right now, I’m thinking of implementing BigInt operations in C++
>>> (possibly in WTF?) and making JSBigInt use this implementation.
>> 
>> We need something GC-optimized from the start since that will
>> determine a lot of the details. So, I don’t think that a WTF
>> implementation is appropriate. It should be a JSCell from day one.
>> 
>>> As I
>>> have checked with some other implementors, some of them are going to
>>> use libgmp (SpiderMonkey case), but I don’t think its license (GPLv2)
>>> aligns with WebKit’s license and I heard that V8 is implementing their
>>> BigInt lib as well. For now, I’m thinking of implementing a proof of
>>> concept and then optimizing the BigInt lib part. So, what I would like
>>> to collect from you is: is there any problem with starting work on that
>>> feature?
>> 
>> We should do a GC-optimized bigint. If there was a great library that
>> had the right license, we could port it to allocate using our GC. I
>> don’t have a strong view on whether we should write our own from
>> scratch or use somebody else’s as a starting point (so long as the
>> license is ok).
> 
> I'm not sure if there is such a great library. The other programming
> languages I've seen BigInt implementations in were either based on
> external libraries and didn't have GC integration, or were based on
> writing their own BigInt library or forking it from another programming
> language. I'm not aware of an existing BigInt library which permits GC
> integration.
> 
> If GC optimization weren't needed, then libtommath [1] could be a good
> choice. However, this does not store BigInts in a single contiguous
> memory region, but rather has a header and discontiguous payload. I
> believe Jordi Montes (cc'd) is working on a library based on libtommath
> which would work more like this, but I'm not sure how far along it is,
> or how well it would integrate into JSC GC.
> 
> If you want to follow the tradition of forking another library, one
> option would be to start with V8's. It's relatively new and not quite
> finished, but the nice thing is that it implements the same operations
> that will be needed in JSC. [2]
> 
> [1] http://www.libtom.net/LibTomMath/
> [2] https://github.com/v8/v8/blob/master/src/objects/bigint.cc
>> 
>> I don’t have any objection to you working on this.
>> 
>> -Filip
>> 
>>> 
>>> It is one of the proposals that I’ve made to my Coding Experience at
>>> Igalia, so I will also have the support of developers from there,
>>> since they are implementing it on SpiderMonkey and the spec champion
>>> is also in the team.
>>> 
>>> Regards,
>>> Caio.
>>> ___
>>> webkit-dev mailing list
>>> webkit-dev@lists.webkit.org
>>> https://lists.webkit.org/mailman/listinfo/webkit-dev
> 
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] BigInt implementation

2017-10-18 Thread Filip Pizlo


> On Oct 18, 2017, at 5:50 PM, Caio Lima  wrote:
> 
> Hi WebKittens.
> 
> I’m planning to start implementing the JS BigInt proposal in JSC; however, I
> would like to sync with you the way you are planning to implement such
> feature.
> 
> Right now, I’m thinking of implementing BigInt operations in C++
> (possibly in WTF?) and making JSBigInt use this implementation.

We need something GC-optimized from the start since that will determine a lot 
of the details. So, I don’t think that a WTF implementation is appropriate. It 
should be a JSCell from day one. 

> As I
> have checked with some other implementors, some of them are going to
> use libgmp (SpiderMonkey case), but I don’t think its license (GPLv2)
> aligns with WebKit’s license and I heard that V8 is implementing their
> BigInt lib as well. For now, I’m thinking of implementing a proof of
> concept and then optimizing the BigInt lib part. So, what I would like
> to collect from you is: is there any problem with starting work on that
> feature?

We should do a GC-optimized bigint. If there was a great library that had the 
right license, we could port it to allocate using our GC. I don’t have a strong 
view on whether we should write our own from scratch or use somebody else’s as 
a starting point (so long as the license is ok). 

I don’t have any objection to you working on this.

-Filip

> 
> It is one of the proposals that I’ve made to my Coding Experience at
> Igalia, so I will also have the support of developers from there,
> since they are implementing it on SpiderMonkey and the spec champion
> is also in the team.
> 
> Regards,
> Caio.
> ___
> webkit-dev mailing list
> webkit-dev@lists.webkit.org
> https://lists.webkit.org/mailman/listinfo/webkit-dev
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Bring back ARMv6 support to JSC

2017-09-05 Thread Filip Pizlo


> On Sep 5, 2017, at 10:51 AM, Olmstead, Don <don.olmst...@sony.com> wrote:
> 
> We have plans to add a JSC-only Windows bot in the very near future. Would 
> that have any bearing on the state of the JIT on Windows?

Not really. 

Because of the poor state of that code, I think we should rip it out.

Also, maintaining the 32_64 value representation is of no value to us. 

-Filip


> 
> -Original Message-
> From: webkit-dev [mailto:webkit-dev-boun...@lists.webkit.org] On Behalf Of 
> Filip Pizlo
> Sent: Tuesday, September 5, 2017 8:37 AM
> To: Adrian Perez de Castro <ape...@igalia.com>
> Cc: webkit-dev@lists.webkit.org
> Subject: Re: [webkit-dev] Bring back ARMv6 support to JSC
> 
> There isn’t anyone maintaining the 32-bit JIT ports to the level of quality 
> we have in our 64-bit ports. Making 32-bit use the 64-bit cloop would be a 
> quality progression for actual users of 32-bit. 
> 
> -Filip
> 
>>> On Sep 5, 2017, at 8:02 AM, Adrian Perez de Castro <ape...@igalia.com> 
>>> wrote:
>>> 
>>> On Tue, 5 Sep 2017 16:38:09 +0200, Osztrogonác Csaba <o...@inf.u-szeged.hu> 
>>> wrote:
>>> 
>>> [...]
>>> 
>>> Maybe it will be hard to say goodbye to 32-bit architectures for 
>>> many people, but please, it's 2017 now; the first ARMv8 SoC came out 4 
>>> years ago, and the first AMD64 CPU came out 14 years ago.
>> 
>> While it's true that amd64/x86_64 has been around long enough to not 
>> have to care (much) about its 32-bit counterpart; the same cannot be said 
>> about ARM.
>> It would be great to be able to say that 32-bit ARM is well dead, but 
>> we are not there yet.
>> 
>> If we take x86_64 as an example, it has been “only” 10 years since the 
>> last new 32-bit CPU was announced and until 3-4 years ago it wasn't 
>> uncommon to see plenty of people running 32-bit userlands. If things 
>> unroll in a similar way in the ARM arena, I would expect good 32-bit 
>> ARM support being relevant at least for another 3-4 years before the need 
>> starts to fade away.
>> 
>> If anything, I think it may make more sense to remove 32-bit x86 
>> support, and have the 32-bit ARM support around for some more time.
>> 
>> Cheers,
>> 
>> 
>> --
>> Adrián 
>> 
>> ___
>> webkit-dev mailing list
>> webkit-dev@lists.webkit.org
>> https://lists.webkit.org/mailman/listinfo/webkit-dev
> ___
> webkit-dev mailing list
> webkit-dev@lists.webkit.org
> https://lists.webkit.org/mailman/listinfo/webkit-dev
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Bring back ARMv6 support to JSC

2017-09-05 Thread Filip Pizlo
There isn’t anyone maintaining the 32-bit JIT ports to the level of quality we 
have in our 64-bit ports. Making 32-bit use the 64-bit cloop would be a quality 
progression for actual users of 32-bit. 

-Filip

> On Sep 5, 2017, at 8:02 AM, Adrian Perez de Castro  wrote:
> 
>> On Tue, 5 Sep 2017 16:38:09 +0200, Osztrogonác Csaba  
>> wrote:
>> 
>> [...]
>> 
>> Maybe it will be hard to say goodbye to 32-bit architectures
>> for many people, but please, it's 2017 now; the first ARMv8 SoC
>> came out 4 years ago, and the first AMD64 CPU came out 14 years ago.
> 
> While it's true that amd64/x86_64 has been around long enough to not have to
> care (much) about its 32-bit counterpart; the same cannot be said about ARM.
> It would be great to be able to say that 32-bit ARM is well dead, but we are
> not there yet.
> 
> If we take x86_64 as an example, it has been “only” 10 years since the last
> new 32-bit CPU was announced and until 3-4 years ago it wasn't uncommon to
> see plenty of people running 32-bit userlands. If things unroll in a similar
> way in the ARM arena, I would expect good 32-bit ARM support being relevant
> at least for another 3-4 years before the need starts to fade away.
> 
> If anything, I think it may make more sense to remove 32-bit x86 support,
> and have the 32-bit ARM support around for some more time.
> 
> Cheers,
> 
> 
> --
> Adrián 
> 
> ___
> webkit-dev mailing list
> webkit-dev@lists.webkit.org
> https://lists.webkit.org/mailman/listinfo/webkit-dev
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Bring back ARMv6 support to JSC

2017-09-05 Thread Filip Pizlo
I think that JIT support on platforms that don’t get regular tuning doesn’t 
make sense. I think we should:

- Remove JIT support for 32-bit platforms
- Remove JIT support for Windows
- Remove JSVALUE32_64
- Use the cloop in 64-bit mode on 32-bit platforms and Windows. 

I think this approach would be best for the project since it would mean less 
time spent on JIT ports that are never quite done. 

-Filip

> On Sep 5, 2017, at 6:01 AM, Caio Lima <ticaiol...@gmail.com> wrote:
> 
> Hi guys, I've posted this on the bug thread, but I would also like to
> revive the discussion here.
> 
> After our last discussion, I put some effort to enable IC for ARMv6
> into JIT layers and now I finally collected some results for that.
> 
> Now, we have regressions in just 2 tests in SunSpider (they aren't
> regressing in LongSpider) and 3 in Octane (gbemu, typescript and
> box2d). Also, I see regressions in microbenchmarks, mainly cases
> with observable-side-effects and set/map tests.
> 
> With these results, what do you think about continuing to work on ARMv6 support?
> 
> One important note: I'm close to fixing the errors, taking
> http://trac.webkit.org/changeset/220532 as the baseline. Currently
> there are ~40 tests failing, and the majority of them are due to OOM
> in my runtime env, due to memory constraints. I will try to merge it
> with current master this week to check the status of build+failures.
> 
> Regards,
> Caio.
> 
> 2017-08-01 20:52 GMT-03:00 Caio Lima <ticaiol...@gmail.com>:
>> Hi all.
>> 
>> FYI, I spent the last few weeks investigating the issue with ARMv6 IC and I
>> was able to find the source of the bug and apply a quick fix to run
>> benchmarks again to get results. I just ran V8Spider, Octane and
>> Kraken by now and I'm attaching the results in this email.
>> 
>> We found some test cases regressing, and my attention now is to
>> identify the reason of the regression and how to fix them. Also, the
>> improvements got with JIT in ARMv6 aren't as big as Filip commented in
>> [1] to supported architectures.
>> 
>> [1] - https://bugs.webkit.org/show_bug.cgi?id=172765#c9
>> 
>> 2017-07-13 19:27 GMT-03:00 Caio Lima <ticaiol...@gmail.com>:
>>> Yes. It probably will take a while to process on device, but I'll run it.
>>> 
>>> Em qui, 13 de jul de 2017 às 17:50, Saam barati <sbar...@apple.com>
>>> escreveu:
>>>> 
>>>> And ARES6.
>>>> 
>>>> - Saam
>>>> 
>>>> 
>>>> On Jul 13, 2017, at 1:50 PM, Saam barati <sbar...@apple.com> wrote:
>>>> 
>>>> Can you please run Octane and Kraken too?
>>>> 
>>>> - Saam
>>>> 
>>>> On Jul 13, 2017, at 7:47 AM, Caio Lima <ticaiol...@gmail.com> wrote:
>>>> 
>>>> Finally I got the results from the last benchmark run. The results
>>>> show that the speed-ups are considerable compared with the CLoop
>>>> version, since we get faster results in a large number of tests and
>>>> regress in a small number of scripts. I would like to get feedback
>>>> from you as well, but IMHO enabling the JIT for ARMv6 looks like a good
>>>> improvement step and the amount of code we are touching in current
>>>> trunk code to make it possible is small.
>>>> 
>>>> The results are attached and I also uploaded them in
>>>> https://bugs.webkit.org/show_bug.cgi?id=172765.
>>>> 
>>>> PS.: Some test cases (bigswitch-indirect-symbol-or-undefined,
>>>> bigswitch-indirect-symbol, bigswitch, etc) are failing now and I'm
>>>> already investigating the source of problem to fix them.
>>>> 
>>>> Regards,
>>>> Caio.
>>>> 
>>>> 2017-07-05 22:54 GMT-03:00 Filip Pizlo <fpi...@apple.com>:
>>>> 
>>>> To be clear, I’m concerned that the 32-bit JIT backends have such bad
>>>> tuning for these embedded platforms that it’s just pure badness. Until you
>>>> can prove that you can change this, I think that porting should focus on
>>>> making the cloop great. Then, we can rip out support for weird CPUs rather
>>>> than bringing it back.
>>>> 
>>>> -Filip
>>>> 
>>>> On Jul 5, 2017, at 6:14 PM, Caio Lima <ticaiol...@gmail.com> wrote:
>>>> 
>>>> 2017-07-05 18:25 GMT-03:00 Filip Pizlo <fpi...@apple.com>:
>>>> 
>>>> You need to establish that the JIT is a performance progression over the
>>>> LLInt on ARMv6. I am opposed to more ARMv6 patches landing until there is
>>>> some evidence provided that you’re actually getting speed-ups.

Re: [webkit-dev] Get rid of RefPtr, replace with std::optional?

2017-09-01 Thread Filip Pizlo


> On Sep 1, 2017, at 10:07 AM, Brady Eidson  wrote:
> 
> 
> 
>> On Sep 1, 2017, at 9:46 AM, Maciej Stachowiak  wrote:
>>> 
>>> Does RefPtr do anything for us today that std::optional<Ref<T>> doesn’t?
>> 
>> The obvious things would be: uses less storage space
> 
> Grumble. If that’s true (which, thinking about it, of course it is true) this 
> is pretty much a nonstarter. So… nevermind.

Even though I disagree with your proposal, I don’t think this is a good reason 
since we could create a template specialization for std::optional<Ref<T>>.

I think we could probably do that anyway since this type can arise through 
template instantiations (one place says std::optional and something else 
sets T to Ref): https://bugs.webkit.org/show_bug.cgi?id=176228 


-Filip

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Get rid of RefPtr, replace with std::optional?

2017-09-01 Thread Filip Pizlo


> On Sep 1, 2017, at 10:09 AM, Chris Dumez  wrote:
> 
> I think std::optional<Ref<T>> looks ugly. Also, unlike RefPtr<>, I do not 
> think it is copyable. It is pretty neat to be able to capture a RefPtr<> by 
> value in a lambda.
> Also, how do you convert it to a raw pointer? myOptionalRef.value_or(nullptr) 
> would not work. Not sure there would be a nice way to do so.
> 
> Finally, the storage space argument from Maciej is a good one.

We could create a specialization for std::optional<Ref<T>>.  Filed: 
https://bugs.webkit.org/show_bug.cgi?id=176228

That seems like a good idea separately from whether it should be used instead 
of RefPtr.  Even if we did have style prohibiting it, we might end up with such 
a type because of template specialization.

I can see cases where std::optional<Ref<T>> works more naturally in the 
surrounding code than RefPtr.  That probably happens if your code is already 
based on Ref.  In my experience there’s a lot of inertia to these things - once 
some code uses RefPtr enough, it can be awkward to introduce Ref and perhaps 
vice versa.  I don’t find it very hard to switch between thinking in terms of 
Ref and RefPtr, so I don’t mind that our code uses both.  I wouldn’t agree with 
a style that encourages using std::optional instead of RefPtr, but I also 
wouldn’t want to disallow it.

-Filip


>  
> --
>  Chris Dumez
> 
> 
> 
> 
>> On Sep 1, 2017, at 9:46 AM, Maciej Stachowiak > > wrote:
>> 
>> 
>> 
>>> On Sep 1, 2017, at 9:30 AM, Brady Eidson >> > wrote:
>>> 
>>> I recently worked on a patch where - because of the organic refactoring of 
>>> the patch over its development - I ended up with a std::optional<Ref<T>> 
>>> instead of a RefPtr<T>.
>>> 
>>> A followup review after it had already landed pointed this out, and it got 
>>> me to thinking:
>>> 
>>> Does RefPtr do anything for us today that std::optional<Ref<T>> doesn’t?
>> 
>> The obvious things would be: uses less storage space, has a shorter name.
>> 
>>> 
>>> I kind of like the idea of replacing RefPtr with std::optional<Ref<T>>. It 
>>> makes it explicitly clear what object is actually holding the reference, 
>>> and completely removes some of the confusion of “when should I use Ref vs 
>>> RefPtr?"
>>> 
>>> Thoughts?
>>> 
>>> Thanks,
>>> ~Brady
>>> ___
>>> webkit-dev mailing list
>>> webkit-dev@lists.webkit.org 
>>> https://lists.webkit.org/mailman/listinfo/webkit-dev
>> 
>> ___
>> webkit-dev mailing list
>> webkit-dev@lists.webkit.org 
>> https://lists.webkit.org/mailman/listinfo/webkit-dev
> 
> ___
> webkit-dev mailing list
> webkit-dev@lists.webkit.org
> https://lists.webkit.org/mailman/listinfo/webkit-dev

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Get rid of RefPtr, replace with std::optional?

2017-09-01 Thread Filip Pizlo


> On Sep 1, 2017, at 9:30 AM, Brady Eidson  wrote:
> 
> I recently worked on a patch where - because of the organic refactoring of 
> the patch over its development - I ended up with a std::optional<Ref<T>> instead 
> of a RefPtr<T>.
> 
> A followup review after it had already landed pointed this out, and it got me 
> to thinking:
> 
> Does RefPtr do anything for us today that std::optional<Ref<T>> doesn’t?

Simpler syntax.

> 
> I kind of like the idea of replacing RefPtr with std::optional<Ref<T>>. It makes 
> it explicitly clear what object is actually holding the reference, and 
> completely removes some of the confusion of “when should I use Ref vs RefPtr?”

There are many places in JSC where we don’t even consider using Ref.

-Filip


> 
> Thoughts?
> 
> Thanks,
> ~Brady
> ___
> webkit-dev mailing list
> webkit-dev@lists.webkit.org
> https://lists.webkit.org/mailman/listinfo/webkit-dev

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Bring back ARMv6 support to JSC

2017-07-13 Thread Filip Pizlo
The fact that V8Spider is regressed indicates major problems with the JIT on 
ARMv6. 

I think that it would be better to work on fixing this JIT outside trunk, and 
then come back with patches once you have a speed up. In the future we 
shouldn’t allow partially-working JIT ports to land.

-Filip

> On Jul 13, 2017, at 7:47 AM, Caio Lima <ticaiol...@gmail.com> wrote:
> 
> Finally I got the results from the last benchmark run. The results
> show that the speed-ups are considerable compared with the CLoop
> version, since we get faster results in a large number of tests and
> regress in a small number of scripts. I would like to get feedback
> from you as well, but IMHO enabling the JIT for ARMv6 looks like a good
> improvement step and the amount of code we are touching in current
> trunk code to make it possible is small.
> 
> The results are attached and I also uploaded them in
> https://bugs.webkit.org/show_bug.cgi?id=172765.
> 
> PS.: Some test cases (bigswitch-indirect-symbol-or-undefined,
> bigswitch-indirect-symbol, bigswitch, etc) are failing now and I'm
> already investigating the source of problem to fix them.
> 
> Regards,
> Caio.
> 
> 2017-07-05 22:54 GMT-03:00 Filip Pizlo <fpi...@apple.com>:
>> To be clear, I’m concerned that the 32-bit JIT backends have such bad tuning 
>> for these embedded platforms that it’s just pure badness. Until you can 
>> prove that you can change this, I think that porting should focus on making 
>> the cloop great. Then, we can rip out support for weird CPUs rather than 
>> bringing it back.
>> 
>> -Filip
>> 
>>> On Jul 5, 2017, at 6:14 PM, Caio Lima <ticaiol...@gmail.com> wrote:
>>> 
>>> 2017-07-05 18:25 GMT-03:00 Filip Pizlo <fpi...@apple.com>:
>>>> You need to establish that the JIT is a performance progression over the 
>>>> LLInt on ARMv6. I am opposed to more ARMv6 patches landing until there is 
>>>> some evidence provided that you’re actually getting speed-ups.
>>> 
>>> It makes sense. I can get these numbers related to JIT.
>>> 
>>> BTW, there is a Patch that isn't JIT related
>>> (https://bugs.webkit.org/show_bug.cgi?id=172766).
>>> 
>>> Regards,
>>> Caio.
>>> 
>>>> -Filip
>>>> 
>>>>> On Jun 13, 2017, at 6:48 PM, Caio Lima <ticaiol...@gmail.com> wrote:
>>>>> 
>>>>> Hi All.
>>>>> 
>>>>> Some of you guys might know me through the work I have been doing in
>>>>> JSC. The experience working with WebKit has been great so far, thank
>>>>> you for the reviews!
>>>>> 
>>>>> Since 1st May, we at Igalia have been working on bringing back the ARMv6
>>>>> support into JSC. We already have commits into our downstream branch
>>>>> port[2] that fixes some compile/runtime errors when building JSC to
>>>>> ARMv6 and also fixes some bugs. However, this branch is not synced
>>>>> with WebKit upstream tree and I would like to pursue the upstreaming
>>>>> of this ARMv6/JSC support to WebKit.
>>>>> 
>>>>> As a long shot, we are planning to maintain the ARMv6 support and make
>>>>> tests run as green as possible. Also, it's our goal to make ARMv6 support
>>>>> not interfere with other ARM versions support code negatively and we
>>>>> will be in charge of implementing platform-specific fixes/features for
>>>>> JSC/ARM6, this way no imposing burden to the rest of the community.
>>>>> 
>>>>> To keep track of work to be done, I've created a meta-bug in
>>>>> bugzilla[3] and it's going to be used firstly to organize the commits
>>>>> from our downstream branch, but pretty soon I'm going to create issues
>>>>> related with javascriptcore-test failures and send patches to fix
>>>>> them. We have already submitted 3 patches (they are marked as
>>>>> dependencies of [3]) that fix ARMv6 in the LLInt and JIT layers and got
>>>>> a round of review into them.
>>>>> 
>>>>> Best Regards,
>>>>> Caio.
>>>>> 
>>>>> [1] - https://www.igalia.com/about-us/coding-experience
>>>>> [2] - https://github.com/WebPlatformForEmbedded/WPEWebKit
>>>>> [3] - https://bugs.webkit.org/show_bug.cgi?id=172765
>>>>> ___
>>>>> webkit-dev mailing list
>>>>> webkit-dev@lists.webkit.org
>>>>> https://lists.webkit.org/mailman/listinfo/webkit-dev
> 
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Bring back ARMv6 support to JSC

2017-07-05 Thread Filip Pizlo
To be clear, I’m concerned that the 32-bit JIT backends have such bad tuning 
for these embedded platforms that it’s just pure badness. Until you can prove 
that you can change this, I think that porting should focus on making the cloop 
great. Then, we can rip out support for weird CPUs rather than bringing it 
back. 

-Filip

> On Jul 5, 2017, at 6:14 PM, Caio Lima <ticaiol...@gmail.com> wrote:
> 
> 2017-07-05 18:25 GMT-03:00 Filip Pizlo <fpi...@apple.com>:
>> You need to establish that the JIT is a performance progression over the 
>> LLInt on ARMv6. I am opposed to more ARMv6 patches landing until there is 
>> some evidence provided that you’re actually getting speed-ups.
> 
> It makes sense. I can get these numbers related to JIT.
> 
> BTW, there is a Patch that isn't JIT related
> (https://bugs.webkit.org/show_bug.cgi?id=172766).
> 
> Regards,
> Caio.
> 
>> -Filip
>> 
>>> On Jun 13, 2017, at 6:48 PM, Caio Lima <ticaiol...@gmail.com> wrote:
>>> 
>>> Hi All.
>>> 
>>> Some of you guys might know me through the work I have been doing in
>>> JSC. The experience working with WebKit has been great so far, thank
>>> you for the reviews!
>>> 
>>> Since 1st May, we at Igalia have been working on bringing back the ARMv6
>>> support into JSC. We already have commits into our downstream branch
>>> port[2] that fixes some compile/runtime errors when building JSC to
>>> ARMv6 and also fixes some bugs. However, this branch is not synced
>>> with WebKit upstream tree and I would like to pursue the upstreaming
>>> of this ARMv6/JSC support to WebKit.
>>> 
>>> As a long shot, we are planning to maintain the ARMv6 support and make
>>> tests run as green as possible. Also, it's our goal to make ARMv6 support
>>> not interfere with other ARM versions support code negatively and we
>>> will be in charge of implementing platform-specific fixes/features for
>>> JSC/ARM6, this way no imposing burden to the rest of the community.
>>> 
>>> To keep track of work to be done, I've created a meta-bug in
>>> bugzilla[3] and it's going to be used firstly to organize the commits
>>> from our downstream branch, but pretty soon I'm going to create issues
>>> related with javascriptcore-test failures and send patches to fix
>>> them. We have already submitted 3 patches (they are marked as
>>> dependencies of [3]) that fix ARMv6 in the LLInt and JIT layers and got
>>> a round of review into them.
>>> 
>>> Best Regards,
>>> Caio.
>>> 
>>> [1] - https://www.igalia.com/about-us/coding-experience
>>> [2] - https://github.com/WebPlatformForEmbedded/WPEWebKit
>>> [3] - https://bugs.webkit.org/show_bug.cgi?id=172765
>>> ___
>>> webkit-dev mailing list
>>> webkit-dev@lists.webkit.org
>>> https://lists.webkit.org/mailman/listinfo/webkit-dev
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Bring back ARMv6 support to JSC

2017-07-05 Thread Filip Pizlo
You need to establish that the JIT is a performance progression over the LLInt 
on ARMv6. I am opposed to more ARMv6 patches landing until there is some 
evidence provided that you’re actually getting speed-ups.

-Filip

> On Jun 13, 2017, at 6:48 PM, Caio Lima  wrote:
> 
> Hi All.
> 
> Some of you guys might know me through the work I have been doing in
> JSC. The experience working with WebKit has been great so far, thank
> you for the reviews!
> 
> Since 1st May, we at Igalia have been working on bringing back the ARMv6
> support into JSC. We already have commits in our downstream branch
> port[2] that fixes some compile/runtime errors when building JSC to
> ARMv6 and also fixes some bugs. However, this branch is not synced
> with WebKit upstream tree and I would like to pursue the upstreaming
> of this ARMv6/JSC support to WebKit.
> 
> As a long shot, we are planning to maintain the ARMv6 support and make
> tests run as green as possible. Also, it's our goal to make ARMv6 support
> not interfere negatively with support code for other ARM versions, and we
> will be in charge of implementing platform-specific fixes/features for
> JSC/ARMv6, this way not imposing a burden on the rest of the community.
> 
> To keep track of work to be done, I've created a meta-bug in
> bugzilla[3] and it's going to be used firstly to organize the commits
> from our downstream branch, but pretty soon I'm going to create issues
> related with javascriptcore-test failures and send patches to fix
> them. We have already submitted 3 patches (they are marked as
> dependence of [3]) that fixes ARMv6 into LLInt and JIT layers and got
> a round of review into them.
> 
> Best Regards,
> Caio.
> 
> [1] - https://www.igalia.com/about-us/coding-experience
> [2] - https://github.com/WebPlatformForEmbedded/WPEWebKit
> [3] - https://bugs.webkit.org/show_bug.cgi?id=172765


Re: [webkit-dev] Data Memory Barrier ARMv6 question

2017-07-05 Thread Filip Pizlo
We should not use those helpers, especially in the JIT. It does not make sense 
for the JIT to emit calls to system functions when the user is expecting it to 
emit an instruction. If we cannot perfectly select the right barrier on a 
particular CPU, we should disable concurrency on that CPU. 

-Filip

> On Jul 5, 2017, at 8:41 AM, JF Bastien  wrote:
> 
> On Linux you can do the following:
> ((void(*)())0x0fa0)();
> 
> That address contains a helper which does the “right” barrier, including if 
> you’re not on an SMP system it’ll do nothing.
> 
> Details: https://www.kernel.org/doc/Documentation/arm/kernel_user_helpers.txt
> That file also lists other Linux helpers.
> 
> I think for ARMv6 it makes sense to use these helpers. AFAIK the mcr barrier 
> instruction isn’t supported by all ARMv6 CPUs.
> 
> For ARMv7 and later, DMB ish is the right thing.
> 
> 
>> On Jul 3, 2017, at 17:19, Caio Lima  wrote:
>> 
>> Hi all.
>> 
>> I'm working in this patch
>> (https://bugs.webkit.org/show_bug.cgi?id=172767) and Mark Lam raised
>> some questions about the data memory barrier (DMB for short) in ARMv6
>> using "mcr 15 ...". The point is that we have found discrepancies in the
>> official ARM reference manual about the semantics of this instruction. We
>> have discussed it in the bug above, and I would like to know if there
>> is somebody with a stronger ARM background who could help us there and
>> then approve the patch to be committed.
>> 
>> Thanks in advance and best regards,
>> Caio Lima.
> 


Re: [webkit-dev] Should we ever use std::function instead of WTF::Function?

2017-06-13 Thread Filip Pizlo

> On Jun 13, 2017, at 1:00 PM, Chris Dumez <cdu...@apple.com> wrote:
> 
> 
>> On Jun 13, 2017, at 12:51 PM, Filip Pizlo <fpi...@apple.com> wrote:
>> 
>>> 
>>> On Jun 13, 2017, at 12:37 PM, Alex Christensen <achristen...@apple.com> wrote:
>>> 
>>> Ok, maybe we can get rid of std::function, then!  I hadn’t used BlockPtr as 
>>> much as Chris.  I’d be opposed to adding a copy constructor to 
>>> WTF::Function because the non-copyability of WTF::Function is why we made 
>>> it, and it has prevented many bugs.
>> 
>> I agree that the copy semantics of std::function are strange - each copy 
>> gets its own view of the state of the closure.  This gets super weird when 
>> you implement an algorithm that accidentally copies the function - the act 
>> of copying invokes copy constructors on all of the lambda-lifted state, 
>> which is probably not what you wanted.  So I’m not surprised that moving to 
>> WTF::Function avoided bugs.  What I’m proposing also prevents many of the 
>> same bugs because the lambda-lifted state never gets copied in my world.
> 
> I understand SharedTask avoids many of the same issues. My issue is that it 
> adds an extra data member for refcounting that is very rarely needed in 
> practice. Also, because this type is copyable, people can create refcounting 
> churn by inadvertently copying.
> Because most of the time, we do not want to share and WTF::Function is 
> sufficient as it stands, I do not think it is worth making WTF::Function 
> refcounted.

Perf is a good argument against this, but then it seems like WTF::Function 
should by default allow block-style sharing semantics, and then there would be 
a WTF::UniqueFunction that is more optimal.

Also, who is “we” here?  If “we” includes JSC, then “we” don’t find 
WTF::Function sufficient.  I try to use WTF::Function whenever I can because I 
like the name and I like the API, but often I have to switch to SharedTask in 
order to make the code work.  B3’s use of SharedTask for its StackmapGenerator 
is a great example.  We can’t use WTF::Function in that code because the data 
structures that reference the generator get copied and that’s OK when we use 
SharedTask.

> 
>> 
>> Do you think that code that uses ObjC blocks encounters the kind of bugs 
>> that you saw WTF::Function preventing?  Or are the bugs that Function 
>> prevents more specific to std::function?  I guess I’d never heard of a need 
>> to change block semantics to avoid bugs, so I have a hunch that the bugs you 
>> guys prevented were specific to the fact that std::function copies instead 
>> of sharing.
> 
> To be clear, we viewed this as “copies instead of truly moving”, not 
> sharing. We never really needed sharing when WTF::Function was initially 
> called WTF::NonCopyableFunction.
> Yes, the bugs we were trying to avoid were related to using std::function and 
> copying things implicitly, even if you WTFMove() it around. Because we 
> started using WTF::Function instead of std::function in more places, though, 
> having BlockPtr::fromCallable() to be able to pass a WTF::Function to an ObjC 
> function expecting a block became handy. 
> 
>> 
>>> 
>>> I’ve also seen many cases where I have a WTF::Function that I want to make 
>>> sure is called once and only once before destruction.  I wouldn’t mind 
>>> adding a WTF::Callback subclass that just asserts that it has been called 
>>> once.  That would’ve prevented some bugs, too, but not every use of 
>>> WTF::Function has such a requirement.
>>> 
>>>> On Jun 13, 2017, at 12:31 PM, Chris Dumez <cdu...@apple.com> wrote:
>>>> 
>>>> We already have BlockPtr for passing a Function as a lambda block.
>>>> 
>>>> Chris Dumez
>>>> 
>>>> On Jun 13, 2017, at 12:29 PM, Alex Christensen <achristen...@apple.com> wrote:
>>>> 
>>>>> std::function, c++ lambda, and objc blocks are all interchangeable.  
>>>>> WTF::Functions cannot be used as objc blocks because the latter must be 
>>>>> copyable.  Until that changes or we stop using objc, we cannot completely 
>>>>> eliminate std::function from WebKit.
>>> 



Re: [webkit-dev] Should we ever use std::function instead of WTF::Function?

2017-06-13 Thread Filip Pizlo

> On Jun 13, 2017, at 12:37 PM, Alex Christensen  wrote:
> 
> Ok, maybe we can get rid of std::function, then!  I hadn’t used BlockPtr as 
> much as Chris.  I’d be opposed to adding a copy constructor to WTF::Function 
> because the non-copyability of WTF::Function is why we made it, and it has 
> prevented many bugs.

I agree that the copy semantics of std::function are strange - each copy gets 
its own view of the state of the closure.  This gets super weird when you 
implement an algorithm that accidentally copies the function - the act of 
copying invokes copy constructors on all of the lambda-lifted state, which is 
probably not what you wanted.  So I’m not surprised that moving to 
WTF::Function avoided bugs.  What I’m proposing also prevents many of the same 
bugs because the lambda-lifted state never gets copied in my world.

Do you think that code that uses ObjC blocks encounters the kind of bugs that 
you saw WTF::Function preventing?  Or are the bugs that Function prevents more 
specific to std::function?  I guess I’d never heard of a need to change block 
semantics to avoid bugs, so I have a hunch that the bugs you guys prevented 
were specific to the fact that std::function copies instead of sharing.

> 
> I’ve also seen many cases where I have a WTF::Function that I want to make 
> sure is called once and only once before destruction.  I wouldn’t mind adding 
> a WTF::Callback subclass that just asserts that it has been called once.  
> That would’ve prevented some bugs, too, but not every use of WTF::Function 
> has such a requirement.
> 
>> On Jun 13, 2017, at 12:31 PM, Chris Dumez wrote:
>> 
>> We already have BlockPtr for passing a Function as a lambda block.
>> 
>> Chris Dumez
>> 
>> On Jun 13, 2017, at 12:29 PM, Alex Christensen wrote:
>> 
>>> std::function, c++ lambda, and objc blocks are all interchangeable.  
>>> WTF::Functions cannot be used as objc blocks because the latter must be 
>>> copyable.  Until that changes or we stop using objc, we cannot completely 
>>> eliminate std::function from WebKit.
> 



Re: [webkit-dev] Should we ever use std::function instead of WTF::Function?

2017-06-13 Thread Filip Pizlo
And ObjC blocks share instead of copying, which is more like SharedTask.  I 
think that’s another reason why adding copy constructors to WTF::Function that 
do block-like sharing under the hood is rather attractive.

-Filip


> On Jun 13, 2017, at 12:29 PM, Alex Christensen  wrote:
> 
> std::function, c++ lambda, and objc blocks are all interchangeable.  
> WTF::Functions cannot be used as objc blocks because the latter must be 
> copyable.  Until that changes or we stop using objc, we cannot completely 
> eliminate std::function from WebKit.



Re: [webkit-dev] Should we ever use std::function instead of WTF::Function?

2017-06-13 Thread Filip Pizlo

> On Jun 13, 2017, at 11:56 AM, Brady Eidson <beid...@apple.com> wrote:
> 
>> 
>> On Jun 13, 2017, at 9:55 AM, Filip Pizlo <fpi...@apple.com> wrote:
>> 
>> Would SharedTask’s sharing be semantically incorrect for users of 
>> WTF::Function?  In other words, do you rely on the compiler error that says 
>> that there is no copy constructor?
> 
> 
> WTF::Function is used in cross-thread dispatching, where arguments captured 
> by the lambda are “thread safe copied”, “isolated copied”, etc.
> 
> Part of the safety is the lack of a copy constructor, as accidentally making 
> a copy on the originating thread can invalidate the setup for thread safety.
> 
> So, yes, they must maintain move-only semantics.

It seems that the benefit of keeping the existing Function semantics is that it 
will sometimes prevent races (but not all, since as you say, you 
can still put the Function in a shared object).

The benefit of merging Function and SharedTask is twofold:

1) JSC uses SharedTask in a lot of places not because we want to actually share 
it with many threads, but because RefPtr<> can be stored in a lot more kinds of 
data structures.  I think there are places where we do some transformation on a 
set of tasks, and implementing it all using move semantics would be cumbersome. 
 For example, most textbook sorting algorithms are most easily implemented if 
you can do “=“ and temporarily create aliases (or copies).  This is the main 
reason why I find myself switching Function code to use SharedTask.

2) You won’t need a different type when you do want sharing between threads.  I 
wrote SharedTask initially for this purpose, but this is usually not the reason 
why I use it.

I get that being able to guarantee that (2) won’t happen is attractive, but I’m 
worried that using no-copy to ensure this makes other things harder.

> 
>> I’m imagining that if WTF::Function was backed by SharedTask that it would 
>> not result in any behavior change for existing WTF::Function users. 
> 
> I see the reverse. Is there any reason SharedTask can’t be backed by a 
> WTF::Function?
> 
> There’s no harm in adding ref-counting semantics on top of the move-only 
> WTF::Function if SharedTask is your goal.

My goal is to see if we can merge SharedTask and Function.  I don’t have 
opinions on how the two should be implemented.

-Filip


> 
> Thanks,
>  Brady
> 
>>  At worst, it would mean that WTF::Function’s backing store has an extra 
>> word for the ref count - but if you only move and never copy then this word 
>> starts out at 1 and stays there until death, so it’s very cheap.
>> 
>> -Filip
>> 
>> 
>>> On Jun 13, 2017, at 9:43 AM, Chris Dumez <cdu...@apple.com> wrote:
>>> 
>>> In most cases in WebCore at least, we don’t actually want to share 
>>> ownership of the lambda so we don’t need RefCounting / SharedTask. Because 
>>> of this, I don’t think we should merge SharedTask into Function. I think 
>>> that as it stands, WTF::Function is a suitable replacement for most uses in 
>>> WebCore since we actually very rarely need copying (either it just builds 
>>> or the code can be refactored very slightly to avoid the copying).
>>> 
>>> --
>>>  Chris Dumez
>>> 
>>> 
>>> 
>>> 
>>>> On Jun 13, 2017, at 9:34 AM, Filip Pizlo <fpi...@apple.com> wrote:
>>>> 
>>>> We should have a better story here.  Right now the story is too 
>>>> complicated.  We have:
>>>> 
>>>> - ScopedLambda or ScopedLambdaRef if you have a stack-allocated function 
>>>> that outlives its user
>>>> - SharedTask if you have a heap-allocated function that you want to share 
>>>> and ref-count
>>>> - WTF::Function if you have a heap-allocated function that you want to 
>>>> transfer ownership (move yes, copy no)
>>>> - std::function if you have a heap-allocated function that you want to pass 
>>>> by copy
>>>> 
>>>> Only std::function and WTF::Function do the magic that lets you say:
>>>> 
>>>> std::function f = 
>>>> 
>>>> Also, std::function has the benefit that it does copying.  None of the 
>>>> others do that.
>>>> 
>>>> ScopedLambda serves a specific purpose: it avoids allocation.  Probably we 
>>>> want to keep that one even if we merge the others.
>>>> 
>>>>  IMO SharedTask has the nicest semantics.  I don’t ever want the activation 
>>>> of the function to be copied.

Re: [webkit-dev] Should we ever use std::function instead of WTF::Function?

2017-06-13 Thread Filip Pizlo
I like these renamings. I’m also ok with renaming Function to UniqueFunction 
but I’m also ok with keeping the current name. 

-Filip

> On Jun 13, 2017, at 10:12 AM, Konstantin Tokarev <annu...@yandex.ru> wrote:
> 
> 
> 
> 13.06.2017, 20:08, "Maciej Stachowiak" <m...@apple.com>:
>> In case it turns out not to be possible to reduce the number of concepts 
>> (besides eliminating std::function), maybe it would help to change the names 
>> and behaviors of these classes to match better. Function, SharedFunction and 
>> ScopedFunction would have a much more obvious relationship to each other 
>> than Function, SharedTask and ScopedLambda.
> 
> Maybe rename Function to UniqueFunction?
> 
>> 
>> (I'm not sure if the direct assignment from a lambda is an incidental 
>> difference or one that's required by the different ownership semantics.)
>> 
>>  - Maciej
>> 
>>>  On Jun 13, 2017, at 9:34 AM, Filip Pizlo <fpi...@apple.com> wrote:
>>> 
>>>  We should have a better story here. Right now the story is too 
>>> complicated. We have:
>>> 
>>>  - ScopedLambda or ScopedLambdaRef if you have a stack-allocated function 
>>> that outlives its user
>>>  - SharedTask if you have a heap-allocated function that you want to share 
>>> and ref-count
>>>  - WTF::Function if you have a heap-allocated function that you want to 
>>> transfer ownership (move yes, copy no)
>>>  - std::function if you have a heap-allocated function that you want to pass 
>>> by copy
>>> 
>>>  Only std::function and WTF::Function do the magic that lets you say:
>>> 
>>>  std::function f = 
>>> 
>>>  Also, std::function has the benefit that it does copying. None of the 
>>> others do that.
>>> 
>>>  ScopedLambda serves a specific purpose: it avoids allocation. Probably we 
>>> want to keep that one even if we merge the others.
>>> 
>>>  IMO SharedTask has the nicest semantics. I don’t ever want the activation 
>>> of the function to be copied. In my experience I always want sharing if 
>>> more than one reference to the function exists. I think that what we really 
>>> want in most places is a WTF::Function that has sharing semantics like 
>>> SharedTask. That would let us get rid of std::function and SharedTask.
>>> 
>>>  In the current status quo, it’s not always correct to convert 
>>> std::function to the others because:
>>> 
>>>  - Unlike ScopedLambda and SharedTask, std::function has the magic 
>>> constructor that allows you to just assign a lambda to it.
>>>  - Unlike ScopedLambda, std::function is safe if the use is not scoped.
>>>  - Unlike WTF::Function, std::function can be copied.
>>> 
>>>  -Filip
>>> 
>>>>  On Jun 13, 2017, at 9:24 AM, Darin Adler <da...@apple.com> wrote:
>>>> 
>>>>  I’ve noticed many patches switching us from std::function to 
>>>> WTF::Function recently, to fix problems with copying and thread safety.
>>>> 
>>>>  Does std::function have any advantages over WTF::Function? Should we ever 
>>>> prefer std::function, or should we use WTF::Function everywhere in WebKit 
>>>> where we would otherwise use std::function?
>>>> 
>>>>  — Darin
>>> 
>> 
> 
> -- 
> Regards,
> Konstantin


Re: [webkit-dev] Should we ever use std::function instead of WTF::Function?

2017-06-13 Thread Filip Pizlo
Would SharedTask’s sharing be semantically incorrect for users of 
WTF::Function?  In other words, do you rely on the compiler error that says 
that there is no copy constructor?

I’m imagining that if WTF::Function was backed by SharedTask that it would not 
result in any behavior change for existing WTF::Function users.  At worst, it 
would mean that WTF::Function’s backing store has an extra word for the ref 
count - but if you only move and never copy then this word starts out at 1 and 
stays there until death, so it’s very cheap.

-Filip


> On Jun 13, 2017, at 9:43 AM, Chris Dumez <cdu...@apple.com> wrote:
> 
> In most cases in WebCore at least, we don’t actually want to share ownership 
> of the lambda so we don’t need RefCounting / SharedTask. Because of this, I 
> don’t think we should merge SharedTask into Function. I think that as it 
> stands, WTF::Function is a suitable replacement for most uses in WebCore 
> since we actually very rarely need copying (either it just builds or the code 
> can be refactored very slightly to avoid the copying).
> 
> --
>  Chris Dumez
> 
> 
> 
> 
>> On Jun 13, 2017, at 9:34 AM, Filip Pizlo <fpi...@apple.com> wrote:
>> 
>> We should have a better story here.  Right now the story is too complicated. 
>>  We have:
>> 
>> - ScopedLambda or ScopedLambdaRef if you have a stack-allocated function 
>> that outlives its user
>> - SharedTask if you have a heap-allocated function that you want to share 
>> and ref-count
>> - WTF::Function if you have a heap-allocated function that you want to 
>> transfer ownership (move yes, copy no)
>> - std::function if you have a heap-allocated function that you want to pass 
>> by copy
>> 
>> Only std::function and WTF::Function do the magic that lets you say:
>> 
>> std::function f = 
>> 
>> Also, std::function has the benefit that it does copying.  None of the 
>> others do that.
>> 
>> ScopedLambda serves a specific purpose: it avoids allocation.  Probably we 
>> want to keep that one even if we merge the others.
>> 
>> IMO SharedTask has the nicest semantics.  I don’t ever want the activation 
>> of the function to be copied.  In my experience I always want sharing if 
>> more than one reference to the function exists.  I think that what we really 
>> want in most places is a WTF::Function that has sharing semantics like 
>> SharedTask.  That would let us get rid of std::function and SharedTask.
>> 
>> In the current status quo, it’s not always correct to convert std::function 
>> to the others because:
>> 
>> - Unlike ScopedLambda and SharedTask, std::function has the magic 
>> constructor that allows you to just assign a lambda to it.
>> - Unlike ScopedLambda, std::function is safe if the use is not scoped.
>> - Unlike WTF::Function, std::function can be copied.
>> 
>> -Filip
>> 
>> 
>>> On Jun 13, 2017, at 9:24 AM, Darin Adler <da...@apple.com> wrote:
>>> 
>>> I’ve noticed many patches switching us from std::function to WTF::Function 
>>> recently, to fix problems with copying and thread safety.
>>> 
>>> Does std::function have any advantages over WTF::Function? Should we ever 
>>> prefer std::function, or should we use WTF::Function everywhere in WebKit 
>>> where we would otherwise use std::function?
>>> 
>>> — Darin
>> 
> 



Re: [webkit-dev] Should we ever use std::function instead of WTF::Function?

2017-06-13 Thread Filip Pizlo
We should have a better story here.  Right now the story is too complicated.  
We have:

- ScopedLambda or ScopedLambdaRef if you have a stack-allocated function that 
outlives its user
- SharedTask if you have a heap-allocated function that you want to share and 
ref-count
- WTF::Function if you have a heap-allocated function that you want to transfer 
ownership (move yes, copy no)
- std::function if you have a heap-allocated function that you want to pass by 
copy

Only std::function and WTF::Function do the magic that lets you say:

std::function f = 

Also, std::function has the benefit that it does copying.  None of the others 
do that.

ScopedLambda serves a specific purpose: it avoids allocation.  Probably we want 
to keep that one even if we merge the others.

IMO SharedTask has the nicest semantics.  I don’t ever want the activation of 
the function to be copied.  In my experience I always want sharing if more than 
one reference to the function exists.  I think that what we really want in most 
places is a WTF::Function that has sharing semantics like SharedTask.  That 
would let us get rid of std::function and SharedTask.

In the current status quo, it’s not always correct to convert std::function to 
the others because:

- Unlike ScopedLambda and SharedTask, std::function has the magic constructor 
that allows you to just assign a lambda to it.
- Unlike ScopedLambda, std::function is safe if the use is not scoped.
- Unlike WTF::Function, std::function can be copied.

-Filip


> On Jun 13, 2017, at 9:24 AM, Darin Adler  wrote:
> 
> I’ve noticed many patches switching us from std::function to WTF::Function 
> recently, to fix problems with copying and thread safety.
> 
> Does std::function have any advantages over WTF::Function? Should we ever 
> prefer std::function, or should we use WTF::Function everywhere in WebKit 
> where we would otherwise use std::function?
> 
> — Darin



Re: [webkit-dev] !!Tests for equality comparison

2017-04-27 Thread Filip Pizlo
I think that this aspect of the style - its implications for ints - was 
deliberate.

The code uses the !int style in so many places that this style change would be 
a lot of churn for little benefit.  I eventually got used to this style, and 
now it feels pretty natural.

-Filip



> On Apr 27, 2017, at 4:06 PM, JF Bastien  wrote:
> 
> Hello C++ fans!
> 
> The C++ style check currently say:
> Tests for true/false, null/non-null, and zero/non-zero should all be done 
> without equality comparisons
> 
> I totally agree for booleans and pointers… but not for integers. I know it’s 
> pretty much the same thing, but it takes me slightly longer to process code 
> like this:
> 
> int numTestsForEqualityComparison = 0;
> // Count ‘em!
> // …
> if (!numTestsForEqualityComparison)
>   printf(“Good job!”);
> 
> I read it as “if not number of tests for equality comparison”. That's weird. 
> It takes me ever so slightly longer to think about, and I’ve gotten it wrong a 
> bunch of times already. I’m not trying to check for “notness", I’m trying to 
> say “if there were zero tests for equality comparison”, a.k.a.:
> 
> if (numTestsForEqualityComparison == 0)
>   printf(“Good job!”);
> 
> So how about the C++ style let me just say that? I’m not suggesting we advise 
> using that style for integers everywhere, I’m just saying it should be 
> acceptable to check zero/non-zero using equality comparison.
> 
> 
> !!Thanks (i.e. many thanks),
> 
> JF
> 
> p.s.: With you I am, fans of Yoda comparison, but for another day this will 
> be.



Re: [webkit-dev] VM::setExclusiveThread()

2017-02-28 Thread Filip Pizlo
It would be surprising if it did, since JS benchmarks usually acquire the JS 
lock around the whole benchmark.

-Filip


> On Feb 28, 2017, at 2:50 PM, Maciej Stachowiak <m...@apple.com> wrote:
> 
> 
> Good news that it doesn't affect Speedometer. Does this have any effect on 
> pure JS benchmarks running in the browser (e.g. JetStream)?
> 
> - Maciej
> 
>> On Feb 28, 2017, at 10:48 AM, Filip Pizlo <fpi...@apple.com> wrote:
>> 
>> Sounds good!
>> 
>> I agree that a 20% regression on a microbenchmark of the exclusive JSLock is 
>> not a problem, since that's not how WebCore usually behaves and Speedometer 
>> doesn't seem to care.
>> 
>> -Filip
>> 
>> 
>>> On Feb 28, 2017, at 10:38 AM, Mark Lam <mark@apple.com> wrote:
>>> 
>>> I’ve run Speedometer many times on a quiet 13” MacBookPro: removing the use 
>>> of exclusive thread status does not appear to impact performance in any 
>>> measurable way.  I also did some measurements on a microbenchmark locking 
>>> and unlocking the JSLock using a JSLockHolder in a loop.  The 
>>> microbenchmark shows that removing exclusive thread status results in the 
>>> locking and unlocking operation increasing by up to 20%.
>>> 
>>> Given that locking and unlocking the JSLock is a very small fraction of the 
>>> work done in a webpage, it’s not surprising that the 20% increase in time 
>>> for the lock and unlock operation is not measurable in Speedometer.  Note 
>>> also that the 20% only impacts WebCore which uses the exclusive thread 
>>> status.  For all other clients of JSC (which never uses exclusive thread 
>>> status), it may actually be faster to have exclusive thread checks removed 
>>> (simply due to that code doing less work).
>>> 
>>> I’ll put up a patch to remove the use of exclusive thread status.  This 
>>> will simplify the code and make it easier to move forward with new features.
>>> 
>>> Mark
>>> 
>>> 
>>>> On Feb 24, 2017, at 9:01 PM, Filip Pizlo <fpi...@apple.com> wrote:
>>>> 
>>>> Seems like if the relevant benchmarks (speedometer) are ok with it then we 
>>>> should just do this. 
>>>> 
>>>> -Filip
>>>> 
>>>>> On Feb 24, 2017, at 20:50, Mark Lam <mark@apple.com> wrote:
>>>>> 
>>>>> The JSC VM has this method setExclusiveThread().  Some details:
>>>>> 1. setExclusiveThread() is only used to forego actually locking/unlocking 
>>>>> the underlying lock inside JSLock.
>>>>> 2. setExclusiveThread() is only used by WebCore where we can guarantee 
>>>>> that the VM will only ever be used exclusively on one thread.
>>>>> 3. the underlying lock inside JSLock used to be a slow system lock.
>>>>> 
>>>>> Now that we have fast locking, I propose that we simplify the JSLock code 
>>>>> by removing the concept of the exclusiveThread and always lock/unlock the 
>>>>> underlying lock.  This also gives us the ability to tryLock the JSLock 
>>>>> (something I would like to be able to do for something new I’m working 
>>>>> on).
>>>>> 
>>>>> Does anyone see a reason why we can’t remove the concept of the 
>>>>> exclusiveThread?
>>>>> 
>>>>> Thanks.
>>>>> 
>>>>> Mark
>>>>> 
>>> 
>> 
> 



Re: [webkit-dev] VM::setExclusiveThread()

2017-02-28 Thread Filip Pizlo
Sounds good!

I agree that a 20% regression on a microbenchmark of the exclusive JSLock is 
not a problem, since that's not how WebCore usually behaves and Speedometer 
doesn't seem to care.

-Filip


> On Feb 28, 2017, at 10:38 AM, Mark Lam <mark@apple.com> wrote:
> 
> I’ve run Speedometer many times on a quiet 13” MacBookPro: removing the use 
> of exclusive thread status does not appear to impact performance in any 
> measurable way.  I also did some measurements on a microbenchmark locking and 
> unlocking the JSLock using a JSLockHolder in a loop.  The microbenchmark 
> shows that removing exclusive thread status results in the locking and 
> unlocking operation increasing by up to 20%.
> 
> Given that locking and unlocking the JSLock is a very small fraction of the 
> work done in a webpage, it’s not surprising that the 20% increase in time for 
> the lock and unlock operation is not measurable in Speedometer.  Note also 
> that the 20% only impacts WebCore which uses the exclusive thread status.  
> For all other clients of JSC (which never uses exclusive thread status), it 
> may actually be faster to have exclusive thread checks removed (simply due to 
> that code doing less work).
> 
> I’ll put up a patch to remove the use of exclusive thread status.  This will 
> simplify the code and make it easier to move forward with new features.
> 
> Mark
> 
> 
>> On Feb 24, 2017, at 9:01 PM, Filip Pizlo <fpi...@apple.com> wrote:
>> 
>> Seems like if the relevant benchmarks (speedometer) are ok with it then we 
>> should just do this. 
>> 
>> -Filip
>> 
>>> On Feb 24, 2017, at 20:50, Mark Lam <mark@apple.com> wrote:
>>> 
>>> The JSC VM has this method setExclusiveThread().  Some details:
>>> 1. setExclusiveThread() is only used to forego actually locking/unlocking 
>>> the underlying lock inside JSLock.
>>> 2. setExclusiveThread() is only used by WebCore where we can guarantee that 
>>> the VM will only ever be used exclusively on one thread.
>>> 3. the underlying lock inside JSLock used to be a slow system lock.
>>> 
>>> Now that we have fast locking, I propose that we simplify the JSLock code 
>>> by removing the concept of the exclusiveThread and always lock/unlock the 
>>> underlying lock.  This also give us the ability to tryLock the JSLock 
>>> (something I would like to be able to do for something new I’m working on).
>>> 
>>> Does anyone see a reason why we can’t remove the concept of the 
>>> exclusiveThread?
>>> 
>>> Thanks.
>>> 
>>> Mark
>>> 
> 

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] VM::setExclusiveThread()

2017-02-24 Thread Filip Pizlo
Seems like if the relevant benchmarks (speedometer) are ok with it then we 
should just do this. 

-Filip

> On Feb 24, 2017, at 20:50, Mark Lam  wrote:
> 
> The JSC VM has this method setExclusiveThread().  Some details:
> 1. setExclusiveThread() is only used to forego actually locking/unlocking the 
> underlying lock inside JSLock.
> 2. setExclusiveThread() is only used by WebCore where we can guarantee that 
> the VM will only ever be used exclusively on one thread.
> 3. the underlying lock inside JSLock used to be a slow system lock.
> 
> Now that we have fast locking, I propose that we simplify the JSLock code by 
> removing the concept of the exclusiveThread and always lock/unlock the 
> underlying lock.  This also give us the ability to tryLock the JSLock 
> (something I would like to be able to do for something new I’m working on).
> 
> Does anyone see a reason why we can’t remove the concept of the 
> exclusiveThread?
> 
> Thanks.
> 
> Mark
> 
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Why does RELEASE_ASSERT not have an error message?

2017-02-22 Thread Filip Pizlo

> On Feb 22, 2017, at 12:41 PM, Mark Lam <mark@apple.com> wrote:
> 
>> 
>> On Feb 22, 2017, at 12:35 PM, Filip Pizlo <fpi...@apple.com 
>> <mailto:fpi...@apple.com>> wrote:
>> 
>>> 
>>> On Feb 22, 2017, at 12:33 PM, Mark Lam <mark@apple.com 
>>> <mailto:mark@apple.com>> wrote:
>>> 
>>> 
>>>> On Feb 22, 2017, at 12:16 PM, Filip Pizlo <fpi...@apple.com 
>>>> <mailto:fpi...@apple.com>> wrote:
>>>> 
>>>> 
>>>>> On Feb 22, 2017, at 11:58 AM, Geoffrey Garen <gga...@apple.com 
>>>>> <mailto:gga...@apple.com>> wrote:
>>>>> 
>>>>> I’ve lost countless hours to investigating CrashTracers that would have 
>>>>> been easy to solve if I had access to register state.
>>>> 
>>>> The current RELEASE_ASSERT means that every assertion in what the compiler 
>>>> thinks is a function (i.e. some function and everything inlined into it) 
>>>> is coalesced into a single trap site.  I’d like to understand how you use 
>>>> the register state if you don’t even know which assertion you are at.
>>> 
>>> Correction: they are not coalesced.  I was mistaken about that.  The fact 
>>> that we turn them into inline asm (for emitting the int3) means the 
>>> compiler cannot optimize it away or coalesce it.  The compiler does move it 
>>> to the end of the emitted code for the function though because we end the 
>>> CRASH() macro with __builtin_unreachable().
>>> 
>>> Hence, each int3 can be correlated back to the RELEASE_ASSERT that 
>>> triggered it (with some extended disassembly work).
>> 
>> This never works for me.  I tested it locally.  LLVM will even coalesce 
>> similar inline assembly.
> 
> With my proposal, I’m emitting different inline asm now after the int3 trap 
> because I’m embedding line number and file strings.  Hence, even if the 
> compiler is smart enough to compare inline asm code blobs, it will find them 
> to be different, and hence, it doesn’t make sense to coalesce.

Are you claiming that LLVM does not currently now coalesce RELEASE_ASSERTS, or 
that it will not coalesce them anymore after you make some change?

> 
>> 
>>> 
>>>> I believe that if you do want to analyze register state, then switching 
>>>> back to calling some function that prints out diagnostic information is 
>>>> strictly better.  Sure, you get less register state, but at least you know 
>>>> where you crashed.  Knowing where you crashed is much more important than 
>>>> knowing the register state, since the register state is not useful if you 
>>>> don’t know where you crashed.
>>>> 
>>> 
>>> I would like to point out that we might be able to get the best of both 
>>> worlds.  Here’s how we can do it:
>>> 
>>> #define RELEASE_ASSERT(assertion) do { \
>>> if (UNLIKELY(!(assertion))) { \
>>> preserveRegisterState(); \
>>> WTFReportAssertionFailure(__FILE__, __LINE__, WTF_PRETTY_FUNCTION, 
>>> #assertion); \
>>> restoreRegisterState(); \
>>> CRASH(); \
>>> } \
>>> } while (false)
>>> 
>>> preserveRegisterState() and restoreRegisterState() will carefully push and 
>>> pop registers onto / off the stack (like how the JIT probe works).
>> 
>> Why not do the preserve/restore inside the WTFReport call?
> 
> Because I would like to preserve the register values that were used in the 
> comparison that failed the assertion.

That doesn't change anything.  You can create a WTFFail that is written in 
assembly and first saves all registers, and restores them prior to trapping.

-Filip


> 
> Mark
> 
>> 
>>> This allows us to get a log message on the terminal when we’re running 
>>> manually.
>>> 
>>> In addition, we can capture some additional information about the assertion 
>>> site by forcing the compiler to emit code to capture the code location info 
>>> after the trapping instruction.  This is redundant but provides an easy 
>>> place to find this info (i.e. after the int3 instruction).
>>> 
>>> #define WTFBreakpointTrap() do { \
>>> __asm__ volatile ("int3"); \
>>> __asm__ volatile( "" :  : "r"(__FILE__), "r"(__LINE__), 
>>> "r"(WTF_PRETTY_FUNCTION)); \
>>> } while (false)
>>> 
>>> We can easily get the line number this way.  However, the line number is not 
>>> very useful by itself when we have inlining.  Hence, I also capture the 
>>> __FILE__ and WTF_PRETTY_FUNCTION.  However, I haven’t been able to figure 
>>> out how to decode those from the otool disassembler yet.

Re: [webkit-dev] Why does RELEASE_ASSERT not have an error message?

2017-02-22 Thread Filip Pizlo
Mark,

I know that you keep saying this.  I remember you told me this even when I was 
sitting on a RELEASE_ASSERT that had gotten rage-coalesced.

Your reasoning sounds great, but this just isn't what happens in clang.  
__builtin_trap gets coalesced, as does inline asm.

-Filip


> On Feb 22, 2017, at 12:38 PM, Mark Lam <mark@apple.com> wrote:
> 
> For some context, we used to see aggregation of the CRASH() for 
> RELEASE_ASSERT() in the old days before we switched to using the int3 trap.  
> Back then we called a crash() function that never returns.  As a result, the 
> C++ compiler was able to coalesce all the calls.  With the int3 trap emitted 
> by inline asm, the C++ compiler has less ability to determine that the crash 
> sites have the same code (probably because it doesn’t bother comparing what’s 
> in the inline asm blobs).
> 
> Mark
> 
> 
>> On Feb 22, 2017, at 12:33 PM, Mark Lam <mark@apple.com 
>> <mailto:mark@apple.com>> wrote:
>> 
>>> 
>>> On Feb 22, 2017, at 12:16 PM, Filip Pizlo <fpi...@apple.com 
>>> <mailto:fpi...@apple.com>> wrote:
>>> 
>>> 
>>>> On Feb 22, 2017, at 11:58 AM, Geoffrey Garen <gga...@apple.com 
>>>> <mailto:gga...@apple.com>> wrote:
>>>> 
>>>> I’ve lost countless hours to investigating CrashTracers that would have 
>>>> been easy to solve if I had access to register state.
>>> 
>>> The current RELEASE_ASSERT means that every assertion in what the compiler 
>>> thinks is a function (i.e. some function and everything inlined into it) is 
>>> coalesced into a single trap site.  I’d like to understand how you use the 
>>> register state if you don’t even know which assertion you are at.
>> 
>> Correction: they are not coalesced.  I was mistaken about that.  The fact 
>> that we turn them into inline asm (for emitting the int3) means the compiler 
>> cannot optimize it away or coalesce it.  The compiler does move it to the 
>> end of the emitted code for the function though because we end the CRASH() 
>> macro with __builtin_unreachable().
>> 
>> Hence, each int3 can be correlated back to the RELEASE_ASSERT that triggered 
>> it (with some extended disassembly work).
>> 
>>> I believe that if you do want to analyze register state, then switching 
>>> back to calling some function that prints out diagnostic information is 
>>> strictly better.  Sure, you get less register state, but at least you know 
>>> where you crashed.  Knowing where you crashed is much more important than 
>>> knowing the register state, since the register state is not useful if you 
>>> don’t know where you crashed.
>>> 
>> 
>> I would like to point out that we might be able to get the best of both 
>> worlds.  Here’s how we can do it:
>> 
>> #define RELEASE_ASSERT(assertion) do { \
>> if (UNLIKELY(!(assertion))) { \
>> preserveRegisterState(); \
>> WTFReportAssertionFailure(__FILE__, __LINE__, WTF_PRETTY_FUNCTION, 
>> #assertion); \
>> restoreRegisterState(); \
>> CRASH(); \
>> } \
>> } while (false)
>> 
>> preserveRegisterState() and restoreRegisterState() will carefully push and 
>> pop registers onto / off the stack (like how the JIT probe works).
>> This allows us to get a log message on the terminal when we’re running 
>> manually.
>> 
>> In addition, we can capture some additional information about the assertion 
>> site by forcing the compiler to emit code to capture the code location info 
>> after the trapping instruction.  This is redundant but provides an easy 
>> place to find this info (i.e. after the int3 instruction).
>> 
>> #define WTFBreakpointTrap() do { \
>> __asm__ volatile ("int3"); \
>> __asm__ volatile( "" :  : "r"(__FILE__), "r"(__LINE__), 
>> "r"(WTF_PRETTY_FUNCTION)); \
>> } while (false)
>> 
>> We can easily get the line number this way.  However, the line number is not 
>> very useful by itself when we have inlining.  Hence, I also capture the 
>> __FILE__ and WTF_PRETTY_FUNCTION.  However, I haven’t been able to figure 
>> out how to decode those from the otool disassembler yet.
>> 
>> The only downside of doing this extra work is that it increases the code 
>> size for each RELEASE_ASSERT site.  This is probably insignificant in total.
>> 
>> Performance-wise, it should be neutral-ish because the 
>> __builtin_unreachable() in the CRASH() macro + the UNLIKELY() macro would 
>> tell the compiler to put this in a slow path away from the main code path.

Re: [webkit-dev] Why does RELEASE_ASSERT not have an error message?

2017-02-22 Thread Filip Pizlo

> On Feb 22, 2017, at 12:33 PM, Mark Lam <mark@apple.com> wrote:
> 
> 
>> On Feb 22, 2017, at 12:16 PM, Filip Pizlo <fpi...@apple.com 
>> <mailto:fpi...@apple.com>> wrote:
>> 
>> 
>>> On Feb 22, 2017, at 11:58 AM, Geoffrey Garen <gga...@apple.com 
>>> <mailto:gga...@apple.com>> wrote:
>>> 
>>> I’ve lost countless hours to investigating CrashTracers that would have 
>>> been easy to solve if I had access to register state.
>> 
>> The current RELEASE_ASSERT means that every assertion in what the compiler 
>> thinks is a function (i.e. some function and everything inlined into it) is 
>> coalesced into a single trap site.  I’d like to understand how you use the 
>> register state if you don’t even know which assertion you are at.
> 
> Correction: they are not coalesced.  I was mistaken about that.  The fact 
> that we turn them into inline asm (for emitting the int3) means the compiler 
> cannot optimize it away or coalesce it.  The compiler does move it to the end 
> of the emitted code for the function though because we end the CRASH() macro 
> with __builtin_unreachable().
> 
> Hence, each int3 can be correlated back to the RELEASE_ASSERT that triggered 
> it (with some extended disassembly work).

This never works for me.  I tested it locally.  LLVM will even coalesce similar 
inline assembly.

> 
>> I believe that if you do want to analyze register state, then switching back 
>> to calling some function that prints out diagnostic information is strictly 
>> better.  Sure, you get less register state, but at least you know where you 
>> crashed.  Knowing where you crashed is much more important than knowing the 
>> register state, since the register state is not useful if you don’t know 
>> where you crashed.
>> 
> 
> I would like to point out that we might be able to get the best of both 
> worlds.  Here’s how we can do it:
> 
> #define RELEASE_ASSERT(assertion) do { \
> if (UNLIKELY(!(assertion))) { \
> preserveRegisterState(); \
> WTFReportAssertionFailure(__FILE__, __LINE__, WTF_PRETTY_FUNCTION, 
> #assertion); \
> restoreRegisterState(); \
> CRASH(); \
> } \
> } while (false)
> 
> preserveRegisterState() and restoreRegisterState() will carefully push and 
> pop registers onto / off the stack (like how the JIT probe works).

Why not do the preserve/restore inside the WTFReport call?

> This allows us to get a log message on the terminal when we’re running 
> manually.
> 
> In addition, we can capture some additional information about the assertion 
> site by forcing the compiler to emit code to capture the code location info 
> after the trapping instruction.  This is redundant but provides an easy place 
> to find this info (i.e. after the int3 instruction).
> 
> #define WTFBreakpointTrap() do { \
> __asm__ volatile ("int3"); \
> __asm__ volatile( "" :  : "r"(__FILE__), "r"(__LINE__), 
> "r"(WTF_PRETTY_FUNCTION)); \
> } while (false)
> 
> We can easily get the line number this way.  However, the line number is not 
> very useful by itself when we have inlining.  Hence, I also capture the 
> __FILE__ and WTF_PRETTY_FUNCTION.  However, I haven’t been able to figure out 
> how to decode those from the otool disassembler yet.
> 
> The only downside of doing this extra work is that it increases the code size 
> for each RELEASE_ASSERT site.  This is probably insignificant in total.
> 
> Performance-wise, it should be neutral-ish because the 
> __builtin_unreachable() in the CRASH() macro + the UNLIKELY() macro would 
> tell the compiler to put this in a slow path away from the main code path.
> 
> Any thoughts on this alternative?
> 
> Mark
> 
> 
>>> 
>>> I also want the freedom to add RELEASE_ASSERT without ruining performance 
>>> due to bad register allocation or making the code too large to inline. For 
>>> example, hot paths in WTF::Vector use RELEASE_ASSERT.
>> 
>> Do we have data about the performance benefits of the current RELEASE_ASSERT 
>> implementation?
>> 
>>> 
>>> Is some compromise solution possible?
>>> 
>>> Some options:
>>> 
>>> (1) Add a variant of RELEASE_ASSERT that takes a string and logs.
>> 
>> The point of C++ assert macros is that I don’t have to add a custom string.  
>> I want a RELEASE_ASSERT macro that automatically stringifies the expression 
>> and uses that as the string.
>> 
>> If I had a choice between a RELEASE_ASSERT that can accurately report where 
>> it crashed but sometimes trashes the register state, and a RELEASE_ASSERT 
>> that always gives me the register state but cannot tell me which assert in 
>> the function it’s coming from, then I would always choose the one that can 
>> tell me where it crashed.  That’s much more important, and the register 
>> state is not useful without that information.

Re: [webkit-dev] Why does RELEASE_ASSERT not have an error message?

2017-02-22 Thread Filip Pizlo

> On Feb 22, 2017, at 12:23 PM, Saam barati <sbar...@apple.com> wrote:
> 
>> 
>> On Feb 22, 2017, at 12:16 PM, Filip Pizlo <fpi...@apple.com 
>> <mailto:fpi...@apple.com>> wrote:
>> 
>> 
>>> On Feb 22, 2017, at 11:58 AM, Geoffrey Garen <gga...@apple.com 
>>> <mailto:gga...@apple.com>> wrote:
>>> 
>>> I’ve lost countless hours to investigating CrashTracers that would have 
>>> been easy to solve if I had access to register state.
>> 
>> The current RELEASE_ASSERT means that every assertion in what the compiler 
>> thinks is a function (i.e. some function and everything inlined into it) is 
>> coalesced into a single trap site.  I’d like to understand how you use the 
>> register state if you don’t even know which assertion you are at.
> When I disassemble JavaScriptCore, I often find a succession of int3s at the 
> bottom of a function. Does LLVM sometimes combine them and sometimes not?

Yeah.

> 
> For example, this is what the bottom of the 
> __ZN3JSC20AbstractModuleRecord18getModuleNamespaceEPNS_9ExecStateE looks like:
> 
> 5c25  popq%r14
> 5c27  popq%r15
> 5c29  popq%rbp
> 5c2a  retq
> 5c2b  int3
> 5c2c  int3
> 5c2d  int3
> 5c2e  int3
> 5c2f  nop

I’m curious how many branches target those traps.

For example in the GC, I was often getting crashes that LLVM was convinced were 
vector overflow.  Turns out that the compiler loves to coalesce other traps 
with the one from the vector overflow assert, so if you assert for some random 
reason in a function that accesses vectors, all of our tooling will report with 
total confidence that you’re overflowing a vector.

That’s way worse than not having register state.

-Filip


> 
> - Saam
>> 
>> I believe that if you do want to analyze register state, then switching back 
>> to calling some function that prints out diagnostic information is strictly 
>> better.  Sure, you get less register state, but at least you know where you 
>> crashed.  Knowing where you crashed is much more important than knowing the 
>> register state, since the register state is not useful if you don’t know 
>> where you crashed.
>> 
>>> 
>>> I also want the freedom to add RELEASE_ASSERT without ruining performance 
>>> due to bad register allocation or making the code too large to inline. For 
>>> example, hot paths in WTF::Vector use RELEASE_ASSERT.
>> 
>> Do we have data about the performance benefits of the current RELEASE_ASSERT 
>> implementation?
>> 
>>> 
>>> Is some compromise solution possible?
>>> 
>>> Some options:
>>> 
>>> (1) Add a variant of RELEASE_ASSERT that takes a string and logs.
>> 
>> The point of C++ assert macros is that I don’t have to add a custom string.  
>> I want a RELEASE_ASSERT macro that automatically stringifies the expression 
>> and uses that as the string.
>> 
>> If I had a choice between a RELEASE_ASSERT that can accurately report where it 
>> crashed but sometimes trashes the register state, and a RELEASE_ASSERT that 
>> always gives me the register state but cannot tell me which assert in the 
>> function it’s coming from, then I would always choose the one that can tell 
>> me where it crashed.  That’s much more important, and the register state is 
>> not useful without that information.
>> 
>>> 
>>> (2) Change RELEASE_ASSERT to do the normal debug ASSERT thing in Debug 
>>> builds. (There’s not much need to preserve register state in debug builds.)
>> 
>> That would be nice, but doesn’t make RELEASE_ASSERT useful for debugging 
>> issues where timing is important.  I no longer use RELEASE_ASSERTS for those 
>> kinds of assertions, because if I do it then I will never know where I 
>> crashed.  So, I use the explicit:
>> 
>> if (!thing) {
>>   dataLog(“…”);
>>   RELEASE_ASSERT_NOT_REACHED();
>> }
>> 
>> -Filip
>> 
>> 
>>> 
>>> Geoff
>>> 
>>>> On Feb 22, 2017, at 11:09 AM, Filip Pizlo <fpi...@apple.com 
>>>> <mailto:fpi...@apple.com>> wrote:
>>>> 
>>>> I disagree actually.  I've lost countless hours to converting this:
>>>> 
>>>> RELEASE_ASSERT(blah)
>>>> 
>>>> into this:
>>>> 
>>>> if (!blah) {
>>>> dataLog("Reason why I crashed");
>>>> RELEASE_ASSERT_NOT_REACHED();
>>>> }

Re: [webkit-dev] Why does RELEASE_ASSERT not have an error message?

2017-02-22 Thread Filip Pizlo

> On Feb 22, 2017, at 11:58 AM, Geoffrey Garen <gga...@apple.com> wrote:
> 
> I’ve lost countless hours to investigating CrashTracers that would have been 
> easy to solve if I had access to register state.

The current RELEASE_ASSERT means that every assertion in what the compiler 
thinks is a function (i.e. some function and everything inlined into it) is 
coalesced into a single trap site.  I’d like to understand how you use the 
register state if you don’t even know which assertion you are at.

I believe that if you do want to analyze register state, then switching back to 
calling some function that prints out diagnostic information is strictly 
better.  Sure, you get less register state, but at least you know where you 
crashed.  Knowing where you crashed is much more important than knowing the 
register state, since the register state is not useful if you don’t know where 
you crashed.

> 
> I also want the freedom to add RELEASE_ASSERT without ruining performance due 
> to bad register allocation or making the code too large to inline. For 
> example, hot paths in WTF::Vector use RELEASE_ASSERT.

Do we have data about the performance benefits of the current RELEASE_ASSERT 
implementation?

> 
> Is some compromise solution possible?
> 
> Some options:
> 
> (1) Add a variant of RELEASE_ASSERT that takes a string and logs.

The point of C++ assert macros is that I don’t have to add a custom string.  I 
want a RELEASE_ASSERT macro that automatically stringifies the expression and 
uses that as the string.

If I had a choice between a RELEASE_ASSERT that can accurately report where it 
crashed but sometimes trashes the register state, and a RELEASE_ASSERT that 
always gives me the register state but cannot tell me which assert in the 
function it’s coming from, then I would always choose the one that can tell me 
where it crashed.  That’s much more important, and the register state is not 
useful without that information.

> 
> (2) Change RELEASE_ASSERT to do the normal debug ASSERT thing in Debug 
> builds. (There’s not much need to preserve register state in debug builds.)

That would be nice, but doesn’t make RELEASE_ASSERT useful for debugging issues 
where timing is important.  I no longer use RELEASE_ASSERTS for those kinds of 
assertions, because if I do it then I will never know where I crashed.  So, I 
use the explicit:

if (!thing) {
   dataLog(“…”);
   RELEASE_ASSERT_NOT_REACHED();
}

-Filip


> 
> Geoff
> 
>> On Feb 22, 2017, at 11:09 AM, Filip Pizlo <fpi...@apple.com> wrote:
>> 
>> I disagree actually.  I've lost countless hours to converting this:
>> 
>> RELEASE_ASSERT(blah)
>> 
>> into this:
>> 
>> if (!blah) {
>>  dataLog("Reason why I crashed");
>>  RELEASE_ASSERT_NOT_REACHED();
>> }
>> 
>> Look in the code - you'll find lots of stuff like this.
>> 
>> I don't think analyzing register state at crashes is more important than 
>> keeping our code sane.
>> 
>> -Filip
>> 
>> 
>>> On Feb 21, 2017, at 5:56 PM, Mark Lam <mark@apple.com> wrote:
>>> 
>>> Oh yeah, I forgot about that.  I think the register state is more important 
>>> for crash analysis, especially if we can make sure that the compiler does 
>>> not aggregate the int3s.  I’ll explore alternatives.
>>> 
>>>> On Feb 21, 2017, at 5:54 PM, Saam barati <sbar...@apple.com> wrote:
>>>> 
>>>> I thought the main point of moving to SIGTRAP was to preserve register 
>>>> state?
>>>> 
>>>> That said, there are probably places where we care more about the message 
>>>> than the registers.
>>>> 
>>>> - Saam
>>>> 
>>>>> On Feb 21, 2017, at 5:43 PM, Mark Lam <mark@apple.com> wrote:
>>>>> 
>>>>> Is there a reason why RELEASE_ASSERT (and friends) does not call 
>>>>> WTFReportAssertionFailure() to report where the assertion occur?  Is this 
>>>>> purely to save memory?  svn blame tells me that it has been this way 
>>>>> since the introduction of RELEASE_ASSERT in r140577 many years ago.
>>>>> 
>>>>> Would anyone object to adding a call to WTFReportAssertionFailure() in 
>>>>> RELEASE_ASSERT() like we do for ASSERT()?  One of the upside 
>>>>> (side-effect) of adding this call is that it appears to stop the compiler 
>>>>> from aggregating all the RELEASE_ASSERTS into a single code location, and 
>>>>> this will help with post-mortem crash debugging.
>>>>> 
>>>>> Any thoughts?
>>>>> 
>>>>> Mark

Re: [webkit-dev] Why does RELEASE_ASSERT not have an error message?

2017-02-22 Thread Filip Pizlo
I disagree actually.  I've lost countless hours to converting this:

RELEASE_ASSERT(blah)

into this:

if (!blah) {
   dataLog("Reason why I crashed");
   RELEASE_ASSERT_NOT_REACHED();
}

Look in the code - you'll find lots of stuff like this.

I don't think analyzing register state at crashes is more important than 
keeping our code sane.

-Filip


> On Feb 21, 2017, at 5:56 PM, Mark Lam  wrote:
> 
> Oh yeah, I forgot about that.  I think the register state is more important 
> for crash analysis, especially if we can make sure that the compiler does not 
> aggregate the int3s.  I’ll explore alternatives.
> 
>> On Feb 21, 2017, at 5:54 PM, Saam barati  wrote:
>> 
>> I thought the main point of moving to SIGTRAP was to preserve register state?
>> 
>> That said, there are probably places where we care more about the message 
>> than the registers.
>> 
>> - Saam
>> 
>>> On Feb 21, 2017, at 5:43 PM, Mark Lam  wrote:
>>> 
>>> Is there a reason why RELEASE_ASSERT (and friends) does not call 
>>> WTFReportAssertionFailure() to report where the assertion occur?  Is this 
>>> purely to save memory?  svn blame tells me that it has been this way since 
>>> the introduction of RELEASE_ASSERT in r140577 many years ago.
>>> 
>>> Would anyone object to adding a call to WTFReportAssertionFailure() in 
>>> RELEASE_ASSERT() like we do for ASSERT()?  One of the upside (side-effect) 
>>> of adding this call is that it appears to stop the compiler from 
>>> aggregating all the RELEASE_ASSERTS into a single code location, and this 
>>> will help with post-mortem crash debugging.
>>> 
>>> Any thoughts?
>>> 
>>> Mark
>>> 
>> 
> 

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Webkit port to ppc64le.

2017-02-22 Thread Filip Pizlo
You have to port the llint either way since even the JIT relies on some llint 
code. 

-Filip

> On Feb 22, 2017, at 07:22, Atul Sowani  wrote:
> 
> Sure, thanks @Konstantin. So I will first attempt having working JIT on 
> ppc64le. Your comments definitely helped in deciding the direction for 
> porting. Thanks! :)
> 
>> On Wed, Feb 22, 2017 at 8:28 PM, Konstantin Tokarev  
>> wrote:
>> 
>> 
>> 22.02.2017, 17:41, "Atul Sowani" :
>> > So, essentially, some tweaking in the code of LLInt/interpreter _is_ 
>> > required when porting it to the new platform. Is my understanding correct?
>> 
>> Yes. In order to port LLInt you should implement offlineasm backend. See for 
>> example arm.rb, arm64.rb, mips.rb in Source/JavaScriptCore/offlineasm/. You 
>> may also need arch-specific adjustements in Source/JavaScriptCore/llint, 
>> though it's better to minimize their number.
>> 
>> My point was that you don't have to port LLInt before you have working JIT. 
>> AFAIU main purpose of architecture-specific LLInt implementation is to 
>> provide interpreter with the same calling convention as JIT, not to make 
>> interpreter faster on its own.
>> 
>> See also https://trac.webkit.org/wiki/JavaScriptCore
>> 
>> > On Wed, Feb 22, 2017 at 4:11 PM, Konstantin Tokarev  
>> > wrote:
>> >
>> >> 22.02.2017, 13:15, "Atul Sowani" :
>> >>> Hi,
>> >>>
>> >>> This is not specific to any particular branch/version of WebKit. I was 
>> >>> browsing the code with ppc64le porting in mind. By default JIT is not 
>> >>> available on ppc64le, so the CLoop code is used instead.
>> >>>
>> >>> I see there is low level interpreter code in 
>> >>> qtwebkit/Source/JavaScriptCore/llint directory and inside 
>> >>> qtwebkit/Source/JavaScriptCore/interpreter directory (AbstractPC, 
>> >>> CallFrame etc.). I am wondering if one needs to touch this code as well 
>> >>> to make it work correctly on ppc64le.
>> >>
>> >> Porting JIT to new platfrom does not require porting LLInt, it can work 
>> >> without interpreter tier. However, it is a good idea to port LLInt after 
>> >> you have JIT in place, to improve overall quality.
>> >>
>> >>>
>> >>> Any comments/suggestions?
>> >>>
>> >>> Thanks,
>> >>> Atul.
>> >>>
>> >>
>> >> --
>> >> Regards,
>> >> Konstantin
>> >
>> 
>> 
>> -- 
>> Regards,
>> Konstantin
> 
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Why does RELEASE_ASSERT not have an error message?

2017-02-21 Thread Filip Pizlo
+1

-Filip

> On Feb 21, 2017, at 17:43, Mark Lam  wrote:
> 
> Is there a reason why RELEASE_ASSERT (and friends) does not call 
> WTFReportAssertionFailure() to report where the assertion occur?  Is this 
> purely to save memory?  svn blame tells me that it has been this way since 
> the introduction of RELEASE_ASSERT in r140577 many years ago.
> 
> Would anyone object to adding a call to WTFReportAssertionFailure() in 
> RELEASE_ASSERT() like we do for ASSERT()?  One of the upside (side-effect) of 
> adding this call is that it appears to stop the compiler from aggregating all 
> the RELEASE_ASSERTS into a single code location, and this will help with 
> post-mortem crash debugging.
> 
> Any thoughts?
> 
> Mark
> 
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] [webkit-reviewers] usage of auto

2017-01-12 Thread Filip Pizlo


> On Jan 12, 2017, at 08:54, Brady Eidson  wrote:
> 
> My take-away from this discussion so far is that there is actually very 
> little consensus on usage of auto, which means there’s probably very little 
> room for actual style guideline rules.
> 
> I think there are two very limited rules that are probably not objectionable 
> to anybody.
> 
> 1 - If you are using auto for a raw pointer type, you should use auto*
> 2 - If you are using auto in a range-based for loop for values that aren’t 
> pointers, you should use (const) auto&

In some cases you need a copy for the code to be correct. I understand why & is 
often better for performance but there is a significant and dangerous 
behavioral difference.

I agree with encouraging people to use auto& because it's usually ok, but I 
disagree with mandating it because it's sometimes wrong. 

-Filip

> 
> If there’s no objections to these rules, I think it’s valuable to have them 
> in the style guidelines at the very least.
> 
> Thanks,
> ~Brady
> 
> 
>> On Jan 11, 2017, at 10:27 PM, saam barati  wrote:
>> 
>> 
>> 
>>> On Jan 11, 2017, at 11:15 AM, JF Bastien  wrote:
>>> 
>>> Would it be helpful to focus on small well-defined cases where auto makes 
>>> sense, and progressively grow that list as we see fit?
>>> 
>>> 
>>> e.g. I think this is great:
>>> auto ptr = std::make_unique<Foo>(bar);
>>> Proposed rule: if the type is obvious because it's on the line, then auto 
>>> is good.
>>> Similarly:
>>> auto i = static_cast<int>(j);
>>> auto foo = make_foo();
>>> auto bar = something.get_bar(); // Sometimes, "bar" is obvious.
>> I'm not sure I agree with this style. There are times where the type of an 
>> auto variable is obvious-enough, but it's almost never more obvious than 
>> actually writing out the types. Writing out types, for my brain at least, 
>> almost always makes the code easier to understand. The most obvious place 
>> where I prefer auto over explicit types is when something has a lot of 
>> template bloat.
>> 
>> I feel like the places where auto makes the code better are limited, but 
>> places where auto makes the code more confusing, or requires me to spend 
>> more time figuring it out, are widespread. (Again, this is how my brain 
>> reads code.)
>> 
>> Also, I completely agree with Geoff that I use types to grep around the 
>> source code and to figure out what data structures are being used. If we 
>> used auto more inside JSC it would hurt my workflow for reading and 
>> understanding new code.
>> 
>> - Saam
>> 
>>> 
>>> 
>>> Range-based loops are a bit tricky. IMO containers with "simple" types are 
>>> good candidates for either:
>>> for (const auto& v : cont) { /* don't change v */ }
>>> for (auto& v : cont) { /* change v */ }
>>> But what's "simple"? I'd say all numeric, pointer, and string types at 
>>> least. It gets tricky for more complex types, and I'd often rather have the 
>>> type in the loop. Here's more discussion on this, including a 
>>> recommendation to use auto&& on range-based loops! I think this gets 
>>> confusing, and I'm not a huge fan of r-value references everywhere.
>>> 
>>> 
>>> Here's another I like, which Yusuke pointed out a while ago (in ES6 
>>> Module's implementation?):
>>> struct Foo {
>>>   typedef Something Bar;
>>>   // ...
>>>   Bar doIt();
>>> };
>>> auto Foo::doIt() -> Bar
>>> {
>>>   // ...
>>> }
>>> Why? Because Bar is scoped to Foo! It looks odd the first time, but I think 
>>> this is idiomatic "modern" C++.
>>> 
>>> 
>>> I also like creating unnamed types, though I know this isn't everyone's 
>>> liking:
>>> auto ohMy()
>>> {
>>>   struct { int a; float b; } result;
>>>   // ...
>>>   return result;
>>> }
>>> void yeah()
>>> {
>>>   auto myMy = ohMy();
>>>   dataLogLn(myMy.a, myMy.b);
>>> }
>>> I initially had that with consumeLoad, which returns a T as well as a 
>>> ConsumeDependency. I couldn't care less about the container for T and 
>>> ConsumeDependency, I just want these two values.


Re: [webkit-dev] [webkit-reviewers] usage of auto

2017-01-11 Thread Filip Pizlo
I'm only arguing for why using auto would be bad in the code snippet that we 
were talking about.

My views regarding auto in other code are not strong.  I only object to using 
auto when it is dropping useful information.

-Filip



> On Jan 11, 2017, at 9:15 AM, Darin Adler  wrote:
> 
> OK, you didn’t convince me but I can see that your opinions here are strongly 
> held!
> 
> — Darin



Re: [webkit-dev] [webkit-reviewers] usage of auto

2017-01-11 Thread Filip Pizlo


On Jan 10, 2017, at 23:49, Darin Adler <da...@apple.com> wrote:

>> On Jan 10, 2017, at 10:17 PM, Filip Pizlo <fpi...@apple.com> wrote:
>> 
>>while (Arg src = worklist.pop()) {
>>HashMap<Arg, Vector<ShufflePair>>::iterator iter = 
>> mapping.find(src);
>>if (iter == mapping.end()) {
>>// With a shift it's possible that we previously built 
>> the tail of this shift.
>>// See if that's the case now.
>>if (verbose)
>>dataLog("Trying to append shift at ", src, "\n");
>>currentPairs.appendVector(shifts.take(src));
>>continue;
>>}
>>Vector<ShufflePair> pairs = WTFMove(iter->value);
>>mapping.remove(iter);
>> 
>>for (const ShufflePair& pair : pairs) {
>>currentPairs.append(pair);
>>ASSERT(pair.src() == src);
>>worklist.push(pair.dst());
>>}
>>}
> 
> Here is the version I would write in my favored coding style:
> 
>while (auto source = workList.pop()) {
>auto foundSource = mapping.find(source);
>if (foundSource == mapping.end()) {
>// With a shift it's possible that we previously built the tail of 
> this shift.
>// See if that's the case now.
>if (verbose)
>dataLog("Trying to append shift at ", source, "\n");
>currentPairs.appendVector(shifts.take(source));
>continue;
>}
>auto pairs = WTFMove(foundSource->value);
>mapping.remove(foundSource);
>for (auto& pair : pairs) {
>currentPairs.append(pair);
>ASSERT(pair.source() == source);
>workList.push(pair.destination());
>}
>}
> 
> You argued that specifying the type for both source and for the iterator 
> helps reassure you that the types match. I am not convinced that is an 
> important interesting property unless the code has types that can be 
> converted, but with conversions that we must be careful not to do. If there 
> was some type that could be converted to Arg or that Arg could be converted 
> to that was a dangerous possibility, then I grant you that, although I still 
> prefer my version despite that.

It would take me longer to understand your version. The fact that the algorithm 
is working on Args is the first thing I'd want to know when reading this code, 
and with your version I'd have to spend some time to figure that out. It 
actually quite tricky to infer that it's working with Args, so this would be a 
time-consuming puzzle. 

So, if the code was changed in this manner then I'd want it changed back 
because this version would make my job harder. 

I get that you don't need or want the type to understand this code. But I want 
the type to understand this code because knowing that it works with Args makes 
all the difference for me. 

I also get that conversions are possible and so the static assertion provided 
by a type annotation is not as smart as it would be in some other languages. 
But this is irrelevant to me. I would want to know that source is an Arg and 
pair is a ShufflePair regardless of whether this information was checked by the 
compiler. In this case, putting the information in a type means that it is 
partly checked. You could get it wrong and still have compiling code but that's 
unlikely. And for what it's worth, my style is to prefer explicit constructors 
unless I have a really good reason for implicit precisely because I like to 
limit the amount of implicit conversions that are possible. I don't think you 
can implicitly convert Arg to anything. 
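The explicit-constructor point can be illustrated with a minimal sketch (the Arg class here is a stand-in for illustration, not JSC's real Arg):

```cpp
#include <cassert>

// Marking single-argument constructors `explicit` stops the compiler
// from converting silently, so a function taking an Arg cannot be
// called with a bare integer by accident.
class Arg {
public:
    explicit Arg(int index)
        : m_index(index)
    {
    }

    int index() const { return m_index; }

private:
    int m_index;
};

int indexOf(const Arg& arg) { return arg.index(); }

// indexOf(42);      // does not compile: no implicit int -> Arg conversion
// indexOf(Arg(42)); // fine: the conversion is spelled out
```

This is what makes the type annotations in the shuffle code trustworthy: with no implicit conversions into Arg, a variable declared as Arg really did start life as one.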

> 
> You also said that it’s important to know that the type of foundSource->value 
> matches the type of pairs. I would argue that’s even clearer in my favored 
> style, because we know that auto guarantees that are using the same type for 
> both, although we don’t state explicitly what that type is.

I agree with you that anytime you use auto, it's possible to unambiguously 
infer what the type would have been if you think about it.

I want to reduce the amount of time I spend reading code because I read a lot 
of it. It would take me longer to read the version with auto because knowing 
the types is a huge hint about the code's intent and in your version I would 
have to pause and think about it to figure out the types.

> 
> Here’s one thing to consider: Why is it important to see the type of 
> foundSource->value, but not important to see the type of shifts.take(source)? 
> I

Re: [webkit-dev] [webkit-reviewers] usage of auto

2017-01-10 Thread Filip Pizlo
Brady asked:

> Have you identified any situation where explicitly calling out the type in a 
> range-based for loop has been better than using the proper form of auto?

> Have you identified a situation where explicitly calling out a nasty 
> templated type, like in my example, added to readability over using auto?

Darin asked:

> I’d love to see examples where using auto substantially hurts readability so 
> we could debate them.


In many places, I agree that auto is better.  But there are a bunch of 
algorithms in the compiler and GC where knowing the type helps me read the code 
more quickly.  Here are a few examples.  (1) is a loop, (2) is loop-like, and 
(3) is a nasty templated type.

1) B3 compiler code cloning loop.

I find that this code is easier to read with types:

Value* clone = m_proc.clone(value);
for (Value*& child : clone->children()) {
if (Value* newChild = mappings[i].get(child))
child = newChild;
}
if (value->type() != Void)
mappings[i].add(value, clone);

cases[i]->append(clone);
if (value->type() != Void)
cases[i]->appendNew<UpsilonValue>(m_proc, value->origin(), 
clone, value);

Here's another code cloning loop I found - sort of the same basic algorithm:

for (Value* value : *tail) {
Value* clone = m_proc.clone(value);
for (Value*& child : clone->children()) {
if (Value* replacement = map.get(child))
child = replacement;
}
if (value->type() != Void)
map.add(value, clone);
block->append(clone);
}

When reading this code, it's pretty important to know that value, clone, child, 
newChild, and replacement are all Values.  As soon as you know this piece of 
information the algorithm - its purpose and how it functions - becomes clear.  
You can infer this information if you know where to look - m_proc.clone() and 
clone->children() are give-aways - but this isn't as immediately obvious as the 
use of the type name.

If someone refactored this code to use auto, it would be harder for me to read 
this code.  I would spend more time reading it than I would have spent if it 
spelled out the type.  I like seeing the type spelled out because that's how I 
recognize if the loop is over blocks, values, or something else.  I like to 
spell out the type even when it's super obvious:

for (BasicBlock* block : m_proc) {
for (Value* value : *block) {
if (value->opcode() == Phi && candidates.contains(block))
valuesToDemote.add(value);
for (Value* child : value->children()) {
if (child->owner != block && 
candidates.contains(child->owner))
valuesToDemote.add(child);
}
}
}

Sticking to this format for compiler loops means that I spend less time reading 
the code, because I can recognize important patterns at a glance.

2) Various GC loops

The GC usually loops using lambdas, but the same question comes into play: is 
the value's type auto or is it spelled out?

forEachFreeCell(
freeList,
[&] (HeapCell* cell) {
if (false)
dataLog("Free cell: ", RawPointer(cell), "\n");
if (m_attributes.destruction == NeedsDestruction)
cell->zap();
clearNewlyAllocated(cell);
});

It's useful to know that 'cell' is a HeapCell, not a FreeCell or a JSCell.  I 
believe that this code will only compile if you say HeapCell.  Combined with 
the function name, this tells you that this is a cell that is free, but not 
necessarily on a free-list ('cause then it would be a FreeCell).  This also 
tells you that the cell wasn't necessarily a JSCell before it was freed - it 
could have been a raw backing store.  That's important because then we don't 
have a guarantee about the format of its header.

I think that spelling out the type really helps here.  In the GC, we often 
assert everything, everywhere, all the time.  Typical review comes back with 
suggestions for more assertions.  Types are a form of assertion, so they are 
consistent with how we hack the GC.

3) AirEmitShuffle

My least favorite part of compilers is the shuffle.  That's the algorithm that 
figures out how to move data from one set of registers to another, where the 
sets may overlap.

It has code like this:

while (Arg src = worklist.pop()) {
HashMap<Arg, Vector<ShufflePair>>::iterator iter = 
mapping.find(src);
if (iter == mapping.end()) {
// With a shift it's possible that we previously built the 
tail of this shift.
// See if that's the case now.
if (verbose)
dataLog("Trying 

Re: [webkit-dev] Thread naming policy in WebKit

2017-01-05 Thread Filip Pizlo


-Filip


> On Jan 5, 2017, at 10:51 AM, Geoffrey Garen  wrote:
> 
> Alternatively, we could just change thread name from a char* to a struct { 
> char*, char* } that contains a long name and a short name.
> 
> Geoff
> 
>> On Jan 5, 2017, at 9:37 AM, Brady Eidson wrote:
>> 
>>> 
>>> On Jan 5, 2017, at 12:48 AM, Yusuke SUZUKI wrote:
>>> 
>>> On Thu, Jan 5, 2017 at 5:43 PM, Darin Adler wrote:
>>> I understand the appeal of “org.webkit” and structured names but personally 
>>> I would prefer to read names that look like titles and are made up of words 
>>> with spaces, like these:
>>> 
>>> “WebKit: Image Decoder”, rather than “org.webkit.ImageDecoder”.
>>> “WebKit: JavaScript DFG Compiler” rather than “org.webkit.jsc.DFGCompiler”.
>>> 
>>> Not sure how well that would generalize to all the different names.
>>> 
>>> I like the idea of having a smart way of automatically making a shorter 
>>> name for the platforms that have shorter length limits.
>>> 
>>> One interesting idea I've come up with is that,
>>> 
>>> 1. specifying "org.webkit.ImageDecoder"
>>> 2. In Linux, we just use "ImageDecoder" part.
>>> 3. In macOS port, we automatically convert it to "WebKit: Image Decoder”
>> 
>> Why do we specify “org.webkit.ImageDecoder” if only the “ImageDecoder” part 
>> is ever going to be used?
>> Is that because Windows could use “org.webkit.”?
>> 
>> Again, back to Darin’s point, I don’t see any particular value in ever 
>> seeing “org.webkit.”
>> 
>> Additionally, the way this proposal treats “ImageDecoder” as multiple words, 
>> presumably separated on case-change, is problematic.
>> 
>> e.g. “IndexedDatabaseServer” would expand to “Indexed Database Server”, 
>> different from today.
>> e.g. “IndexedDBServer”, which is probably what this should be called, would 
>> expand to “Indexed D B Server"
>> e.g. “GCController” would expand to “G C Controller”
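The expansion problem above can be reproduced with a naive split-on-case-change sketch (hypothetical code, not an actual WebKit helper): every capital gets a space in front of it, so runs of capitals like "DB" and "GC" fall apart.

```cpp
#include <cctype>
#include <cstddef>
#include <string>

// Naive "split on case change": insert a space before every capital
// letter except the first. This is the behavior Brady objects to.
std::string expandOnCaseChange(const std::string& name)
{
    std::string result;
    for (std::size_t i = 0; i < name.size(); ++i) {
        if (i && std::isupper(static_cast<unsigned char>(name[i])))
            result += ' ';
        result += name[i];
    }
    return result;
}
```

A smarter splitter would have to treat a run of capitals as one word, which is why Brady's counter-proposal specifies the feature name with spaces up front.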
>> 
>> —
>> 
>> Taking your proposal and running with it, I think we could do this:
>> 
>> 1 - Specify the feature name with spaces: “Asynchronous Disassembler”
>> 
>> 2 - On Linux, it gets collapsed and truncated to 15: “AsynchronousDis”
>> 2a - It could get truncated with ellipses: “AsynchronousDi…" 
>> 
>> 3 - On Windows, it gets “WebKit: “ added and is truncated to 30: “WebKit: 
>> Asynchronous Disassemb”
>> 3a - It could get truncated with ellipses: “WebKit: Asynchronous Disassem…”
>> 
>> 4 - On macOS/iOS, it gets “WebKit: “ added: “WebKit: Asynchronous 
>> Disassembler"
>> 
>> Addendum: If we see value in having somethings flagged as “JSC” instead of 
>> “WebKit”, we just augment the input to include that.
>> The above could be “JSC.Asynchronous Disassembler”, and a WebKit specific 
>> feature could be “WebKit. IndexedDB Server”
>> 
>> Thanks,
>> ~Brady


Re: [webkit-dev] Thread naming policy in WebKit

2017-01-04 Thread Filip Pizlo
This sounds great to me!

-Filip

> On Jan 4, 2017, at 20:28, Yusuke SUZUKI  wrote:
> 
> Hi WebKittens!
> 
> Recently, I started naming threads in Linux. And I also started naming 
> threads created by WTF::AutomaticThread.
> Previously, no threads were named in Linux, which made it difficult to find 
> problematic threads when using GDB.
> For example, if you run the JSC shell, all the threads are named as "jsc" 
> (this is the name of the process).
> 
> The problem raised here is that we have no consistent policy for thread names.
> I picked several names in WebKit.
> 
> In WebCore,
> IDB server is "IndexedDatabase Server"
> AsyncAudioDecoder is "Audio Decoder"
> GCController is "WebCore: GCController"
> Icon Database is "WebCore: IconDatabase"
> Audio's ReverbConvolver is "convolution background thread"
> The thread name used by WorkQueue in WebCore is,
> Crypto Queue is "com.apple.WebKit.CryptoQueue"
> Image decoder is "org.webkit.ImageDecoder"
> Blob utility is "org.webkit.BlobUtility"
> Data URL decoder is "org.webkit.DataURLDecoder"
> In JSC
> Before this patch, all the AutomaticThreads (including JIT worklist / DFG 
> worklist) is "WTF::AutomaticThread"
> Super Sampler thread is "JSC Super Sampler"
> Asychronous Disasm is "Asynchronous Disassembler"
> Sampling profiler is "jsc.sampling-profiler.thread"
> WASM compiler thread is "jsc.wasm-b3-compilation.thread"
> In WebKit2
> Network Cache is "IOChannel::readSync"
> IPC workqueue is "com.apple.IPC.ReceiveQueue"
> 
> To choose the appropriate naming policy, there are two requirements.
> This is discussed in https://bugs.webkit.org/show_bug.cgi?id=166678 and 
> https://bugs.webkit.org/show_bug.cgi?id=166684
> 
> 1. We should have a super descriptive name including the information "This 
> thread is related to WebKit".
> If we use WebKit as the framework, WebKit will launch several threads along 
> with the user application's threads.
> So if the thread name does not include the above information, it is quite 
> confusing: Is this crash related to WebKit or to the user's application?
> This should be met at least in macOS port. In the Linux port, this 
> requirement is a bit difficult to be met due to the second requirement.
> 
> 2. The thread name should be <= 15 characters in Linux. <= 31 characters in 
> Windows.
> This is super unfortunate. But we need this requirement. But in macOS, I 
> think we do not have any limitation like that (correct?)
> I cannot find "PTHREAD_MAX_NAMELEN_NP" definition in macOS.
> 
> To meet the above requirements as much as possible, I suggest the name, 
> "org.webkit.MODULE.NAME(15characters)".
> This policy is derived from the WorkQueue's naming policy, like 
> "org.webkit.ImageDecoder".
> For example, we will name DFG compiler worklist thread as 
> "org.webkit.jsc.DFGCompiler" or "org.webkit.JavaScriptCore.DFGCompiler".
> 
> In Linux / Windows, we have the system to normalize the above name to 
> "NAME(15characters)".
> For example, when you specify "org.webkit.jsc.DFGCompiler", it will be shown 
> as "DFGCompiler" in Linux.
> This naming policy meets (1) in macOS. And (2) in all the environments. In 
> macOS, the name is not modified.
> So we can see the full name "org.webkit.jsc.DFGCompiler".
> In Linux, we can see "DFGCompiler", it is quite useful compared to nameless 
> threads.
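The normalization Yusuke describes can be sketched like this (assumed behavior, not the actual WTF implementation): keep the last dot-separated component of "org.webkit.MODULE.NAME", then clamp it to Linux's 15-character thread-name limit.

```cpp
#include <cstddef>
#include <string>

// Normalize "org.webkit.jsc.DFGCompiler" -> "DFGCompiler", truncating
// to the 15-character limit pthread_setname_np imposes on Linux.
std::string normalizeThreadNameForLinux(const std::string& fullName)
{
    std::size_t lastDot = fullName.rfind('.');
    std::string shortName = lastDot == std::string::npos
        ? fullName
        : fullName.substr(lastDot + 1);
    const std::size_t maxLinuxThreadNameLength = 15;
    if (shortName.size() > maxLinuxThreadNameLength)
        shortName.resize(maxLinuxThreadNameLength);
    return shortName;
}
```

On macOS the full "org.webkit.*" name would be passed through unchanged, since there is no comparable length limit there.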
> 
> What do you think?
> 
> Best regards,
> Yusuke Suzuki


Re: [webkit-dev] Reducing the use of EncodedJSValue and use JSValue directly instead.

2017-01-03 Thread Filip Pizlo
I think that this is great!

I agree with the policy that we should use JSValue everywhere that it would 
give us the same codegen/ABI (args in registers, return in registers, etc). 

-Filip
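Mark's proposal (quoted below) can be sketched roughly like this; the JSValue here is a toy stand-in for JSC's real class, used only to show the marshaling that goes away:

```cpp
#include <cassert>
#include <cstdint>

// EncodedJSValue is a plain integer typedef (chosen originally for C
// linkage), so it carries no type safety. JSValue is a real class with
// the same 64-bit representation, so both pass and return in registers
// identically.
typedef int64_t EncodedJSValue;

struct JSValue {
    int64_t bits;

    bool operator==(const JSValue& other) const { return bits == other.bits; }

    static EncodedJSValue encode(JSValue value) { return value.bits; }
    static JSValue decode(EncodedJSValue encoded) { return { encoded }; }
};

// Old style: typed as a raw int64, marshaled on the way in and out.
EncodedJSValue hostFunctionOldStyle(EncodedJSValue encodedArg)
{
    JSValue arg = JSValue::decode(encodedArg);
    return JSValue::encode(arg);
}

// Proposed style: identical ABI, but the type system does the work and
// the encode()/decode() boilerplate disappears.
JSValue hostFunctionNewStyle(JSValue arg)
{
    return arg;
}
```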

> On Jan 3, 2017, at 14:33, Mark Lam  wrote:
> 
> Over the holiday period, I looked into the possibility of giving 
> EncodedJSValue a default constructor (because I didn’t like having to return 
> encodedJSValue() instead of { } in lots of places), and learned why we had 
> EncodedJSValue in the first place (i.e. for C linkage).  This led me to 
> discover (in my reading of the code) that we don’t really need to use 
> EncodedJSValue for most of our host functions (those of type NativeFunction, 
> GetValueFunc, and PutValueFunc).  
> 
> I propose that we switch to using JSValue directly where we can.  This has 
> the benefit of:
> 1. better type safety with the use of JSValue (EncodedJSValue is an int64_t 
> typedef).
> 2. more readable code (less marshaling back and forth with 
> JSValue::encode()/decode()).
> 
> The patch for this change can be found here:
> https://bugs.webkit.org/show_bug.cgi?id=166658
> 
> Perf is neutral.  Any opinions?
> 
> Thanks.
> 
> Mark


Re: [webkit-dev] Standalone B3 JIT compiler for WebAssembly ?

2016-12-20 Thread Filip Pizlo


> On Dec 20, 2016, at 05:29, Stéphane Letz  wrote:
> 
> Hi,
> 
> There is an LLVM standalone JIT compiler for WebAssembly here: 
> https://github.com/WebAssembly/wasm-jit-prototype
> 
> How complex would it be to follow the same approach with the B3 JIT compiler?

Probably not very hard. You could even start with our wasm implementation and 
just remove JS stuff. 

> Is it something that is possibly going to happen at some point in the future ?

This doesn't seem like something that we would do. 

> 
> Thanks.
> 
> Stéphane Letz
> 


[webkit-dev] PSA: SharedArrayBuffer is now enabled by default in JSC on all ports

2016-12-08 Thread Filip Pizlo
As of http://trac.webkit.org/changeset/209568 


-Filip




Re: [webkit-dev] RFC: stop using std::chrono, go back to using doubles for time

2016-11-04 Thread Filip Pizlo
EWS doesn't hate it anymore!

Reviews welcome.  I've been slowly integrating feedback as I've received it.

-Filip



> On Nov 4, 2016, at 11:52 AM, Filip Pizlo <fpi...@apple.com> wrote:
> 
> Haha, I'm fixing it!
> 
> I could use a review of the time API even while I fix some broken corners in 
> WebCore and WK2.
> 
> -Filip
> 
> 
>> On Nov 4, 2016, at 11:31 AM, Brent Fulgham <bfulg...@apple.com> wrote:
>> 
>> EWS Hates your patch! :-)
>> 
>>> On Nov 4, 2016, at 10:01 AM, Filip Pizlo <fpi...@apple.com> wrote:
>>> 
>>> Hi everyone!
>>> 
>>> That last time we talked about this, there seemed to be a lot of agreement 
>>> that we should go with the Seconds/MonotonicTime/WallTime approach.
>>> 
>>> I have implemented it: https://bugs.webkit.org/show_bug.cgi?id=152045
>>> 
>>> That patch just takes a subset of our time code - all of the stuff that 
>>> transitively touches ParkingLot - and converts it to use the new time 
>>> classes.  Reviews welcome!
>>> 
>>> -Filip
>>> 
>>> 
>>> 
>>>> On May 22, 2016, at 6:41 PM, Filip Pizlo <fpi...@apple.com> wrote:
>>>> 
>>>> Hi everyone!
>>>> 
>>>> I’d like us to stop using std::chrono and go back to using doubles for 
>>>> time.  First I list the things that I think we wanted to get from 
>>>> std::chrono - the reasons why we started switching to it in the first 
>>>> place.  Then I list some disadvantages of std::chrono that we've seen from 
>>>> fixing std::chrono-based code.  Finally I propose some options for how to 
>>>> use doubles for time.
>>>> 
>>>> Why we switched to std::chrono
>>>> 
>>>> A year ago we started using std::chrono for measuring time.  std::chrono 
>>>> has a rich typesystem for expressing many different kinds of time.  For 
>>>> example, you can distinguish between an absolute point in time and a 
>>>> relative time.  And you can distinguish between different units, like 
>>>> nanoseconds, milliseconds, etc.
>>>> 
>>>> Before this, we used doubles for time.  std::chrono’s advantages over 
>>>> doubles are:
>>>> 
>>>> Easy to remember what unit is used: We sometimes used doubles for 
>>>> milliseconds and sometimes for seconds.  std::chrono prevents you from 
>>>> getting the two confused.
>>>> 
>>>> Easy to remember what kind of clock is used: We sometimes use the 
>>>> monotonic clock and sometimes the wall clock (aka the real time clock).  
>>>> Bad things would happen if we passed a time measured using the monotonic 
>>>> clock to functions that expected time measured using the wall clock, and 
>>>> vice-versa.  I know that I’ve made this mistake in the past, and it can be 
>>>> painful to debug.
>>>> 
>>>> In short, std::chrono uses compile-time type checking to catch some bugs.
>>>> 
>>>> Disadvantages of using std::chrono
>>>> 
>>>> We’ve seen some problems with std::chrono, and I think that the problems 
>>>> outweigh the advantages.  std::chrono suffers from a heavily templatized 
>>>> API that results in template creep in our own internal APIs.  
>>>> std::chrono’s default of integers without overflow protection means that 
>>>> math involving std::chrono is inherently more dangerous than math 
>>>> involving double.  This is particularly bad when we use time to speak 
>>>> about timeouts.
>>>> 
>>>> Too many templates: std::chrono uses templates heavily.  It’s overkill for 
>>>> measuring time.  This leads to verbosity and template creep throughout 
>>>> common algorithms that take time as an argument.  For example if we use 
>>>> doubles, a method for sleeping for a second might look like 
>>>> sleepForSeconds(double).  This works even if someone wants to sleep for a 
>>>> nanosecond, since 0.000000001 is easy to represent using a double.  Also, 
>>>> multiplying or dividing a double by a small constant factor (1,000,000,000 
>>>> is small by double standards) is virtually guaranteed to avoid any loss of 
>>>> precision.  But as soon as such a utility gets st

Re: [webkit-dev] RFC: stop using std::chrono, go back to using doubles for time

2016-11-04 Thread Filip Pizlo
Haha, I'm fixing it!

I could use a review of the time API even while I fix some broken corners in 
WebCore and WK2.

-Filip


> On Nov 4, 2016, at 11:31 AM, Brent Fulgham <bfulg...@apple.com> wrote:
> 
> EWS Hates your patch! :-)
> 
>> On Nov 4, 2016, at 10:01 AM, Filip Pizlo <fpi...@apple.com> wrote:
>> 
>> Hi everyone!
>> 
>> That last time we talked about this, there seemed to be a lot of agreement 
>> that we should go with the Seconds/MonotonicTime/WallTime approach.
>> 
>> I have implemented it: https://bugs.webkit.org/show_bug.cgi?id=152045
>> 
>> That patch just takes a subset of our time code - all of the stuff that 
>> transitively touches ParkingLot - and converts it to use the new time 
>> classes.  Reviews welcome!
>> 
>> -Filip
>> 
>> 
>> 
>>> On May 22, 2016, at 6:41 PM, Filip Pizlo <fpi...@apple.com> wrote:
>>> 
>>> Hi everyone!
>>> 
>>> I’d like us to stop using std::chrono and go back to using doubles for 
>>> time.  First I list the things that I think we wanted to get from 
>>> std::chrono - the reasons why we started switching to it in the first 
>>> place.  Then I list some disadvantages of std::chrono that we've seen from 
>>> fixing std::chrono-based code.  Finally I propose some options for how to 
>>> use doubles for time.
>>> 
>>> Why we switched to std::chrono
>>> 
>>> A year ago we started using std::chrono for measuring time.  std::chrono 
>>> has a rich typesystem for expressing many different kinds of time.  For 
>>> example, you can distinguish between an absolute point in time and a 
>>> relative time.  And you can distinguish between different units, like 
>>> nanoseconds, milliseconds, etc.
>>> 
>>> Before this, we used doubles for time.  std::chrono’s advantages over 
>>> doubles are:
>>> 
>>> Easy to remember what unit is used: We sometimes used doubles for 
>>> milliseconds and sometimes for seconds.  std::chrono prevents you from 
>>> getting the two confused.
>>> 
>>> Easy to remember what kind of clock is used: We sometimes use the monotonic 
>>> clock and sometimes the wall clock (aka the real time clock).  Bad things 
>>> would happen if we passed a time measured using the monotonic clock to 
>>> functions that expected time measured using the wall clock, and vice-versa. 
>>>  I know that I’ve made this mistake in the past, and it can be painful to 
>>> debug.
>>> 
>>> In short, std::chrono uses compile-time type checking to catch some bugs.
>>> 
>>> Disadvantages of using std::chrono
>>> 
>>> We’ve seen some problems with std::chrono, and I think that the problems 
>>> outweigh the advantages.  std::chrono suffers from a heavily templatized 
>>> API that results in template creep in our own internal APIs.  std::chrono’s 
>>> default of integers without overflow protection means that math involving 
>>> std::chrono is inherently more dangerous than math involving double.  This 
>>> is particularly bad when we use time to speak about timeouts.
>>> 
>>> Too many templates: std::chrono uses templates heavily.  It’s overkill for 
>>> measuring time.  This leads to verbosity and template creep throughout 
>>> common algorithms that take time as an argument.  For example if we use 
>>> doubles, a method for sleeping for a second might look like 
>>> sleepForSeconds(double).  This works even if someone wants to sleep for a 
>>> nanosecond, since 0.000000001 is easy to represent using a double.  Also, 
>>> multiplying or dividing a double by a small constant factor (1,000,000,000 
>>> is small by double standards) is virtually guaranteed to avoid any loss of 
>>> precision.  But as soon as such a utility gets std::chronified, it becomes 
>>> a template.  This is because you cannot have 
>>> sleepFor(std::chrono::seconds), since that wouldn’t allow you to represent 
>>> fractions of seconds.  This brings me to my next point.
>>> 
>>> Overflow danger: std::chrono is based on integers and its math methods do 
>>> not support overflow protection.  This has led to serious bugs like 
>>> https://bugs.webkit.org/show_bug.cgi?id=157924.  This cancels out the 
>>> “remember what

Re: [webkit-dev] RFC: stop using std::chrono, go back to using doubles for time

2016-11-04 Thread Filip Pizlo
Hi everyone!

That last time we talked about this, there seemed to be a lot of agreement that 
we should go with the Seconds/MonotonicTime/WallTime approach.

I have implemented it: https://bugs.webkit.org/show_bug.cgi?id=152045

That patch just takes a subset of our time code - all of the stuff that 
transitively touches ParkingLot - and converts it to use the new time classes.  
Reviews welcome!

-Filip
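A minimal sketch of the idea behind the patch (a simplified illustration, not WTF's actual API): each class wraps a single double, so there is no template creep and no integer overflow, while mixing a point in time with a duration still fails to compile.

```cpp
#include <cassert>

// A duration, always in seconds. One unit, so no duration_cast and no
// overflow from unit conversions.
class Seconds {
public:
    explicit Seconds(double value) : m_value(value) { }
    double value() const { return m_value; }
    Seconds operator+(Seconds other) const { return Seconds(m_value + other.m_value); }
private:
    double m_value;
};

// An absolute point on the monotonic clock. You can add a Seconds to
// it, but there is deliberately no operator+(MonotonicTime): adding two
// absolute times is meaningless and does not compile.
class MonotonicTime {
public:
    explicit MonotonicTime(double secondsSinceEpoch) : m_value(secondsSinceEpoch) { }
    double secondsSinceEpoch() const { return m_value; }
    MonotonicTime operator+(Seconds duration) const { return MonotonicTime(m_value + duration.value()); }
private:
    double m_value;
};
```

A hypothetical WallTime class would mirror MonotonicTime, giving the compile-time clock separation without std::chrono's template machinery.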



> On May 22, 2016, at 6:41 PM, Filip Pizlo <fpi...@apple.com> wrote:
> 
> Hi everyone!
> 
> I’d like us to stop using std::chrono and go back to using doubles for time.  
> First I list the things that I think we wanted to get from std::chrono - the 
> reasons why we started switching to it in the first place.  Then I list some 
> disadvantages of std::chrono that we've seen from fixing std::chrono-based 
> code.  Finally I propose some options for how to use doubles for time.
> 
> Why we switched to std::chrono
> 
> A year ago we started using std::chrono for measuring time.  std::chrono has 
> a rich typesystem for expressing many different kinds of time.  For example, 
> you can distinguish between an absolute point in time and a relative time.  
> And you can distinguish between different units, like nanoseconds, 
> milliseconds, etc.
> 
> Before this, we used doubles for time.  std::chrono’s advantages over doubles 
> are:
> 
> Easy to remember what unit is used: We sometimes used doubles for 
> milliseconds and sometimes for seconds.  std::chrono prevents you from 
> getting the two confused.
> 
> Easy to remember what kind of clock is used: We sometimes use the monotonic 
> clock and sometimes the wall clock (aka the real time clock).  Bad things 
> would happen if we passed a time measured using the monotonic clock to 
> functions that expected time measured using the wall clock, and vice-versa.  
> I know that I’ve made this mistake in the past, and it can be painful to 
> debug.
> 
> In short, std::chrono uses compile-time type checking to catch some bugs.
> 
> Disadvantages of using std::chrono
> 
> We’ve seen some problems with std::chrono, and I think that the problems 
> outweigh the advantages.  std::chrono suffers from a heavily templatized API 
> that results in template creep in our own internal APIs.  std::chrono’s 
> default of integers without overflow protection means that math involving 
> std::chrono is inherently more dangerous than math involving double.  This is 
> particularly bad when we use time to speak about timeouts.
> 
> Too many templates: std::chrono uses templates heavily.  It’s overkill for 
> measuring time.  This leads to verbosity and template creep throughout common 
> algorithms that take time as an argument.  For example if we use doubles, a 
> method for sleeping for a second might look like sleepForSeconds(double).  
> This works even if someone wants to sleep for a nanosecond, since 0.000000001 
> is easy to represent using a double.  Also, multiplying or dividing a double 
> by a small constant factor (1,000,000,000 is small by double standards) is 
> virtually guaranteed to avoid any loss of precision.  But as soon as such a 
> utility gets std::chronified, it becomes a template.  This is because you 
> cannot have sleepFor(std::chrono::seconds), since that wouldn’t allow you to 
> represent fractions of seconds.  This brings me to my next point.
> 
> Overflow danger: std::chrono is based on integers and its math methods do not 
> support overflow protection.  This has led to serious bugs like 
> https://bugs.webkit.org/show_bug.cgi?id=157924 
> <https://bugs.webkit.org/show_bug.cgi?id=157924>.  This cancels out the 
> “remember what unit is used” benefit cited above.  It’s true that I know what 
> type of time I have, but as soon as I duration_cast it to another unit, I may 
> overflow.  The type system does not help!  This is insane: std::chrono 
> requires you to do more work when writing multi-unit code, so that you 
> satisfy the type checker, but you still have to be just as paranoid around 
> multi-unit scenarios.  Forgetting that you have milliseconds and using it as 
> seconds is trivially fixable.  But if std::chrono flags such an error and you 
> fix it with a duration_cast (as any std::chrono tutorial will tell you to 
> do), you’ve just introduced an unchecked overflow and such unchecked 
> overflows are known to cause bugs that manifest as pages not working 
> correctly.
> 
> I think that doubles are better than std::chrono in multi-unit scenarios.  It 
> may be possible to have std::chrono work with doubles, but this probably 
> implies us writing our own clocks.  std::chrono’s default clocks use 
> integers, not doubles.  It also may be possible to teach std::chrono to do 
> overflow protection, but that would make me so sad since using double means 
> not having to worry about overflow at all.

Re: [webkit-dev] Terminology for giving up ownership: take, release, move

2016-09-06 Thread Filip Pizlo

> On Sep 6, 2016, at 11:07 AM, Geoffrey Garen  wrote:
> 
> “take” grinds my gears too — though I’ve gotten used to it, more or less.
> 
> I read “object.verb()” as a command, “verb”, directed at “object” (or 
> sometimes as a question, “verb?”, directed at “object”). I think most APIs 
> are phrased this way. And if I were Antonin Scalia, I would make the 
> originalist argument that Smalltalk originally defined a method in 
> object-oriented programming as a message to a receiver — not a message about 
> a sender.
> 
>> In the context of a container, take() sort of makes sense by parallel to 
>> get(). Though get() could be interpreted as either what the caller is doing 
>> or what the callee is doing.
>> 
>> In other words, you could say that in the code below, function something 
>> gets an item from the collection. In that sense, take() is a parallel 
>> construct. Of course, you could instead say that function something asks 
>> collection to get an item. That's what makes take() not make sense. But I am 
>> not sure release() makes sense either way, for a collection. It conveys 
>> letting go of the item but doesn't seem to convey fetching in the sake way 
>> get() or take() do. I don't think move() would be right in this context 
>> either.
>> 
>> function something(Collection& collection, Key& key)
>> {
>>  doSomething(collection.get(key))
>> }
> 
> Though it is possible to read “get” in this context as “I get from 
> collection”, I think it is more natural to read “get” as a command: 
> “collection, get this for me”. Other access verbs on collections, such as 
> “find”, “add”, and “remove”, establish this pattern.
> 
>> Given that explanation, I think a possible direction is to rename the smart 
>> pointer release() operation to take(). Many of our smart pointers already 
>> have a get(). And the idea of taking the underlying value from a smart 
>> pointer kind of makes sense, even though it is caller-perspective.
> 
> I’ve gotten used to “take", so I won’t call it pure applesauce, but it’s not 
> my preference.
> 
> My favorite suggestion so far is “move”. The C++ standard helps make this a 
> good word because it introduces as terms of art std::move and “move” 
> constructors. But perhaps it is bad for a function named “move” not to return 
> an rvalue reference. For example, one might object to 
> “std::move(collection.move(key))”. Why the double move?

But it kinda does return an rvalue reference!  If foo() returns T then:

bar(foo())

will bind to the && overload of bar().  I don't think you'd have to do 
std::move(collection.move(key)).

> 
> My second favorite suggestion is “release”. It matches a term of art in std 
> smart pointers and it’s pretty clear.

FWIW, I still like release() better than move().  a = move(b) is a command to 
the system to move b to a.  So, value = collection.move(key) feels like a 
command to the collection to move key to value, which is clearly not what is 
going on.

-Filip


> 
> My third favorite suggestion is “remove”. For collections, “remove” is just 
> plain clearer. But “remove” is worse for non-collection value types like 
> smart pointers because we “move” values in C++ — we do not “remove” them.
> 
> There are some good thesaurus words like cede or doff or discharge but they 
> all lack familiarity as terms of art.
> 
> Geoff
> ___
> webkit-dev mailing list
> webkit-dev@lists.webkit.org
> https://lists.webkit.org/mailman/listinfo/webkit-dev



Re: [webkit-dev] Terminology for giving up ownership: take, release, move

2016-09-05 Thread Filip Pizlo

> On Sep 5, 2016, at 2:35 PM, Ryosuke Niwa  wrote:
> 
> On Mon, Sep 5, 2016 at 10:13 AM, Darin Adler wrote:
>> Hi folks.
>> 
>> WebKit has some critical functions that involve asking an object to give up 
>> ownership of something so the caller can take ownership.
>> 
>> In the C++ standard library itself, this is called move, as in std::move.
>> 
>> In WebKit smart pointers, we call this operation release, as in 
>> RefPtr::releaseNonNull and String::releaseImpl.
>> 
>> In WebKit collections, we call this operation take, as in HashMap::take and 
>> ExceptionOr::takeReturnValue.
>> 
>> The release vs. take terminology is distracting to my eyes. The verb “take" 
>> states what the caller wishes to do, and the verb “release” states what the 
>> caller wants the collection or smart pointer to do. My first thought was 
>> to rename the take functions to use the word release instead, but I fear it 
>> might make them harder to understand instead of easier and clearly it would 
>> make them longer.
> 
> I agree the verb "take" is not semantically sound here.  How about
> HashMap::receiveReleased / ExceptionOr::receiveReleased?  Or simply
> HashMap::released / ExceptionOr::takeReleased?  Even HashMap::receive
> / ExceptionOr::receiveReturnValue might work better because "receive"
> is more a passive form of accepting the ownership of something.

I don't think that HashMap::receiveReleased() fits with Subject::verbPhrase().  
In HashMap::take(), the HashMap is releasing ownership of a value.  So, it is 
releasing it.  It's definitely not receiving it.

-Filip



Re: [webkit-dev] Terminology for giving up ownership: take, release, move

2016-09-05 Thread Filip Pizlo

> On Sep 5, 2016, at 10:13 AM, Darin Adler  wrote:
> 
> Hi folks.
> 
> WebKit has some critical functions that involve asking an object to give up 
> ownership of something so the caller can take ownership.
> 
> In the C++ standard library itself, this is called move, as in std::move.
> 
> In WebKit smart pointers, we call this operation release, as in 
> RefPtr::releaseNonNull and String::releaseImpl.
> 
> In WebKit collections, we call this operation take, as in HashMap::take and 
> ExceptionOr::takeReturnValue.
> 
> The release vs. take terminology is distracting to my eyes. The verb “take" 
> states what the caller wishes to do, and the verb “release” states what the 
> caller wants the collection or smart pointer to do. My first thought was 
> to rename the take functions to use the word release instead, but I fear it 
> might make them harder to understand instead of easier and clearly it would 
> make them longer.
> 
> Does anyone have other ideas on how to collapse WebKit project terminology 
> down so we don’t have three different single words that are used to mean 
> almost the same thing?

The use of "take" for these methods grinds my gears, for the same reason you 
were distracted: "take" describes the desires of the caller, but that doesn't 
work for me because I read "fred.makeCoffee()" as "makeCoffee()" being an 
imperative verb phrase and "fred" as being the subject that will make me the 
coffee.  So, "HashMap::take" means to me that the HashMap is taking something 
from me, rather than releasing something to me.

I wonder if there is anyone who is surprised more by release than by take, and 
who would find it strange to say ExceptionOr::releaseReturnValue.

I wouldn't want any words other than "release" used for this purpose, because I 
know exactly what to expect "release" to mean, since we use it so much already. 
 I think that would be even worse than sometimes using "take", because even 
though "takeReturnValue" is annoying, I've learned to know what it means.

If there isn't anyone who prefers take, maybe we should just rename "take" to 
"release" in these cases?

-Filip


> 
> — Darin


Re: [webkit-dev] Proposal: Adopt Web Inspector coding style for all WebKit JS/CSS source code

2016-07-06 Thread Filip Pizlo
I like the idea of adopting inspector style for JS builtins!

It might also be good to adopt it for JS tests that we write ourselves, with an 
escape hatch if you need to violate style to test some feature. For example, it 
should be a goal to follow inspector style for the JetStream harness code, and 
probably for all of ES6SampleBench. New JS tests in JavaScriptCore/tests/stress 
that we write ourselves probably should follow inspector style, because it's 
code that we have to read and understand and I can't think of a reason not to 
be consistent. Thoughts?

-Filip

> On Jul 6, 2016, at 7:53 PM, Ryosuke Niwa  wrote:
> 
>> On Wed, Jul 6, 2016 at 7:34 PM, Dean Jackson  wrote:
>> I propose we make it official that the Web Inspector Coding Style is what 
>> must be used for all JavaScript and CSS that count as source code in the 
>> project.
>> https://trac.webkit.org/wiki/WebInspectorCodingStyleGuide
>> 
>> Now that JavaScript is used in more places (JS builtins, some parts of the 
>> DOM, media controls) it would be nice to make it all consistent. Note that 
>> the page above can't decide if it is just JS or both JS and CSS, but I think 
>> it should be both.
> 
> It's hard to tell which parts of the above guide would apply to
> non-Inspector JS code because it has a bunch of Inspector specific
> guidelines such as layering guides and references to
> https://trac.webkit.org/browser/trunk/Source/WebInspectorUI/UserInterface/Views/Variables.css
> 
> We should probably extract the parts that matter into a separate MD
> file or a section in the wiki page before we proceed with this
> discussion.
> 
> - R. Niwa


Re: [webkit-dev] RFC: stop using std::chrono, go back to using doubles for time

2016-05-23 Thread Filip Pizlo


On May 23, 2016, at 11:16 AM, Geoffrey Garen  wrote:

>> 3 - There exists a solution - non-templated custom classes - that removes 
>> both classes of subtle bugs, without the template creep.
> 
> Hard to argue with this.
> 
> Can we go with “WallClock” and “MonotonicClock” instead of “WallTime” and 
> “MonotonicTime"? Clock is a nice clear noun. Also, time is an illusion caused 
> by parallax in the astral plane.

I think time is the right term. "3pm" is a "time", not a "clock". Also 42 
seconds since monotonic epoch is a monotonic time, not a monotonic clock. 

> 
> Geoff
> 


Re: [webkit-dev] RFC: stop using std::chrono, go back to using doubles for time

2016-05-23 Thread Filip Pizlo


> On May 22, 2016, at 10:46 PM, Michal Debski <m.deb...@samsung.com> wrote:
> 
> Hi,
> 
>  
> 
> sorry but this really bugs me. Isn't this enough?
> 
>  
> 
> namespace WTF {
> 
> using nanoseconds = std::chrono::duration<double, std::nano>;
> using microseconds = std::chrono::duration<double, std::micro>;
> using milliseconds = std::chrono::duration<double, std::milli>;
> using seconds = std::chrono::duration<double>;
> using minutes = std::chrono::duration<double, std::ratio<60>>;
> using hours = std::chrono::duration<double, std::ratio<3600>>;
> 
> 
> template <typename Clock>
> std::chrono::time_point<Clock, seconds> now()
> {
> return Clock::now();
> }
> 
Can you clarify how this returns fractional seconds?

Note that your code snippets are not enough to cover WebKit's use of clocks. 
For example we would need wall clock and monotonic clock classes with 
time_point types. If we have to significantly customize std::chrono and forbid 
some but not all of its API, then probably the total amount of code to do this 
would be comparable to writing our own Seconds/WallTime/MonotonicTime classes.
> 
> }
> 
>  
> 
> and forbid using std::chrono::clock::now()/durations with check-style. It 
> seems like the best of both worlds.
> 
It's an interesting perspective. But I would find it confusing if we used a 
non-canonical kind of std::chrono. And I'm not convinced that the changes 
required to make this work are as simple as what you say. 
> Oh and the infinity:
> 
>  
> 
> namespace std {
> namespace chrono {
> 
> template<>
> struct duration_values<double> {
> static constexpr double min() { return 
> -std::numeric_limits<double>::infinity(); }
> static constexpr double zero() { return .0; }
>     static constexpr double max() { return 
> std::numeric_limits<double>::infinity(); }
> };
> 
> }
> } 
> 
> Best regards,
> Michal Debski
> 
>  
> 
> --- Original Message ---
> 
> Sender : Filip Pizlo<fpi...@apple.com>
> 
> Date : May 23, 2016 02:41 (GMT+01:00)
> 
> Title : [webkit-dev] RFC: stop using std::chrono, go back to using doubles 
> for time
> 
>  
> 
> Hi everyone!
> 
> I’d like us to stop using std::chrono and go back to using doubles for time.  
> First I list the things that I think we wanted to get from std::chrono - the 
> reasons why we started switching to it in the first place.  Then I list some 
> disadvantages of std::chrono that we've seen from fixing std::chrono-based 
> code.  Finally I propose some options for how to use doubles for time.
> 
> Why we switched to std::chrono
> 
> A year ago we started using std::chrono for measuring time.  std::chrono has 
> a rich typesystem for expressing many different kinds of time.  For example, 
> you can distinguish between an absolute point in time and a relative time.  
> And you can distinguish between different units, like nanoseconds, 
> milliseconds, etc.
> 
> Before this, we used doubles for time.  std::chrono’s advantages over doubles 
> are:
> 
> Easy to remember what unit is used: We sometimes used doubles for 
> milliseconds and sometimes for seconds.  std::chrono prevents you from 
> getting the two confused.
> 
> Easy to remember what kind of clock is used: We sometimes use the monotonic 
> clock and sometimes the wall clock (aka the real time clock).  Bad things 
> would happen if we passed a time measured using the monotonic clock to 
> functions that expected time measured using the wall clock, and vice-versa.  
> I know that I’ve made this mistake in the past, and it can be painful to 
> debug.
> 
> In short, std::chrono uses compile-time type checking to catch some bugs.
> 
> Disadvantages of using std::chrono
> 
> We’ve seen some problems with std::chrono, and I think that the problems 
> outweigh the advantages.  std::chrono suffers from a heavily templatized API 
> that results in template creep in our own internal APIs.  std::chrono’s 
> default of integers without overflow protection means that math involving 
> std::chrono is inherently more dangerous than math involving double.  This is 
> particularly bad when we use time to speak about timeouts.
> 
> Too many templates: std::chrono uses templates heavily.  It’s overkill for 
> measuring time.  This leads to verbosity and template creep throughout common 
> algorithms that take time as an argument.  For example if we use doubles, a 
> method for sleeping for a second might look like sleepForSeconds(double).  
> This works even if someone wants to sleep for a nanosecond, since 0.000000001 
> is easy to represent using a double.  Also, multiplying or dividing a double 
> by a small constant factor (1,000,000,000 is small by double standards) is 
> virtually guaranteed to avoid any loss of precision.

[webkit-dev] RFC: stop using std::chrono, go back to using doubles for time

2016-05-22 Thread Filip Pizlo
Hi everyone!

I’d like us to stop using std::chrono and go back to using doubles for time.  
First I list the things that I think we wanted to get from std::chrono - the 
reasons why we started switching to it in the first place.  Then I list some 
disadvantages of std::chrono that we've seen from fixing std::chrono-based 
code.  Finally I propose some options for how to use doubles for time.

Why we switched to std::chrono

A year ago we started using std::chrono for measuring time.  std::chrono has a 
rich typesystem for expressing many different kinds of time.  For example, you 
can distinguish between an absolute point in time and a relative time.  And you 
can distinguish between different units, like nanoseconds, milliseconds, etc.

Before this, we used doubles for time.  std::chrono’s advantages over doubles 
are:

Easy to remember what unit is used: We sometimes used doubles for milliseconds 
and sometimes for seconds.  std::chrono prevents you from getting the two 
confused.

Easy to remember what kind of clock is used: We sometimes use the monotonic 
clock and sometimes the wall clock (aka the real time clock).  Bad things would 
happen if we passed a time measured using the monotonic clock to functions that 
expected time measured using the wall clock, and vice-versa.  I know that I’ve 
made this mistake in the past, and it can be painful to debug.

In short, std::chrono uses compile-time type checking to catch some bugs.

Disadvantages of using std::chrono

We’ve seen some problems with std::chrono, and I think that the problems 
outweigh the advantages.  std::chrono suffers from a heavily templatized API 
that results in template creep in our own internal APIs.  std::chrono’s default 
of integers without overflow protection means that math involving std::chrono 
is inherently more dangerous than math involving double.  This is particularly 
bad when we use time to speak about timeouts.

Too many templates: std::chrono uses templates heavily.  It’s overkill for 
measuring time.  This leads to verbosity and template creep throughout common 
algorithms that take time as an argument.  For example if we use doubles, a 
method for sleeping for a second might look like sleepForSeconds(double).  This 
works even if someone wants to sleep for a nanosecond, since 0.000000001 is easy 
to represent using a double.  Also, multiplying or dividing a double by a small 
constant factor (1,000,000,000 is small by double standards) is virtually 
guaranteed to avoid any loss of precision.  But as soon as such a utility gets 
std::chronified, it becomes a template.  This is because you cannot have 
sleepFor(std::chrono::seconds), since that wouldn’t allow you to represent 
fractions of seconds.  This brings me to my next point.

Overflow danger: std::chrono is based on integers and its math methods do not 
support overflow protection.  This has led to serious bugs like 
https://bugs.webkit.org/show_bug.cgi?id=157924.  This cancels out the 
“remember what unit is used” benefit cited above.  It’s true that I know what 
type of time I have, but as soon as I duration_cast it to another unit, I may 
overflow.  The type system does not help!  This is insane: std::chrono requires 
you to do more work when writing multi-unit code, so that you satisfy the type 
checker, but you still have to be just as paranoid around multi-unit scenarios. 
 Forgetting that you have milliseconds and using it as seconds is trivially 
fixable.  But if std::chrono flags such an error and you fix it with a 
duration_cast (as any std::chrono tutorial will tell you to do), you’ve just 
introduced an unchecked overflow and such unchecked overflows are known to 
cause bugs that manifest as pages not working correctly.

I think that doubles are better than std::chrono in multi-unit scenarios.  It 
may be possible to have std::chrono work with doubles, but this probably 
implies us writing our own clocks.  std::chrono’s default clocks use integers, 
not doubles.  It also may be possible to teach std::chrono to do overflow 
protection, but that would make me so sad since using double means not having 
to worry about overflow at all.

The overflow issue is interesting because of its implications for how we do 
timeouts.  The way to have a method with an optional timeout is to do one of 
these things:

- Use 0 to mean no timeout.
- Have one function for timeout and one for no timeout.
- Have some form of +Inf or INT_MAX to mean no timeout.  This makes so much 
mathematical sense.

WebKit takes the +Inf/INT_MAX approach.  I like this approach the best because 
it makes the most mathematical sense: not giving a timeout is exactly like 
asking for a timeout at time-like infinity.  When used with doubles, this Just 
Works.  +Inf is greater than any value and it gets preserved properly in math 
(+Inf * real = +Inf, so it survives gracefully in unit conversions; +Inf + real 
= +Inf, so it also survives 

Re: [webkit-dev] Build slave for JSCOnly Linux MIPS

2016-04-19 Thread Filip Pizlo

> On Apr 19, 2016, at 12:55 PM, Konstantin Tokarev <annu...@yandex.ru> wrote:
> 
> 
> 
> 19.04.2016, 18:23, "Filip Pizło" <fpi...@apple.com>:
>>>  On Apr 19, 2016, at 5:50 AM, Carlos Alberto Lopez Perez <clo...@igalia.com> wrote:
>>> 
>>>>  On 18/04/16 21:50, Filip Pizlo wrote:
>>>>  I don't want a buildbot for MIPS. It's not a relevant architecture
>>>>  anymore. I don't think that JSC developers should have to expend
>>>>  effort towards keeping this architecture working.
>>> 
>>>  MIPS is still relevant on embedded devices (mainly set-top boxes).
>>>  It is also a popular platform for IoT projects.
>> 
>> This doesn't seem like a big enough deal to have in trunk.
>> 
>> Have you considered an out-of-tree port?
> 
> Sorry but I don't see how can it work. MIPS contributors are working on 
> different WebKit ports and trunk is the only place where we can collaborate.

Can you elaborate on this?

> 
> Also, I would like to thank WebKit reviewers for valuable input that you are 
> providing on MIPS patches. Sorry if it wastes a bit of your time, but I hope 
> it is not that much.

That’s not the point.  I’m trying to understand if MIPS qualifies as an in-tree 
port.

> 
>> 
>> -Filip
>> 
>>>  I understand that you don't want to spend time supporting this
>>>  architecture, and that is ok. Nobody is proposing here that JSC
>>>  developers should spend time maintaining JSC on MIPS.
>>> 
>>>  However, if someone is willing to do the work, I think we should let
>>>  them do it and don't actively block it.
>>> 
>>>  Having a buildbot testing JSC on MIPS certainly helps anyone interested
>>>  in maintaining this architecture.
>>> 
>>>  And anybody not interested in MIPS, can just ignore this bot.
>>> 
>> 
> 
> -- 
> Regards,
> Konstantin



Re: [webkit-dev] Build slave for JSCOnly Linux MIPS

2016-04-19 Thread Filip Pizlo

> On Apr 19, 2016, at 11:33 AM, Konstantin Tokarev <annu...@yandex.ru> wrote:
> 
> 
> 
> 19.04.2016, 21:15, "Filip Pizlo" <fpi...@apple.com>:
>> I did a quick look over the trac query of GCC 4.8 changes that you provided. 
>> None of the ones I looked at were scary but they were annoying. They seemed 
>> to be things like:
>> 
>> - Sometimes saying { } to initialize a variable doesn’t work.
>> - Sometimes you need to say “const”.
>> - Sometimes you need to play with variables to get around internal compiler 
>> errors.
>> 
>> I didn’t find any cases of GCC 4.8 not supporting a language feature that we 
>> want to use. Do you think that’s correct?
> 
> According to [1], GCC provides complete C++11 feature list since 4.8.1. 
> However, it fails to compile FTLLazySlowPathCall.h, see complete set of 
> diagnostics in [2].

Ouch!  Is there a bug for this?

-Filip


> 
> There is another minor bug: 4.8 does not allow aggregate initialization for 
> structs which have deleted constructors [3].
> 
> [1] https://gcc.gnu.org/projects/cxx-status.html#cxx11
> [2] http://pastebin.com/ikyDTZ9s
> [3] https://bugs.webkit.org/show_bug.cgi?id=155698
>     https://gcc.gnu.org/bugzilla/show_bug.cgi?id=52707
> 
>> 
>> -Filip
>> 
>>>  On Apr 19, 2016, at 11:02 AM, Michael Catanzaro <mcatanz...@igalia.com> 
>>> wrote:
>>> 
>>>  Hi,
>>> 
>>>  On Mon, 2016-04-18 at 17:27 -0700, Filip Pizlo wrote:
>>>>  I am sympathetic to the principle that we should support the
>>>>  compilers that ship on the most popular versions of Linux.
>>> 
>>>  Great. :)
>>> 
>>>>  I’d like to understand if that argument is sufficient to support GCC
>>>>  4.8.
>>>> 
>>>>  Can you clarify, is it the case that if I installed the latest stable
>>>>  Fedora, I’d get GCC 4.8?
>>> 
>>>  No, all currently-supported versions of Fedora include GCC 5 (only).
>>>  Different distros have very different release cycles and policies for
>>>  compiler upgrades. Fedora releases roughly every six months, and each
>>>  release is supported for roughly 13 months. GCC releases once per year.
>>>  The GCC developers coordinate with Fedora release planning to time GCC
>>>  releases to coincide with spring Fedora releases; in the winter before
>>>  a new GCC release, we rebuilt all of Fedora with the GCC beta so the
>>>  GCC developers can collect bug reports. So we will never have issues
>>>  with Fedora, as the oldest Fedora will be at most one year behind
>>>  upstream GCC. (Note that I co-maintain the WebKitGTK+ package there and
>>>  I'm making sure all supported Fedoras get updates.)
>>> 
>>>  But Fedora is exceptional in this regard. Other distros are supported
>>>  for much longer than 13 months (5 years for Ubuntu LTS and newly also
>>>  for Debian, 10 years for enterprise distros) and therefore have much
>>>  older compilers. The question is where do we draw the line. We
>>>  obviously cannot support a 10 year old distro; those are maintained by
>>>  rich corporations, and if they cared about WebKit security, they could
>>>  take responsibility for that. We could handle 5 years, but do we really
>>>  want to? (It's clear Apple doesn't.) It's really inconvenient to not
>>>  have access to newer dependencies or language features for so long. We
>>>  might start by saying that we only support the latest release of [list
>>>  of major distros that have recently been shipping WebKit updates]. Most
>>>  of these distros are currently built using GCC 4.9, though they might
>>>  have GCC 5 or GCC 6 packaged as well, but not used by default. The big
>>>  one still using GCC 4.8 is openSUSE.
>>> 
>>>  We don't *need* to consider Ubuntu right now, because they rarely ever
>>>  take our updates, nor Debian, because they never take our updates. I
>>>  think WebKit updates for Debian is all but totally a lost cause, but
>>>  I'm kinda still hopeful for Ubuntu, so I'd like to keep them in mind.
>>> 
>>>  Also, different distros have different policies on using alternative
>>>  compilers. E.g. in Fedora we are usually required to always use
>>>  Fedora's GCC, and only one version is available at a time.

Re: [webkit-dev] Build slave for JSCOnly Linux MIPS

2016-04-19 Thread Filip Pizlo
I did a quick look over the trac query of GCC 4.8 changes that you provided.  
None of the ones I looked at were scary but they were annoying.  They seemed to 
be things like:

- Sometimes saying { } to initialize a variable doesn’t work.
- Sometimes you need to say “const”.
- Sometimes you need to play with variables to get around internal compiler 
errors.

I didn’t find any cases of GCC 4.8 not supporting a language feature that we 
want to use.  Do you think that’s correct?

-Filip


> On Apr 19, 2016, at 11:02 AM, Michael Catanzaro <mcatanz...@igalia.com> wrote:
> 
> Hi,
> 
> On Mon, 2016-04-18 at 17:27 -0700, Filip Pizlo wrote:
>> I am sympathetic to the principle that we should support the
>> compilers that ship on the most popular versions of Linux.
> 
> Great. :)
> 
>> I’d like to understand if that argument is sufficient to support GCC
>> 4.8.
>> 
>> Can you clarify, is it the case that if I installed the latest stable
>> Fedora, I’d get GCC 4.8?
> 
> No, all currently-supported versions of Fedora include GCC 5 (only).
> Different distros have very different release cycles and policies for
> compiler upgrades. Fedora releases roughly every six months, and each
> release is supported for roughly 13 months. GCC releases once per year.
> The GCC developers coordinate with Fedora release planning to time GCC
> releases to coincide with spring Fedora releases; in the winter before
> a new GCC release, we rebuilt all of Fedora with the GCC beta so the
> GCC developers can collect bug reports. So we will never have issues
> with Fedora, as the oldest Fedora will be at most one year behind
> upstream GCC. (Note that I co-maintain the WebKitGTK+ package there and
> I'm making sure all supported Fedoras get updates.)
> 
> But Fedora is exceptional in this regard. Other distros are supported
> for much longer than 13 months (5 years for Ubuntu LTS and newly also
> for Debian, 10 years for enterprise distros) and therefore have much
> older compilers. The question is where do we draw the line. We
> obviously cannot support a 10 year old distro; those are maintained by
> rich corporations, and if they cared about WebKit security, they could
> take responsibility for that. We could handle 5 years, but do we really
> want to? (It's clear Apple doesn't.) It's really inconvenient to not
> have access to newer dependencies or language features for so long. We
> might start by saying that we only support the latest release of [list
> of major distros that have recently been shipping WebKit updates]. Most
> of these distros are currently built using GCC 4.9, though they might
> have GCC 5 or GCC 6 packaged as well, but not used by default. The big
> one still using GCC 4.8 is openSUSE.
> 
> We don't *need* to consider Ubuntu right now, because they rarely ever
> take our updates, nor Debian, because they never take our updates. I
> think WebKit updates for Debian is all but totally a lost cause, but
> I'm kinda still hopeful for Ubuntu, so I'd like to keep them in mind.
> 
> Also, different distros have different policies on using alternative
> compilers. E.g. in Fedora we are usually required to always use
> Fedora's GCC, and only one version is available at a time... but if a
> package *really* has no chance of being built with GCC, we're allowed
> to use Fedora's Clang instead. I'm not sure what the policies are for
> Debian and Ubuntu, but they always have available a newer GCC than is
> used for building packages, and until recently were using Clang to
> build Chromium, so alternative compilers must be permitted at least in
> exceptional cases. I was trying to convince the openSUSE folks to use
> Clang to build WebKit, to avoid the GCC 4.8 issue, but they were not
> enthusiastic. (But consider that all these distros will have older
> versions of Clang as well.)
> 
> Now, whether openSUSE is important enough on its own to justify holding
> back or lowering our GCC requirement... maybe not. But anyway, since we
> have significant contributors like Konstantin stuck with GCC 4.8, and
> since this doesn't require giving up on any significant language
> features, I think it's OK. If it's only a little work to support that
> compiler (on the level we already have in trunk), I think it's a good
> idea.
> 
> But there is another problem here. openSUSE seems to have no intention
> of upgrading to a newer GCC anytime soon, because they have started to
> inherit core packages like GCC from the SUSE enterprise distro. So I
> might need to negotiate with them if it would be possible to build
> WebKit with clang after all.
> 
>> Can you clarify what you mean by “backport”?  I’m trying to get a
>> picture of how your releases work.  For example, a

Re: [webkit-dev] Build slave for JSCOnly Linux MIPS

2016-04-18 Thread Filip Pizlo

> On Apr 18, 2016, at 4:43 PM, Michael Catanzaro  wrote:
> 
> On Mon, 2016-04-18 at 15:01 -0700, Geoffrey Garen wrote:
>> GCC 4.8 is three years old.
> 
> Yeah, unfortunately no Linux distros are ever willing to do major
> compiler upgrades, even Fedora is only willing to take minor compiler
> upgrades, so our willingness to support old compilers is proportional
> to the size of our user base eligible for updates. :/

I am sympathetic to the principle that we should support the compilers that 
ship on the most popular versions of Linux.  I’d like to understand if that 
argument is sufficient to support GCC 4.8.

Can you clarify, is it the case that if I installed the latest stable Fedora, 
I’d get GCC 4.8?

> 
> I understand resistance to restoring support for a compiler we
> previously decided to drop, so let me tone down my request: I just ask
> that when we decide to drop support for a compiler, we consider which
> major distros actually shipping our updates still use that compiler, so
> we can get a feel for how many users will stop getting WebKit updates
> as a consequence. We've just recently gotten to the point where a few
> distros are beginning to take our updates, and I'm really hoping to
> facilitate this by maintaining support for the compilers these distros
> are using as long as possible.
> 
> It's really annoying to be stuck catering to older compilers, but I
> don't see any good solution to this without throwing users under the
> bus. :(

I’m sympathetic to this.

> 
>> I don’t think we should put a three year hold on all current and
>> future C++ language features.
>> 
>> Vendors that want to ship security updates to old, stable OS’s should
>> maintain a branch that cherry-picks fixes from trunk and applies
>> build fixes as necessary, rather than holding back development in
>> trunk.
> 
> I'm not aware of any distributor that has ever attempted to seriously
> backport security fixes; not even enterprise distros like RHEL do this.
> Reality is that distros only ship what we release upstream.

Can you clarify what you mean by “backport”?  I’m trying to get a picture of 
how your releases work.  For example, are you saying that RHEL wouldn’t take a 
security update that you backported, or that they won’t invest energy into 
backporting it themselves?

> 
>>> It's unfortunate as of course we want to be able to use new
>> language
>>> features, but we need to balance this with the desire to ensure our
>>> updates reach users.
>> 
>> I don’t think that shipping trunk as a security update on very old
>> platforms is a viable strategy.
> 
> Nobody ships trunk, they ship our stable releases, which branch off of
> trunk every six months. We don't have the resources to maintain
> compiler support outside of trunk.

How many changes are required to make GCC 4.8 work?  I think this will provide 
important context for this discussion.

-Filip


> 
> Michael
> ___
> webkit-dev mailing list
> webkit-dev@lists.webkit.org
> https://lists.webkit.org/mailman/listinfo/webkit-dev



Re: [webkit-dev] Build slave for JSCOnly Linux MIPS

2016-04-18 Thread Filip Pizlo
Yeah.  If we allow GCC 4.8 then I think we should make all of our code compile 
with it.  If that proves too difficult (like if we had to get rid of all 
lambdas), then I think we need to not allow GCC 4.8.

-Filip


> On Apr 18, 2016, at 12:54 PM, Anders Carlsson <ander...@apple.com> wrote:
> 
> I also don’t think we should require different versions of GCC for different 
> projects. 
> 
> - Anders
> 
>> On Apr 18, 2016, at 12:50 PM, Filip Pizlo <fpi...@apple.com> wrote:
>> 
>> I don't want a buildbot for MIPS. It's not a relevant architecture anymore. 
>> I don't think that JSC developers should have to expend effort towards 
>> keeping this architecture working.
>> 
>> -Filip
>> 
>>> On Apr 18, 2016, at 12:36 PM, Konstantin Tokarev <annu...@yandex.ru> wrote:
>>> 
>>> Hello,
>>> 
>>> I'd like to run build slave for MIPS port of JSC (with JSCOnly port on 
>>> Linux). On the first iteration it will just ensure that compilation is not 
>>> broken, afterwards I'm planning to add running tests on target.
>>> 
>>> However, I'm planning to use cross-toolchain based on GCC 4.8.4 (for now, 
>>> the latest one supplied by Broadcom), and unmodified tree won't build 
>>> because of GCC version check in Source/WTF/wtf/Compiler.h.
>>> 
>>> Are there any objections for lowering GCC requirement from 4.9.0 to 4.8.4 
>>> (only for JavaScriptCore without B3)? I'm going to fix arising compilation 
>>> errors myself.
>>> 
>>> -- 
>>> Regards,
>>> Konstantin
> 



Re: [webkit-dev] Build slave for JSCOnly Linux MIPS

2016-04-18 Thread Filip Pizlo
I don't want a buildbot for MIPS. It's not a relevant architecture anymore. I 
don't think that JSC developers should have to expend effort towards keeping 
this architecture working.

-Filip

> On Apr 18, 2016, at 12:36 PM, Konstantin Tokarev  wrote:
> 
> Hello,
> 
> I'd like to run build slave for MIPS port of JSC (with JSCOnly port on 
> Linux). On the first iteration it will just ensure that compilation is not 
> broken, afterwards I'm planning to add running tests on target.
> 
> However, I'm planning to use cross-toolchain based on GCC 4.8.4 (for now, the 
> latest one supplied by Broadcom), and unmodified tree won't build because of 
> GCC version check in Source/WTF/wtf/Compiler.h.
> 
> Are there any objections for lowering GCC requirement from 4.9.0 to 4.8.4 
> (only for JavaScriptCore without B3)? I'm going to fix arising compilation 
> errors myself.
> 
> -- 
> Regards,
> Konstantin


Re: [webkit-dev] Sukolsak Sakshuwong is now a WebKit Reviewer

2016-03-08 Thread Filip Pizlo
Congrats!

-Filip


> On Mar 8, 2016, at 11:33 AM, Mark Lam  wrote:
> 
> Hi everyone,
> 
> Just want to announce that Sukolsak Sakshuwong is now a reviewer.  
> Congratulations, Sukolsak.
> 
> Mark
> 


Re: [webkit-dev] [Block Pointer] Deterministic Region Based Memory Manager

2016-03-06 Thread Filip Pizlo
Phil,

I think you need to do better than this.

-Filip


> On Mar 6, 2016, at 7:28 PM, Phil Bouchard <philipp...@gmail.com> wrote:
> 
> On 03/06/2016 10:17 PM, Filip Pizlo wrote:
>> 
>>> On Mar 6, 2016, at 6:36 PM, Phil Bouchard <philipp...@gmail.com> wrote:
>>> 
>>> That should speed up my benchmarking process.
>> 
>> It will also make your benchmarking process inconclusive for the purpose of 
>> evaluating a memory manager’s performance relative to our garbage collector. 
>>  To put it another way, I don’t care if your memory manager is faster than 
>> someone else’s garbage collector.
> 
> It is very subjective but if we know that:
> - WebKit's GC is 2x faster than the Mark & Sweep GC
> - block_ptr<> is 2x faster than the Mark & Sweep GC
> 
> Then that means WebKit's GC == block_ptr<>.
> 
> 
> Regards,
> -Phil
> 


Re: [webkit-dev] [Block Pointer] Deterministic Region Based Memory Manager

2016-03-06 Thread Filip Pizlo

> On Mar 6, 2016, at 6:36 PM, Phil Bouchard  wrote:
> 
> On 03/06/2016 12:59 AM, Phil Bouchard wrote:
>> 
>> Anyway I am not sure if I can create a patch within a short period of
>> time but if I happen to have an interesting Javascript benchmark then I
>> will repost it to this mailing list.
> 
> Hmmm... I just want to say there are embeddable JS engines out there:
> http://duktape.org/
> 
> That should speed up my benchmarking process.

It will also make your benchmarking process inconclusive for the purpose of 
evaluating a memory manager’s performance relative to our garbage collector.  
To put it another way, I don’t care if your memory manager is faster than 
someone else’s garbage collector.

-Filip



Re: [webkit-dev] [Block Pointer] Deterministic Region Based Memory Manager

2016-03-05 Thread Filip Pizlo
Phil,

I would expect our GC to be much faster than shared_ptr.  This shouldn’t really 
be surprising; it’s the expected behavior according to the GC literature.  
High-level languages avoid the kind of eager reference counting that shared_ptr 
does because it’s too expensive.  I would expect a 2x-5x slow-down if we 
switched to anything that did reference counting.

You should take a look at our GC, and maybe read some of the major papers about 
GC.  It’s awesome stuff.  Here are a few papers that I consider good reading:

Some great ideas about high-throughput GC: 
http://www.cs.utexas.edu/users/mckinley/papers/mmtk-icse-2004.pdf 
<http://www.cs.utexas.edu/users/mckinley/papers/mmtk-icse-2004.pdf>
Some great ideas about low-latency GC: 
http://www.filpizlo.com/papers/pizlo-pldi2010-schism.pdf 
<http://www.filpizlo.com/papers/pizlo-pldi2010-schism.pdf>
Some great ideas about GC roots: 
http://www.hpl.hp.com/techreports/Compaq-DEC/WRL-88-2.pdf 
<http://www.hpl.hp.com/techreports/Compaq-DEC/WRL-88-2.pdf>
A good exploration of the limits of reference counting performance: 
http://research.microsoft.com/pubs/70475/tr-2007-104.pdf 
<http://research.microsoft.com/pubs/70475/tr-2007-104.pdf>

Anyway, you can’t ask us to change our code to use your memory manager.  You 
can, however, try to get your memory manager to work in WebKit, and post a 
patch if you get it working.  If that patch is an improvement - in the sense 
that both you and the reviewers can apply the patch and confirm that it is in 
fact a progression and doesn’t break anything - then this would be the kind of 
thing we would accept.

Having looked at your code a bit, I think that you’ll encounter the following 
problems:
- Your code uses std::mutex for synchronization.  std::mutex is quite slow.  
You should look at WTF::Lock, it’s much better (as in, orders of magnitude 
better).
- Your code implements lifecycle management that is limited to reference 
counting.  This is not adequate to support JS, DOM, and JIT semantics, which 
are based on solving arbitrary data flow equations over the reachability set.
- It’s not clear that your allocator results in fast path code that is 
competitive against either of the JSC GC’s allocators.  Both of those require 
~5 instructions in the common case.  That instruction count includes the 
oversize object safety checks.
- It’s not clear that your allocator is compatible with JITing and standard 
JavaScript performance optimizations, which assume that values can be passed 
around as bits without calling into the runtime.  A reference counter needs to 
do some kinds of memory operations on variable assignments.  This is likely to 
be about a 2x-5x slow-down.  I would expect a 2x slow-down if you did 
non-thread-safe reference counting, and 5x if you made it thread-safe.

-Filip


> On Mar 5, 2016, at 8:05 PM, Phil Bouchard <philipp...@gmail.com> wrote:
> 
> On 03/05/2016 01:02 AM, Phil Bouchard wrote:
>> On 03/05/2016 12:49 AM, Filip Pizlo wrote:
>>> 
>>> If you're right then you've resolved CS problems dating back to the
>>> 50's. Extraordinary claims require extraordinary evidence. You haven't
>>> provided any evidence.
>> 
>> It wasn't easy to implement but it's done now so we can all move forward.
>> 
>>> Replacing our GC with anything else is going to be incredibly difficult.
>>> 
>>> We aren't going to be compelled by a comparison of our GC to something
>>> else if it isn't in the context of web workloads.
>> 
>> So you're saying it's impossible?  Is there a design document I could
>> start reading?
>> 
>> 
>> Regards,
>> -Phil
>> (Sorry if I don't reply... it's late)
> 
> Believe it or not, I made a mistake in the fast_pool_allocator which 
> allocates proxies.  I wasn't using unitary size which was clogging the 
> allocator.  I fixed it and now block_ptr<> is *faster* than shared_ptr<>:
> 
> make:
> auto_ptr:   25637845 ns
> shared_ptr: 26789342 ns
> block_ptr:  50487376 ns
> 
> new:
> auto_ptr:   23139888 ns
> shared_ptr: 48198668 ns
> block_ptr:  39151845 ns
> 
> So the performance comparison reduces to this.  Now it's just a matter of 
> finding out if block_ptr<> leaks in any way.
> 
> 
> Regards,
> -Phil
> 


Re: [webkit-dev] [Block Pointer] Deterministic Region Based Memory Manager

2016-03-04 Thread Filip Pizlo


> On Mar 4, 2016, at 9:33 PM, Phil Bouchard  wrote:
> 
>> On 03/05/2016 12:07 AM, Ryosuke Niwa wrote:
>> Hi Phil,
>> 
>> You made a similar post in December 2014:
>> https://lists.webkit.org/pipermail/webkit-dev/2014-December/027113.html
>> 
>> Are you suggesting you have done or ready to do the following?
> 
> I just completed the implementation of block_ptr<> but I am ready to run 
> benchmarks on my laptop once the code is integrated locally on my computer.
> 
> I work for a smart TV company which uses WebKit so I can ask assistance from 
> their part if I need to but I need to know the interests from the Open Source 
> community first.

If you're right then you've resolved CS problems dating back to the 50's. 
Extraordinary claims require extraordinary evidence. You haven't provided any 
evidence. 

> 
>>> Let’s be clear, though: we’re unlikely to accept a patch in which all of 
>>> our JS object references are replaced by uses of your block_ptr, unless 
>>> that patch is a significant speed-up on web benchmarks, there aren’t any 
>>> slow-downs, and you can prove that all of the JSC GC’s lifetime semantics 
>>> are preserved (including tricky things like the relationship between 
>>> Executable objects, Structure objects, and CodeBlocks).
>> 
>> Otherwise, I don't think we would be adopting your memory management
>> library anytime soon.   I don't think we're interested in
>> experimenting with your library on behalf of you either given
>> implementing a concurrent GC on top of our existing GC would be much
>> lower risk and will address some of the issues you have pointed out in
>> the thread.
> 
> It depends on the complexity of swapping the garbage collector with the 
> block_ptr<> in WebKit.  If that is easy I can do it myself on my laptop.  If 
> not then perhaps I can download the garbage collector and compare it with 
> block_ptr<> but objectively; including the collection cycle.

Replacing our GC with anything else is going to be incredibly difficult. 

We aren't going to be compelled by a comparison of our GC to something else if 
it isn't in the context of web workloads. 

> 
> I wasn't confident in 2014 because my code wasn't solid but now I am.  I just 
> need some minimal guidance.
> 
> 
> Regards,
> -Phil
> 


Re: [webkit-dev] Change WTFCrash to not trash the crash site register state.

2016-02-08 Thread Filip Pizlo
Makes sense to me.

-Filip


> On Feb 8, 2016, at 12:33 PM, Mark Lam <mark@apple.com> wrote:
> 
> A store to 0xbbadbeef will still require the use of a register (at least on 
> ARM).  The breakpoint instruction uses no registers (hence, we don’t have to 
> choose which register to sacrifice).  We can still identify the crash as an 
> assertion by looking fro the EXC_BREAKPOINT instead of the 0xbbadbeef address.
> 
> Mark
> 
> 
>> On Feb 8, 2016, at 12:14 PM, Filip Pizlo <fpi...@apple.com> wrote:
>> 
>> I like this idea.  I’ve wanted this for a while.
>> 
>> Can you explain why your approach doesn’t inline a store to 0xbbadbeef, so 
>> that this aspect of the current behavior is preserved?
>> 
>> -Filip
>> 
>> 
>>> On Feb 8, 2016, at 11:55 AM, Mark Lam <mark@apple.com> wrote:
>>> 
>>> Hi WebKit folks,
>>> 
>>> For non-debug OS(DARWIN) builds, I would like to change WTFCrash()’s 
>>> implementation into an inlined function that has a single inlined asm 
>>> statement that issues a breakpoint trap.  The intent is to crash directly 
>>> in the caller’s frame and preserve the register values at the time of the 
>>> crash.  As a result, for non-debug OS(DARWIN) builds, crashes due to failed 
>>> RELEASE_ASSERTs will now show up in crash reports as crashing due to 
>>> EXC_BREAKPOINT (SIGTRAP) instead of a EXC_BAD_ACCESS (SIGSEGV) on address 
>>> 0xbbadbeef.
>>> 
>>> This is in contrast to the current implementation where WTFCrash() is a 
>>> function that calls a lot of handler / callback functions before actually 
>>> crashing.  As a result, by the time it crashes, the caller’s register 
>>> values have most likely been trashed by all the work that WTFCrash and 
>>> the handlers / callbacks do.  The register values in the captured crash 
>>> report will, therefore, no longer be useful for crash site analysis. 
>>> 
>>> You can find the patch for this change at 
>>> https://bugs.webkit.org/show_bug.cgi?id=153996.  This change will only be 
>>> applied for non-debug OS(DARWIN) builds for now.  I’m leaving all other 
>>> build configurations with the existing WTFCrash() implementation and 
>>> behavior.
>>> 
>>> Does anyone have any opinion / feedback on this change?
>>> 
>>> Thanks.
>>> 
>>> Regards,
>>> Mark
>>> 


Re: [webkit-dev] Change WTFCrash to not trash the crash site register state.

2016-02-08 Thread Filip Pizlo
I like this idea.  I’ve wanted this for a while.

Can you explain why your approach doesn’t inline a store to 0xbbadbeef, so that 
this aspect of the current behavior is preserved?

-Filip


> On Feb 8, 2016, at 11:55 AM, Mark Lam  wrote:
> 
> Hi WebKit folks,
> 
> For non-debug OS(DARWIN) builds, I would like to change WTFCrash()’s 
> implementation into an inlined function that has a single inlined asm 
> statement that issues a breakpoint trap.  The intent is to crash directly in 
> the caller’s frame and preserve the register values at the time of the crash. 
>  As a result, for non-debug OS(DARWIN) builds, crashes due to failed 
> RELEASE_ASSERTs will now show up in crash reports as crashing due to 
> EXC_BREAKPOINT (SIGTRAP) instead of a EXC_BAD_ACCESS (SIGSEGV) on address 
> 0xbbadbeef.
> 
> This is in contrast to the current implementation where WTFCrash() is a 
> function that calls a lot of handler / callback functions before actually 
> crashing.  As a result, by the time it crashes, the caller’s register values 
> have most likely been trashed by all the work that WTFCrash and the 
> handlers / callbacks do.  The register values in the captured crash report 
> will, therefore, no longer be useful for crash site analysis. 
> 
> You can find the patch for this change at 
> https://bugs.webkit.org/show_bug.cgi?id=153996.  This change will only be 
> applied for non-debug OS(DARWIN) builds for now.  I’m leaving all other 
> build configurations with the existing WTFCrash() implementation and behavior.
> 
> Does anyone have any opinion / feedback on this change?
> 
> Thanks.
> 
> Regards,
> Mark
> 


Re: [webkit-dev] Some text about the B3 compiler

2016-02-02 Thread Filip Pizlo

> On Feb 2, 2016, at 4:56 PM, Carlos Alberto Lopez Perez  
> wrote:
> 
> On 02/02/16 19:58, Ryosuke Niwa wrote:
>> On Tue, Feb 2, 2016 at 10:42 AM, Carlos Alberto Lopez Perez
>>> But this script seems focused on comparing the performance between
>>> different browsers (safari vs chrome vs firefox) rather than in testing
>>> and comparing the performance between different revisions of WebKit.
>> 
>> Not at all.  It simply supports running benchmark in other browsers.
>> 
>>> Do you think it makes any difference (from the point of view of
>>> detecting failures, not from the performance PoV) to run this tests in a
>>> full-fledged browser like Safari rather than in WebKitTestRunner ?
>> 
>> Yes. There are many browser features that can significantly impact the
>> real world performance.
>> 
> 
> I'm specifically not asking about performance, but about correctness.
> 
> This discussion was started because Filip said that running JS tests on
> a browser catches many failures that are not caught when running the
> tests from the terminal.
> 
> So, I'm wondering if running the JS tests on WTR or Safari makes any
> difference when catching failures.

I suspect that the browser will catch more failures.

> 
>>> We already have a performance test bot running tests inside WTR.
>>> And I see that the current set of tests executed on this bot already
>>> includes Speedometer, and that JetStream and Sunspider are skipped on
>>> PerformanceTests/Skipped.
>>> 
>>> So I see some options going forward:
>>> 
>>> - Fix the JetStream and Sunspider tests so they can be run as part of
>>> the current run-perf-tests script that the performance bots execute.
>> 
>> We should use run-benchmark instead since run-benchmark spits out the
>> JSON file that's compatible with run-pref-tests.
>> 
> 
> I'm a bit lost here. Are you planning to deprecate run-perf-tests with
> run-benchmark? What is wrong with run-perf-tests?
> 
>>> - Implement support on the script run-benchmark to run the tests inside
>>> WTR, and create a new step running this script that will be executed on
>>> the test bots.
>> 
>> I don't see a point in doing this.   Why is it desirable to run these
>> benchmarks inside WebKitTestRunner?
>> 
> 
> Less dependencies: WTR (or the MiniBrowser) is something that is
> currently built by the bots on each build.
> If we want to use Epiphany (for example) for the performance tests, is
> another thing we have to take care of building before each run. Not a
> big deal, but I wonder if is really worth.
> 
>>> - Deploy a new bot that runs run-perf-tests on a full-fledged browser
>>> like Safari or Epiphany.
>> 
>> We should just do this.
>> 
>>> I wonder what you think is the best option or if there is some option
>>> not viable.
>>> 
>>> From my PoV, the option #1 has the advantage of reusing the current
>>> infrastructure that collects and draws performance data at
>>> https://perf.webkit.org
>> 
>> We have an internal instance of the same dashboard to which we're
>> reporting results of run-benchmark script.
>> 
> 
> What about making this public? We will happily contribute with a
> GTK+/Linux buildbot for it.
> 
> 
> Regards.
> 


Re: [webkit-dev] Some text about the B3 compiler

2016-01-30 Thread Filip Pizlo
Michael,

As my original message said, I was wondering if there was any support for 
running some JavaScript tests *in browser*.  run-jsc-stress-tests doesn’t 
support that because it doesn’t know what a browser is.

Some tests, like Kraken, Octane, JetStream, and Speedometer, either require a 
browser to run (like JetStream and Speedometer) or have significantly different 
behavior in the browser than in their command-line harnesses (like Kraken and 
Octane).  If you did have a bot that ran these tests in some GTK+ or EFL 
browser, you’d probably catch bugs that testing the JSC shell cannot catch.

-Filip


> On Jan 30, 2016, at 7:50 PM, Michael Catanzaro  wrote:
> 
> On Sat, 2016-01-30 at 16:06 -0800, Filip Pizło wrote:
>> Do we have Linux bots that run Octane, Speedometer, JetStream and
>> Kraken in browser?
>> 
>> We find that this catches a lot of bugs that none of the other tests
>> catch. 
>> 
>> -Filip
> 
> This is the command that the bots run:
> 
> /usr/bin/env ruby Tools/Scripts/run-jsc-stress-tests -j
> /home/slave/webkitgtk/gtk-linux-64-release-
> tests/build/WebKitBuild/Release/bin/jsc -o /home/slave/webkitgtk/gtk-
> linux-64-release-tests/build/WebKitBuild/Release/bin/jsc-stress-results 
> PerformanceTests/SunSpider/tests/sunspider-1.0
> PerformanceTests/JetStream/cdjs/cdjs-tests.yaml
> Source/JavaScriptCore/tests/executableAllocationFuzz.yaml
> Source/JavaScriptCore/tests/exceptionFuzz.yaml
> PerformanceTests/SunSpider/no-architecture-specific-optimizations.yaml
> PerformanceTests/SunSpider/tests/v8-v6
> Source/JavaScriptCore/tests/mozilla/mozilla-tests.yaml
> Source/JavaScriptCore/tests/stress LayoutTests/js/regress/script-tests
> PerformanceTests/SunSpider/profiler-test.yaml LayoutTests/jsc-layout-
> tests.yaml Source/JavaScriptCore/tests/typeProfiler.yaml
> Source/JavaScriptCore/tests/controlFlowProfiler.yaml
> Source/JavaScriptCore/tests/es6.yaml
> Source/JavaScriptCore/tests/modules.yaml --ftl-jit --
> 
> I see SunSpider and JetStream in there, but not the others
> 
> Michael



Re: [webkit-dev] Some text about the B3 compiler

2016-01-29 Thread Filip Pizlo
Follow up on this:

> On Jan 29, 2016, at 11:38 AM, Filip Pizlo <fpi...@apple.com> wrote:
> 
> I started converting the documentation to Markdown.  I don’t think this is a 
> good idea.
> 
> - Markdown has no definition lists.  The entire IR document is a definition 
> list.  I don’t want B3’s documentation to be blocked on this issue.

It turns out that it does have them, but they are very weak.  For example, you 
can’t have code blocks or paragraphs inside them.  We want to have code blocks 
inside opcode definitions, to show examples.
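
As a concrete illustration of the constraint (the opcode entry shown is invented for this example, not taken from the B3 documentation): in raw HTML, a definition-list entry can carry paragraphs and a nested code block, which stock Markdown definition-list syntax cannot express.

```html
<dl>
  <dt>Add</dt>
  <dd>
    <p>Addition of the two child values. Example:</p>
    <pre>Int32 @2 = Add(@0, @1)</pre>
  </dd>
</dl>
```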

> - Markdown’s conversion step makes the workflow awkward.  I’m not going to 
> use some Markdown editing app - that will prevent me from being able to 
> properly format code examples.  I need a code editor for that.

This was hard to get around.  This isn’t a problem with Markdown, but rather, a 
problem with using Wordpress to render Markdown that is in svn.  There is no 
way to preview the Markdown before committing it.  That would lead to unusual 
problems, where after a patch is landed, the patch author or someone else would 
have to do a bunch of blind follow-up commits to fix any style issues, like 
code blocks that don’t fit properly or whatever.

Considering that we will have to be hacking raw HTML inside those Markdown 
files (due to definition lists), the lack of preview basically means that you 
have no way of predicting what your HTML will render like.

> 
> I think that this documentation should be HTML.  I don’t think we should 
> expend a lot of energy to formatting it nicely.  The point of this document 
> is for it to be read by engineers while they hack code.

I landed raw HTML documentation: http://trac.webkit.org/changeset/195841 
<http://trac.webkit.org/changeset/195841>

I filed this bug about improving its style: 
https://bugs.webkit.org/show_bug.cgi?id=153674 
<https://bugs.webkit.org/show_bug.cgi?id=153674>

-Filip


> 
> -Filip
> 
> 
>> On Jan 29, 2016, at 10:12 AM, Timothy Hatcher <timo...@apple.com 
>> <mailto:timo...@apple.com>> wrote:
>> 
>> I also added:
>> 
>> https://webkit.org/documentation/b3/air/ 
>> <https://webkit.org/documentation/b3/air/> loads 
>> /docs/b3/assembly-intermediate-representation.md
>> 
>>> On Jan 29, 2016, at 10:05 AM, Filip Pizło <fpi...@apple.com 
>>> <mailto:fpi...@apple.com>> wrote:
>>> 
>>> Thank you!  I'll convert them today. 
>>> 
>>> -Filip
>>> 
>>> On Jan 29, 2016, at 10:02 AM, Timothy Hatcher <timo...@apple.com 
>>> <mailto:timo...@apple.com>> wrote:
>>> 
>>>> Markdown is pretty similar to the wiki formatting and very simple.
>>>> 
>>>> You can look at a cheatsheet if you login to the blog: 
>>>> https://webkit.org/wp/wp-admin/post.php?post=4300=edit 
>>>> <https://webkit.org/wp/wp-admin/post.php?post=4300=edit>
>>>> 
>>>> I have also used this HTML to Markdown converter before: 
>>>> http://domchristie.github.io/to-markdown/ 
>>>> <http://domchristie.github.io/to-markdown/>
>>>> 
>>>> The pages are created:
>>>> 
>>>> https://webkit.org/documentation/b3/ 
>>>> <https://webkit.org/documentation/b3/> loads /docs/b3/bare-bones-backend.md
>>>> https://webkit.org/documentation/b3/intermediate-representation/ 
>>>> <https://webkit.org/documentation/b3/intermediate-representation/> loads 
>>>> /docs/b3/intermediate-representation.md
>>>> 
>>>> Once those files are added to SVN, they will get picked up by the site. I 
>>>> can change those to point to other names if you want something different.
>>>> 
>>>> — Timothy Hatcher
>>>> 
>>>>> On Jan 29, 2016, at 9:34 AM, saam barati <saambara...@gmail.com 
>>>>> <mailto:saambara...@gmail.com>> wrote:
>>>>> 
>>>>> I'm happy to convert the document to markdown. Can you send me your 
>>>>> latest revision or post it to the website?
>>>>> 
>>>>> I usually look at:
>>>>> http://daringfireball.net/projects/markdown/syntax 
>>>>> <http://daringfireball.net/projects/markdown/syntax>
>>>>> For a refresher on the syntax.
>>>>> 
>>>>> Tim, could you create a page that loads the markdown file?
>>>>> 
>>>>> Thanks,
>>>>> Saam
>>>>> 
>>>>> On Jan 29, 2016, at 12:06 AM, Filip Pizło <fpi...@apple.com 
>>>>> <mailto:fpi...@apple.c

Re: [webkit-dev] Some text about the B3 compiler

2016-01-29 Thread Filip Pizlo
I started converting the documentation to Markdown.  I don’t think this is a 
good idea.

- Markdown has no definition lists.  The entire IR document is a definition 
list.  I don’t want B3’s documentation to be blocked on this issue.
- Markdown’s conversion step makes the workflow awkward.  I’m not going to use 
some Markdown editing app - that will prevent me from being able to properly 
format code examples.  I need a code editor for that.

I think that this documentation should be HTML.  I don’t think we should expend 
a lot of energy to formatting it nicely.  The point of this document is for it 
to be read by engineers while they hack code.

-Filip


> On Jan 29, 2016, at 10:12 AM, Timothy Hatcher <timo...@apple.com> wrote:
> 
> I also added:
> 
> https://webkit.org/documentation/b3/air/ 
> <https://webkit.org/documentation/b3/air/> loads 
> /docs/b3/assembly-intermediate-representation.md
> 
>> On Jan 29, 2016, at 10:05 AM, Filip Pizło <fpi...@apple.com 
>> <mailto:fpi...@apple.com>> wrote:
>> 
>> Thank you!  I'll convert them today. 
>> 
>> -Filip
>> 
>> On Jan 29, 2016, at 10:02 AM, Timothy Hatcher <timo...@apple.com 
>> <mailto:timo...@apple.com>> wrote:
>> 
>>> Markdown is pretty similar to the wiki formatting and very simple.
>>> 
>>> You can look at a cheatsheet if you login to the blog: 
>>> https://webkit.org/wp/wp-admin/post.php?post=4300=edit 
>>> <https://webkit.org/wp/wp-admin/post.php?post=4300=edit>
>>> 
>>> I have also used this HTML to Markdown converter before: 
>>> http://domchristie.github.io/to-markdown/ 
>>> <http://domchristie.github.io/to-markdown/>
>>> 
>>> The pages are created:
>>> 
>>> https://webkit.org/documentation/b3/ <https://webkit.org/documentation/b3/> 
>>> loads /docs/b3/bare-bones-backend.md
>>> https://webkit.org/documentation/b3/intermediate-representation/ 
>>> <https://webkit.org/documentation/b3/intermediate-representation/> loads 
>>> /docs/b3/intermediate-representation.md
>>> 
>>> Once those files are added to SVN, they will get picked up by the site. I 
>>> can change those to point to other names if you want something different.
>>> 
>>> — Timothy Hatcher
>>> 
>>>> On Jan 29, 2016, at 9:34 AM, saam barati <saambara...@gmail.com 
>>>> <mailto:saambara...@gmail.com>> wrote:
>>>> 
>>>> I'm happy to convert the document to markdown. Can you send me your latest 
>>>> revision or post it to the website?
>>>> 
>>>> I usually look at:
>>>> http://daringfireball.net/projects/markdown/syntax 
>>>> <http://daringfireball.net/projects/markdown/syntax>
>>>> For a refresher on the syntax.
>>>> 
>>>> Tim, could you create a page that loads the markdown file?
>>>> 
>>>> Thanks,
>>>> Saam
>>>> 
>>>> On Jan 29, 2016, at 12:06 AM, Filip Pizło <fpi...@apple.com 
>>>> <mailto:fpi...@apple.com>> wrote:
>>>> 
>>>>> I'm all for this but I don't know anything about markdown. 
>>>>> 
>>>>> What's the best way to proceed?
>>>>> 
>>>>> -Filip
>>>>> 
>>>>> On Jan 28, 2016, at 9:24 PM, Timothy Hatcher <timo...@apple.com 
>>>>> <mailto:timo...@apple.com>> wrote:
>>>>> 
>>>>>> They should be markdown files like we do for the code style and policy 
>>>>>> documents.
>>>>>> 
>>>>>> https://trac.webkit.org/browser/trunk/Websites/webkit.org/code-style.md 
>>>>>> <https://trac.webkit.org/browser/trunk/Websites/webkit.org/code-style.md>
>>>>>> 
>>>>>> We can then make Wordpress pages on the site that load the markdown.
>>>>>> 
>>>>>> Maybe put them in a /docs/b3/ directory?
>>>>>> 
>>>>>> — Timothy Hatcher
>>>>>> 
>>>>>> On Jan 28, 2016, at 4:48 PM, Filip Pizlo <fpi...@apple.com 
>>>>>> <mailto:fpi...@apple.com>> wrote:
>>>>>> 
>>>>>>> I guess we could put it in Websites/webkit.org/b3 
>>>>>>> <http://webkit.org/b3>.  Then patches could edit both B3 and the 
>>>>>>> documentation in one go, and the documentation would go live when it’s 
>>>>>>> committed.
>>>>>&g

Re: [webkit-dev] Some text about the B3 compiler

2016-01-28 Thread Filip Pizlo
I guess we could put it in Websites/webkit.org/b3 <http://webkit.org/b3>.  Then 
patches could edit both B3 and the documentation in one go, and the 
documentation would go live when it’s committed.

Does anyone object to this?

-Filip


> On Jan 28, 2016, at 4:39 PM, Saam barati <sbar...@apple.com> wrote:
> 
> Yeah. That’d be the easiest way to keep it up IMO.
> 
> Saam
> 
>> On Jan 28, 2016, at 4:37 PM, Filip Pizło <fpi...@apple.com 
>> <mailto:fpi...@apple.com>> wrote:
>> 
>> +1
>> 
>> Do you think we should move the documentation to a file in svn so that it 
>> can be reviewed as part of patch review?
>> 
>> -Filip
>> 
>> On Jan 28, 2016, at 4:32 PM, Saam barati <sbar...@apple.com 
>> <mailto:sbar...@apple.com>> wrote:
>> 
>>> This is great. Thanks Fil.
>>> 
>>> I propose that we do all that we can to keep this updated.
>>> I suggest that all patches that change to the IR should also include with 
>>> it 
>>> a change to the documentation, and that reviewers should require this.
>>> 
>>> It’d also be great if other significant changes that seem like they deserve
>>> a mention in the documentation also get added as part of patches.
>>> 
>>> Saam
>>> 
>>>> On Jan 28, 2016, at 4:23 PM, Filip Pizlo <fpi...@apple.com 
>>>> <mailto:fpi...@apple.com>> wrote:
>>>> 
>>>> Hi everyone,
>>>> 
>>>> We’ve been working on a new compiler backend for the FTL JIT, which we 
>>>> call B3.  It stands for “Bare Bones Backend”.  We recently enabled it on 
>>>> X86/Mac, and we’re working hard to enable it on other platforms.
>>>> 
>>>> If you’re interested in how it works, I’ve started writing documentation.  
>>>> I’ll be adding more to it soon!
>>>> https://trac.webkit.org/wiki/BareBonesBackend 
>>>> <https://trac.webkit.org/wiki/BareBonesBackend>
>>>> https://trac.webkit.org/wiki/B3IntermediateRepresentation 
>>>> <https://trac.webkit.org/wiki/B3IntermediateRepresentation>
>>>> 
>>>> -Filip
>>>> 
>>>> ___
>>>> webkit-dev mailing list
>>>> webkit-dev@lists.webkit.org <mailto:webkit-dev@lists.webkit.org>
>>>> https://lists.webkit.org/mailman/listinfo/webkit-dev 
>>>> <https://lists.webkit.org/mailman/listinfo/webkit-dev>
>>> 
> 

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Xabier Rodriguez-Calvar and Michael Catanzaro are now WebKit reviewers

2015-12-27 Thread Filip Pizlo
Congrats guys!

-Filip


> On Dec 27, 2015, at 6:11 PM, Gyuyoung Kim  wrote:
> 
> Congrats Xabier and Michael !
> 
> Gyuyoung.
> 
> On Wed, Dec 23, 2015 at 2:58 AM, Mark Lam  > wrote:
> Hi everyone,
> 
> With pleasure, I would like to announce that Xabier Rodriguez-Calvar and 
> Michael Catanzaro are now WebKit reviewers.
> 
> Congratulations to Xabier and Michael.
> 
> Mark
> 
> 
> ___
> webkit-dev mailing list
> webkit-dev@lists.webkit.org 
> https://lists.webkit.org/mailman/listinfo/webkit-dev 
> 
> 
> 
> ___
> webkit-dev mailing list
> webkit-dev@lists.webkit.org
> https://lists.webkit.org/mailman/listinfo/webkit-dev

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Preferred style for checking for undefined in our built-in JavaScript code?

2015-11-30 Thread Filip Pizlo
I’ve also been guilty of:

if (xxx === void 0)

This is slightly better than saying “undefined”, since that’s not actually a 
reserved word.

I believe that all of these should perform the same.  We should pick one based 
on what looks nicest and what has the most clear semantics.

-Filip



> On Nov 30, 2015, at 11:37 AM, Darin Adler  wrote:
> 
> I see the following in some code:
> 
>if (xxx === undefined)
> 
> And I see the following in some other code:
> 
>if (typeof xxx == "undefined")
> 
>or
> 
>if (typeof xxx === "undefined")
> 
> Is one preferred over the other, style-wise? Is one more efficient than the 
> other?
> 
> — Darin
> ___
> webkit-dev mailing list
> webkit-dev@lists.webkit.org
> https://lists.webkit.org/mailman/listinfo/webkit-dev

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev
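
[Editor's note: the three styles discussed in the thread above behave identically for any variable that is in scope; the `typeof` form additionally tolerates identifiers that were never declared. A minimal sketch, with made-up names for illustration only:]

```javascript
// Three equivalent ways to test an in-scope variable for undefined
// (function and variable names here are hypothetical).
function isUndefinedStrict(x) { return x === undefined; }
function isUndefinedTypeof(x) { return typeof x === "undefined"; }
function isUndefinedVoid(x) { return x === void 0; }

let a; // declared but unassigned, so its value is undefined
console.log(isUndefinedStrict(a), isUndefinedTypeof(a), isUndefinedVoid(a)); // true true true

// The practical difference: `typeof` also works on identifiers that were
// never declared, whereas a bare `=== undefined` comparison would throw a
// ReferenceError in that case.
console.log(typeof neverDeclared === "undefined"); // true
```

[Because `undefined` is not a reserved word and can be shadowed in sloppy-mode code, `void 0` is the variant that cannot be fooled, which matches the point made in the message above.]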


Re: [webkit-dev] Thought about Nix JavaScriptCore port

2015-11-10 Thread Filip Pizlo
I like this idea. It makes sense to simplify building JSC as a standalone on 
Linux. 

-Filip

> On Nov 10, 2015, at 11:04 PM, Yusuke SUZUKI  wrote:
> 
> Hello WebKittens,
> 
> JavaScriptCore use in non-OSX environments seems to be on the rise[1].
> In addition to that, sometimes, people would like to build JavaScriptCore to 
> see what is happening in JavaScriptCore development[2].
> However, if you don't have an OSX machine, it is a little bit difficult.
> One possible solution is GTK+ port, it is nice way to build JSC in Linux.
> But it involves many dependencies (Mesa, glib, etc.) that are not necessary 
> for JavaScriptCore, and this is a barrier to joining JSC development from the 
> non-OSX world.
> 
> While building whole WebKit requires many dependencies, JavaScriptCore does 
> not.
> In Nix environment, JavaScriptCore only requires
> 
> 1. ICU (for WTF unicode libraries and i18n in JSC)
> 2. LLVM (for FTL!)
> 
> That's all. Maintaining the brand new port is tough work.
> But maintaining a Nix port only for JSC is much easier, since WTF and JSC 
> are well written for Nix.
> In fact, I can build JSC in Nix environment with very small effort, 
> essentially just adding CMakeLists.txt. Here is my initial attempt[2].
> 
> So, the proposal is, how about adding new port NixJSC?
> I think it encourages OSS development for JSC.
> Unfortunately I cannot attend Fall WebKit MTG, but this could become a nice 
> topic for the MTG :)
> 
> Here is the plan about the NixJSC port.
> 
> 1. Add a new port, NixJSC. It provides a JavaScriptCore build for the Nix 
> environment. It does not provide WebCore, WebKit, WebKit2, etc. It will just 
> build WTF, bmalloc, and JSC.
> 2. This port should be suitable for development. So I think aggressively 
> enabling features is nice for this port; like, enabling FTL!
> 3. I think this would become a nice playground for OSX folks to enable 
> features in the Nix environment. GTK+ and EFL are somewhat production ports, 
> but NixJSC is intended to be used for development.
> 
> If it is introduced, at least, I can support this port.
> 
> [1]: 
> https://github.com/facebook/react-native/blob/master/ReactAndroid/src/main/jni/third-party/jsc/Android.mk
> [2]: https://twitter.com/BrendanEich/status/652108576836743168
> [3]: 
> https://github.com/Constellation/webkit/commit/a2908e97939ea2881e15ba2d820c377f9bd09297
> ___
> webkit-dev mailing list
> webkit-dev@lists.webkit.org
> https://lists.webkit.org/mailman/listinfo/webkit-dev
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev

