Re: PSA: Major preference service architecture changes inbound

2018-07-19 Thread Nicholas Nethercote
On Fri, Jul 20, 2018 at 7:37 AM, Daniel Veditz wrote:

> ​Prefs might be a terrible way to implement that functionality, but it's
> been used that way as long as we've had prefs in Mozilla so there seems to
> be a need for it. Early offenders: printer setup, mail accounts, external
> protocol handlers (and I believe content-type handlers, but those moved out
> long ago). Possibly inspired by the way the Windows registry was used.
>

The Windows registry is a very good comparison, alas.

Nick


Re: PSA: Major preference service architecture changes inbound

2018-07-19 Thread Jeff Gilbert
Using a classic read/write exclusive lock, we would only ever contend
on read+write or write+write, which are /rare/.
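
As a sketch of the scheme being proposed (not the pref service's actual
structure), using C++17's std::shared_mutex, with hypothetical names:

  #include <shared_mutex>
  #include <string>
  #include <unordered_map>

  // Hypothetical pref table guarded by a read/write lock. Read+read never
  // contends; only read+write and write+write do, and writes are rare.
  class PrefTable {
    mutable std::shared_mutex mLock;
    std::unordered_map<std::string, int> mPrefs;

   public:
    int Get(const std::string& aName, int aDefault) const {
      std::shared_lock lock(mLock);   // shared: many readers at once
      auto it = mPrefs.find(aName);
      return it != mPrefs.end() ? it->second : aDefault;
    }

    void Set(const std::string& aName, int aValue) {
      std::unique_lock lock(mLock);   // exclusive: blocks all readers
      mPrefs[aName] = aValue;
    }
  };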

It's really, really nice when we can have dead-simple threadsafe APIs,
instead of requiring people to jump through hoops or roll their own
(fragile) dispatch code. IIRC, most new APIs added to the web are
supposed to be available in Workers, so the need for reading prefs
off-main-thread is only set to grow.

I don't see how this can mutate into a foot-gun in ways that aren't
already the case today without off-main-thread access.

Anyway, I appreciate the work that's been done and is ongoing here. As
you burn down the pref accesses in start-up, please consider
unblocking this feature request. (Personally, I'd just eat the 400us in
exchange for this simplifying architectural win.)

On Thu, Jul 19, 2018 at 2:19 PM, Kris Maglione wrote:
> On Tue, Jul 17, 2018 at 03:49:41PM -0700, Jeff Gilbert wrote:
>>
>> We should totally be able to afford the very low cost of a
>> rarely-contended lock. What's going on that causes uncached pref reads
>> to show up so hot in profiles? Do we have a list of problematic pref
>> keys?
>
>
> So, at the moment, we read about 10,000 preferences at startup in debug
> builds. That number is probably slightly lower in non-debug builds, but we
> don't collect stats there. We're working on reducing that number (which is
> why we collect statistics in the first place), but for now, it's still quite
> high.
>
>
> As for the cost of locks... On my machine, in a tight loop, the cost of
> entering and exiting MutexAutoLock is about 37ns. This is pretty close to
> ideal circumstances, on a single core of a very fast CPU, with very fast
> RAM, everything cached, and no contention. If we could extrapolate that to
> normal usage, it would be about a third of a ms of additional overhead for
> startup. I've fought hard enough for 1ms startup time improvements, but
> *shrug*, if it were that simple, it might be acceptable.
>
> But I have no reason to think the lock would be rarely contended. We read
> preferences *a lot*, and if we allowed access from background threads, I
> have no doubt that we would start reading them a lot from background threads
> in addition to reading them a lot from the main thread.
>
> And that would mean, in addition to lock contention, cache contention and
> potentially even NUMA issues. Those last two apply to atomic var caches too,
> but at least they generally apply only to the specific var caches being
> accessed off-thread, rather than pref look-ups in general.
>
>
> Maybe we could get away with it at first, as long as off-thread usage
> remains low. But long term, I think it would be a performance foot-gun. And,
> paradoxically, the less foot-gunny it is, the less useful it probably is,
> too. If we're only using it off-thread in a few places, and don't have to
> worry about contention, why are we bothering with locking and off-thread
> access in the first place?
>
>
>> On Tue, Jul 17, 2018 at 8:57 AM, Kris Maglione 
>> wrote:
>>>
>>> On Tue, Jul 17, 2018 at 02:06:48PM +0100, Jonathan Kew wrote:


 On 13/07/2018 21:37, Kris Maglione wrote:
>
>
> tl;dr: A major change to the architecture of the preference service has just
> landed, so please be on the lookout for regressions.
>
> We've been working for the last few weeks on rearchitecting the
> preference service to work better in our current and future
> multi-process
> configurations, and those changes have just landed in bug 1471025.



 Looks like a great step forward!

 While we're thinking about the prefs service, is there any possibility
 we
 could enable off-main-thread access to preferences?
>>>
>>>
>>>
>>> I think the chances of that are pretty close to 0, but I'll defer to
>>> Nick.
>>>
>>> We definitely can't afford the locking overhead—preference look-ups
>>> already
>>> show up in profiles without it. And even the current limited exception
>>> that
>>> we grant Stylo while it has the main thread blocked causes problems (bug
>>> 1474789), since it makes it impossible to update statistics for those
>>> reads,
>>> or switch to Robin Hood hashing (which would make our hash tables much
>>> smaller and more efficient, but requires read operations to be able to
>>> move
>>> entries).
>>>
 I am aware that in simple cases, this can be achieved via the
 StaticPrefsList; by defining a VARCACHE_PREF there, I can read its value
 from other threads. But this doesn't help in my use case, where I need
 another thread to be able to query an extensible set of pref names that
 are
 not fully known at compile time.

 Currently, it looks like to do this, I'll have to iterate over the
 relevant prefs branch(es) ahead of time (on the main thread) and copy
 all
 the entries to some other place that is then available to my worker
 threads.
 For my use case, at least, the other threads only need read access;
 modifying prefs could still be limited to the main thread.

PSA: Default thread stack size decreased to 256K

2018-07-19 Thread Kris Maglione
tl;dr: Bug 1476828 significantly decreased the default thread stack size. If 
you notice any thread abort issues, please file bugs blocking that bug.


For some time, our default stack size for thread pools has been 256K on most 
platforms, but the stack size for other threads has remained set to the 
platform default. On Windows and Android, that's about 1MB. On desktop Linux, 
it defaults to 2MB, unless overridden by a ulimit.


One wouldn't normally expect this to cause problems, since thread stacks are 
generally committed lazily as they grow. On Linux, however, the 2MB default 
causes a specific problem: it matches the size of VM huge pages, which causes 
the kernel to sometimes allocate an entire 2MB region for them, in a single 
huge page, the first time they're touched.


Decreasing the number to anything lower than 2MB solves this problem, but 256K 
is much closer to what we actually expect them to reasonably use, and matches 
the defaults we generally use elsewhere, so that's the number we chose. It's 
possible that certain specific threads may need more, however, so if you 
notice any thread crashes, please report them.
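
For the rare thread that genuinely needs a deeper stack, the size can be
requested explicitly at creation time. A minimal POSIX sketch (NSPR's
PR_CreateThread likewise takes an explicit stack-size argument):

  #include <pthread.h>
  #include <cstdio>

  static void* DeepStackWorker(void*) {
    // ... work that is known to recurse deeply or use large locals ...
    return nullptr;
  }

  int main() {
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    // Opt this one thread back into a 1MB stack instead of the 256K default.
    pthread_attr_setstacksize(&attr, 1024 * 1024);

    pthread_t thread;
    if (pthread_create(&thread, &attr, DeepStackWorker, nullptr) != 0) {
      perror("pthread_create");
      return 1;
    }
    pthread_attr_destroy(&attr);
    pthread_join(thread, nullptr);
    return 0;
  }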


Thanks.

-Kris


Re: PSA: Major preference service architecture changes inbound

2018-07-19 Thread Kris Maglione

On Thu, Jul 19, 2018 at 02:37:07PM -0700, Justin Dolske wrote:

I know we've had code that, instead of reading a pref directly, checks the
pref once in an init() and uses pref observers to watch for any changes to
it. (i.e., basically mirrors the pref into some module-local variable, at
which point you can roll your own locking or whatever to make it
threadsafe). Is that a pattern that would work here, if people really want
OMT access but we're not ready to bake support for that into the pref
service? [Perhaps with some simple helper glue / boilerplate to make it
easier.]


We already have helper glue for this. For C++, we have VarCache prefs, and for 
JS, we have XPCOMUtils.defineLazyPreferenceGetter. In general, it's probably 
better to use those rather than hand-rolled observers when possible, since I 
have optimizations planned for both.
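
For reference, the hand-rolled version Justin describes boils down to an
atomic mirror refreshed from a main-thread pref observer. A minimal sketch
with hypothetical names (ReadBoolPref stands in for the real
main-thread-only getter; the VarCache machinery implements essentially
this for you):

  #include <atomic>
  #include <string>

  // Hypothetical stand-in for the real (main-thread-only) prefs getter.
  bool ReadBoolPref(const std::string& aName, bool aDefault);

  // Module-local mirror of one pref. Readable lock-free from any thread.
  static std::atomic<bool> sMyFeatureEnabled{false};

  // Called once at init and again from the pref observer on every change,
  // always on the main thread.
  void RefreshMyFeaturePref() {
    sMyFeatureEnabled.store(ReadBoolPref("my.feature.enabled", false),
                            std::memory_order_relaxed);
  }

  // Safe to call from any thread.
  bool IsMyFeatureEnabled() {
    return sMyFeatureEnabled.load(std::memory_order_relaxed);
  }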



On Thu, Jul 19, 2018 at 2:19 PM, Kris Maglione 
wrote:


On Tue, Jul 17, 2018 at 03:49:41PM -0700, Jeff Gilbert wrote:


We should totally be able to afford the very low cost of a
rarely-contended lock. What's going on that causes uncached pref reads
to show up so hot in profiles? Do we have a list of problematic pref
keys?



So, at the moment, we read about 10,000 preferences at startup in debug
builds. That number is probably slightly lower in non-debug builds, but we
don't collect stats there. We're working on reducing that number (which is
why we collect statistics in the first place), but for now, it's still
quite high.


As for the cost of locks... On my machine, in a tight loop, the cost of
entering and exiting MutexAutoLock is about 37ns. This is pretty close to
ideal circumstances, on a single core of a very fast CPU, with very fast
RAM, everything cached, and no contention. If we could extrapolate that to
normal usage, it would be about a third of a ms of additional overhead for
startup. I've fought hard enough for 1ms startup time improvements, but
*shrug*, if it were that simple, it might be acceptable.

But I have no reason to think the lock would be rarely contended. We read
preferences *a lot*, and if we allowed access from background threads, I
have no doubt that we would start reading them a lot from background
threads in addition to reading them a lot from the main thread.

And that would mean, in addition to lock contention, cache contention and
potentially even NUMA issues. Those last two apply to atomic var caches
too, but at least they generally apply only to the specific var caches
being accessed off-thread, rather than pref look-ups in general.


Maybe we could get away with it at first, as long as off-thread usage
remains low. But long term, I think it would be a performance foot-gun.
And, paradoxically, the less foot-gunny it is, the less useful it probably
is, too. If we're only using it off-thread in a few places, and don't have
to worry about contention, why are we bothering with locking and off-thread
access in the first place?


On Tue, Jul 17, 2018 at 8:57 AM, Kris Maglione 

wrote:


On Tue, Jul 17, 2018 at 02:06:48PM +0100, Jonathan Kew wrote:



On 13/07/2018 21:37, Kris Maglione wrote:



tl;dr: A major change to the architecture of the preference service has just
landed, so please be on the lookout for regressions.

We've been working for the last few weeks on rearchitecting the
preference service to work better in our current and future
multi-process
configurations, and those changes have just landed in bug 1471025.




Looks like a great step forward!

While we're thinking about the prefs service, is there any possibility
we
could enable off-main-thread access to preferences?




I think the chances of that are pretty close to 0, but I'll defer to
Nick.

We definitely can't afford the locking overhead—preference look-ups
already
show up in profiles without it. And even the current limited exception
that
we grant Stylo while it has the main thread blocked causes problems (bug
1474789), since it makes it impossible to update statistics for those
reads,
or switch to Robin Hood hashing (which would make our hash tables much
smaller and more efficient, but requires read operations to be able to
move
entries).

I am aware that in simple cases, this can be achieved via the

StaticPrefsList; by defining a VARCACHE_PREF there, I can read its value
from other threads. But this doesn't help in my use case, where I need
another thread to be able to query an extensible set of pref names that
are
not fully known at compile time.

Currently, it looks like to do this, I'll have to iterate over the
relevant prefs branch(es) ahead of time (on the main thread) and copy
all
the entries to some other place that is then available to my worker
threads.
For my use case, at least, the other threads only need read access;
modifying prefs could still be limited to the main thread.




That's probably your best option, yeah. Although I will say that those
kinds
of extensible preference sets aren't great for performance or memory
usage,
so switching to some other model might be better.

Re: PSA: Major preference service architecture changes inbound

2018-07-19 Thread Daniel Veditz
On Tue, Jul 17, 2018 at 9:23 PM, Nicholas Nethercote wrote:

> This is a good example of how prefs is a far more general mechanism than I
> would like, leading to all manner of use and abuse. "All I want is a
> key-value store, with fast multi-threaded access, where the keys aren't
> known ahead of time."
>

​Prefs might be a terrible way to implement that functionality, but it's
been used that way as long as we've had prefs in Mozilla so there seems to
be a need for it. Early offenders: printer setup, mail accounts, external
protocol handlers (and I believe content-type handlers, but those moved out
long ago). Possibly inspired by the way the Windows registry was used.

-Dan Veditz


Re: PSA: Major preference service architecture changes inbound

2018-07-19 Thread Justin Dolske
I know we've had code that, instead of reading a pref directly, checks the
pref once in an init() and uses pref observers to watch for any changes to
it. (i.e., basically mirrors the pref into some module-local variable, at
which point you can roll your own locking or whatever to make it
threadsafe). Is that a pattern that would work here, if people really want
OMT access but we're not ready to bake support for that into the pref
service? [Perhaps with some simple helper glue / boilerplate to make it
easier.]

Justin

On Thu, Jul 19, 2018 at 2:19 PM, Kris Maglione 
wrote:

> On Tue, Jul 17, 2018 at 03:49:41PM -0700, Jeff Gilbert wrote:
>
>> We should totally be able to afford the very low cost of a
>> rarely-contended lock. What's going on that causes uncached pref reads
>> to show up so hot in profiles? Do we have a list of problematic pref
>> keys?
>>
>
> So, at the moment, we read about 10,000 preferences at startup in debug
> builds. That number is probably slightly lower in non-debug builds, but we
> don't collect stats there. We're working on reducing that number (which is
> why we collect statistics in the first place), but for now, it's still
> quite high.
>
>
> As for the cost of locks... On my machine, in a tight loop, the cost of
> entering and exiting MutexAutoLock is about 37ns. This is pretty close to
> ideal circumstances, on a single core of a very fast CPU, with very fast
> RAM, everything cached, and no contention. If we could extrapolate that to
> normal usage, it would be about a third of a ms of additional overhead for
> startup. I've fought hard enough for 1ms startup time improvements, but
> *shrug*, if it were that simple, it might be acceptable.
>
> But I have no reason to think the lock would be rarely contended. We read
> preferences *a lot*, and if we allowed access from background threads, I
> have no doubt that we would start reading them a lot from background
> threads in addition to reading them a lot from the main thread.
>
> And that would mean, in addition to lock contention, cache contention and
> potentially even NUMA issues. Those last two apply to atomic var caches
> too, but at least they generally apply only to the specific var caches
> being accessed off-thread, rather than pref look-ups in general.
>
>
> Maybe we could get away with it at first, as long as off-thread usage
> remains low. But long term, I think it would be a performance foot-gun.
> And, paradoxically, the less foot-gunny it is, the less useful it probably
> is, too. If we're only using it off-thread in a few places, and don't have
> to worry about contention, why are we bothering with locking and off-thread
> access in the first place?
>
>
> On Tue, Jul 17, 2018 at 8:57 AM, Kris Maglione 
>> wrote:
>>
>>> On Tue, Jul 17, 2018 at 02:06:48PM +0100, Jonathan Kew wrote:
>>>

 On 13/07/2018 21:37, Kris Maglione wrote:

>
> tl;dr: A major change to the architecture of the preference service has just
> landed, so please be on the lookout for regressions.
>
> We've been working for the last few weeks on rearchitecting the
> preference service to work better in our current and future
> multi-process
> configurations, and those changes have just landed in bug 1471025.
>


 Looks like a great step forward!

 While we're thinking about the prefs service, is there any possibility
 we
 could enable off-main-thread access to preferences?

>>>
>>>
>>> I think the chances of that are pretty close to 0, but I'll defer to
>>> Nick.
>>>
>>> We definitely can't afford the locking overhead—preference look-ups
>>> already
>>> show up in profiles without it. And even the current limited exception
>>> that
>>> we grant Stylo while it has the main thread blocked causes problems (bug
>>> 1474789), since it makes it impossible to update statistics for those
>>> reads,
>>> or switch to Robin Hood hashing (which would make our hash tables much
>>> smaller and more efficient, but requires read operations to be able to
>>> move
>>> entries).
>>>
>>> I am aware that in simple cases, this can be achieved via the
 StaticPrefsList; by defining a VARCACHE_PREF there, I can read its value
 from other threads. But this doesn't help in my use case, where I need
 another thread to be able to query an extensible set of pref names that
 are
 not fully known at compile time.

 Currently, it looks like to do this, I'll have to iterate over the
 relevant prefs branch(es) ahead of time (on the main thread) and copy
 all
 the entries to some other place that is then available to my worker
 threads.
 For my use case, at least, the other threads only need read access;
 modifying prefs could still be limited to the main thread.

>>>
>>>
>>> That's probably your best option, yeah. Although I will say that those
>>> kinds
>>> of extensible preference sets aren't great for performance or memory
>>> usage,
>>> so switching to some other model might be better.

Re: C++ standards proposal for an embedding library

2018-07-19 Thread Mike Hommey
On Wed, Jul 18, 2018 at 12:45:30PM -0400, Botond Ballo wrote:
> Hi everyone,
> 
> With the proposal for a standard 2D graphics library now on ice [1],
> members of the C++ standards committee have been investigating
> alternative ways of giving C++ programmers a standard way to write
> graphical and interactive applications, in a way that leverages
> existing standards and imposes a lower workload on the committee.
> 
> A recent proposal along these lines is for a standard embedding
> facility called "web_view", inspired by existing embedding APIs like
> Android's WebView:
> 
> http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p1108r0.html
> 
> As we have some experience in the embedding space here at Mozilla, I
> was wondering if anyone had feedback on this embedding library
> proposal. This is an early-stage proposal, so high-level feedback on
> the design and overall approach is likely to be welcome.

Other than everything that has already been said in this thread,
something bugs me with this proposal: a web view is a very UI thing.
And I don't think there's any proposal to add more basic UI elements
to the standard library. So even if a web view is a desirable thing in
the long term (and I'm not saying it is!), there are way more things
that should come first.

Come to think of it, don't many UI libraries come with a web view
anyways? (or is Qt the only one that does?)

Mike


Re: PSA: Major preference service architecture changes inbound

2018-07-19 Thread Kris Maglione

On Tue, Jul 17, 2018 at 03:49:41PM -0700, Jeff Gilbert wrote:

We should totally be able to afford the very low cost of a
rarely-contended lock. What's going on that causes uncached pref reads
to show up so hot in profiles? Do we have a list of problematic pref
keys?


So, at the moment, we read about 10,000 preferences at startup in debug 
builds. That number is probably slightly lower in non-debug builds, but we 
don't collect stats there. We're working on reducing that number (which is why 
we collect statistics in the first place), but for now, it's still quite high.



As for the cost of locks... On my machine, in a tight loop, the cost of 
entering and exiting MutexAutoLock is about 37ns. This is pretty close to 
ideal circumstances, on a single core of a very fast CPU, with very fast RAM, 
everything cached, and no contention. If we could extrapolate that to normal 
usage, it would be about a third of a ms of additional overhead for startup. 
I've fought hard enough for 1ms startup time improvements, but *shrug*, if it 
were that simple, it might be acceptable.
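
(As a sanity check on those numbers: 37ns per lock round-trip times the
~10,000 startup reads above is roughly 370us, the "third of a ms" quoted.
A sketch of that kind of tight-loop measurement, using std::mutex and
std::lock_guard as stand-ins for mozilla::Mutex and MutexAutoLock:

  #include <chrono>
  #include <cstdio>
  #include <mutex>

  int main() {
    std::mutex lock;
    constexpr int kIters = 1000000;

    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < kIters; i++) {
      std::lock_guard<std::mutex> guard(lock);  // uncontended enter + exit
    }
    auto end = std::chrono::steady_clock::now();

    double ns = std::chrono::duration<double, std::nano>(end - start).count();
    printf("%.1f ns per lock/unlock\n", ns / kIters);
    return 0;
  }

As noted, this is a best case: one fast core, warm caches, no contention.)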


But I have no reason to think the lock would be rarely contended. We read 
preferences *a lot*, and if we allowed access from background threads, I have 
no doubt that we would start reading them a lot from background threads in 
addition to reading them a lot from the main thread.


And that would mean, in addition to lock contention, cache contention and 
potentially even NUMA issues. Those last two apply to atomic var caches too, 
but at least they generally apply only to the specific var caches being 
accessed off-thread, rather than pref look-ups in general.



Maybe we could get away with it at first, as long as off-thread usage remains 
low. But long term, I think it would be a performance foot-gun. And, 
paradoxically, the less foot-gunny it is, the less useful it probably is, too. 
If we're only using it off-thread in a few places, and don't have to worry 
about contention, why are we bothering with locking and off-thread access in 
the first place?



On Tue, Jul 17, 2018 at 8:57 AM, Kris Maglione wrote:

On Tue, Jul 17, 2018 at 02:06:48PM +0100, Jonathan Kew wrote:


On 13/07/2018 21:37, Kris Maglione wrote:


tl;dr: A major change to the architecture of the preference service has just
landed, so please be on the lookout for regressions.

We've been working for the last few weeks on rearchitecting the
preference service to work better in our current and future multi-process
configurations, and those changes have just landed in bug 1471025.



Looks like a great step forward!

While we're thinking about the prefs service, is there any possibility we
could enable off-main-thread access to preferences?



I think the chances of that are pretty close to 0, but I'll defer to Nick.

We definitely can't afford the locking overhead—preference look-ups already
show up in profiles without it. And even the current limited exception that
we grant Stylo while it has the main thread blocked causes problems (bug
1474789), since it makes it impossible to update statistics for those reads,
or switch to Robin Hood hashing (which would make our hash tables much
smaller and more efficient, but requires read operations to be able to move
entries).


I am aware that in simple cases, this can be achieved via the
StaticPrefsList; by defining a VARCACHE_PREF there, I can read its value
from other threads. But this doesn't help in my use case, where I need
another thread to be able to query an extensible set of pref names that are
not fully known at compile time.

Currently, it looks like to do this, I'll have to iterate over the
relevant prefs branch(es) ahead of time (on the main thread) and copy all
the entries to some other place that is then available to my worker threads.
For my use case, at least, the other threads only need read access;
modifying prefs could still be limited to the main thread.



That's probably your best option, yeah. Although I will say that those kinds
of extensible preference sets aren't great for performance or memory usage,
so switching to some other model might be better.
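
(For what it's worth, that copy-ahead-of-time approach can be a one-shot
snapshot published to the workers as an immutable map. A rough sketch
under assumed names; the real code would walk the branch via the prefs
API on the main thread:

  #include <map>
  #include <memory>
  #include <string>

  using PrefSnapshot = std::map<std::string, std::string>;

  // Build the snapshot on the main thread before the workers start, then
  // hand each worker a shared_ptr<const PrefSnapshot>. Nothing mutates it
  // after publication, so the workers can read it without locking.
  std::shared_ptr<const PrefSnapshot> SnapshotBranch(
      const PrefSnapshot& aLivePrefs, const std::string& aBranch) {
    auto snap = std::make_shared<PrefSnapshot>();
    for (const auto& entry : aLivePrefs) {
      if (entry.first.compare(0, aBranch.size(), aBranch) == 0) {
        snap->insert(entry);  // copy entries under the branch prefix
      }
    }
    return snap;
  })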


Possible? Or would the overhead of locking be too crippling?



The latter, I'm afraid.



--
Kris Maglione
Senior Firefox Add-ons Engineer
Mozilla Corporation

On two occasions I have been asked, "Pray, Mr. Babbage, if you put
into the machine wrong figures, will the right answers come out?" I am
not able rightly to apprehend the kind of confusion of ideas that
could provoke such a question.
--Charles Babbage



System add-ons no longer visible by default in about:debugging

2018-07-19 Thread Mark Striemer
Bug 1425347 [1] will update about:debugging so that system add-ons are no
longer shown in official builds of Firefox.

If you build Firefox locally you will still see system add-ons in their own
section. If you would like to show system add-ons in an official build you
can flip the `devtools.aboutdebugging.showSystemAddons` pref.

Cheers,
Mark

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1425347


Re: PSA: Major preference service architecture changes inbound

2018-07-19 Thread Myk Melez

Nicholas Nethercote wrote on 2018-07-17 21:23:

This is a good example of how prefs is a far more general mechanism than I
would like, leading to all manner of use and abuse. "All I want is a
key-value store, with fast multi-threaded access, where the keys aren't
known ahead of time."
Agreed, the prefs service has been overloaded for too long due to the 
lack of alternatives for storing key-value data.


I've been investigating the Lightning Memory-Mapped Database (LMDB) 
storage engine for such use cases. It supports multi-threaded (and 
multi-process) access and is optimized for fast reads of arbitrary keys.


It could conceivably handle this subset of prefs usage, along with a 
variety of other KV storage use cases that currently use JSONFile or 
other bespoke storage engines/formats.
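
For a sense of what that looks like at the lowest level, here is a minimal
read against the raw LMDB C API (in Gecko the access would go through rkv
instead; the path and key here are made up, and error handling is omitted):

  #include <lmdb.h>
  #include <cstdio>
  #include <cstring>

  int main() {
    // Open an existing environment read-only, then look up one key inside
    // a read transaction. Readers never block each other or the writer.
    MDB_env* env;
    mdb_env_create(&env);
    mdb_env_open(env, "./kvstore", MDB_RDONLY, 0664);

    MDB_txn* txn;
    mdb_txn_begin(env, nullptr, MDB_RDONLY, &txn);
    MDB_dbi dbi;
    mdb_dbi_open(txn, nullptr, 0, &dbi);

    const char* name = "some.arbitrary.key";
    MDB_val key{strlen(name), const_cast<char*>(name)};
    MDB_val data;
    if (mdb_get(txn, dbi, &key, &data) == MDB_SUCCESS) {
      printf("%.*s\n", (int)data.mv_size, (const char*)data.mv_data);
    }
    mdb_txn_abort(txn);  // read-only txns are just aborted when done
    mdb_env_close(env);
    return 0;
  }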


Follow along in bug 1445451 
(https://bugzilla.mozilla.org/show_bug.cgi?id=1445451) for status 
updates on validating, landing, and using LMDB (via the rkv Rust crate).


-myk



Re: C++ standards proposal for an embedding library

2018-07-19 Thread Peter Saint-Andre
Hear, hear and well said!

On 7/19/18 11:53 AM, Ted Mielczarek wrote:
> On Wed, Jul 18, 2018, at 12:45 PM, Botond Ballo wrote:
>> Hi everyone,
>>
>> With the proposal for a standard 2D graphics library now on ice [1],
>> members of the C++ standards committee have been investigating
>> alternative ways of giving C++ programmers a standard way to write
>> graphical and interactive applications, in a way that leverages
>> existing standards and imposes a lower workload on the committee.
>>
>> A recent proposal along these lines is for a standard embedding
>> facility called "web_view", inspired by existing embedding APIs like
>> Android's WebView:
>>
>> http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p1108r0.html
>>
>> As we have some experience in the embedding space here at Mozilla, I
>> was wondering if anyone had feedback on this embedding library
>> proposal. This is an early-stage proposal, so high-level feedback on
>> the design and overall approach is likely to be welcome.
> 
> I've joked about this a bit, but in seriousness: an API for web embedding is 
> a difficult thing to get right. We don't even have one currently for desktop 
> Firefox. The proposal references things like various WebKit bindings, but 
> glosses over the fact that Apple revamped WebKit APIs as WebKit2 to better 
> handle process separation. For all the buzz about WebKit being a popular web 
> embedding, most people seem to have switched to embedding Chromium in some 
> form these days, and even there the most popular projects are Chromium 
> Embedded Framework and Electron, neither of which is actually maintained by 
> Google and both of which have gone through significant API churn. That is all 
> to say that I don't have confidence that the C++ standards committee (or 
> maybe anyone, really) has the ability to spec a useful API for web embedding 
> that can both encompass the broad set of issues involved and also remain 
> useful over time as rendering engines evolve.
> 
> I understand the committee's point of view--the C++ standard library does not 
> provide any facilities for writing applications that do more than console 
> input and output. I would submit that this is OK, because UI programming in 
> any form is a complicated topic and it's unlikely that the standard could 
> include anything that would actually be useful to most people.
> 
> Honestly I think at this point growth of the C++ standard library is an 
> anti-feature. The committee should figure out how to get modules specified 
> (which I understand is a difficult thing, I'm not trying to minimize the work 
> there) so that tooling can be built to provide a first-class module ecosystem 
> for C++ like Rust and other languages have. The language should provide a 
> better means for extensibility and code reuse so that the standard library 
> doesn't have to solve everyone's problems.
> 
> I would make this same argument if someone were to propose a similar API for 
> inclusion into the Rust standard library--it doesn't belong there, it belongs 
> on crates.io.
> 
> -Ted





Re: C++ standards proposal for an embedding library

2018-07-19 Thread Ted Mielczarek
On Wed, Jul 18, 2018, at 12:45 PM, Botond Ballo wrote:
> Hi everyone,
> 
> With the proposal for a standard 2D graphics library now on ice [1],
> members of the C++ standards committee have been investigating
> alternative ways of giving C++ programmers a standard way to write
> graphical and interactive applications, in a way that leverages
> existing standards and imposes a lower workload on the committee.
> 
> A recent proposal along these lines is for a standard embedding
> facility called "web_view", inspired by existing embedding APIs like
> Android's WebView:
> 
> http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p1108r0.html
> 
> As we have some experience in the embedding space here at Mozilla, I
> was wondering if anyone had feedback on this embedding library
> proposal. This is an early-stage proposal, so high-level feedback on
> the design and overall approach is likely to be welcome.

I've joked about this a bit, but in seriousness: an API for web embedding is a 
difficult thing to get right. We don't even have one currently for desktop 
Firefox. The proposal references things like various WebKit bindings, but 
glosses over the fact that Apple revamped WebKit APIs as WebKit2 to better 
handle process separation. For all the buzz about WebKit being a popular web 
embedding, most people seem to have switched to embedding Chromium in some form 
these days, and even there the most popular projects are Chromium Embedded 
Framework and Electron, neither of which is actually maintained by Google and 
both of which have gone through significant API churn. That is all to say that 
I don't have confidence that the C++ standards committee (or maybe anyone, 
really) has the ability to spec a useful API for web embedding that can both 
encompass the broad set of issues involved and also remain useful over time as 
rendering engines evolve.

I understand the committee's point of view--the C++ standard library does not 
provide any facilities for writing applications that do more than console input 
and output. I would submit that this is OK, because UI programming in any form 
is a complicated topic and it's unlikely that the standard could include 
anything that would actually be useful to most people.

Honestly I think at this point growth of the C++ standard library is an 
anti-feature. The committee should figure out how to get modules specified 
(which I understand is a difficult thing, I'm not trying to minimize the work 
there) so that tooling can be built to provide a first-class module ecosystem 
for C++ like Rust and other languages have. The language should provide a 
better means for extensibility and code reuse so that the standard library 
doesn't have to solve everyone's problems.

I would make this same argument if someone were to propose a similar API for 
inclusion into the Rust standard library--it doesn't belong there, it belongs 
on crates.io.

-Ted


Re: Intent to ship: CSP Violation DOM Events

2018-07-19 Thread amarchesini
I'm going to enable CSP Violation Events by default in Firefox 63. Bug 1432523.

ckerschb and I have done a good job of making our code compliant with the latest 
CSP3 spec, and we pass (almost) all the related WPT tests.

The only remaining bit is related to  (bug 
1473630), but we decided to work on it as a follow-up.