Intent to prototype and ship: signal property on AddEventListenerOptions

2020-12-03 Thread smaug

Summary: https://github.com/whatwg/dom/issues/911 proposes adding a signal
property to AddEventListenerOptions:

  const ac = new AbortController();
  target.addEventListener('fooEvent', (e) => { ... }, { signal: ac.signal });

so that one can easily remove the event listener when the AbortController is
aborted: ac.abort();
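
A minimal sketch of how this can be used in practice (the event names and
handler functions below are just illustrative); several listeners can share
one signal, so a single abort() removes them all:

  const controller = new AbortController();

  function onResize() { /* ... */ }
  function onScroll() { /* ... */ }

  // Both listeners share the same signal.
  window.addEventListener('resize', onResize, { signal: controller.signal });
  window.addEventListener('scroll', onScroll, { signal: controller.signal });

  // Later: one abort() removes both listeners, with no
  // removeEventListener() bookkeeping.
  controller.abort();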

Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1679204

Standard: https://github.com/whatwg/dom/issues/911
Platform coverage: All

Preference: No pref. Web sites need to explicitly opt in to use this new
feature.
DevTools bug: NA. I guess devtools could have some UI for various 
AddEventListenerOptions properties, but that is a generic issue.
Other browsers: Implemented in Blink
https://bugs.chromium.org/p/chromium/issues/detail?id=1146467, not yet enabled
by default.
Also in Node.js https://github.com/whatwg/dom/pull/919#issuecomment-737293442

web-platform-tests: wpt tests are being reviewed 
https://github.com/web-platform-tests/wpt/pull/26472


The patch will land only after the wpt tests are in mozilla-central, and not 
before the next merge, so targeting 86.





-Olli


Re: Firefox Profiler now supports recording IPC messages

2019-10-30 Thread smaug

FWIW, apparently the UI is in the devtools profiler UI, not in the profiler 
addon.
https://profiler.firefox.com/ still tells users to install the addon from there.

I was told that one can get a button similar to the addon's by enabling the
devtools.performance.popup.enabled boolean pref and then using 'Customize...'

Anyhow, great stuff. Seeing IPC messages in the profiles can be really handy.
(and so far I've been positively surprised that we don't seem to send that many 
IPC messages)


-Olli



On 10/30/19 10:14 PM, Jim Porter wrote:
Recently, we landed a new feature for the Firefox Profiler: the ability to record IPC messages for monitored threads. This should be useful for 
evaluating IPC-related performance issues as we make progress on Project Fission. To enable this feature, just check the "IPC Messages" feature in the 
profiler popup and collect a profile! Then, IPC messages on all monitored threads will be recorded to the profile.


For an example of what this looks like, see this profile of a user (me) opening 
mozilla.org in a new tab: .

Since IPC messages are (obviously) cross-process, each IPC message actually comprises two profiler markers: one for the sending thread and one 
for the receiving thread. The profiler frontend then examines all the collected IPC markers and correlates the sending and receiving sides. After 
correlating each side, we can then determine the latency of the IPC message: this is defined to be the time between when the message is sent (i.e. 
when `SendMessage` or similar is called) and when it's received (i.e. once the recipient thread has constructed a `Message` object).


Sometimes, IPC messages will have an unknown duration. This means that the profiler marker for the other side of the IPC call wasn't recorded (either 
the thread wasn't profiled at all or the other side occurred outside of the time range of the profile).


As you can probably see from the example profile, the user interface is fairly basic for now: each thread just has a new timeline track to display its 
IPC messages, with outgoing messages in teal and incoming messages in purple. Of course, there's lots of room for improvement here, so if you have 
ideas for a visualization that would be useful to you, just file a bug and CC me on it!


Happy profiling!
- Jim

P.S.: For those who are curious about how we correlate each side of an IPC message, we compare the source and destination PIDs, the message's type, 
and its seqno. This is enough to uniquely identify any IPC message, though it does mean that reply messages are considered a separate message. If 
people find it useful, it should be straightforward to correlate initial and reply messages with each other as well.
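
As a rough sketch of that correlation idea in JS (all names here are made up
for illustration; this is not the actual profiler frontend code):

  // Match send/receive markers on (source PID, destination PID, type, seqno).
  function correlationKey(marker) {
    return [marker.srcPid, marker.dstPid, marker.messageType, marker.seqno].join('|');
  }

  function correlateIpcMarkers(sendMarkers, recvMarkers) {
    const bySendKey = new Map(sendMarkers.map(m => [correlationKey(m), m]));
    const matched = [];
    for (const recv of recvMarkers) {
      const send = bySendKey.get(correlationKey(recv));
      if (send) {
        // Latency: from SendMessage() on the sender to Message construction
        // on the recipient, as described above.
        matched.push({ send, recv, latency: recv.time - send.time });
      }
    }
    return matched;
  }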




Re: To what extent is sccache's distributed compilation usable?

2019-10-28 Thread smaug

On 10/24/19 9:17 AM, Marcos Caceres wrote:

On Thursday, October 24, 2019 at 6:46:08 AM UTC+11, Christopher Manchester 
wrote:

As announced in this week's project call, sccache-dist schedulers have been
deployed to mozilla offices, and those with hardware available can enroll
servers based on the documentation here
.


Just a reminder to not forget about us remotees! We are dying out here with 1+
hour compile times, so anything would really help (even a 20% improvement would
go a long way). A lot of us have spare old Macs, and if we can put them to good
use with this, that would be amazing.




Quite often one has just a laptop. Not compiling tons of Rust stuff all the 
time would be really nice.
(I haven't figured out when stylo decides to recompile itself - it seems to be 
somewhat random.)


Of writing asynchronous tests

2019-10-16 Thread smaug

Hi all,

During the past year, while optimizing page load by changing how various tasks
within Gecko are scheduled, we have often run into intermittently failing tests
that caused trouble and required fixing or, occasionally, disabling. Now with
Fission we are seeing even more issues, and we will do page load optimizations
on top of Fission, which will probably reveal yet more issues.

A somewhat old page,
https://developer.mozilla.org/en-US/docs/Mozilla/QA/Avoiding_intermittent_oranges,
still has a pretty good list of common pitfalls. I recommend that everyone
writing asynchronous tests take a look at it
(and possibly add more cases there, if you think the page is missing something).
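
One classic pitfall from that list, sketched here as a mochitest-style
fragment (the element, event name, and timeout are made up for illustration):

  const target = document.getElementById("result");

  // Bad: guesses that the async work finishes within 100 ms. With different
  // task scheduling or a loaded machine, this fails intermittently.
  setTimeout(() => {
    is(target.textContent, "done", "work finished");
    SimpleTest.finish();
  }, 100);

  // Better: wait for whatever actually signals completion.
  target.addEventListener("workdone", () => {
    is(target.textContent, "done", "work finished");
    SimpleTest.finish();
  }, { once: true });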

This is not to blame anyone - writing asynchronous tests can be hard. I have
certainly been fixing tests written by myself recently ;)
But I'm hoping that we can improve the quality of the tests over time and
thus make it easier to make changes to Gecko's task and/or process
scheduling.


Thanks,



-Olli



Re: Finding windows and docshells leaked until shutdown

2019-10-03 Thread smaug

On 10/3/19 1:18 AM, Andrew McCreight wrote:

On Wed, Oct 2, 2019 at 2:35 PM Geoff Lankow  wrote:


Hi all, I need some advice.

I'm trying to enable Mochitest on debug builds of Thunderbird. It works
fine, except for the leak check, which finds a number of leaked windows
and docshells. Obviously we don't want to leak these things, but I can't
find what's holding onto them.

I've ruled out a number of likely candidates and I'm quickly running out
of ideas. Can you offer any tips or tricks that might help?

Here's an example log:

https://treeherder.mozilla.org/logviewer.html#/jobs?job_id=267576044=try-comm-central=8051



What this leak checker means is that a docshell or window was created while
the test was running, but was not destroyed until after the test finished
(and we ran a bunch of GCs and CCs). However, these docshells and windows
don't show up in the leakcheck output, which means that they were cleaned
up before we shut down.


Ah, good, a runtime leak :)
Runtime leaks are in general way easier to debug than leaks which are there
until shutdown.
Add a conditional breakpoint to the Release method of the relevant object
and see all the places where Release is called. One (or more) of the Release
calls ends up revealing where the leak is coming from.

To get the address of the right C++ object, a CC log is one way to find it.





A typical cause for these is that there's some top level data structure,
often in JS but it could be in C++, too, that keeps track of every docshell
or window that is created, and does so by just adding a reference to it.
When we start shutdown, the data structure gets torn down, and we don't
leak things forever.

One small tip: if you look at the line after the "leaked until
shutdown" message, you'll see something like this:
[task 2019-09-20T03:42:16.942Z] 03:42:16 INFO - TEST-INFO |
comm/calendar/test/browser/browser_calendarList.js | windows(s) leaked:
[pid = 1155] [serial = 38], [pid = 1155] [serial = 36], [pid = 1155]
[serial = 39], [pid = 1155] [serial = 37]

The entries like "[pid = 1155] [serial = 38]" are the descriptors we use
for windows (and docshells have a similar thing). If you search for "[pid =
1155] [serial = 38]" in the log, you can see where exactly each window was
created and destroyed. Sometimes that can give a hint about what is going
wrong.

Like Emilio said, the best approach to fixing these leaks is going to be
getting GC/CC logs from the point where it complains about the window being
alive, and then you can figure out from there what is keeping the window
alive, which may or may not point at what the leak is.

In practice, doing all of this leak fixing is rather a big project by
itself, so I'd recommend that you disable this leak checking in Thunderbird
somehow, and then just try to get the rest of the debug tests up and
running, and then maybe come back to it later. This leak checking is done
in this file (by doing postprocessing of the browser log):
testing/mochitest/leaks.py





GL





Re: Range-based for and STL-style iterators for nsClassHashtable, nsDataHashtable, nsInterfaceHashtable

2019-10-02 Thread smaug

And as with other data structures, be careful with modifications when using
range-based for.
We had plenty of crash issues with nsTArray + range-based for when we started
to use it.


-Olli

On 10/2/19 1:15 PM, Simon Giesecke wrote:

Hi,

I recently [1] added STL-style iterators and begin/cbegin/end/cend
member functions to nsBaseHashtable. This means that it is now
possible to use range-based for and STL algorithms operating on
iterators with all of its subclasses, which include nsClassHashtable,
nsDataHashtable, nsInterfaceHashtable, nsJSThingHashtable and
nsRefPtrHashtable.

I also added an overview on existing support for range-based for and
STL-style iterators in Mozilla data structures on MDN  at [2].

If you have any questions or issues, please let me know or NI me on Bugzilla.

Best wishes
Simon

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1575479
[2] 
https://developer.mozilla.org/en-US/docs/Mozilla/Using_CXX_in_Mozilla_code#Mozilla_data_structures_and_standard_C_ranges_and_iterators







Intent to implement & ship: queueMicrotask

2019-05-24 Thread smaug

Summary: queueMicrotask exposes the platform primitive to queue microtasks. 
Without the API, web pages have used hacks around MutationObserver or Promise.
Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1480236
Link to standard: 
https://html.spec.whatwg.org/multipage/timers-and-user-prompts.html#microtask-queuing
Platform coverage: everywhere
Estimated or target release: 69
Preference behind which this will be implemented: without pref
Is this feature enabled by default in sandboxed iframes?  yes
DevTools bug: NA
Do other browser engines implement this? Shipped in Blink and Webkit
web-platform-tests:
https://searchfox.org/mozilla-central/source/testing/web-platform/tests/html/webappapis/microtask-queuing
Is this feature restricted to secure contexts? No. This is just exposing the 
scheduling primitive of MutationObserver and Promise as an API.




Note to the users of the API (this applies to Promises and MutationObserver
too): microtasks are, from the UA's point of view, synchronous, so don't do
heavy operations in them, or schedule lots of them in a row - that might cause
jank.
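
For illustration, the new primitive next to the Promise-based workaround it
replaces (syncStateToDom is just a placeholder for the work being scheduled):

  function syncStateToDom() { /* ... */ }

  // Old workaround: piggyback on a resolved promise's reaction microtask.
  Promise.resolve().then(() => syncStateToDom());

  // New primitive: queue the microtask directly.
  queueMicrotask(() => syncStateToDom());

  // Either way the callback runs at the next microtask checkpoint, before
  // control returns to the event loop, so keep it cheap (see the note above).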





-Olli


MOZ_ALLOW_DOWNGRADE environment variable

2019-03-06 Thread smaug

Hi all,


looks like I didn't send email about a, IMO, rather useful environment
variable [1]:
MOZ_ALLOW_DOWNGRADE does the same thing as --allow-downgrade passed to Firefox:
it bypasses the profile downgrade protection.
(At least I need to bypass that all the time when testing various local and/or
nightly builds.)



-Olli


[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1528502


Disabling document.createEvent("TouchEvent"), document.createTouch* and ontouch* event handlers on desktop

2019-03-06 Thread smaug

Hi all,


I'm trying to disable document.createEvent("TouchEvent"), document.createTouch* 
and ontouch* event handlers on desktop in
order to follow what has been done in other browsers.
See https://bugzilla.mozilla.org/show_bug.cgi?id=1412485
The issue is that some web pages use those features to detect whether touch is
enabled and, if so, don't expect mouse events.
https://github.com/w3c/touch-events/issues/64 also has some background.
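
For context, a simplified sketch of the kind of detection code that causes
the problem (real sites vary; this is just an illustration):

  // Pages commonly treat "touch APIs exist" as "this is a touch-only device".
  function looksLikeTouchDevice() {
    if ('ontouchstart' in window || document.createTouch !== undefined) {
      return true;
    }
    try {
      document.createEvent('TouchEvent');
      return true;
    } catch (e) {
      return false;
    }
  }

  // ...and then skip mouse event handling entirely, which breaks desktop
  // users whose browser exposes these APIs even without a touchscreen.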

The behavior will be behind the dom.w3c_touch_events.legacy_apis.enabled pref,
set to false by default on desktop.
(The Touch API itself is enabled if a touchscreen is detected.)



-Olli


Re: Browser Architecture Newsletter #7 (S02E02)

2018-09-20 Thread smaug

On 09/20/2018 04:21 PM, Mike Hommey wrote:

On Thu, Sep 20, 2018 at 12:18:49PM +0300, smaug wrote:

On 09/19/2018 08:34 PM, Nicholas Alexander wrote:

2.

Making the main browser window be an HTML document with (mostly) HTML
DOM elements instead of a XUL document with (mostly) XUL DOM elements.


It is still a mystery to me how the performance can be anywhere close to XUL
when starting the browser or opening a new window.
XUL's prototype cache was explicitly added because of performance long ago.
That is why we only need to load an already-parsed document in a binary format,
or, in the case of a new window, just clone from the prototype.
Last I heard, moving to an HTML document does cause significant
regressions.


I'm reminded of https://bugzilla.mozilla.org/show_bug.cgi?id=618912 but
IIRC there were similar experiments back then on desktop, and basic html
chrome was significantly faster than basic xul chrome.

That bug seems to be more about the layout.

https://screenshotscdn.firefoxusercontent.com/images/d1753829-3ebd-4c42-a757-14757051accf.png
shows the latest numbers I've seen. That isn't PGO, so it may or may not be
very accurate, but the regression is
very significant.






Mike





Re: Browser Architecture Newsletter #7 (S02E02)

2018-09-20 Thread smaug

On 09/19/2018 08:34 PM, Nicholas Alexander wrote:

   2.

   Making the main browser window be an HTML document with (mostly) HTML
   DOM elements instead of a XUL document with (mostly) XUL DOM elements.


It is still a mystery to me how the performance can be anywhere close to XUL
when starting the browser or opening a new window.
XUL's prototype cache was explicitly added because of performance long ago.
That is why we only need to load an already-parsed document in a binary format,
or, in the case of a new window, just clone from the prototype.
Last I heard, moving to an HTML document does cause significant
regressions.




-Olli


Re: Plan for Sunsetting MozReview

2018-09-08 Thread smaug

On 09/05/2018 06:40 AM, Kris Maglione wrote:

On Tue, Sep 04, 2018 at 07:37:28PM +0200, Dão Gottwald wrote:

This may have been discussed before since it's kind of an obvious question:

Was there a conscious decision not to post phabricator review comments to
bugzilla? It's a somewhat significant change from how we've used bugzilla.
I can see a potential upside of separating review comments from other
planning and chatter. Then again, the line isn't always drawn easily. Plus,
it makes it harder to migrate away from phabricator should we want that at
some unknown point; that MozReview posted comments to bugzilla turns out to
be quite valuable now.

I'd prefer if review comments stayed in bugzilla with an option to hide
them.


Concur. Aside from future-proofing things, reading comments in phabricator is pretty painful, especially for bugs with multiple patches. With the old 
flow, I could look at all of them in one place. Now, I have to open a half dozen separate pages, and then scroll through the entire patch squinting 
for comments, since none of the comments at the top of the review page provide even the slightest bit of context.




Yeah, this new work mode is really not working well enough, at least from a
reviewer's point of view.


-Olli



2018-07-26 20:37 GMT+02:00 Mark Côté :


To follow up on some previous threads, here is the plan for deprecating,
archiving, and decommissioning MozReview.

The MozReview shutdown deadline is approaching. Although enhanced
commit-series support is still in progress (see update below), MozReview
users should start familiarizing themselves with Phabricator now. We have a
guide specifically for MozReview users to ease the transition (which will
be updated when the commit-series work is finalized): https://moz-conduit.
readthedocs.io/en/latest/mozreview-migration-guide.html

From August 6 to August 20, use of MozReview will be restricted to updates
to existing reviews. In other words, review cycles in progress will be
given until August 20 to be completed, but no new commit series will be
permitted.

On August 20, we will remove public access to MozReview and archive
patches. Every landed, in-progress, and abandoned patch will be downloaded
from MozReview and stored in an S3 bucket. The “stub attachments” in
Bugzilla that currently redirect to MozReview will be updated to link to
the appropriate S3 bucket. Review flags and bug comments will be preserved.

After archiving is complete and verified, the MozReview servers will be
decommissioned.

We will also be writing a simple redirection service to map specific
MozReview reviews to associated BMO comments, review requests to associated
bugs, and review-request diffs to the appropriate S3 buckets. This service
will be up shortly after MozReview is decommissioned.

We realize and apologize that this is a fairly short timeline; however,
the current location of the MozReview servers is in the process of being
shut down, and thus it is urgent that we decommission this service soon to
allow an orderly exit.

As for enhanced support for series of commits in Phabricator, the new
command-line interface to submit, update, and apply series (bug 1471678) is
currently in review. The first release will support Mercurial only, but
git-cinnabar support will follow shortly (the code is designed with it in
mind). Work on series support in Lando (bug 1457525) is progressing; we
will be posting screenshots of the new UI shortly. It should be ready in
2-3 weeks.

Please note that we eventually plan to decommission Splinter as well;
however, we know we need some time to work out the kinks in Phabricator.
Splinter will remain operational near-term, but we ask everybody to
familiarize themselves with Phabricator and file bugs when things don't
work. *Please do not wait for Splinter EOL to try Phabricator; we need your
feedback before then in order to act on it.*

Mark














Re: Extending the length of an XPCOM string when writing to it via a raw pointer

2018-08-30 Thread smaug

On 08/30/2018 11:21 AM, Henri Sivonen wrote:

We have the following pattern in our code base:

  1) SetCapacity(newCapacity) is called on an XPCOM string.
  2) A pointer obtained from BeginWriting() is used for writing more
than Length() but no more than newCapacity code units to the XPCOM
string.
  3) SetLength(actuallyWritten)  is called on the XPCOM string such
that actuallyWritten > Length() but actuallyWritten <= newCapacity.

This is an encapsulation violation, because the string implementation
doesn't know that the content past Length() is there.

How so? The whole point of capacity is that the string has that much capacity.



The caller assumes that step #3 will not reallocate and will only write a
zero-terminator at actuallyWritten and set mLength to actuallyWritten.
(The pattern is common enough that clearly people have believed it to
be a valid pattern. However, I haven't seen any in-tree or on-wiki
string documentation endorsing the pattern.)

It should be non-controversial that this is an encapsulation
violation,

Well, I'm not seeing any encapsulation violation ;)



but does the encapsulation violation matter? It matters if
we want SetLength() to be able to conserve memory by allocating a
smaller buffer when actuallyWritten code units would fit in a smaller
mozjemalloc bucket.


Please be very very careful when doing allocations and deallocations. They are 
very slow, showing
up all the time in performance profiles.


In order for the above pattern to work if SetLength() can reallocate in such a
case, SetLength() has to memcpy the whole buffer in case someone has written
content that the string implementation is unaware of, instead of just
memcpying the part of the buffer that the string implementation knows to be in
use. Pessimizing the number of code units to memcpy is bad for performance.

It's unclear if trying to use a smaller mozjemalloc bucket is a
worthwhile thing. It obviously is for large long-lived strings and it
obviously isn't for short-lived strings even if they are large.
SetLength() doesn't know what the future holds for the string. :-( But
in any case, it's bad if we can't make changes that are sound from the
perspective of the string encapsulation, because we have other code
violating the encapsulation.

After the soft freeze, I'd like to change things so that we memcpy
only the part of the buffer that the string implementation knows is in
use. To that end, we should stop using the above pattern that is an
encapsulation violation.


What is then the point of SetCapacity anymore?



For m-c, I've filed
https://bugzilla.mozilla.org/show_bug.cgi?id=1472113 and worked on
fixing the bugs that block it. For c-c, I've filed
https://bugzilla.mozilla.org/show_bug.cgi?id=1486706 but don't intend
to do the work of investigating or fixing the string usage in c-c.

As for fixing the above pattern, there are two alternatives. The first one is:

  1) SetLength(newCapacity)
  2) BeginWriting()
  3) Truncate(actuallyWritten) (or SetLength(actuallyWritten), Truncate
simply tells to the reader that the string isn't being made longer)

With this pattern, writing happens to the part of the buffer that the
string implementation believes to be in use. This has the downside
that the first SetLength() call (like, counter-intuitively,
SetCapacity() currently!) writes the zero terminator, which from the
point of view of CPU caches is an out-of-the-blue write that's not
part of a well-behaved forward-only linear write pattern and not
necessarily near recently-accessed locations.

The second alternative is BulkWrite() in C++ and bulk_write() in Rust.

The API doesn't seem to be too user friendly.



This is new API that is well-behaved in terms of the cache access
pattern and is also more versatile in the sense that it lets the
caller know how newCapacity was rounded up, which is relevant to
callers that ask for best-case capacity and then ask more capacity if
there turns out to be more to write. When the caller is made aware of
the rounding, a second request for added capacity can be avoided if
the amount that actually needs to be written exceeds the best case
estimate but fits within the rounded-up capacity.

In Rust, bulk_write() is rather nicely misuse-resistant.  However, on
the C++ side the lack of a borrow checker as well as mozilla::Result
not working with move-only types
(https://bugzilla.mozilla.org/show_bug.cgi?id=1418624) pushes more
things to documentation. The documentation can be found at
https://developer.mozilla.org/en-US/docs/Mozilla/Tech/XPCOM/Guide/Internal_strings#Bulk_Write
https://searchfox.org/mozilla-central/rev/2fe43133dbc774dda668d4597174d73e3969181a/xpcom/string/nsTSubstring.h#1190
https://searchfox.org/mozilla-central/rev/2fe43133dbc774dda668d4597174d73e3969181a/xpcom/string/nsTSubstring.h#32

P.S. GetMutableData() is redundant with BeginWriting() and
SetLength(). It's used very rarely and I'd like to remove it as
redundant, so please don't use 

Re: Intent to Ship: Shadow DOM and Custom Elements

2018-08-15 Thread smaug

On 08/15/2018 06:06 PM, darin.hens...@gmail.com wrote:

On Wednesday, August 15, 2018 at 5:56:06 AM UTC-5, smaug wrote:

On 08/15/2018 01:54 PM, smaug wrote:

On 08/14/2018 09:43 PM, darin wrote:

When it ships in version 63 will shadowdom and webcomponent be shipped disabled 
or enabled by default? The beta is throwing me off.

Thank you,
-Darin




Current beta is FF62. The aim is to ship in FF63, which goes to beta in early
September.



And "ship" means, enabled by default. Most of the code has been there for 
several months even in release builds, but just disabled.



Why does https://caniuse.com/#feat=shadowdomv1 show Firefox 63 as not
supported by default?





No idea. I don't know who maintains caniuse and when they update it. Please
report to them that they may want to update the page.


Some of the Phabricator review requests don't show up in Bugzilla dashboard

2018-08-15 Thread smaug

Hi all,

I think it is good to let people know that some of the review requests in 
Phabricator don't show up in Bugzilla.
https://bugzilla.mozilla.org/show_bug.cgi?id=1482110



So, if some review seems to take too much time, better to ping the requestee.



-Olli


Re: Intent to Ship: Shadow DOM and Custom Elements

2018-08-15 Thread smaug

On 08/13/2018 12:25 PM, David Burns wrote:

Do we have sufficient tests

There are some tests, but as always, not enough. And there are areas which are 
not spec'ed, like
dir-attribute handling on HTML elements with Shadow DOM and session history of 
iframes which are in shadow DOM.


for this to guarantee webcompat and interop
with other browsers?

CustomElements is easier, since it doesn't really change much; it just adds a
way to add a couple of callbacks to user-defined elements.
And the behavior of those callbacks is somewhat easy to test.

Shadow DOM is trickier, since the specs aren't always quite up-to-date, or 
leave things undefined.

Also, in general web components are a relatively rarely used feature. Google
has been a big user, but it relies on
v0 implementations, meaning basically Blink, and uses polyfills with other
browsers.

So, there is definitely some risk of breaking web sites, but there isn't too
much we can do other than trying to ship.





David

On 10 August 2018 at 15:49, smaug  wrote:


I'm planning to keep Shadow DOM and Custom Elements turned on on
beta/release builds.
Target release is Firefox 63.
prefs are dom.webcomponents.customelements.enabled and
dom.webcomponents.shadowdom.enabled.

Custom elements has been enabled in Nightly since January and Shadow DOM
since late May.

Bugs:
https://bugzilla.mozilla.org/show_bug.cgi?id=1471947 (Shadow DOM)
https://bugzilla.mozilla.org/show_bug.cgi?id=1471948 (Custom Elements)

Even though the APIs are totally distinct, web pages expect either neither
be there or both, so they
need to ship together.



-Olli





Intent to Ship: Shadow DOM and Custom Elements

2018-08-10 Thread smaug

I'm planning to keep Shadow DOM and Custom Elements turned on in beta/release
builds.
Target release is Firefox 63.
The prefs are dom.webcomponents.customelements.enabled and
dom.webcomponents.shadowdom.enabled.

Custom elements has been enabled in Nightly since January and Shadow DOM since 
late May.

Bugs:
https://bugzilla.mozilla.org/show_bug.cgi?id=1471947 (Shadow DOM)
https://bugzilla.mozilla.org/show_bug.cgi?id=1471948 (Custom Elements)

Even though the APIs are totally distinct, web pages expect either both of
them to be there or neither, so they
need to ship together.
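
As a rough illustration of why the two are coupled in practice (the element
and tag names here are made up): pages typically feature-detect one API and
then use both together:

  if (Element.prototype.attachShadow && window.customElements) {
    class MyCard extends HTMLElement {
      constructor() {
        super();
        // Shadow DOM: a per-instance encapsulated subtree.
        const shadow = this.attachShadow({ mode: 'open' });
        shadow.textContent = 'hello from the shadow tree';
      }
    }
    // Custom Elements: register the element so the constructor above runs.
    customElements.define('my-card', MyCard);
  }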



-Olli


Re: Fission MemShrink Newsletter #1: What (it is) and Why (it matters to you)

2018-07-12 Thread smaug

On 07/12/2018 11:08 PM, Randell Jesup wrote:

I do hope that the 100 process figures scenario that was given is a worse case 
scenario though...


It's not.  Worst case is a LOT worse.

Shutting down threads/threadpools when not needed or off an idle timer
is a Good thing.  There may be some perf hit since it may mean starting
a thread instead of just sending a message at times; this may require
some tuning in specific cases, or leaving 1 thread or more running
anyways.

Stylo will be an interesting case here.

We may need to trade first-load time against memory use by lazy-initing
more things than now, though we did quite a bit on that already for
reducing startup time.




One thing to remember is that some of the child processes will be more
important than others. For example, all the processes used for browsing
contexts in the foreground tab should probably prefer performance over memory
(in cases where that is something we can choose), but if a process is only
used for browsing contexts in background tabs and isn't playing any audio or
such, it can probably use less memory-hungry approaches.
For example, could Stylo use fewer threads when used in background-tabs-only
processes, and create more threads once the process becomes foreground?
We take a similar approach in many cases for performance and responsiveness
reasons, but less often for memory usage reasons.


Re: Intent to implement: Clear-Site-Data header

2018-06-20 Thread smaug

On 06/20/2018 04:14 PM, Andrea Marchesini wrote:

Summary: The Clear-Site-Data header allows a secure origin to send a header
requesting the deletion of its own browsing data.

Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1268889


Link to standard: https://w3c.github.io/webappsec-clear-site-data/


Platform coverage: all

Estimated or target release: FF63

Preference behind which this will be implemented:
dom.clearSiteData.enabled

Do other browser engines implement this?
Shipped in Chrome 61.


It shipped while there were rather major open spec issues...
Ok, that is the Blink model for development, but we really should
figure out how to convince them to get specs into a reasonable state before
they ship.



-Olli





web-platform-tests:
https://github.com/web-platform-tests/wpt/tree/master/clear-site-data


Secure contexts: Yes.





Re: PSA: new CopyableErrorResult class

2018-04-29 Thread smaug

On 04/27/2018 12:14 AM, Ben Kelly wrote:

Hi all,

I just pushed another helper class that I thought others might find useful.

CopyableErrorResult is a specialized form of ErrorResult. It's intended to
allow slightly richer error values to be passed through things like ipdl
structure and MozPromise.  This is useful when implementing web exposed
features that need to execute code in the parent process and surface useful
TypeError messages back to the page.  A simple nsresult loses the developer
friendly messages.  If you are building something like this, then you might
want to consider CopyableErrorResult.

If in doubt, though, please use ErrorResult itself.  It has stricter
checking and should be preferred.  Particularly if there is any chance the
error might contain a js exception.

The CopyableErrorResult has a few properties:

1. Asserts if you try to store a js exception.  It may only contain simple
errors or errors with messages.  In release builds it converts the js
exception to an NS_ERROR_FAILURE.
2. Has a copy constructor, assignment operator, etc.
3. May be safely transferred across threads.
4. Automatically suppresses errors instead of asserting if it's not consumed.

Hmm, (4) is worrisome. The name of the class doesn't hint at this kind of
behavior, and we have IgnoredErrorResult.
Why does CopyableErrorResult have this behavior? I understand not asserting if
the error was copied from some other ErrorResult to a CopyableErrorResult
instance,
but shouldn't we assert if that didn't happen?



These are significant because they mean you can create an ipdl structure or
union type that contains a CopyableErrorResult.  This is not possible with
an ordinary ErrorResult because there is no copy constructor, etc.

It is safer to use CopyableErrorResult in a MozPromise reaction because it's
thread safe and does not assert if it's destroyed before being consumed. These
are important because MozPromise reactions can cross thread boundaries and
a disconnected promise may not consume its reaction value, etc.

Using CopyableErrorResult allows you to provide error messages along with
TypeError, etc.  Passing just an nsresult provides a worse developer
experience for webdevs.

One quirk you may run into is that CopyableErrorResult may not be used as a
value argument to a method. This is because it contains a js MOZ_NON_PARAM
value in a union type even though it does runtime checking to avoid ever
actually using that type. To work around this you do this:

  DoFoo(const CopyableErrorResult& aRv) {
    ConsumeResult(CopyableErrorResult(aRv));
  }

This is often needed in MozPromise reaction handlers.

Please let me know if you run into any problems.

Thanks.

Ben





Re: Default Rust optimization level decreased from 2 to 1

2018-04-25 Thread smaug

On 04/25/2018 08:38 PM, Bobby Holley wrote:

On Wed, Apr 25, 2018 at 10:21 AM, Ted Mielczarek  wrote:


On Wed, Apr 25, 2018, at 12:32 PM, Jeff Muizelaar wrote:

At minimum we should make --enable-profiling build with rust-opt.


This sounds reasonable, although the quirk is that we default
--enable-profiling on for nightly[1], so anyone building m-c will have it
enabled. We could make the build system only do this for "explicitly
enabled --enable-profiling", it just might be slightly confusing. (For
reference, the rustc opt level is controlled here[2].)



We could also change this default. In general, Nightly is usually fine for
profiling, so I think the set of people doing local builds for profiling is
small enough that we could reasonably expect them to set the flag in their
mozconfig.



Somewhat off-topic, but we tried hard last year to increase the number of
people doing profiling, Quantum Flow is still ongoing, and performance is as
important as ever.
Profiling should be part of daily development work.
Because of that it would be great to have local builds profileable by default.

But still, I kind of like the idea of build profiles: something which screams
in the terminal what kind of build one is about to get. At the beginning of
the build process, and I guess at the end too, there would be a message
hinting whether the build is good for profiling and so on.







IIRC the slowdown from using opt-level=2 vs. 1 is primarily in LLVM? It
would probably be useful if we instrumented the build to record compilation
times for all of the crates we build at various optimization levels and see
where the biggest issues are. Perhaps we could get someone on the Rust team
to make some improvements if we know what the worst offenders are.



Well, there's the opt level thing, but there's also the much larger bit
that we don't do Rust LTO without --enable-release, which IIUC has a
substantial performance impact.





-Ted

1. https://dxr.mozilla.org/mozilla-central/rev/
26e53729a10976f52e75efa44e17b5e054969fec/js/moz.configure#243
2. https://dxr.mozilla.org/mozilla-central/rev/
26e53729a10976f52e75efa44e17b5e054969fec/build/moz.
configure/toolchain.configure#1397





Re: PSA: Deprecating JS-Implemented WebIDL

2018-04-24 Thread smaug

On 04/24/2018 09:25 AM, Andreas Tolfsen wrote:

Also sprach Bobby Holley:


For reasons outlined in bug 1450827, the DOM peers have decided to
deprecate support for JS-implemented WebIDL APIs. This means that
new additions of |JSImplementation="foo"| are no longer permitted.



Out of curiosity, and mostly because my knowledge of how this works
is limited, this wouldn’t affect C++ implementations that internally
do_GetService to some XPCOM service implemented in JS?




No, it wouldn't. C++ doesn't really know that a service is implemented in JS.


Phabricator and Bugzilla

2018-03-31 Thread smaug

Hi all,

just some random notes about Phabricator.

I've now been reviewing a bunch of patches in Phabricator, and the initial
feeling from a reviewer's point of view is that it is ok. Not great, but ok.
MozReview's interdiff, when it works, is easier to use, or at least to
discover, than Phabricator's.
MozReview also shows the method name in which the code lives. This latter
thing is actually a big + for MozReview. (The same works in Bugzilla too when
reviewing raw -P patches at least. No idea about Splinter, since I never use
it.)
If Phabricator has the feature, I haven't found it yet - its UI is a tad
confusing occasionally.

What is currently rather broken is the flag synchronization between Bugzilla
and Phabricator. One doesn't see in Bugzilla whether a review has been
requested or whether a review has been denied (r-). I believe people asking
for reviews from me set the r? flag manually in Bugzilla.

Having an easy way to see that r- has been given to a patch is very valuable to 
the reviewer when looking at the bug again.
Oddly enough, approving the patch in Phabricator does set r+ in Bugzilla.

Not mirroring comments from Phabricator to Bugzilla has already made following
the reasoning for certain code changes harder.
It is so easy to include non-code-level information in review comments. So far
I haven't had a case where I would have needed to look at the blame and try to
find relevant information both in Bugzilla and in Phabricator, but that will
obviously happen if we don't do the mirroring, and it will slow down reviewing
in the future (since looking at the history/reasoning for code changes is a
big part of code reviewing).

I'm sure I haven't found all the tricks Phabricator has to help reviewing. Is 
there some reviewing-in-Phabricator-for-dummies doc?


I haven't asked for reviews in Phabricator (nor in MozReview), so I can't
comment on how they compare from a patch author's point of view.



br,



-Olli


Re: Use cases for invalid-Unicode atoms

2018-03-20 Thread smaug

On 03/19/2018 09:30 PM, Kris Maglione wrote:

On Mon, Mar 19, 2018 at 08:49:10PM +0200, Henri Sivonen wrote:

It appears that currently we allow atomicizing invalid UTF-16 strings,
which are impossible to look up by UTF-8 key, and we don't allow
atomicizing invalid UTF-8.

I need to change some things in this area in response to changing
error handling of UTF-8 to UTF-16 XPCOM string conversions to be more
secure, so I want to check if I should change things a bit more.

I can well imagine that the current state is exactly what we want:
Bogosity on the UTF-16 side round-trips and bogus UTF-8 doesn't
normally reach the atom machinery.

Am I correct in assuming we don't want changes here?

(One imaginable change would be replacing invalid sequences in both
UTF-16 and UTF-8 with U+FFFD and then atomicizing the result.)


Leaving aside the question of whether validation is desirable, I'd worry about the performance impact. We atomize UTF-16 strings all over the place in 
DOM code (and even have a main-thread pseudo-hashtable optimization for them). Validation might be relatively cheap, but I'd still expect that 
relative cheapness to add up fairly quickly.


Yeah, all the atom handling is very hot code.  Unless there is some actual 
serious bug to fix, I wouldn't
change the handling.


Re: Improved wpt-sync now running experimentally

2018-02-12 Thread smaug

On 02/09/2018 10:39 PM, James Graham wrote:

On 09/02/2018 19:59, Josh Bowman-Matthews wrote:

On 2/9/18 1:26 PM, James Graham wrote:

* One bug per PR we downstream, filed in a component determined by the files 
changed in the PR.


What does this mean exactly? What is the desired outcome of these bugs?


They're tracking the process and will be closed when the PR lands in central. They are used for notifying gecko developers about the incoming change, 
and in particular contain the information about tests that went from passing to failing, and other problems during the import.


I guess I don't understand the bugmail. Most of the time I don't see any 
information about something failing. Am I supposed to look at the commit?
Or are new failures in bugmail like
"
Ran 2 tests and 44 subtests
OK : 2
PASS   : 34
FAIL   : 10
"

Are those 10 failures new failures, or the total number of failures across the tests?


-Olli



They are not essential to the sync so if they end up not working well at 
keeping people informed we can revisit the approach.




Re: Faster gecko builds with IceCC on Mac and Linux

2018-01-16 Thread smaug

On 01/16/2018 11:41 PM, Mike Hommey wrote:

On Tue, Jan 16, 2018 at 10:02:12AM -0800, Ralph Giles wrote:

On Tue, Jan 16, 2018 at 7:51 AM, Jean-Yves Avenard 
wrote:

But I would be interested in knowing how long that same Lenovo P710 takes

to compile *today*….



On my Lenovo P710 (2x2x6 core Xeon E5-2643 v4), Fedora 27 Linux

debug -Og build with gcc: 12:34
debug -Og build with clang: 12:55
opt build with clang: 11:51

Interestingly, I can almost no longer get any benefits when using icecream,

with 36 cores it saves 11s, with 52 cores it saves 50s only…



Are you saturating all 52 cores during the builds? Most of the increase in
build time is new Rust code, and icecream doesn't distribute Rust. So in
addition to some long compile times for final crates limiting the minimum
build time, icecream doesn't help much in the run-up either. This is why
I'm excited about the distributed build feature we're adding to sccache.


Distributed compilation of rust won't help unfortunately. That won't
solve the fact that the long pole of rust compilation is a series of
multiple long single-threaded processes that can't happen in parallel
because each of them depends on the output of the previous one.

Mike



Distributed compilation also won't help those remotees who may not have
machines to set up icecream or distributed sccache.
(I just got a new laptop because Rust compilation is so slow.)
I'm hoping the Rust compiler gets some heavy optimizations itself.


-Olli


Re: Refactoring proposal for the observer service

2018-01-03 Thread smaug

On 01/04/2018 12:30 AM, Ben Kelly wrote:

On Wed, Jan 3, 2018 at 5:09 PM, Gabriele Svelto  wrote:


So after validating my approach in that bug (which is almost ready) I've
thought that it might be time to give the observer service the same
treatment. First of all we'd have a list of topics (I've picked YAML for
the list but it could be anything that fits the bill):



Could we use our existing idl/webidl/ipdl for this?  It would be nice not
to have to maintain another code generator in the tree if possible.


Yeah, I was thinking this too. I really wish we wouldn't add yet another code
generator.
idl or webidl should be ok.
But I see artifact builds being a rather big issue here (even though I don't
personally use them).
Building FF has become so very slow that avoiding extra build steps whenever
possible should be a goal.
We should try to make our platform easier to approach, not harder.




-Olli


Re: Next year in web-platform-tests

2017-12-15 Thread smaug

Great work on wpt!

On 12/15/2017 05:38 PM, James Graham wrote:

Following the summary of what we achieved in wpt in the last year, I'd
like to solicit input from the gecko community to inform the
priorities of various pieces of wpt work for the next year.

In order to maximise the compatibility benefit we have two long term
objectives for the web-platform-tests work at Mozilla:

* Make writing web-platform-tests the default for all
  web-exposed features, so that we only write unshared tests in
  exceptional cases (e.g. where there is a need to poke at Gecko
  internals).

* Ensure that gecko developers are using the web-platform-tests
  results to avoid interoperability issues in the features they
  develop.

Obviously we are some way from achieving those goals. I have a large
list of things that I think will make a difference, but obviously I
have a different perspective to gecko developers, so getting some
feedback on the priorities that you have would be good (I know I have
already have conversations with several people, but it seems good to
open up the question to a broader audience). In particular
it would help to hear about things that you would consider blockers to
removing tests from non-wpt suites that are duplicated in wpt
(assuming exact duplication), and limitations either in the
capabilities of wpt


Not being able to send (DOM) events which mimic user input prevents converting
many event handling tests to wpt.
Not sure if WebDriver could easily do some of this, or whether browsers should
have some testing mode which exposes
APIs for these kinds of cases.


-Olli



or in the workflow that lead to you writing other
test types for cross-browser features.

Thanks

(Note: I set the reply-to for the email version of this message to be off-list 
as an experiment to see if that avoids the anchoring effect where early
replies set the direction of all subsequent discussion. But I'm very happy to 
have an on-list conversation about anything that you feel merits a
broader audience).




Re: Problems of Microtasks & Promises in Firefox

2017-12-08 Thread smaug

On 12/08/2017 09:21 AM, Bevis Tseng wrote:

Hi Everyone,

Microtasks are a fundamental piece of the event loop defined in the HTML
standard, allowing algorithms to run something that was just settled
asynchronously, asap, without the latency of enqueuing a new task. The most
well-known use cases of microtasks are Promises, await in async functions, and
MutationObservers.
After Bug 1193394 <https://bugzilla.mozilla.org/show_bug.cgi?id=1193394>
revealed the problem of Firefox performing microtasks differently from
what has been specified in the HTML standard,

Just a small correction: we have had the correct microtask handling since the
beginning of microtasks.
It has been used with MutationObserver.
Then later another, different mechanism, also called a microtask internally,
was added for Promises, but it did something very different from the original
microtask handling.



we've spent some time since the end
of last year pushing this forward by providing correct microtask
scheduling (thanks to Olli (:smaug) for the major work on the new scheduling
patch) and fixing the test failures in our implementation discovered by the
scheduling change.
The good news is that we have all WPT green now, plus we removed some expected
failures. However, we still have some failures
<https://bugzilla.mozilla.org/show_bug.cgi?id=1193394#c62> in chrome tests,
mochitest, bc, dt and mda.
I'd like to say thanks here again to Tooru(:arai) and Hiroyuki(:hiro) who
have helped to clean up a lot of failures in browser-chrome and
dom/animation.

However, as more tests were fixed, more concerns were raised, which prompted
me to write this mail to explain why you should understand more about the
microtask algorithm defined in the event loop of HTML and how differently it
is performed in current m-c, so that each module owner can spend some time
reviewing whether their implementation will be compatible with the new
scheduling, even if there might be no test failure captured on try when the
new scheduler is applied.

You can read the attached slides to understand more about the difference:

Microtask checkpoints
<https://docs.google.com/a/mozilla.com/presentation/d/1momsC3suU8m-CrdZyYD_6QATATehjzZHbkGmL6KsmSk/edit?usp=drivesdk>

In short, inside Firefox,
1. There are 2 queues for different types of microtasks instead of a single
queue:
- One is for MutationObservers while the other is for Promise callbacks.
- The Promise one is performed after the Mutation one is done.
2. For the Promise one,
a. we don't perform microtasks at the end of a JS callback or an event
listener, but only at the end of the current task.
b. there is no reentrancy check when performing a microtask checkpoint.
3. Some implementations heavily rely on the execution order of the current
scheduling between MutationObservers and Promise callbacks.

Hence, everyone shall be aware that
1. It shall be possible to reenter your implementation immediately from a JS
callback or an event listener, instead of it being accessed only at the end of
the current task, especially if your test cases are written only with promises
or *await* in async functions, without awareness of the reentrancy problems.
2. There is no particular order between mutation observers and promise
callbacks. They shall be consumed in FCFS order in a single queue (see the
sketch below).
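
A small sketch of the ordering difference (per the spec, both callbacks end up
in one FIFO queue in the order in which they were queued):

  const el = document.createElement('div');
  new MutationObserver(() => console.log('mutation observer'))
    .observe(el, { attributes: true });

  Promise.resolve().then(() => console.log('promise')); // queued first
  el.setAttribute('data-x', '1');                       // queued second

  // Spec / new scheduler: logs "promise", then "mutation observer".
  // Old Gecko scheduler: "mutation observer" first, because its queue was
  // drained before the Promise queue at the end of the task.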

Some people might ask why we didn't land this fix to m-c directly, or land it
but disable it by default to fix things gradually:
1. We have done some experiments and we found that replacing the scheduler
is risky: it causes several crashes and broken functionality due to
expectations about the old scheduling behavior.
2. In addition, it makes the new scheduling unmaintainable and unreadable
if we use the new scheduler to simulate the old behavior. (See bug 1420816
<https://bugzilla.mozilla.org/show_bug.cgi?id=1420816#c9>)
3. We should fix them as soon as we can instead of fixing them gradually.
Otherwise, without test coverage before landing this main patch, we expect
more failures to be suppressed while adding new features, or more WPT
failures contributed by other vendors which would be expected to pass with the
correct scheduling approach.

We still have several failures open without an assignee yet, and I will keep
updating the rebased patches with try results weekly for tracking:
Bug 1414170 <https://bugzilla.mozilla.org/show_bug.cgi?id=1414170>: Fix
failures in toolkit/components/extensions/test/mochitest/ that rely on
non-conformant Promise handling
Bug 1414176 <https://bugzilla.mozilla.org/show_bug.cgi?id=1414176>: Fix
failing WebRTC tests relying on non-conformant Promise handling
Bug 1414177 <https://bugzilla.mozilla.org/show_bug.cgi?id=1414177>: Fix
failed tests in devtools which rely on non-conformant Promise handling
Bug 1415781 <https://bugzilla.mozilla.org/show_bug.cgi?id=1415781>: Fix
failures in layout/style/test/chrome/ relying on non-conformant Promise
handling
Bug 1415787 <https://bugzilla.mozilla.org/sh

Re: Hiding 'new' statements - Good or Evil?

2017-11-28 Thread smaug

On 11/28/2017 06:33 AM, Boris Zbarsky wrote:

On 11/27/17 7:45 PM, Eric Rescorla wrote:

As for the lifetime question, can you elaborate on the scenario you are
concerned about.


Olli may have a different concern, but I'm thinking something like this:

  for (auto foo : myFoos) {
    foo->bar();
  }



That was pretty much what I had in mind.
Though, using auto without range-for, so just
auto foo = getFoo();
foo->bar(); // is this safe?




where bar() can run arbitrary script.  Is "foo" held alive across that call?  
Who knows; you have to go read the definition of the iterators on the
type of myFoos to find out.

One possible answer is that the right solution for this type of issue is the 
MOZ_CAN_RUN_SCRIPT static analysis annotation on bar(), which will make
this code not compile if the type of "foo" is a raw pointer. But this
annotation has only been added to a few functions in our codebase so far, and we'll
see how well we manage at adding it to more.  We have a _lot_ of stuff in our 
codebase that can run random script.  :(

-Boris




Re: Hiding 'new' statements - Good or Evil?

2017-11-27 Thread smaug

On 11/28/2017 12:53 AM, Jeff Gilbert wrote:

ranged-for issues are the same as those for doing manual iteration,

It is not, in the case where you iterate using
for (i = 0; i < foo.length(); ++i)
And that is the case which has often been converted to ranged-for, with bad
results.

And auto makes code reading harder. It hides important information like 
lifetime management.
It happens easily with auto that one doesn't even start to think whether 
nsCOMPtr/RefPtr should be used there.

I'm saying this all with my reviewer's hat on, after reading code which got 
committed to m-c with proper r+
(even from experienced engineers).





and your complaints about auto are caused by deficient code/design
review.

The further we deviate from standards, the harder it is for
contributors (and not just new ones) to do the right thing. The
default should be to move towards the standard, not cringe back from
it. Sure, prove it out. But we really don't need more moz-specific
constructs. We should choose our deviations carefully.

On Mon, Nov 27, 2017 at 3:24 AM, smaug <sm...@welho.com> wrote:

On 11/27/2017 01:05 PM, Nicolas B. Pierron wrote:


On 11/26/2017 12:45 AM, smaug wrote:


On 11/24/2017 06:35 PM, Eric Rescorla wrote:


On Thu, Nov 23, 2017 at 4:00 PM, smaug <sm...@welho.com> wrote:


On 11/23/2017 11:54 PM, Botond Ballo wrote:


I think it makes sense to hide a 'new' call in a Make* function when
you're writing an abstraction that handles allocation *and*
deallocation.

So MakeUnique makes sense, because UniquePtr takes ownership of the
allocated object, and will deallocate it in the destructor. MakeRefPtr
would make sense for the same reason.


I almost agree with this, but, all these Make* variants hide the
information that they are allocating,
and since allocation is slow, it is usually better to know when
allocation
happens.
I guess I'd prefer UniquePtr::New() over MakeUnique to be more clear
about
the functionality.



This seems like a reasonable argument in isolation, but I think it's
more
important to mirror the standard C++ mechanisms and C++-14 already
defines
std::make_unique.



Is it? Isn't it more important to write less error-prone code and code
where the reader
sees the possible performance issues?
Recent-ish C++ additions have brought quite a few bad concepts which we
shouldn't use
without considering whether they actually fit into our needs.
Something being new and hip and cool doesn't mean it is actually a good
thing.



One example is that the standard has a single std::function type which
always allocates, and does not report failures because we disable exceptions.

In this particular case I would recommend to have a stack-function wrapper
and a heap-function wrapper. By diverging from the standard we could avoid
bugs such as refusing stack-references for lambda being wrapped in
heap-function (use-after-free/unwind), and adding extra checks that
stack-function
wrapper can only be fields of stack-only classes.  C++ compilers have no
way to understand the code enough to provide similar life-time errors.


(This broken record keeps reminding about the security issues and memory
leaks auto and ranged-for have caused.)



Do you have pointers? Is this basically when we have a non-standard
implementation of begin/end methods which are doing allocation?



ranged-for causes issues when you modify the data structure while iterating.
This used to be bad in the nsTArray case at least; now we just crash safely.
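
To make that concrete, a minimal sketch of the hazard (Observer and Notify()
are hypothetical names):

  void NotifyAll(nsTArray<RefPtr<Observer>>& observers) {
    for (RefPtr<Observer>& obs : observers) {
      // If Notify() calls back into code that appends to or removes from
      // 'observers', this range-for iteration is invalidated; older nsTArray
      // code could end up reading freed memory here, whereas nowadays the
      // mutation is detected and we crash safely instead.
      obs->Notify();
    }
  }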


And auto hides the type, so reading the code and understanding lifetime
management becomes harder.
This has caused security sensitive crashes and leaks. (And there was some
other type of error too recently, but it escapes my mind now)








As a post-script, given that we now can use C++-14, can we globally
replace
the MFBT clones of C++-14 mechanisms with the standard ones?

-Ekr





-Olli




But in cases where a library facility is not taking ownership of the
object, and thus the user will need to write an explicit 'delete', it
makes sense to require that the user also write an explicit 'new', for
symmetry.

NotNull is a bit of a weird case because it can wrap either a raw
pointer or a smart pointer, so it doesn't clearly fit into either
category. Perhaps it would make sense for MakeNotNull to only be
usable with smart pointers?

Cheers,
Botond



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform








___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Intent to implement (again): Shadow DOM

2017-11-27 Thread smaug

This is basically an after-the-fact notification that
we're in the process of implementing Shadow DOM again, this time v1 [1].
While doing this, the v0 implementation, which was never exposed to the web, 
will be removed.
v1 is luckily way simpler, so this all should simplify various bits in DOM.

FF60 might be quite a realistic target to ship this, but there will be an 
intent-to-ship
before that.

Many parts of the spec (DOM) are stable, but there are still a couple of tricky 
issues in HTML, like
session history handling with shadow DOM. However, Chrome and Safari are 
shipping v1 already.

Devtools will be able to see into the Shadow DOM.

Currently the work is under the old pref "dom.webcomponents.enabled"
but perhaps we should change that, so that people who were testing v0 don't get
v1 APIs.


-Olli


[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1405934
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement (again): Shadow DOM

2017-11-27 Thread smaug

On 11/27/2017 02:20 PM, smaug wrote:

This is basically an after the fact notification that
we're in progress of implementing Shadow DOM again, this time v1[1].
While doing this, the v0 implementation, which was never exposed to the web, 
will be removed.
v1 is luckily way simpler, so this all should simplify various bits in DOM.

FF60 might be quite realistic target to ship this, but there will be 
intent-to-ship
before that.

Many parts of the spec (DOM) is stable, but there are still couple of tricky 
issues in HTML, like
session history handling with shadow DOM. However Chrome and Safari are 
shipping v1 already.

Devtools will be able to see into the Shadow DOM.

Currently the work is under the old pref "dom.webcomponents.enabled"
but perhaps we should change that, so that people who were testing v0 don't get
v1 APIs.


-Olli


[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1405934


https://bugzilla.mozilla.org/show_bug.cgi?id=1205323
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Hiding 'new' statements - Good or Evil?

2017-11-27 Thread smaug

On 11/27/2017 01:05 PM, Nicolas B. Pierron wrote:

On 11/26/2017 12:45 AM, smaug wrote:

On 11/24/2017 06:35 PM, Eric Rescorla wrote:

On Thu, Nov 23, 2017 at 4:00 PM, smaug <sm...@welho.com> wrote:


On 11/23/2017 11:54 PM, Botond Ballo wrote:


I think it makes sense to hide a 'new' call in a Make* function when
you're writing an abstraction that handles allocation *and*
deallocation.

So MakeUnique makes sense, because UniquePtr takes ownership of the
allocated object, and will deallocate it in the destructor. MakeRefPtr
would make sense for the same reason.


I almost agree with this, but, all these Make* variants hide the
information that they are allocating,
and since allocation is slow, it is usually better to know when allocation
happens.
I guess I'd prefer UniquePtr::New() over MakeUnique to be more clear about
the functionality.



This seems like a reasonable argument in isolation, but I think it's more
important to mirror the standard C++ mechanisms and C++-14 already defines
std::make_unique.


Is it? Isn't it more important to write less error-prone code and code where 
the reader
sees the possible performance issues?
Recent-ish C++ additions have brought quite a few bad concepts which we 
shouldn't use
without considering whether they actually fit into our needs.
Something being new and hip and cool doesn't mean it is actually a good thing.


One example is that the standard has a single std::function type which always 
allocates, and does not report failures because we disable exceptions.

In this particular case I would recommend to have a stack-function wrapper and 
a heap-function wrapper. By diverging from the standard we could avoid
bugs such as refusing stack-references for lambda being wrapped in 
heap-function (use-after-free/unwind), and adding extra checks that 
stack-function
wrapper can only be fields of stack-only classes.  C++ compilers have no way to 
understand the code enough to provide similar life-time errors.
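
A minimal sketch of the use-after-scope hazard described above (all names are
hypothetical; this is plain std::function, not an existing Gecko wrapper):

  #include <functional>

  std::function<void()> gCallback;  // heap-owning, outlives any caller

  void Register() {
    int counter = 0;
    // The lambda captures 'counter' by reference. Storing it in a
    // heap-lifetime std::function lets it run after Register() returns,
    // at which point the reference dangles.
    gCallback = [&counter] { ++counter; };
  }

  void Later() {
    gCallback();  // use-after-scope: writes through a dangling reference
  }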


(This broken record keeps reminding about the security issues and memory leaks 
auto and ranged-for have caused.)


Do you have pointers? Is this basically when we have a non-standard 
implementation of begin/end methods which are doing allocation?


ranged-for causes issues when you modify the data structure while iterating.
This used to be bad in the nsTArray case at least; now we just crash safely.


And auto hides the type, so reading the code and understanding lifetime 
management becomes harder.
This has caused security sensitive crashes and leaks. (And there was some other 
type of error too recently, but it escapes my mind now)







As a post-script, given that we now can use C++-14, can we globally replace
the MFBT clones of C++-14 mechanisms with the standard ones?

-Ekr





-Olli




But in cases where a library facility is not taking ownership of the
object, and thus the user will need to write an explicit 'delete', it
makes sense to require that the user also write an explicit 'new', for
symmetry.

NotNull is a bit of a weird case because it can wrap either a raw
pointer or a smart pointer, so it doesn't clearly fit into either
category. Perhaps it would make sense for MakeNotNull to only be
usable with smart pointers?

Cheers,
Botond



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform








___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Hiding 'new' statements - Good or Evil?

2017-11-25 Thread smaug

On 11/24/2017 06:35 PM, Eric Rescorla wrote:

On Thu, Nov 23, 2017 at 4:00 PM, smaug <sm...@welho.com> wrote:


On 11/23/2017 11:54 PM, Botond Ballo wrote:


I think it makes sense to hide a 'new' call in a Make* function when
you're writing an abstraction that handles allocation *and*
deallocation.

So MakeUnique makes sense, because UniquePtr takes ownership of the
allocated object, and will deallocate it in the destructor. MakeRefPtr
would make sense for the same reason.


I almost agree with this, but, all these Make* variants hide the
information that they are allocating,
and since allocation is slow, it is usually better to know when allocation
happens.
I guess I'd prefer UniquePtr::New() over MakeUnique to be more clear about
the functionality.



This seems like a reasonable argument in isolation, but I think it's more
important to mirror the standard C++ mechanisms and C++-14 already defines
std::make_unique.


Is it? Isn't it more important to write less error-prone code and code where 
the reader
sees the possible performance issues?
Recent-ish C++ additions have brought quite a few bad concepts which we 
shouldn't use
without considering whether they actually fit into our needs.
Something being new and hip and cool doesn't mean it is actually a good thing.

(This broken record keeps reminding about the security issues and memory leaks 
auto and ranged-for have caused.)




As a post-script, given that we now can use C++-14, can we globally replace
the MFBT clones of C++-14 mechanisms with the standard ones?

-Ekr





-Olli




But in cases where a library facility is not taking ownership of the
object, and thus the user will need to write an explicit 'delete', it
makes sense to require that the user also write an explicit 'new', for
symmetry.

NotNull is a bit of a weird case because it can wrap either a raw
pointer or a smart pointer, so it doesn't clearly fit into either
category. Perhaps it would make sense for MakeNotNull to only be
usable with smart pointers?

Cheers,
Botond



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Hiding 'new' statements - Good or Evil?

2017-11-23 Thread smaug

On 11/23/2017 11:54 PM, Botond Ballo wrote:

I think it makes sense to hide a 'new' call in a Make* function when
you're writing an abstraction that handles allocation *and*
deallocation.

So MakeUnique makes sense, because UniquePtr takes ownership of the
allocated object, and will deallocate it in the destructor. MakeRefPtr
would make sense for the same reason.

I almost agree with this, but all these Make* variants hide the information 
that they are allocating,
and since allocation is slow, it is usually better to know when allocation 
happens.
I guess I'd prefer UniquePtr::New() over MakeUnique, to be clearer about the 
functionality.
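
For reference, the two spellings being compared, as a sketch (Foo is a
hypothetical type; MakeUnique exists in MFBT today, while UniquePtr::New()
is the alternative suggested above and does not exist):

  #include "mozilla/UniquePtr.h"

  // The allocation is implied by the helper; the 'new' is hidden.
  auto a = mozilla::MakeUnique<Foo>(1, 2);

  // The allocation is spelled out; ownership is still taken immediately.
  mozilla::UniquePtr<Foo> b(new Foo(1, 2));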



-Olli




But in cases where a library facility is not taking ownership of the
object, and thus the user will need to write an explicit 'delete', it
makes sense to require that the user also write an explicit 'new', for
symmetry.

NotNull is a bit of a weird case because it can wrap either a raw
pointer or a smart pointer, so it doesn't clearly fit into either
category. Perhaps it would make sense for MakeNotNull to only be
usable with smart pointers?

Cheers,
Botond



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship: CSP Violation DOM Events

2017-11-17 Thread smaug

On 11/17/2017 12:55 AM, Chung-Sheng Fu wrote:

Content Security Policy suggests Security Policy Violation DOM Events [1].
In case any of the directives within a policy are violated, such a
SecurityPolicyViolationEvent is generated and sent out to a reporting
endpoint associated with the policy. We are working on implementing those
violation events here [2] and plan to ship them within Firefox 59.

Thanks,

Chung-Sheng Fu, Christoph Kerschbaumer

[1] https://w3c.github.io/webappsec-csp/#violation-events

[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1037335




Looks like there are quite a few spec issues still. Probably better to get 
those fixed first and then
update our implementation.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Website memory leaks

2017-11-02 Thread smaug

On 11/02/2017 09:58 PM, Kris Maglione wrote:

Related: I've been thinking for a long time that we need better tools for 
tracking what sites/usage patterns are responsible for the time we spend in
CC (and possibly GC, but I think that tends to be less of a problem).

I've noticed, in particular, that having multiple Amazon tabs open, and keeping 
them open for a long time, tends to lead to many, long CC pauses. But
I have no idea why, or how to even begin profiling it. And the only reason I 
even know/suspect that Amazon is the problem is that I've noticed a
strong pattern.


Have you filed a bug? Web sites do often leak, and by leak I mean keeping tons 
of such objects alive that they don't need anymore. If CC times go up
when a web site is leaking, it probably means a missing "black-bit-propagation" 
optimization.
Google Reader was a canonical example of a web page which leaked basically 
everything, and we did optimize things out from CC graph so that
at least CC times stayed low enough, even though memory usage went up.



Leaks certainly make the problem worse, but I'm not sure that they're all of the 
problem. Either way, I think being able to say "this compartment used n
milliseconds of CC time in the last minute" would be a pretty good way for us 
to pick out memory problems in particular web pages.


On Thu, Nov 02, 2017 at 10:34:22AM -0400, Randell Jesup wrote:

[Note: I'm a tab-hoarder - but that doesn't really cause this problem]

tl;dr: we should look at something (roughly) like the existing "page is
making your browser slow" dialog for website leaks.


Over the last several months (and maybe the last year), I've noticed an
increasing problem in websites: runaway leaks, leading to
a poorly-performing browser or OOM crashes.  Yes, these have *always*
occurred (people write bad code, shock!), but the incidence seems to
have gotten much worse 'recently' (anecdotally).

This may be the result of ads/ad networks (in some cases it provably
is), in other cases it's the sites themselves (like the 4M copies of
":DIV" that facebook was creating over a day if you left a message
visible, plus GB of related leaks (80K copies of many strings/objects).
Many of the pages causing these leaks are major sites, like nytimes.com,
washington post, cnn, arstechnica, Atlantic, New Yorker, etc.

However, regardless of who's making the error or the details, I've been
noticing that I'm having to "hunt for the bad tab" more and more
frequently (usually via about:memory, which users wouldn't
do/understand).  Multi-e10s helps a bit and some tabs won't degrade, but
also enables my system to get into physical-memory-pressure faster.  (The
machine I notice this on is Win10, 12GB ram, quad-core AMD 8xxx (older,
but 3.5GHz).  I'm running Nightly (32-bit) with one profile (for
facebook), and beta (64bit) with another profile).

While physical-memory pressure causes problems (including for the system
as a whole), the leaking can make Content processes unresponsive, to the
point where about:memory doesn't get data for them (or it's incomplete).
This makes it hard-to-impossible to fix the problem; I kill Content
processes by hand in that case - regular users would just force-close the
browser (and perhaps start Chrome instead of restarting.)  We see an
insanely high number of OOMs in the field; likely this is a major
contributing factor.  Chrome will *tend* to have less tabs impacted by
one leaking, though the leaking tab will still be ugly.

Sometimes the leak manifests simply as a leak (like the washington post
tab I just killed and got 3.5 of the 5GB in use by a content process
back).  Others (depending I assume on what is leaked and how it's kept
alive) cause a core (each) to be chewed doing continual GC/CC (or
processing events in app JS touching some insanely-long internal
structure), slowing the process (and system) to a crawl.


*Generally* these are caused by leaving a page up for a while - navigate
and leave, and you never notice it (part of why website developers don't
fix these, or perhaps don't care (much)).  But even walking away from a
machine with one or a couple of tabs, and coming back the next day can
present you with an unusable tab/browser, or a "this tab has crashed"
screen.


Hoping site owners (or worse, ad producers) will fix their leaks is a
losing game, though we should encourage it and offer tools where
possible.  However, we need a broader solution (or at least tools) for
dealing with this for users.


I don't have a magic-bullet solution, and I'm sure there isn't one given
JS and GC as a core of browsers.  I do think we need to talk about it
(and perhaps not just Mozilla), and find some way to help users here.


One half-baked idea is to provide a UI intervention similar to what we
do on JS that runs too-long, and ask the user if they want to freeze the
tab (or better yet in this case, reload it).  If we trigger before the
side-effects get out of hand, freezing may be ok.  We should also
identify the 

Re: Website memory leaks

2017-11-02 Thread smaug

On 11/02/2017 10:01 PM, Kris Maglione wrote:

On Thu, Nov 02, 2017 at 05:37:30PM +0200, smaug wrote:

This has been an issue forever, and there aren't really good tools on any 
browser, as far as
I know, for web devs to debug their leaks.
Internally we do have useful data (CC and GC graphs and such), but it would need 
quite some UX skill to design a good
UI to deal with leaks. Also, the data to deal with is often massive, so the 
tool should be implemented with that in mind.


We do have memory tools in devtools now, and the dominator tree

We've had that for quite some time, but that is missing the native side of the 
object graph, which is often rather crucial.
And I think it is hard to query various things from the tool.



in particular can provide some useful data for tracking down leaks. But those 
tools
aren't especially well-maintained at the moment, and I honestly think they're 
too flaky at this point to recommend to webdevs for general use :(

It would be nice if we could prioritize them again.


On 11/02/2017 04:34 PM, Randell Jesup wrote:

[Note: I'm a tab-hoarder - but that doesn't really cause this problem]

tl;dr: we should look at something (roughly) like the existing "page is
making your browser slow" dialog for website leaks.


Over the last several months (and maybe the last year), I've noticed an
increasing problem in websites: runaway leaks, leading to
a poorly-performing browser or OOM crashes.  Yes, these have *always*
occurred (people write bad code, shock!), but the incidence seems to
have gotten much worse 'recently' (anecdotally).

This may be the result of ads/ad networks (in some cases it provably
is), in other cases it's the sites themselves (like the 4M copies of
":DIV" that facebook was creating over a day if you left a message
visible, plus GB of related leaks (80K copies of many strings/objects).
Many of the pages causing these leaks are major sites, like nytimes.com,
washington post, cnn, arstechnica, Atlantic, New Yorker, etc.

However, regardless of who's making the error or the details, I've been
noticing that I'm having to "hunt for the bad tab" more and more
frequently (usually via about:memory, which users wouldn't
do/understand).  Multi-e10s helps a bit and some tabs won't degrade, but
also enables my system to get into physical-memory-pressure faster.  (The
machine I notice this on is Win10, 12GB ram, quad-core AMD 8xxx (older,
but 3.5GHz).  I'm running Nightly (32-bit) with one profile (for
facebook), and beta (64bit) with another profile).

While physical-memory pressure causes problems (including for the system
as a whole), the leaking can make Content processes unresponsive, to the
point where about:memory doesn't get data for them (or it's incomplete).
This makes it hard-to-impossible to fix the problem; I kill Content
processes by hand in that case - regular users would just force-close the
browser (and perhaps start Chrome instead of restarting.)  We see an
insanely high number of OOMs in the field; likely this is a major
contributing factor.  Chrome will *tend* to have less tabs impacted by
one leaking, though the leaking tab will still be ugly.

Sometimes the leak manifests simply as a leak (like the washington post
tab I just killed and got 3.5 of the 5GB in use by a content process
back).  Others (depending I assume on what is leaked and how it's kept
alive) cause a core (each) to be chewed doing continual GC/CC (or
processing events in app JS touching some insanely-long internal
structure), slowing the process (and system) to a crawl.


*Generally* these are caused by leaving a page up for a while - navigate
and leave, and you never notice it (part of why website developers don't
fix these, or perhaps don't care (much)).  But even walking away from a
machine with one or a couple of tabs, and coming back the next day can
present you with an unusable tab/browser, or a "this tab has crashed"
screen.


Hoping site owners (or worse, ad producers) will fix their leaks is a
losing game, though we should encourage it and offer tools where
possible.  However, we need a broader solution (or at least tools) for
dealing with this for users.


I don't have a magic-bullet solution, and I'm sure there isn't one given
JS and GC as a core of browsers.  I do think we need to talk about it
(and perhaps not just Mozilla), and find some way to help users here.


One half-baked idea is to provide a UI intervention similar to what we
do on JS that runs too-long, and ask the user if they want to freeze the
tab (or better yet in this case, reload it).  If we trigger before the
side-effects get out of hand, freezing may be ok.  We should also
identify the tab (in the message/popup, and visually in the tab bar - we
don't do either today).  This solution has issues (though freezing tabs
now that "slow down your browser" does too, but it's still useful
overall).

We can also look at tools to characterize site leaks or identif

Re: Website memory leaks

2017-11-02 Thread smaug

This has been an issue forever, and there aren't really good tools on any 
browser, as far as
I know, for web devs to debug their leaks.
Internally we do have useful data (CC and GC graphs and such), but it would need 
quite some UX skill to design a good
UI to deal with leaks. Also, the data to deal with is often massive, so the 
tool should be implemented with that in mind.

Ads not destroying performance is a bit of a different issue in general, and is
somewhat part of the Quantum DOM and iframe throttling work. We don't have all of 
that enabled yet.
But sure, ads using too much memory is something to think about some more.

We do have pids in e10s-Nightly tab tooltips. There was just a regression for 
a couple of days.

On 11/02/2017 04:34 PM, Randell Jesup wrote:

[Note: I'm a tab-hoarder - but that doesn't really cause this problem]

tl;dr: we should look at something (roughly) like the existing "page is
making your browser slow" dialog for website leaks.


Over the last several months (and maybe the last year), I've noticed an
increasing problem in websites: runaway leaks, leading to
a poorly-performing browser or OOM crashes.  Yes, these have *always*
occurred (people write bad code, shock!), but the incidence seems to
have gotten much worse 'recently' (anecdotally).

This may be the result of ads/ad networks (in some cases it provably
is), in other cases it's the sites themselves (like the 4M copies of
":DIV" that facebook was creating over a day if you left a message
visible, plus GB of related leaks (80K copies of many strings/objects).
Many of the pages causing these leaks are major sites, like nytimes.com,
washington post, cnn, arstechnica, Atlantic, New Yorker, etc.

However, regardless of who's making the error or the details, I've been
noticing that I'm having to "hunt for the bad tab" more and more
frequently (usually via about:memory, which users wouldn't
do/understand).  Multi-e10s helps a bit and some tabs won't degrade, but
also enables my system to get into physical-memory-pressure faster.  (The
machine I notice this on is Win10, 12GB ram, quad-core AMD 8xxx (older,
but 3.5GHz).  I'm running Nightly (32-bit) with one profile (for
facebook), and beta (64bit) with another profile).

While physical-memory pressure causes problems (including for the system
as a whole), the leaking can make Content processes unresponsive, to the
point where about:memory doesn't get data for them (or it's incomplete).
This makes it hard-to-impossible to fix the problem; I kill Content
processes by hand in that case - regular users would just force-close the
browser (and perhaps start Chrome instead of restarting.)  We see an
insanely high number of OOMs in the field; likely this is a major
contributing factor.  Chrome will *tend* to have less tabs impacted by
one leaking, though the leaking tab will still be ugly.

Sometimes the leak manifests simply as a leak (like the washington post
tab I just killed and got 3.5 of the 5GB in use by a content process
back).  Others (depending I assume on what is leaked and how it's kept
alive) cause a core (each) to be chewed doing continual GC/CC (or
processing events in app JS touching some insanely-long internal
structure), slowing the process (and system) to a crawl.


*Generally* these are caused by leaving a page up for a while - navigate
and leave, and you never notice it (part of why website developers don't
fix these, or perhaps don't care (much)).  But even walking away from a
machine with one or a couple of tabs, and coming back the next day can
present you with an unusable tab/browser, or a "this tab has crashed"
screen.


Hoping site owners (or worse, ad producers) will fix their leaks is a
losing game, though we should encourage it and offer tools where
possible.  However, we need a broader solution (or at least tools) for
dealing with this for users.


I don't have a magic-bullet solution, and I'm sure there isn't one given
JS and GC as a core of browsers.  I do think we need to talk about it
(and perhaps not just Mozilla), and find some way to help users here.


One half-baked idea is to provide a UI intervention similar to what we
do on JS that runs too-long, and ask the user if they want to freeze the
tab (or better yet in this case, reload it).  If we trigger before the
side-effects get out of hand, freezing may be ok.  We should also
identify the tab (in the message/popup, and visually in the tab bar - we
don't do either today).  This solution has issues (though freezing tabs
now that "slow down your browser" does too, but it's still useful
overall).

We can also look at tools to characterize site leaks or identify
patterns that point to a leak before it gets out of hand.  Also tools
that help website builders identify if their pages are leaking in the
field (memory-use triggers? within-tab about:memory dumps available to
the page?)

Perhaps we can also push to limit memory use (CPU use??) in embedded
ads/restricted-iframes/etc, so sites can stop ads from 

Re: Visual Studio 2017 coming soon

2017-10-30 Thread smaug

On 10/30/2017 10:03 PM, Kris Maglione wrote:

On Mon, Oct 30, 2017 at 08:28:39AM -0700, Jim Blandy wrote:

Okay, this is half the argument. The second half would be:

- Does auto cause such mistakes more often than it prevents them? The
benefit claimed for auto is that it usually makes code more legible.
Hopefully that prevents mistakes, on the balance.


My feeling is that these features generally prevent more errors than they cause.


My gut feeling, with my reviewer's hat on, is that I haven't really seen 
them prevent issues, but they have definitely caused some.



The case for `auto` probably isn't as strong as the cases for other features, 
but I don't think the case against it is as strong either. And there are
places where, without `auto`, the extra qualified type noise spreads a simple 
assignment across multiple, dense lines, and makes the code much more
difficult to
follow. And makes refactoring the specific types of (e.g.,) smart pointer 
arrays more complicated, and somewhat more dangerous.

Also, I'm quite looking forward to the time when I can write:

 GetString(char*, nsContentUtils::PropertiesFile aBundle = 
auto::eBRAND_PROPERTIES)

rather than:

 GetString(char*, nsContentUtils::PropertiesFile aBundle = 
nsContentUtils::PropertiesFile::eBRAND_PROPERTIES)


- Is ranged-for more prone to iterator invalidation errors than the older
form? I believe I've seen .Length() calls hoisted out of old-form loop
conditions pretty frequently. The advantage of ranged-for is claimed to be
that it depends on the operand's iteration API, instead of requiring the
programmer to invent an iteration technique each time they write a loop.


Again, my feeling here is that the opposite is true. If we have a single, de 
facto way of writing for loops, it makes it much easier to make sure we
have the correct behavior everywhere. The alternative is separate, ad-hoc 
implementations everywhere we iterate over a collection, many of which will
make their own sets of mistakes.
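
A sketch of the kind of ad-hoc iteration mistake being alluded to (Registry,
mObservers, and Notify() are hypothetical names):

  // Length() hoisted out of the condition: if Notify() removes entries,
  // 'len' is stale and the loop indexes past the new end of the array.
  void Registry::NotifyAll() {
    uint32_t len = mObservers.Length();
    for (uint32_t i = 0; i < len; ++i) {
      mObservers[i]->Notify();
    }
  }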


- Are closures more prone to ownership mistakes than the pre-closure
technique? How does this compare with their benefits to legibility?


I think this is the clearest case where the benefits far outweigh the risks. 
There are definitely easy-to-overlook lambda capture footguns, but our
static analysis tools are good at preventing those. But there are also huge 
benefits.

The ScopeExit class is the best example I can think of. Before we had that 
helper, in the best cases, we wound up writing tons of special-purpose RAII
helpers that were hard to think about and maintain. In the worst cases, we 
wound up with tons of code that either did not handle early return
correctly at all, or added fragile, duplicated cleanup code in every failure 
check `if`. ScopeExit makes our code much safer and more maintainable.
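
For reference, a minimal sketch of the ScopeExit pattern being described
(mozilla/ScopeExit.h is real; DoWork and WriteHeader are hypothetical):

  #include "mozilla/ScopeExit.h"
  #include <cstdio>

  nsresult DoWork() {
    FILE* f = fopen("state.tmp", "w");
    if (!f) {
      return NS_ERROR_FAILURE;
    }
    // Runs fclose() on every exit path, including the early return below.
    auto closeFile = mozilla::MakeScopeExit([&] { fclose(f); });

    if (!WriteHeader(f)) {
      return NS_ERROR_FAILURE;  // the file is still closed by closeFile
    }
    return NS_OK;
  }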

And in the more general case, our pre-lambda code tends to wind up with logic 
and data ownership spread across multiple methods and special-purpose
classes (e.g., Runnables), that often get separated in the source, and become 
difficult to follow and reason about. Post-lambda, we have abstractions
like MozPromise that make async code much easier to follow, and the data it 
owns much more obvious.

For example, this code that I wrote fairly recently:

   RefPtr<StreamFilterParent> self(this);
   RunOnMainThread(FUNC, [=] {
     self->mChannel->Resume();

     RunOnActorThread(FUNC, [=] {
       if (self->IPCActive()) {
         self->CheckResult(self->SendResumed());
       }
     });
   });

Pre-lambda, in the best case would have expanded to something like:

IPCResult
StreamFilterParent::RecvResume()
{
  RunOnMainThread(
    NewRunnableMethod("StreamFilterParent::Resume1",
                      this,
                      &StreamFilterParent::Resume1));
  return IPC_OK();
}

void
StreamFilterParent::Resume1()
{
  mChannel->Resume();
  RunOnActorThread(
    NewRunnableMethod("StreamFilterParent::Resume2",
                      this,
                      &StreamFilterParent::Resume2));
}

void
StreamFilterParent::Resume2()
{
  if (IPCActive()) {
    CheckResult(SendResumed());
  }
}

Which is much more difficult to follow (which function runs on which thread? 
what data is kept alive across the entire event chain? where do we check
for errors?) and maintain (what happens when I want to add a new event in the 
middle of the chain? do I renumber everything and add a new method
declaration to the header?). And that's for a fairly simple case where the only 
object held alive is `this`.


When evaluating the impact of new features, we should not let the
familiarity of the mistakes we've been making in C++98 for twenty years
cause us to focus only on the risks from change. That misjudgment would
hurt the quality of the code.


+1


On Mon, Oct 30, 2017 at 8:03 AM, smaug <sm...@welho.com> wrote:


On 10/30/2017 04:52 PM, Simon Sapin wrote:


On 30/10/17 15:05, smaug wrote:


And let's be careful with the new C++ features,

Re: Visual Studio 2017 coming soon

2017-10-30 Thread smaug

On 10/30/2017 04:52 PM, Simon Sapin wrote:

On 30/10/17 15:05, smaug wrote:

And let's be careful with the new C++ features, pretty please. We
managed to not be careful when we started to use auto, or ranged-for
or lambdas. I'd prefer to not fix more security critical bugs or
memory leaks just because of fancy hip and cool language features ;)


Careful how? How do new language features lead to security bugs? Is new 
compiler code not as well tested and could have miscompiles? Are specific
features easy to misuse?




With auto we've managed to hide the ownership of some objects from the 
reader/reviewer (and I guess also from the patch author),
and this has led both to security issues and memory leaks.

Ranged-for led to security-critical crashes when we converted some old-style
for (i = 0; i < array.Length(); ++i) loops to use it, since ranged-for doesn't play 
well when the array changes underneath you.
These days we crash safely there.

With lambdas, understanding who owns what becomes harder, and before some 
checks were added we had (for a rather short while, I think) issues where
a raw pointer to a refcounted object was captured in a lambda and the 
lambda was then dispatched to the event loop.
Nothing guaranteed that the captured object stayed alive.
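
A minimal sketch of that hazard and the usual fix (Widget, Update(), and the
Schedule* functions are hypothetical; the dispatch helpers are the ordinary
Gecko ones):

  #include "mozilla/RefPtr.h"
  #include "nsISupportsImpl.h"
  #include "nsThreadUtils.h"

  class Widget final {
   public:
    NS_INLINE_DECL_REFCOUNTING(Widget)
    void Update() { /* ... */ }
   private:
    ~Widget() = default;
  };

  void ScheduleBad(Widget* aWidget) {
    // Captures a raw pointer: nothing keeps the Widget alive until the
    // runnable actually runs, so it may already have been destroyed.
    NS_DispatchToMainThread(NS_NewRunnableFunction("ScheduleBad", [aWidget] {
      aWidget->Update();
    }));
  }

  void ScheduleGood(Widget* aWidget) {
    // Capturing a RefPtr adds a strong reference for the lambda's lifetime.
    RefPtr<Widget> widget = aWidget;
    NS_DispatchToMainThread(NS_NewRunnableFunction("ScheduleGood", [widget] {
      widget->Update();
    }));
  }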

Basically, some "new" features have hidden important aspects of the lifetime 
management of objects, and by
doing that, made it easier to write broken code and harder by the reviewers to 
catch the mistakes.



-Olli
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Visual Studio 2017 coming soon

2017-10-30 Thread smaug

And let's be careful with the new C++ features, pretty please.
We managed to not be careful when we started to use auto, or ranged-for or 
lambdas.
I'd prefer to not fix more security critical bugs or memory leaks just because 
of fancy hip and cool
language features ;)


-Olli



On 10/30/2017 05:27 AM, Jim Blandy wrote:

How will this affect the matrix of specific C++ features we can use?
https://developer.mozilla.org/en-US/docs/Using_CXX_in_Mozilla_code

(At the moment I'm dying for generic lambdas, which are C++14. I'd been
using std::function as a workaround, but I also need control over the
allocation policy, which std::function no longer offers.)


On Wed, Oct 25, 2017 at 2:48 PM, David Major  wrote:


I'm planning to move production Windows builds to VS2017 (15.4.1) in bug
1408789.

VS2017 has optimizer improvements that produce faster code. I've seen 3-6%
improvement on Speedometer. There is also increased support for C++14 and
C++17 language features:
https://docs.microsoft.com/en-us/cpp/visual-cpp-language-conformance

These days we tend not to support older VS for too long, so after some
transition period you can probably expect that VS2017 will be required to
build locally, ifdefs can be removed, etc. VS2017 Community Edition is a
free download and it can coexist with previous compilers. Installation
instructions are at:
https://developer.mozilla.org/en-US/docs/Mozilla/Developer_
guide/Build_Instructions/Windows_Prerequisites#Visual_Studio_2017

If you have concerns, please talk to me or visit the bug. Thanks!
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: De-XBL Plans

2017-10-22 Thread smaug

On 10/21/2017 01:14 PM, Philipp Kewisch wrote:

On 10/20/17 7:47 PM, Dave Townsend wrote:

For some time now we've been talking about moving away from XUL and XBL.
The browser architecture team has been hard at work figuring out how to go
about doing that and we're ready to share the first of our proposals more
widely. We have developed a plan to remove XBL from Firefox. It's been
through a successful design review with some of the key engineers and now
is the time for more comments if you have them. We're planning to start some
of the work this quarter with it really ramping up next quarter.

Take a look at the plan

and let us know what you think. There are a couple of areas where we are
still investigating concerns:


I very much welcome this plan, especially the fact that Web Components
is part of the replacement. Last time I asked, it sounded like Web
Components was still on the way of being reimplemented and pending some
spec work. In following the webcomponents bug I see there has been
constant progress.

Nevertheless, I'd appreciate if someone could comment on how far along
the Web Components implementation is. Is it now following the agreed
upon version of the spec (I suspect yes), and is the implementation
stable enough that you would consider it ready to ship? What is the next
big milestone for Web Components?


The custom elements implementation is quite close to being ready, and the spec should 
be
reasonably stable too. There are some bugs and some performance work left in the 
implementation.

Shadow DOM is further behind, especially the implementation, but the specs also 
tend to
have issues with it, so getting an interoperable implementation with other 
browsers will be a tad
harder. I'm still optimistically thinking we could have Shadow DOM (for the web) 
in Nightly by the end of the year -
it depends on how much we can focus on it during November.






Thunderbird/Lightning uses a lot of XBL components as well that I would
love to get rid of, I am looking forward to making things more
compatible with the future.

Thanks,
Philipp





___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: De-XBL Plans

2017-10-22 Thread smaug

On 10/21/2017 11:45 PM, Yura Zenevich wrote:

I would also like to bring to the team's attention another force worth
being on the radar (in terms of "forces on the system") - accessibility.
One theme that seems to consistently happen with re-writes such as the ones
from xul to React is regressions in terms of accessibility of newly
re-written components.


Yeah, this is important. I would imagine custom elements in XUL (which might 
then internally use shadow DOM too) would implement the same
a11y interfaces that XBL implements now.




thanks,
yura

On Sat, Oct 21, 2017 at 6:14 AM, Philipp Kewisch  wrote:


On 10/20/17 7:47 PM, Dave Townsend wrote:

For some time now we've been talking about moving away from XUL and XBL.
The browser architecture team has been hard at work figuring out how to

go

about doing that and we're ready to share the first of our proposals more
widely. We have developed a plan to remove XBL from Firefox. It's been
through a successful design review with some of the key engineers and now
is the time for more comments if you have them. We're planning to start

some

of the work this quarter with it really ramping up next quarter.

Take a look at the plan


and let us know what you think. There are a couple of areas where we are
still investigating concerns:


I very much welcome this plan, especially the fact that Web Components
is part of the replacement. Last time I asked, it sounded like Web
Components was still on the way of being reimplemented and pending some
spec work. In following the webcomponents bug I see there has been
constant progress.

Nevertheless, I'd appreciate if someone could comment on how far along
the Web Components implementation is. Is it now following the agreed
upon version of the spec (I suspect yes), and is the implementation
stable enough that you would consider it ready to ship? What is the next
big milestone for Web Components?

Thunderbird/Lightning uses a lot of XBL components as well that I would
love to get rid of, I am looking forward to making things more
compatible with the future.

Thanks,
Philipp



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Reviews for in-tree documentation (was: Builds docs on MDN)

2017-10-19 Thread smaug

Sounds very reasonable.

(Hoping the r=documentation flag won't be misused ;))


On 10/19/2017 04:37 PM, Andreas Tolfsen wrote:

Some time ago there was a discussion on dev-builds@ regarding
the state of our in-tree source code documentation.  The main
focus was that MDN, moving forward, will mainly revolve around web
platform documentation and would actively start de-emphasising Gecko
contribution docs.

Now, that discussion paints the backdrop for this new thread, but it
is well worth reading on its own and had a lot of good ideas in it
that never materialised:

https://groups.google.com/d/msg/mozilla.dev.builds/cp4bJ1QJXTE/Xqy_nHV5DAAJ

The reality four months on is that more documentation than ever
lives in the tree, and there is a sentiment that imposing the
same rigorous peer review process we have for source code on
documentation changes is overkill.

bz made a modest proposal that documentation changes should not
require bugs or reviews, and that they could be annotated with a
special review flag to pass pre-receive hooks.  I’m including his
original email below.

If we still feel this is a good idea I would like to know what steps
to take next to make that policy.

-- >8 --
From: Boris Zbarsky 
Date: June 16, 2017 15:40
Subject: Re: Builds docs on MDN
To: dev-bui...@lists.mozilla.org

On 6/16/17 9:33 AM, Ehsan Akhgari wrote:

I certainly feel like the barrier for filing bugs, creating a
patch, figuring out how to use readthedocs infrastructure, getting
reviews, etc. isn't really worth it


I believe we should not require filing bugs, reviews, or any of
that for in-tree docs.  Just edit the doc, commit, push.  Add
"r=documentation" if needed to placate hooks.  Just because it's
in-tree doesn't mean it needs to use the whole heavyweight process.
And if we can make these things auto-DONTBUILD, that's even better,
of course.

I agree it's still slower than a wiki. :(


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship: New default action, horizontal scroll, of wheel with Shift key (except Firefox for macOS)

2017-10-18 Thread smaug

On 10/18/2017 08:08 AM, Jet Villegas wrote:

SGTM. BTW, bug 143038 was filed 16 years ago. Is that a bugzilla record for
oldest fixed bug?


whoohoo, didn't realize it was that old.
One day I'll start reading the bugs I review ;)




That is, do I owe you another steak? :-)


On Wed, Oct 18, 2017 at 2:52 PM, Masayuki Nakano 
wrote:


From some users who use legacy mice which support only a vertical wheel, we
have a request to support horizontal scroll with a vertical wheel operation
plus a modifier.
https://bugzilla.mozilla.org/show_bug.cgi?id=143038

Now that legacy add-ons are gone as of 57 and it may be difficult to override
the default action of wheel events with WebExtensions, it is time to
support horizontal scroll with vertical wheel operation and the Shift key.

This will be enabled in default settings except on macOS.  The reason why
we don't need to use this feature on macOS is, macOS generates horizontal
wheel event if user uses legacy mouse and pressing Shift key.

And default action of wheel with Alt key is changed to history navigation
which is default action of wheel with Shift key in current release build.

So, now, we have consistent behavior between Firefox for macOS and the
others.

Note that this feature won't be exposed to attributes of "wheel" events.
This is just a default action change and same behavior as Chrome.

--
Masayuki Nakano 
Software Engineer, Mozilla
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Experimenting with a shared review queue for Core::Build Config

2017-10-11 Thread smaug

On 10/11/2017 09:55 PM, Andreas Tolfsen wrote:

+tools-marionette

Also sprach Chris Cooper:


Many of the build peers have long review queues.

Is having a long review queue the actual issue? Isn't (too) high throughput
at least an equally bad issue? Does the new setup somehow try to ensure
reviews actually do split more evenly among devs? (and that has nothing to do 
with
review queue lengths)


I'm not convinced

that all of the review requests going to any particular build peer
need to be exclusive.


I think it is great that you’re experimenting with this, and I’m
very excited about it!

In addition to peers doing review triage to balance out review
queues, there’s an argument to be made that for a certain category
of work we do, who reviews the code is of secondary importance.

Speaking from my own personal experience, the vast majority of
patches I write could feasibly be reviewed by any one of the peers
of Testing :: Marionette.  There is certainly the odd occasion
when I need to explicitly draw from the expertise of a particular
individual, but I have found this is more the exception than the
norm in the line of work I do on the remote protocol.

In a small team it can also be tricky to keep track of who is
around on any given day across multiple continents and timezones.
A first-come-first-served review system I think would also help
smaller teams with not-so-big review queues improve turnaround times
for patches.

Improving the time from patch written, through review, to
integration in mozilla-central, is doubly important when we receive
patches from new contributors who don’t know who to flag.


When you submit your next Build Config patch, simply select the
new, shared "user" core-build-config-revi...@mozilla.bugs as the
reviewer instead of a specific build peer. The build peers are
watching this new user, and will triage and review your patch
accordingly.


I would love to see the modules I’m peer of participate in the
same experiment.  Can I ask you to elaborate what a ‘user’ is in
this context and how practically this ‘triage’ happens?


This new arrangement should hopefully shorten patch queues for
specific reviewers and improve turnaround times for everybody. It
also has the added benefit of automatically working around
absences, vacations, and departures of build peers.


When I worked for Opera we had a similar system where a pool of
reviewers and watchers would get notified about incoming reviews.


How did the setup actually work?
I've asked farre this before too, and IIRC the reply was that it wasn't
working that well. Certain devs still ended up doing the majority of the reviews.
But perhaps I misremember; I certainly don't know the details.

I'm all for trying new setups to split review load more evenly and
I'm very eager to hear how this experimentation works out!


-Olli



In the review tool you would save a glob-filter stating that you
were interested in either reviewing or “watching”, meaning you
only cared about the notification, a change in a certain subsection
of the repository.

More importantly, new contributors to the codebase didn’t have
to know who to flag for review.  They’d submit the patch and the
review tool would figure out who to notify.  The first respondent
reviewer would then, similarly to how you describe, triage the
change to the most appropriate person, or if it was a simple change,
simply do the review him- or herself.

In the danger of derailing this thread to talk about the new review
tool that is meant to replace Splinter and MozReview, I would be
absolutely thrilled if it had support for “pooled review queues”
like this.  If there was a way to annotate directories with OWNER
files or similar it would be even better.


From what I understand—and please feel free to correct me—the

“shared user” you talk about is simply another Bugzilla account?
That is great for experimentation, but it it turns out a success,
having better ergonomics would be desirable.


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Follow up on clang-format

2017-09-26 Thread smaug

On 05/23/2014 04:29 AM, Anthony Jones wrote:

Some of you may remember the discussion on clang-format and the `mach
clang-format` command. What we have in place right now is very temporary
but it is functional enough to give it a try. I have not put the effort
into upstreaming my changes. Depending on the feedback I receive I will
either:

* Finish my existing changes and upstream them
* Remove the `mach clang-format` command altogether
* Do nothing

I have personally found it useful. However I would like to hear from
other people who have tried it to help me decide what to do next.

Anthony




clang-format messes up many macros really badly.
For example, nsElementTable.cpp becomes unreadable.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: More Rust code

2017-07-31 Thread smaug

On 07/31/2017 01:02 PM, Henri Sivonen wrote:

On Tue, Jul 18, 2017 at 7:01 AM, Jim Blandy <jbla...@mozilla.com> wrote:

BTW, speaking of training: Jason's and my book, "Programming Rust" will be
available on paper from O'Reilly on August 29th!


And already available on Safari Books Online (access available via
Service Now request subject to managerial approval).

On Mon, Jul 17, 2017 at 10:43 PM, Ted Mielczarek <t...@mielczarek.org> wrote:

From my perspective Rust is our
single-biggest competitive advantage for shipping Firefox, and every
time we choose C++ over Rust we throw that away.


I agree.


but we are quickly
going to hit a point where "I don't feel like learning Rust" is not
going to cut it anymore.


Indeed.

On Tue, Jul 11, 2017 at 4:37 PM, smaug <sm...@welho.com> wrote:

How is the performance when crossing the Rust <-> C++ boundary? We need to make
everything faster, not slower.
If I understood emilio's explanation on IRC correctly, having the performance
of an inlined (C++) function requires
handwritten Rust bindings to access member variables of some C++ object.
That doesn't sound too good - hard to maintain and possibly easy to forget
to optimize.

I don't claim to understand anything about the current setup, but
has anyone written down what would be needed to have a fast and
easy-to-maintain Rust <-> C++ boundary in
such a way that memory handling is also easy to manage (aka, how to deal with
CC/GC)?
I think it would be better to sort out these kinds of low-level issues rather
soon, before we have too much
Rust code in tree, or perhaps we won't see much Rust usage before those
issues are sorted out.

(I'm looking this all from DOM point of view, where pretty much all the
objects need to be cycle collectable JS holders, but perhaps Rust would fit
better in code outside DOM)


Rust indeed is, at least at present, a better fit for code outside the
area of cycle-collectable DOM objects.

The performance issue you mention applies if the usage scenario is
that Rust code needs to get or set a lot of fields on a C++ object.
While we do have code that, if implemented in Rust, would have to do
performance-sensitive field access on C++ objects, we also have areas
for which that would not be a concern. For example, in the case of
encoding_rs, the data that crosses the FFI boundary is structurally
simple (mozilla::Span / Rust slices decomposing to pointer to an array
of primitives and a length for FFI crossing) and the amount of work
done on the Rust side is substantial compared to the frequency of
crossing the FFI boundary.

In the absence of the Stylo-like optimization of frequent
performance-sensitive access of fields of a foreign-language object,
the FFI story that one can expect involves three functions/methods per
logical method.

Either (for C++ caller and Rust callee)
 1) C++ method wrapping the C function to hide the unsafety and bad
ergonomics of raw C.
 2) C function declared in C++ and implemented in Rust.
 3) Rust method: the actual callee that does something useful.
Or (for Rust caller and C++ callee)
 1) Rust method wrapping the C function to hide the unsafety and bad
ergonomics of raw C.
 2) C function declared in Rust and implemented in C++.
 3) C++ method: the actual callee that does something useful.

So there's the real callee method, there's a small C function that
wraps that method in a C ABI-compatible way and then there is a
wrapper for the C function that provides the ergonomics that one would
expect in the calling language.
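
To make the C++-caller case concrete, a sketch of the three pieces as seen
from the C++ side (all names are hypothetical; the Rust method itself lives
behind the extern "C" symbol):

  #include <cstddef>
  #include <cstdint>

  struct RustCodec;  // opaque type: defined and owned on the Rust side

  extern "C" {
  // (2) The C ABI function: declared in C++, implemented in Rust.
  size_t rust_codec_decode(RustCodec* aCodec, const uint8_t* aSrc,
                           size_t aSrcLen, uint8_t* aDst, size_t aDstLen);
  }

  // (1) Thin inline C++ wrapper hiding the raw-C ergonomics. This part
  //     inlines cheaply; the extern "C" call itself generally does not.
  class CodecHandle {
   public:
    explicit CodecHandle(RustCodec* aCodec) : mCodec(aCodec) {}
    size_t Decode(const uint8_t* aSrc, size_t aSrcLen,
                  uint8_t* aDst, size_t aDstLen) {
      return rust_codec_decode(mCodec, aSrc, aSrcLen, aDst, aDstLen);
    }
   private:
    RustCodec* mCodec;
  };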

The caller-side wrapper around the C function is trivial to make
inline and as a matter of code size is likely harmless or even
strictly beneficial to make inline.

The compilers don't have visibility across the declaration definition
of the C function, since that's where the cross-language linkage
happens, so currently one needs to assume that the C function always
has the cost of an actual function call.

As for inlining the actual callee method of interest in the language
being called into the implementation of the C function, it may or may
not happen automatically and when it doesn't happen automatically,
forcing it to happen manually might be a problem in terms of code
size.

So when the callee that actually does the work we care about doesn't
get inlined into its C wrapper, one should approximate a call from
Rust to C++ or from C++ to Rust have the cost of two *non-virtual*
function calls instead of one. (It would be interesting to contrast
this to the cost of over-use of virtual calls due to nsIFoo
interfaces.)

- -

Ideally, both the caller-language-side wrapper around the C function
and the C function itself would get inlined so that the cross-language
call would on the machine code level look just like a normal (if not
also inlined!) call to a method of the callee language. For that to
happen, we'd need link-time inlining across object files produced by
different-language compilers.

Naïvely

Extensions and Gecko specific APIs

2017-07-25 Thread smaug

Hi all,


recently, in a couple of bugs, there have been requests to add Gecko-specific APIs 
for extensions.
It isn't clear to me why, and even less clear to me is what the plan is there.
I thought WebExtensions should work in several browsers, but the more we add 
Gecko-specific APIs, the less likely
that will be.

Could someone familiar with WebExtensions clarify this a bit? Do we have some policy here? Should we be as strict as with web APIs, allow some 
exceptions, or do whatever people want/need?





-Olli
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Phabricator Update, July 2017

2017-07-12 Thread smaug

On 07/12/2017 04:20 PM, Ben Kelly wrote:

On Tue, Jul 11, 2017 at 11:49 PM, Martin Thomson  wrote:


On Wed, Jul 12, 2017 at 1:34 PM, Byron Jones  wrote:

instead of disabling splinter for phabricator backed products, we could

make

it a read-only patch viewer.


Given the number of bugs that exist with patches attached, that would
be greatly appreciated.  I would also assume that attaching patches
won't stop completely either.



It would be nice if patch information was always contained within the bug.
Scattering it across tools makes it harder to figure out happened later
when you are researching why changes were made.

Consider that we are talking about "turning off" mozreview now.  Will all
the bugzilla links to those reviews go dead?  Or do we have to maintain a
second service in read-only mode forever?  This would not be a problem if
the patch/review information was also stored in bugzilla instead of solely
placed in a separate tool.




Yeah, I'm working under the assumption that bugzilla bugs will contain all the review 
information in the phabricator era too. But indeed, having the patches in bugzilla as well would be 
good.

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: More Rust code

2017-07-11 Thread smaug

On 07/11/2017 04:27 PM, Ben Kelly wrote:

On Tue, Jul 11, 2017 at 4:57 AM, Nicholas Nethercote 
wrote:


If I were the owner of that module I would consider implementing a policy
something like the following:

"When a person writes a new compiled-code component, or majorly rewrites
an existing one, they should strongly consider writing it in Rust instead
of C or C++. Such a decision will depend on the nature of the component.
Components that are relatively standalone and have simple interfaces to
other components are good candidates to be written in Rust, as are
components such as parsers that process untrusted input."



I think this is pretty uncontroversial. The high-level strategic decision
to bet on Rust has already been made, and the cost of depending on the
language is already sunk. Now that we're past that point, I haven't heard
anyone arguing why we shouldn't opt for memory safety when writing new
standalone code. If there are people out there who disagree and think they
have the arguments/clout/allies to make the case, please speak up.



As Gibson said: "The future is already here — it's just not very evenly
distributed."

The paragraph you wrote is obvious to you, but I suspect it's not obvious
to everyone -- Mozilla has a lot of contributors. There may still be some
Firefox developers who think that Rust something that other people do, and
that isn't relevant to them or their component(s). There may be some
Firefox developers who are interested in Rust, but feel intimidated or
uncertain about using it. There may be some developers who are keen to use
Rust, but haven't realized that we are past the experimental stage.



(Picking a somewhat random response to reply here...)

It would be really nice to have IPDL codegen support for rust.  I
considered using rust for my current task, but decided not to since I
would've had to build even more boilerplate to work with IPC c++ actors.  I
would've ended up with more glue code than actual functional code.

Another advantage of building rust IPDL codegen targets would be that we
could build service oriented code in the parent process in rust.  This
would be an incremental improvement that could offer additional safety in
the parent process without having to tackle webidl, cycle collection,
etc.  In particular, PBackground parent actors already cannot interface
with xpcom since they are OMT and would be a good candidate here.


OMT doesn't mean no XPCOM. You're thinking xpconnect.




Anyway, I think fixing these kinds of integration pain points would
accelerate our ability to use rust in gecko.  I would hesitate to make any
kind of mandates until these issues are addressed.

Thanks.

Ben



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: More Rust code

2017-07-11 Thread smaug

On 07/10/2017 01:29 PM, Nicholas Nethercote wrote:

Hi,

Firefox now has multiple Rust components, and it's on track to get a bunch
more. See https://wiki.mozilla.org/Oxidation for details.

I think this is an excellent trend, and I've been thinking about how to
accelerate it. Here's a provocative goal worth considering: "when writing a
new compiled-code component, or majorly rewriting an existing one, Rust
should be considered / preferred / mandated."

What are the obstacles? Here are some that I've heard.

- Lack of Rust expertise for both writing and reviewing code. We have some
pockets of expertise, but these need to be expanded greatly. I've heard
that there has been some Rust training in the Paris and Toronto offices.
Would training in other offices (esp. MV and SF, given their size) be a
good idea? What about remoties?

- ARM/Android is not yet a Tier-1 platform for Rust. See
https://forge.rust-lang.org/platform-support.html and
https://internals.rust-lang.org/t/arm-android-to-tier-1/5227 for some
details.

- Interop with existing components can be difficult. IPDL codegen rust
bindings could be a big help.

- Compile times are high, especially for optimized builds.

Anything else?



How is the performance when crossing the Rust <-> C++ boundary? We need to make 
everything faster, not slower.
If I understood emilio's explanation on IRC correctly, getting the performance of 
an inlined (C++) function requires
handwritten Rust bindings to access member variables of some C++ object.
That doesn't sound too good - hard to maintain and possibly easy to forget to 
optimize.
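
As a concrete illustration of that member-access concern (hypothetical names,
and only a sketch of what generated bindings tend to look like): without
handwritten, inlinable bindings, reading a field of a C++ object from the other
language goes through an out-of-line accessor, i.e. a real function call where
a C++ caller would get a single inlined load.

#include <cstdint>

struct FrameMetrics {
  int32_t mWidth;
  int32_t mHeight;
};

// A C++ caller just writes aFrame->mWidth, which compiles to an inlined load.

// What a generated binding typically exposes across the boundary: an
// out-of-line accessor, so every field read becomes a function call unless
// someone hand-writes (and keeps up to date) an inline equivalent.
extern "C" int32_t frame_metrics_get_width(const FrameMetrics* aFrame) {
  return aFrame->mWidth;
}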

I don't claim to understand anything about the current setup, but
has anyone written down what would be needed to have a fast and easy-to-maintain Rust 
<-> C++ boundary in
such a way that memory handling is also easy to manage (aka, how to deal with 
CC/GC)?
I think it would be better to sort out these kinds of low-level issues rather 
soon, before we have too much
Rust code in the tree, or perhaps we won't see much Rust usage before those issues 
are sorted out.

(I'm looking this all from DOM point of view, where pretty much all the objects need to be cycle collectable JS holders, but perhaps Rust would fit 
better in code outside DOM)

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: More Rust code

2017-07-10 Thread smaug

On 07/10/2017 09:04 PM, zbranie...@mozilla.com wrote:

One more thought. There's a project that fitzgen told me about that aims to 
allow for components to communicate between JS and Rust using Streams.

If we could get to the point where instead of WebIDL/XPIDL we could just plug 
streams between JS/CPP and Rust in Gecko, I believe the scope of Gecko 
components that can be written in Rust would skyrocket.

zb.



FWIW, that sounds rather heavyweight to me, and we should be optimizing 
everything.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Converting const char ()[N] to nsACString

2017-07-04 Thread smaug

On 07/04/2017 04:47 PM, Dexter wrote:

Hi everyone!

Could you tell me please, how I can clearly convert char ()[N] to 
nsACString?

I tried this:

char* cname = new char[N];
memcpy(cname, , N);
nsAutoString strName;
strName.AssignWithConversion(cname, N);

I can't find out how to get nsACString. I could use nsACString instead of 
nsAutoString, but nsACString hasn't AssignWithConversion(), as I know.



nsAutoString isn't nsACString. Note the 'C' in the name.

But you can do for example
nsAutoCString foo;
foo.AppendASCII(cname);
or
foo.AppendLiteral(some_literal)
or maybe you want to use nsDependentCString?
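
To make those suggestions concrete, a minimal sketch (assuming Gecko's
nsString.h; the callee TakesACString and the array parameter name are made up,
since the identifier in the original question was lost):

#include <cstddef>
#include "nsString.h"

void TakesACString(const nsACString& aStr);

template <size_t N>
void Example(const char (&aName)[N]) {
  // Option 1: copy the ASCII data into an owning string.
  nsAutoCString copied;
  copied.AppendASCII(aName);
  TakesACString(copied);

  // Option 2: wrap the existing buffer without copying; the array must
  // outlive the nsDependentCString.
  TakesACString(nsDependentCString(aName, N - 1));  // N includes the '\0'
}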
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: How can I run Firefox programatically in fullscreen?

2017-06-27 Thread smaug

On 06/27/2017 12:12 AM, Armen Zambrano Gasparnian wrote:

Asking around, looking on dxr or MDN did not yield something easily.

I don't want to have to use Marionette in this specific automation context.

Thanks in advance,
Armen



Do you mean fullscreen at the chrome level, running web pages in fullscreen mode, 
or perhaps just a maximized window?

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: How are events passed between chrome and content?

2017-06-16 Thread smaug

On 06/16/2017 02:32 AM, Jim Porter wrote:

On 6/15/17 4:12 PM, Kartikaya Gupta wrote:

Not quite. For e10s, mouse events are sent across the process boundary
using the PBrowser ipdl protocol. On the parent side they go into
EventStateManager::DispatchCrossProcessEvent [1] which picks up the
TabParent and sends it over IPC to the TabChild. The TabChild then
dispatches it to the PuppetWidget in the content process.


Thanks for the code reference! That at least gives me a rough idea of
where all this is happening so I can maybe debug it.


If you set the right flags on the event it should get through.
Presumably you're creating this event from JS - you want to make sure
that it creates a WidgetMouseEvent with eContextMenu message type. I
don't know if that can be done without using the DOMWindowUtils APIs.
Posting your code snippet for event creation might help.


Here's the code I'm currently using to generate a contextmenu event:
.

That just dispatches the event to the DOM. No forwarding to the child process will 
happen.
You need to use the methods in nsIDOMWindowUtils to dispatch events in a way which 
simulates real events better
and causes forwarding to happen.



As mentioned, it works fine for creating a context menu in chrome, but
not content. (It also works fine in non-e10s content, but that shouldn't
be surprising.) I might be missing something, but when I looked in the
inspector, I didn't see any obvious differences between this event and
the original contextmenu event that I suppressed.

Maybe there's some issue with the event target? When the original
contextmenu event is fired, it says the target is the XUL tabbrowser
(#content) and the originalTarget is the XUL browser element. I've tried
dispatching my synthetic event to both, but neither does anything.

I've also tried keeping the original contextmenu event around that I
suppressed and re-dispatching that, but it doesn't work either. However,
since I called .preventDefault()/.stopPropagation() on that object, I'm
not sure I'd expect it to work anyway.

- Jim



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Avoiding jank in async functions/promises?

2017-05-21 Thread smaug

On 05/18/2017 09:25 PM, Domenic Denicola wrote:

On Thursday, May 18, 2017 at 4:34:37 AM UTC-4, smaug wrote:

FWIW, I just yesterday suggested in #whatwg that the platform should have 
something like IdlePromise or AsyncPromise.
And there is the related spec bug 
https://github.com/whatwg/html/issues/512#issuecomment-171498578


In my opinion there's no need for a whole new promise subclass, just make 
requestIdleCallback() return a promise when there's no argument passed.




That wouldn't really capture my idea. I was hoping to have a whole Promise chain using async or idle scheduling. Running tons of Promises even at rIC 
time is still running tons of promises and possibly causing jank.

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Avoiding jank in async functions/promises?

2017-05-18 Thread smaug

FWIW, I just yesterday suggested in #whatwg that the platform should have 
something like IdlePromise or AsyncPromise.
And there is the related spec bug 
https://github.com/whatwg/html/issues/512#issuecomment-171498578


On 05/18/2017 04:22 AM, Mark Hammond wrote:

Given our recent performance push, I thought I'd bring this topic up again.

When writing loops using async functions in browser code, it's still very easy 
to cause jank. For example, a trivial function:


async function janky() {
  for (let i = 0; i < 10; i++) {
await Promise.resolve();
  }
}

janky().then(() => console.log("done"));


will cause the browser to hang. While that's not considered a bug in the 
implementation of the promise scheduler, and might not be particularly
surprising in that trivial example, lots of real-world code can still hit this 
case - which both isn't obvious from casual inspection, and even if it
was, doesn't have an obvious solution.

Concretely, we struck this a couple of years ago in bug 1186714 - creating a 
backup of all bookmarks ends up looking alot like the loop above. In
addition, the Sync team is moving away from nested event loops towards 
promises, and our work to date is hitting this problem.

In bug 1186714, we solved the problem by inserting code into the loop that 
looks like:


   if (i % 50 == 0) {
 await new Promise(resolve => {
   Services.tm.dispatchToMainThread(resolve);
 });
   }


http://searchfox.org/mozilla-central/rev/f55349994fdac101d121b11dac769f3f17fbec4b/toolkit/components/places/PlacesUtils.jsm#2022

so we explicitly yield to the event loop every 50 iterations. However, this 
isn't optimal as the 50 is obviously a magic number, determined by
experimentation on a couple of machines - when running on low spec hardware, 
this loop is almost certainly still janky. If we try and err on the side
of caution (eg, yielding every iteration) we see the wall time of the loop take 
a massive hit (around twice as long in that bug).

I'm wondering if there are any ideas about how to solve this optimally? Naively, it seems 
that the (broadest sense of the term) "platform" might be
able to help here using it's knowledge of the event-loop state - eg, a function that 
indicates "are we about to starve the event loop and become
janky?", or possibly even the whole hog (ie, "please yield to the event loop if 
you think now is a good time, otherwise just hand back a pre-resolved
promise").

Or maybe there are simpler options I've overlooked?

Thanks,

Mark


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Using references vs. pointers in C++ code

2017-05-09 Thread smaug

On 05/09/2017 04:52 PM, smaug wrote:

On 05/09/2017 01:55 PM, Mike Hommey wrote:

On Tue, May 09, 2017 at 01:31:33PM +0300, Henri Sivonen wrote:

On Tue, May 9, 2017 at 12:58 PM, Emilio Cobos Álvarez <emi...@crisal.io> wrote:

I think references help to encode that a bit more in the type system,
and help reasoning about the code without having to look at the
implementation of the function you're calling into, or without having to
rely on the callers to know that you expect a non-null argument.

Personally, I don't think that the fact that they're not used as much as
they could/should is a good argument to prevent their usage, but I don't
know what's the general opinion on that.


The relevant bit of the Core Guidelines is
https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md#Rf-ptr-ref
and says:
"A pointer (T*) can be a nullptr and a reference (T&) cannot, there is
no valid "null reference". Sometimes having nullptr as an alternative
to indicated "no object" is useful, but if it is not, a reference is
notationally simpler and might yield better code."

As a result, I have an in-flight patch that takes T& instead of
NotNull<T*> where applicable, even though I do use NotNull<T*> to
annotate return values.

I agree that in principle it makes sense to use the type system
instead of relying on partial debug-build run-time measures to denote
non-null arguments when possible. That said, having to dereference a
smart pointer with prefix * in order to pass its referent to a method
that takes a T& argument feels a bit odd when one is logically
thinking of passing a pointer still, but then, again, &*foo seems like
common pattern on the Rust side of FFI to make a reference out of a
pointer and effectively asserting to the human reader that the pointer
is not null.


Note that if you dereference a pointer to pass as a reference, all hell
breaks loose and your reference might just as well be null, but the
function taking the reference won't be protected against it because the
compiler will have assumed, when compiling it, that the reference can't
be null.


This is what I'm a bit worried about. We're moving some null checks from the callee to 
the caller, so we need to
learn to be more careful with null checks on the caller side.
But in general, using references sounds ok.




I could still add that using references adds another thing to reviewers' 
checklist: ensuring
that a null pointer is never dereferenced. It also means that there will be 
more null checks, since
parameters need to be validated on the callers' side, not in the callee.
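
A tiny illustrative sketch (hypothetical Widget type) of where the null check
moves when a pointer parameter becomes a reference:

#include <cstdio>

struct Widget {
  void Paint() { std::puts("painting"); }
};

// Pointer parameter: the callee can guard against null itself.
void PaintWidget(Widget* aWidget) {
  if (!aWidget) {
    return;
  }
  aWidget->Paint();
}

// Reference parameter: the callee may assume a valid object, so every caller
// holding a possibly-null pointer has to check before dereferencing.
void PaintWidgetRef(Widget& aWidget) {
  aWidget.Paint();
}

void Caller(Widget* aMaybeNull) {
  PaintWidget(aMaybeNull);        // fine even if aMaybeNull is null
  if (aMaybeNull) {               // the check the reviewer now has to look for
    PaintWidgetRef(*aMaybeNull);  // dereferencing a null pointer here is UB
  }
}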
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Using references vs. pointers in C++ code

2017-05-09 Thread smaug

On 05/09/2017 01:55 PM, Mike Hommey wrote:

On Tue, May 09, 2017 at 01:31:33PM +0300, Henri Sivonen wrote:

On Tue, May 9, 2017 at 12:58 PM, Emilio Cobos Álvarez  wrote:

I think references help to encode that a bit more in the type system,
and help reasoning about the code without having to look at the
implementation of the function you're calling into, or without having to
rely on the callers to know that you expect a non-null argument.

Personally, I don't think that the fact that they're not used as much as
they could/should is a good argument to prevent their usage, but I don't
know what's the general opinion on that.


The relevant bit of the Core Guidelines is
https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md#Rf-ptr-ref
and says:
"A pointer (T*) can be a nullptr and a reference (T&) cannot, there is
no valid "null reference". Sometimes having nullptr as an alternative
to indicated "no object" is useful, but if it is not, a reference is
notationally simpler and might yield better code."

As a result, I have an in-flight patch that takes T& instead of
NotNull<T*> where applicable, even though I do use NotNull<T*> to
annotate return values.

I agree that in principle it makes sense to use the type system
instead of relying on partial debug-build run-time measures to denote
non-null arguments when possible. That said, having to dereference a
smart pointer with prefix * in order to pass its referent to a method
that takes a T& argument feels a bit odd when one is logically
thinking of passing a pointer still, but then, again, &*foo seems like
common pattern on the Rust side of FFI to make a reference out of a
pointer and effectively asserting to the human reader that the pointer
is not null.


Note that if you dereference a pointer to pass as a reference, all hell
breaks loose and your reference might just as well be null, but the
function taking the reference won't be protected against it because the
compiler will have assumed, when compiling it, that the reference can't
be null.


This is what I'm a bit worried about. We're moving some null checks from the callee to 
the caller, so we need to
learn to be more careful with null checks on the caller side.
But in general, using references sounds ok.


-Olli
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Ambient Light Sensor API

2017-04-27 Thread smaug

On 04/25/2017 04:38 PM, Ehsan Akhgari wrote:

On 04/24/2017 06:04 PM, Martin Thomson wrote:

I think that 60Hz is too high a rate for this.

I suggest that we restrict this to top-level, foreground, and secure
contexts.  Note that foreground is a necessary precondition for the
attack, so that restriction doesn't really help here.  Critically,
rate limit access much more than the 60Hz recommended for the
accelerometer.  5Hz might be sufficient here, maybe even lower.


How does restricting this to secure contexts help?

This is a good point to remember with these kinds of attacks. So often has the use of 
a secure context been suggested as the way to
fix issues, when it often doesn't really help at all with the core issue.

And the initial email did have
"Unshipping for non-secure context and making it HTTPS-only wouldn't address the 
attack."



(Also, secure context itself is, in its current design, rather broken IMO, yet 
its name hints that it is somehow secure.
 BroadcastChannel or localStorage etc. are easy ways to transfer data from a 
secure context to a non-secure one.)


  If I understand correctly the attacks being discussed in the article 
referenced here are stealing cross origin data and user's history.  These aren't 
things that we can expose to secure contexts any more than we can expose to 
insecure contexts.

Since the amount of information that can be extracted is a function of
the number of times the API is called and the accuracy of the reading,
we should consider also reducing the accuracy of the reading.  The
attack reduced its information extraction to 1 bit per reading, but
you can't assume that this is the actual limit.  Fuzzing is much
harder than it seems if your intent is to deal with an attack.  I can
walk through some strategies if someone is interested in doing this.

That all assumes that you aren't willing to add a dialog for this,
which seems like the right answer.  That said, if the mitigation
strategies - rate limiting in particular - can't be implemented in a
reasonable time-frame, I would consider preffing this off (though the
pref name suggests that there would be collateral).


It seems hard to explain the risks of granting this permission to a site to a 
user correctly.  :-/  A permission prompt that doesn't allow us to do that
isn't very useful.  Also given that the API is an event handler, it doesn't 
really lend itself to a permission prompt anyway.

But also given the event handler nature of the API, not dispatching the event 
makes it very unlikely to break pages, unless if the page's
functionality explicitly depends on the ambient light, I think, which works in 
our favor if we decide in that direction.  I think I'm leaning in that
way personally rather than coming up with a complicated heuristic here and 
potentially getting it wrong.


On Tue, Apr 25, 2017 at 12:06 AM, Jonathan Kingston  wrote:

As a follow up, it looks like the device motion events defined in the
device sensors:
http://searchfox.org/mozilla-central/source/dom/system/nsDeviceSensors.cp
should also be restricting based on isSecureContext.

The spec mentions "should take into consideration the following suggestions"
:
https://www.w3.org/TR/orientation-event/#security-and-privacy

We don't do those measures from what I can see.

Can we make the webIDL represent this requirement for requiring secure
context instead?

Thanks
Jonathan

On Mon, Apr 24, 2017 at 2:41 PM, Jonathan Kingston  wrote:


As mentioned a permission prompt isn't great.

In its current state it should probably be considered a "powerful
feature" that we can remove just for secure context. Granted this doesn't
fix the exploit mentioned here though.

Freddy highlighted that the spec itself suggests the Generic Sensor API is
the security model which requires: https://www.w3.org/TR/generic-sensor/#secure-context
I can't see that as a restriction in our codebase though?

This looks like a specification violation here.

Thanks
Jonathan

On Mon, Apr 24, 2017 at 2:38 PM, Frederik Braun 
wrote:


The Ambient Light spec defers its security and privacy considerations to
the generic sensors specification, which states


all interfaces defined by this specification or extension
specifications must only be available within a secure context.


Would we require telemetry before we restricted this to secure contexts?



On 24.04.2017 15:24, Frederik Braun wrote:

Hi,

there is a relatively recent blog post [1] by Lukasz Olejnik and Artur
Janc that explains how one can steal sensitive data using the Ambient
Light Sensor API [2].

We ship API and its enabled by default [3,4] and it seems we have no
telemetry for this feature.


Unshipping for non-secure context and making it HTTPS-only wouldn't
address the attack.

The API as implemented is using the 'devicelight' event on window.
I suppose one might also be able to implement a prompt for this, but
that doesn't sound very appealing (prompt fatigue, etc., 

Re: A reminder about commit messages: they should be useful

2017-04-25 Thread smaug

On 04/25/2017 08:20 PM, Boris Zbarsky wrote:

On 4/25/17 1:07 PM, Alexander Surkov wrote:

I bet there's always room for improvements, and I hope this was a counterpoint 
for the example only, not for the bug organization approach.


Sort of.

It was a counterpoint to "just check the bug; all the info is there". Often 
it's not there, or not there in usable form.

If people included a summary of the discussion in the bug right about when they 
commit, or had bugs that actually explained what's going on clearly. I
would be a lot more OK with the "check the bug" approach.


Overall it feels to me that long comments vs. check-the-bug are rather 
different styles.


To be clear, I don't think commit messages need to be _long_.  They need to be 
_useful_.

Exactly. Recently I've seen some commit messages that are long but not useful, in the style 
"I did this and that", without really explaining why.


A commit message pointing to a particular bug comment that
explains all the ins and outs is no worse, from my point of view, than a commit 
message that explains the ins and outs.

A major issue comes from bugs where the work is split into several patches. That 
is why bugs really need to explain it all too.



The problem I started this thread to address is things like a commit message that says "flip 
this bit" and references a bug entitled "flip this bit",
with no explanation of what the bit does or why it should be flipped.  I hope 
we can all agree that _that_ is not OK.  And it's far too common.


Yeah, definitely. The bug should be clear about what it is meant to fix, and 
the commit message should also be clear about what it is doing and why.
In some cases the bug title is clear enough to explain why something is broken or 
should be changed, but not usually.
(I perhaps have a bit of a bad habit of filing bugs with just a descriptive title, especially in 
the case of "Consider to do foo" bugs.)



-Boris



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: IPDL now supports async returns with MozPromise

2017-04-20 Thread smaug

On 04/20/2017 05:15 AM, Bevis Tseng wrote:

A soft reminder about using AbstractThread::GetCurrent()/MainThread()
after some design changes for Quantum-DOM.

If this message/callback is to be handled on *the main thread of the
content process*, please use the replacement called AbstractMainThreadFor
 instead.

If you're in background thread or not in content process, you are totally
fine to use AbstractThread::GetCurrent()/MainThread().


Why is using AbstractThread::MainThread() ok in background threads?



Regards,
Bevis Tseng


On Thu, Apr 20, 2017 at 2:54 AM, Kan-Ru Chen  wrote:


Hello!

With bug 1313200 landed, async IPC messages can now return data via
MozPromises.

The IPDL syntax:

protocol PFoo {
child:
async Bar() returns (bool param);
};

will generate IPC code that allow the send method like this:

SendBar()->Then(AbstractThread::GetCurrent(), __func__,
[](bool param) {
  // do something
},
[](PromiseRejectReason aReason) {
  // handle send failure
});

For a message named Foo it will receive a promise resolver with type
FooPromise. For example, on the receiving side of the previous message
PFoo::Bar, the handler looks like this:

mozilla::ipc::IPCResult
FooChild::RecvBarMessage(RefPtr&& aPromise)
{
bool result;
// do some computation
aPromise->Resolve(result);
}

If there are multiple return values, they will be wrapped in a
mozilla::Tuple.

The usual MozPromise rule applies. You can store the promise and
resolve it later. You can direct the ThenFunction to be run on other
threads. If the channel is closed before all promises are resolved,
the pending promises will be rejected with
PromiseRejectReason::ChannelClosed. Promises resolved after channel
close are ignored.

It is discouraged for the receiving handler to reject the promise. It
should be reserved for the IPC system to signal errors. If you must
reject the promise, only PromiseRejectReason::HandlerRejected is valid
value.

Please give it a try. In theory this should work for all IPC actors. If
you encountered issues or have ideas to
improve this, please file a bug in Core :: IPC.

Thanks,
Kanru

P.S. Note that async constructor or destructor does not support return
promises because the semantic is still not clear.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: A reminder about commit messages: they should be useful

2017-04-18 Thread smaug

On 04/18/2017 04:24 PM, Ehsan Akhgari wrote:

On 2017-04-18 12:30 AM, Mike Hommey wrote:

I've yet to see that to happen. What is crucial is fast way to browse
through the blame in time. So commit messages should be short and
descriptive. Telling what and why. (I very often need to go back to CVS era
code). I won't spend time reading several paragraphs of commit messages.
Those are just too long.
Basically first line should tell the essentials 'what' and maybe the most 
obvious 'why' and the next couple of lines explaining 'why' more precisely.
Don't write stories which will make blame-browsing slower.


What sort of blame-browsing makes more than the first line of the commit
message a burden? I'm curious, because that doesn't match my use of
blame.


I can't say I have had this issue, and I also do a lot of code
archaeology as well.  I suppose to a large extent this does depend on
what tools you use, perhaps.  These days I use searchfox.org for a lot
of blame browsing that I do, which displays only the first line in the
default UI and only displays the full commit message when you look at
the full commit.  I find this a pretty good balance.


And I've done my share of archeology and for old enough stuff you often
end up with one of:
- a completely useless commit message *and* useless bug (if there's even
  a bug number)
- a mostly useless commit message and some information in the bug.
  You're lucky if it's all contained in a few sentences.
- a mostly useless commit message and tons of information in the bug,
  that you have to spend quite some time parsing through.

In *all* cases, you have to go switch between your blame and bugzilla.
It's a PITA. Now, when you have everything in VCS, you just work with
your blame. Obviously, CVS-era commits are not going to change, but do
you really want to keep things in the same state for commits done now,
when you (or someone else) goes through blame in a few years?


Agreed.

Also even for newer code you often end up in the third category.  Maybe
it's just me but I spend a lot of time reading old bugs, and I find it a
huge time sink to read through tons of comments just to find where the
actual discussion about why the fix was done in the way that it was done
happened in the bug.  Sometimes I have to really read 100+ comments.
And the sad reality is that all too often I find very questionable
things in patches happen in bugs without any discussion whatsoever which
means that sometimes one can spend half an hour reading those 100+
comments very carefully to find out in the end that they have just
wasted their time and the bug actually did not contain the interesting
information in the first place.

I think I would probably sympathize more with the side of the argument
arguing for bugs as the source of the truth if this wasn't really true,
but I think part of the reason why people don't bother writing good
commit messages is that they assume that the reasons behind the
approaches they take and the design decisions and the like are obvious,
and while that may be true *now* and to them and the code reviewer, it
will not be true to everyone and in the future, and because of that if
you're not careful the important information is just not captured
anywhere.  I hope we can all agree that it's important that we capture
this information for the maintainability of our code, even if we can't
agree where the information needs to be captured.  :-)



The important part of the documentation should be in code comments, not commit 
messages.
We aren't too good at commenting code.


Another thing: today I was looking at a bug which has several patches, and realized one can't really understand the big picture by looking at the commit 
message of some particular commit. There needs to be some centralized place for that, and it is the bug. Going through each patch individually and 
looking at the commit messages would be a really painful way to see what was changed and why.

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: A reminder about commit messages: they should be useful

2017-04-17 Thread smaug

On 04/18/2017 03:12 AM, gsquel...@mozilla.com wrote:

On Tuesday, April 18, 2017 at 11:58:11 AM UTC+12, smaug wrote:

On 04/18/2017 02:36 AM, Gregory Szorc wrote:

On Mon, Apr 17, 2017 at 4:10 PM, smaug <sm...@welho.com> wrote:


On 04/17/2017 06:16 PM, Boris Zbarsky wrote:


A quick reminder to patch authors and reviewers.

Changesets should have commit messages.  The commit message should
describe not just the "what" of the change but also the "why".  This is
especially
true in cases when the "what" is obvious from the diff anyway; for larger
changes it makes sense to have a summary of the "what" in the commit
message.

As a specific example, if your diff is a one-line change that changes a
method call argument from "true" to "false", having a commit message that
says
"change argument to mymethod from true to false" is not very helpful at
all.  A good commit message in this situation will at least mention the
meaning for the argument.  If that does not make it clear why the change
is being made, the commit message should explain the "why".

Thank you,
Boris

P.S.  Yes, this was prompted by a specific changeset I saw.  This
changeset had been marked r+, which means neither the patch author not the
reviewer
really thought about this problem.




And a reminder: commit messages should *not* be stories about how you ended
up with this particular change. They should just tell "what" and "why".
It seems like using mozreview occasionally leads to writing stories (which is
totally fine as a bugzilla comment).



I disagree somewhat. As a reviewer, I often appreciate the extra context if
it helps me - the reviewer - or a future me - an archeologist or patch
author - understand the situation better.


That is why we have links to the bug. The bug should always be the single source of truth 
telling
why some change was done. Bugs tend to have so much more context about the 
change than any individual commit message can or should have.


But then some bugs may accumulate hundreds of comments and it becomes unreasonable to 
expect future maintainers to read through them all, when the commit descriptions could 
just present a selectively-useful history of the "how we got to this solution".


I've yet to see that happen. What is crucial is a fast way to browse through the blame in time. So commit messages should be short and descriptive, 
telling what and why. (I very often need to go back to CVS-era code.) I won't spend time reading several paragraphs of commit messages. Those are just 
too long.

Basically the first line should tell the essential 'what' and maybe the most 
obvious 'why', and the next couple of lines should explain 'why' more precisely.
Don't write stories which will make blame-browsing slower.







If that context prevents the
reviewer or a future patch author from asking "why didn't we do X [and
spending a few hours waiting for a reply or trying to find an answer]" then
that context was justified. I do agree there are frivolous stories that do
little more than entertain (e.g. the first paragraphs of
https://hg.mozilla.org/mozilla-central/rev/6b064f9f6e10) and we should not
be encouraging that style.



Overlong unrelated commit messages just make it harder to read blame.



This is the tail wagging the dog. Please file a bug against the tool for
letting best practices interfere with an important workflow.





___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: A reminder about commit messages: they should be useful

2017-04-17 Thread smaug

On 04/18/2017 02:36 AM, Gregory Szorc wrote:

On Mon, Apr 17, 2017 at 4:10 PM, smaug <sm...@welho.com> wrote:


On 04/17/2017 06:16 PM, Boris Zbarsky wrote:


A quick reminder to patch authors and reviewers.

Changesets should have commit messages.  The commit message should
describe not just the "what" of the change but also the "why".  This is
especially
true in cases when the "what" is obvious from the diff anyway; for larger
changes it makes sense to have a summary of the "what" in the commit
message.

As a specific example, if your diff is a one-line change that changes a
method call argument from "true" to "false", having a commit message that
says
"change argument to mymethod from true to false" is not very helpful at
all.  A good commit message in this situation will at least mention the
meaning for the argument.  If that does not make it clear why the change
is being made, the commit message should explain the "why".

Thank you,
Boris

P.S.  Yes, this was prompted by a specific changeset I saw.  This
changeset had been marked r+, which means neither the patch author not the
reviewer
really thought about this problem.




And a reminder: commit messages should *not* be stories about how you ended
up with this particular change. They should just tell "what" and "why".
It seems like using mozreview occasionally leads to writing stories (which is
totally fine as a bugzilla comment).



I disagree somewhat. As a reviewer, I often appreciate the extra context if
it helps me - the reviewer - or a future me - an archeologist or patch
author - understand the situation better.


That is why we have links to the bug. The bug should always be the single source of truth 
telling
why some change was done. Bugs tend to have so much more context about the 
change than any individual commit message can or should have.




If that context prevents the
reviewer or a future patch author from asking "why didn't we do X [and
spending a few hours waiting for a reply or trying to find an answer]" then
that context was justified. I do agree there are frivolous stories that do
little more than entertain (e.g. the first paragraphs of
https://hg.mozilla.org/mozilla-central/rev/6b064f9f6e10) and we should not
be encouraging that style.



Overlong unrelated commit messages just make it harder to read blame.



This is the tail wagging the dog. Please file a bug against the tool for
letting best practices interfere with an important workflow.



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: A reminder about commit messages: they should be useful

2017-04-17 Thread smaug

On 04/17/2017 06:16 PM, Boris Zbarsky wrote:

A quick reminder to patch authors and reviewers.

Changesets should have commit messages.  The commit message should describe not just the 
"what" of the change but also the "why".  This is especially
true in cases when the "what" is obvious from the diff anyway; for larger changes it 
makes sense to have a summary of the "what" in the commit message.

As a specific example, if your diff is a one-line change that changes a method call argument from 
"true" to "false", having a commit message that says
"change argument to mymethod from true to false" is not very helpful at all.  A 
good commit message in this situation will at least mention the
meaning for the argument.  If that does not make it clear why the change is being made, 
the commit message should explain the "why".

Thank you,
Boris

P.S.  Yes, this was prompted by a specific changeset I saw.  This changeset had 
been marked r+, which means neither the patch author not the reviewer
really thought about this problem.



And a reminder: commit messages should *not* be stories about how you ended up with this particular 
change. They should just tell "what" and "why".
It seems like using mozreview occasionally leads to writing stories (which is 
totally fine as a bugzilla comment).
Overlong unrelated commit messages just make it harder to read blame.


-Olli
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Enabling Pointer Events in Firefox (desktop) Nightly on Mac and Linux

2017-04-10 Thread smaug

On 04/05/2017 07:34 PM, Tom Ritter wrote:

On Tue, Apr 4, 2017 at 10:29 PM,   wrote:

Security & Privacy Concerns: none


It looks like this exposes pointerType, which reveals whether the user
is using a mouse, pen, or touch input.



Note, that is already available through the old moz-prefixed API (which we very 
much should remove):
http://searchfox.org/mozilla-central/rev/eace920a0372051a11d8d275bd9b8f14f3024ecd/dom/webidl/MouseEvent.webidl#84-92




It also exposes detailed information about the geometry of the input
(size of the thing pointing, pressure, tilt, twist).

All of these are more detailed information that websites currently
receiving, meaning that this can be used as a mechanism for
fingerprinting (and tracking) users.

-tom



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Please write good commit messages before asking for code review

2017-03-13 Thread smaug

On 03/10/2017 12:43 AM, Ben Kelly wrote:

On Thu, Mar 9, 2017 at 5:35 PM, Mike Hommey  wrote:


On Thu, Mar 09, 2017 at 02:46:53PM -0500, Ehsan Akhgari wrote:

I review a large number of patches on a typical day, and usually I have to
spend a fair amount of time to just understand what the patch is doing.  As
the patch author, you can do a lot to help make this easier by *writing
better commit messages*.  Starting now, I'm going to try out a new practice
for a while: I'm going to first review the commit message of all patches,
and if I can't understand what the patch does by reading the commit message
before reading any of the code, I'll r- and ask for another version of the
patch.


Sometimes, the commit message does explain what it does in a sufficient
manner, but finding out why requires reading the bug, assuming it's
written there. I think this information should also be in the commit
message.



(Just continuing the thread here.)

Personally I prefer looking at the bug for the full context and single
point of truth.  Also, security bugs typically can't have extensive commit
messages and moving a lot of context to commit messages might paint a
target on security patches.



Exactly. In practice I tend to just read the first line of a commit message, and 
the most important part there is the bug number.
I've never been a fan of overlong commit messages.
MozReview makes this a bit different since it decouples bugs and changes, so 
there, seeing more explanation in the commit message is more useful.

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: The future of commit access policy for core Firefox

2017-03-13 Thread smaug

On 03/10/2017 12:59 AM, Bobby Holley wrote:

At a high level, I think the goals here are good.

However, the tooling here needs to be top-notch for this to work, and the
standard approach of flipping on an MVP and dealing with the rest in
followup bugs isn't going to be acceptable for something so critical to our
productivity as landing code. The only reason more developers aren't
complaining about the MozReview+autoland workflow right now is that it's
optional.

The busiest and most-productive developers (ehsan, bz, dbaron, etc) tend
not to pay attention to new workflows because they have too much else on
their plate. The onus needs to be on the team deploying this to have the
highest-volume committers using the new system happily and voluntarily
before it becomes mandatory. That probably means that the team should not
have any deadline-oriented incentives to ship it before it's ready.

Also, getting rid of "r+ with comments" is a non-starter.


FWIW, with my reviewer hat on, I think that is not true, _assuming_ there is a 
reliable interdiff for patches -
and not only for MozReview patches but for all patches. (MozReview's 
interdiff isn't reliable; it has data-loss issues, so it
doesn't really count here.)
I'd be ok to do a quick r+ if interdiff was working well.



-Olli



bholley


On Thu, Mar 9, 2017 at 1:53 PM, Mike Connor  wrote:


(please direct followups to dev-planning, cross-posting to governance,
firefox-dev, dev-platform)


Nearly 19 years after the creation of the Mozilla Project, commit access
remains essentially the same as it has always been.  We've evolved the
vouching process a number of times, CVS has long since been replaced by
Mercurial & others, and we've taken some positive steps in terms of
securing the commit process.  And yet we've never touched the core idea of
granting developers direct commit access to our most important
repositories.  After a large number of discussions since taking ownership
over commit policy, I believe it is time for Mozilla to change that
practice.

Before I get into the meat of the current proposal, I would like to outline
a set of key goals for any change we make.  These goals have been informed
by a set of stakeholders from across the project including the engineering,
security, release and QA teams.  It's inevitable that any significant
change will disrupt longstanding workflows.  As a result, it is critical
that we are all aligned on the goals of the change.


I've identified the following goals as critical for a responsible commit
access policy:


- Compromising a single individual's credentials must not be sufficient
to land malicious code into our products.
- Two-factor auth must be a requirement for all users approving or
pushing a change.
- The change that gets pushed must be the same change that was approved.
- Broken commits must be rejected automatically as a part of the commit
process.


In order to achieve these goals, I propose that we commit to making the
following changes to all Firefox product repositories:


- Direct commit access to repositories will be strictly limited to
sheriffs and a subset of release engineering.
   - Any direct commits by these individuals will be limited to fixing
   bustage that automation misses and handling branch merges.
- All other changes will go through an autoland-based workflow.
   - Developers commit to a staging repository, with scripting that
   connects the changeset to a Bugzilla attachment, and integrates
with review
   flags.
   - Reviewers and any other approvers interact with the changeset as
   today (including ReviewBoard if preferred), with Bugzilla flags as
the
   canonical source of truth.
   - Upon approval, the changeset will be pushed into autoland.
   - If the push is successful, the change is merged to mozilla-central,
   and the bug updated.

I know this is a major change in practice from how we currently operate,
and my ask is that we work together to understand the impact and concerns.
If you find yourself disagreeing with the goals, let's have that discussion
instead of arguing about the solution.  If you agree with the goals, but
not the solution, I'd love to hear alternative ideas for how we can achieve
the outcomes outlined above.

-- Mike
___
dev-planning mailing list
dev-plann...@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-planning



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: The future of commit access policy for core Firefox

2017-03-12 Thread smaug

On 03/12/2017 10:40 PM, Ehsan Akhgari wrote:

On 2017-03-11 9:23 AM, smaug via governance wrote:

On 03/11/2017 08:23 AM, Nicholas Nethercote wrote:

On Sat, Mar 11, 2017 at 2:23 PM, smaug via governance <
governa...@lists.mozilla.org> wrote:


I'd be ok to do a quick r+ if interdiff was working well.


Depending on the relative timezones of the reviewer and reviewee, that
could delay landing by 24 hours or even a whole weekend.


The final r+, if it is just cosmetic changes wouldn't need to be done by
the same reviewer.


OK, but what is the exact time zone efficient way to ensure that no huge
amount of delay is added for the turn around of this final review round?

Let's be realistic.  In practice, adding one extra person in the process
of landing of the patches that currently land with r+-with-nits *will*
slow them down.  And we should expect that it will slow them down by
factors such as time zones, people missing bugmail, etc,


Why do we need to expect that? In my mind the process would be closer to
"I want to land this patch, asking ok-to-land on IRC." And then someone who has 
the rights to say ok to that would
mark the patch after seeing that the interdiff doesn't add anything inherently 
bad.
(A well-working interdiff, and tooling in general, is rather critical.)



all of the
reasons why currently one review can end up taking a week,

I see this being _very_ different to normal reviews.

I do understand that people don't want extra process. Overhead is always 
overhead.
But the overhead here might not be as bad as feared.


it may end up
being that final round of review after this proposed change.

And I still don't understand what the proposal means with rebases in
practice.  What if, after automation tries to land your change after you
got your final r+ the final rebase fails and you need to do a manual
rebase?  Do you need to get another r+?  What happens if you're touching
one of our giant popular headers and someone else manages to land a
conflict while your reviewer gets back to you?


Perhaps we shouldn't even call the last step a review. It would be
"ok-to-land".
r+ without asking any changes would implicitly contain that "ok-to-land".
(if rebasing causes some changes, that would then need explicit
"ok-to-land")


Are you suggesting a change to the nature of the review process for the
last step with the rename?


The last step here would be quite different from normal reviews. It is just a 
rather quick glance-over to check that no
obviously evil code is about to land. (As I said, a well-working interdiff is 
absolutely critical to make this work.)

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: The future of commit access policy for core Firefox

2017-03-11 Thread smaug

On 03/11/2017 08:23 AM, Nicholas Nethercote wrote:

On Sat, Mar 11, 2017 at 2:23 PM, smaug via governance <
governa...@lists.mozilla.org> wrote:


I'd be ok to do a quick r+ if interdiff was working well.


Depending on the relative timezones of the reviewer and reviewee, that
could delay landing by 24 hours or even a whole weekend.


The final r+, if it is just cosmetic changes wouldn't need to be done by the 
same reviewer.

Perhaps we shouldn't even call the last step a review. It would be "ok-to-land".
An r+ without asking for any changes would implicitly contain that "ok-to-land".
(If rebasing causes some changes, that would then need an explicit "ok-to-land".)





In general there seems to be a large amount of support in this thread for
continuing to allow the r+-with-minor-fixes option.


Yeah. I don't object to that, but I also think that having final approval to land 
the patch might not really be that bad
(assuming the tools are working well enough).



Nick



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Please use Add*VarCache and not Get* when reading pref values in any even possibly warm code paths

2017-02-23 Thread smaug

On 02/23/2017 11:10 AM, Masayuki Nakano wrote:

Do you have any ideas of the cases we should use Add*VarCache?

For example, it's bad if using Get* when:

* every painting

possibly


* every mousemove

probably


* every user input except mousemove

well, touchmove happens a lot too, and possibly wheel, so maybe a cache 
should be used.
Writing a microbenchmark and profiling a bit shouldn't take more than a couple of 
minutes, and if the
Get* shows up in the profiles, it's probably worth converting to Add*VarCache.


* every focus move

happens quite rarely, so maybe not needed


* every DOM change

Definitely use cache


* every page load

This happens rarely, so probably doesn't matter



etc.

I wonder, if everybody uses Add*VarCache, doesn't it cause another performance 
problem when a pref is changed?

On 2017/02/22 20:18, smaug wrote:

Hi,

Preferences::GetBool is a slow hashtable lookup and has been showing up in 
performance profiles in many places.
Please use Preferences::AddBoolVarCache. Same with other pref types.
Or if the pref needs to be read just once, store the value in some static 
variable or so.



-Olli
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform





___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Please use Add*VarCache and not Get* when reading pref values in any even possibly warm code paths

2017-02-22 Thread smaug

Huh, that is a horrible bug.


On 02/22/2017 01:42 PM, Gijs Kruitbosch wrote:

Related note: add*varcache uses a pref observer behind the scenes. Pref 
observers always prefix-match, and the *varcache implementation doesn't bother
re-checking whether the pref for which it gets a notification matches the one you 
requested a cache for. So if you have prefs "blah.foo" and
"blah.foo.bar", changing blah.foo.bar will write garbage into your cache for 
blah.foo.

https://bugzilla.mozilla.org/show_bug.cgi?id=1208744

~ Gijs

On 22/02/2017 11:18, smaug wrote:

Hi,

Preferences::GetBool is a slow hashtable lookup and has been showing up
in performance profiles in many places.
Please use Preferences::AddBoolVarCache. Same with other pref types.
Or if the pref needs to be read just once, store the value in some
static variable or so.



-Olli




___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Please use Add*VarCache and not Get* when reading pref values in any even possibly warm code paths

2017-02-22 Thread smaug

Hi,

Preferences::GetBool is a slow hashtable lookup and has been showing up in 
performance profiles in many places.
Please use Preferences::AddBoolVarCache. Same with other pref types.
Or if the pref needs to be read just once, store the value in some static 
variable or so.
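
For illustration, a minimal sketch of the two patterns. The pref name and
variables are made up, and the signatures are only as I recall the Preferences
API of that era, so treat the details as an approximation:

#include "mozilla/Preferences.h"

using mozilla::Preferences;

// Slow in hot paths: a hashtable lookup on every call.
bool IsFeatureEnabledSlow() {
  return Preferences::GetBool("example.feature.enabled", false);
}

// Preferred: register a cached variable once (e.g. at startup); a pref
// observer keeps it in sync, so later reads are just a static load.
static bool sFeatureEnabled = false;

void InitFeaturePrefCache() {
  static bool sInitialized = false;
  if (!sInitialized) {
    sInitialized = true;
    Preferences::AddBoolVarCache(&sFeatureEnabled, "example.feature.enabled",
                                 false);
  }
}

bool IsFeatureEnabledFast() {
  return sFeatureEnabled;
}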



-Olli
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship: Event.timeStamp as DOMHighResTimeStamp

2017-02-21 Thread smaug

On 02/21/2017 07:03 AM, Brian Birtles wrote:

As of Firefox 54, I intend to turn on, by default, the code that makes
Event.timeStamp a DOMHighResTimeStamp. This makes this member a double
whose value is the number of milliseconds since the time origin.

This has been developed behind the dom.event.highrestimestamp.enabled
preference. This preference has been set to true on Windows for
Nightly/DevEdition for nearly 3 years, similarly on Linux for 1.5
years, on Mac since last December, and on Android since today. It is
disabled on beta/release for all platforms, however.

Chrome have shipped this since April last year. There were
compatibility concerns at the time but Chrome have continued shipping
this and it appears that Edge are considering making this change, and
WebKit might if either Gecko or Edge ship this[1].

(The original work on this goes back to bug 77992 which I believe
predates our "Intent to implement" practice so there is no intent to
implement mail to point to.)

Bug to turn on by default: https://bugzilla.mozilla.org/show_bug.cgi?id=1026804

Link to standard: I believe Anne is waiting on a second browser to
ship this change before updating the DOM spec: [2]

[1] https://github.com/whatwg/dom/issues/23#issuecomment-249319401
[2] https://github.com/whatwg/dom/issues/23#issuecomment-212815896



Especially because this should fix the old event.timeStamp handling 
inconsistencies, this sounds really good.

-Olli
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Should &&/|| really be at the end of lines?

2017-02-19 Thread smaug

On 02/18/2017 07:08 PM, Eric Rescorla wrote:


I'd also note that if we're not going to use "this is what we have done
historically"
as a guide, then it seems like much bigger changes are on the table and we
would probably be better off adopting some other well-defined coding
standard
wholesale (e.g., the Google C++ guide), with the decision largely being made
on what has the best tooling available. Personally, I'd prefer this, but I
haven't
heard much enthusiasm for discarding a number of our style choices, however
idiosyncratic they may be (aFoo, I'm looking at you).

-Ekr




I don't care too much about &&/|| coding style, though the current style does 
feel easier to
read, per the reasoning dmajor gave.



But I greatly care about having different naming for different kinds of 
variables.
That reduces the risk of errors and eases reviewing. So, removing aFoo? Please no! 
;)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Mozilla naming style vs. C++ standard library containers

2017-02-16 Thread smaug

On 02/16/2017 07:24 PM, Henri Sivonen wrote:

It seems that we already have MFBT code that has lower case methods
begin/end/cbegin/cend, etc., for compatibility with C++ standard
library iteration:
https://dxr.mozilla.org/mozilla-central/source/mfbt/ReverseIterator.h#136

I guess these methods shouldn't be capitalized, then.

It seems that even size() and empty() should be lower-case if one
wants a type to quack like a Container:
http://en.cppreference.com/w/cpp/concept/Container

Should these, too, no longer be capitalized?

If a containerish type has more methods that are uncapitalized for the
above reasons than methods whose names aren't constrained by standard
interop, should the remaining methods follow Mozilla naming or
standard library naming?




AFAIK, uncapitalized method names in MFBT are the old style, and new code 
should just
use Mozilla coding style.
This was discussed in some other thread in dev.platform, but I can't find it 
right now.


-Olli
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Mozilla naming style vs. C++ standard library containers

2017-02-16 Thread smaug

On 02/16/2017 08:05 PM, smaug wrote:

On 02/16/2017 07:24 PM, Henri Sivonen wrote:

It seems that we already have MFBT code that has lower case methods
begin/end/cbegin/cend, etc., for compatibility with C++ standard
library iteration:
https://dxr.mozilla.org/mozilla-central/source/mfbt/ReverseIterator.h#136

I guess these methods shouldn't be capitalized, then.

It seems that even size() and empty() should be lower-case if one
wants a type to quack like a Container:
http://en.cppreference.com/w/cpp/concept/Container

Should these, too, no longer be capitalized?

If a containerish type has more methods that are uncapitalized for the
above reasons than methods whose names aren't constrained by standard
interop, should the remaining methods follow Mozilla naming or
standard library naming?




AFAIK, uncapitalized method names in MFBT are the old style, and new code 
should just
use Mozilla coding style.
This was discussed in some other thread in dev.platform, but I can't find it 
right now.


It was in the thread about mozilla::Result
From njn:
> It was once a mess, but is not any more. MFBT has for some time used
>standard Mozilla style, with the following minor exception described in
> mfbt/STYLE:
>
>- Some of the files use a lower-case letter at the start of function names.
>  This is because MFBT used to use a different style, and was later
> converted
>  to standard Mozilla style. These functions have not been changed to use an
>  upper-case letter because it would cause a lot of churn in other parts of
> the
>  codebase. However, new files should follow standard Mozilla style and use
> an
>  upper-case letter at the start of function names.
>
>- Imported third-party code (such as decimal/*, double-conversion/*, and
> lz4*)
>  remains in its original style.






-Olli


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Please don't add JS implemented telemetry probes to hot code paths. (eom)

2017-02-14 Thread smaug

On 02/15/2017 12:41 AM, Botond Ballo wrote:

But yes, don't implement probes as a JS event listener when the event is one
that is dispatched rather commonly.
Same applies to observer service observers etc.


In bug 1238137, I added a SCROLL_INPUT_METHODS telemetry ping in
autoscrollLoop. Is that also inappropriate?

Thanks,
Botond

[1] 
http://searchfox.org/mozilla-central/rev/ac8a72f2d5204ced2004c93a9c193430c63a6a71/toolkit/content/browser-content.js#218


That is probably fine, since there we run JS anyhow.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Please don't add JS implemented telemetry probes to hot code paths. (eom)

2017-02-12 Thread smaug

On 02/12/2017 08:52 PM, Jared Wein wrote:

The original email looks to be related to:
https://bugzilla.mozilla.org/show_bug.cgi?id=1338891
and
https://bugzilla.mozilla.org/show_bug.cgi?id=1338902

though I agree that the message could have been explained a little better.



Sorry about that. I thought the subject was clear enough :)

But yes, don't implement probes as a JS event listener when the event is one 
that is dispatched rather commonly.
Same applies to observer service observers etc.

Bug 1338902 doesn't have much to do with this. It is just about the generic 
overhead probes add, also in C++.
Not very high overhead, but definitely measurable. So we need to be careful, 
also in C++, about what to measure and how.

-Olli



- Jared

On Sun, Feb 12, 2017 at 12:41 PM, Gijs Kruitbosch 
wrote:


Can you clarify what you mean by "JS implemented" here? I was under the
impression the telemetry implementation was in C++...

Separately, was there a concrete reason for you posting this warning? I
don't wish to shame people, but a concrete example of a 'wrong' pattern
might help in making it more obvious what exactly is problematic about what
you're suggesting, and what should be done instead.

~ Gijs

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Please don't add JS implemented telemetry probes to hot code paths. (eom)

2017-02-12 Thread smaug


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Deprecating XUL in new UI

2017-01-18 Thread smaug

On 01/18/2017 08:11 PM, J. Ryan Stinnett wrote:

Thanks for checking it out!

I guess the reading from / writing to disk is only for speeding up
initial load of the first window then?


Reading is for speeding up the initial load, and once the prototype has been 
loaded, no new loads are needed, since the prototype can be used as such to 
generate document objects.

We don't have anything like that for HTML/XHTML documents; they are always 
loaded/parsed from scratch.
But I wonder if we could use XUL documents with XHTML content. That should 
still use the prototype setup.


-Olli




- Ryan


On Wed, Jan 18, 2017 at 12:50 AM, smaug <sm...@welho.com> wrote:

On 01/18/2017 08:28 AM, smaug wrote:


On 01/17/2017 10:51 PM, J. Ryan Stinnett wrote:


On Mon, Jan 16, 2017 at 3:08 PM, smaug <sm...@welho.com> wrote:


On 01/16/2017 10:43 PM, Dave Townsend wrote:



One of the things I've been investigating since moving back to the
desktop
team is how we can remove XUL from the application as much as possible.
The
benefits for doing this are varied, some obvious examples:

* XUL is a proprietary standard and we barely maintain it.
* Shallower learning curve for new contributors who may already know
and
use HTML.
* HTML rendering is more optimized in the platform than XUL.



But XUL has a prototype cache which makes parsing faster and opening new
windows even faster, since one just needs to clone from the prototype and not
load anything.
So, be careful with the performance numbers.



I am not very familiar with the details of XUL prototype cache, but my
understanding of bug 1286082[1] is that currently the XUL prototype
cache is not actually working correctly (and possibly has been broken
since bug 592943[2] landed as part of Firefox 7 in 2011).

So, an even more careful investigation / comparison is warranted. The
prototype cache might not currently be doing anything.

[1]: https://bugzilla.mozilla.org/show_bug.cgi?id=1286082
[2]: https://bugzilla.mozilla.org/show_bug.cgi?id=592943

- Ryan



Those look like they are about reading/writing to disk, not about the cache
itself, which should still help quite a bit with new-window performance.
We did get a significant performance regression when starting to use HTML
for... hm, was it about:newtab or about:home or what.

And https://bugzilla.mozilla.org/show_bug.cgi?id=1286082 is only about one
case, not all of them, as far as I see, but let me test.




Just tested and we seem to load for example
chrome://browser/content/browser.xul
chrome://global/content/editMenuOverlay.xul
chrome://browser/content/baseMenuOverlay.xul
chrome://browser/content/places/placesOverlay.xul
loaded chrome://layoutdebug/content/layoutdebug-overlay.xul
chrome://browser/content/report-phishing-overlay.xul
successfully when starting the browser.
And once those are loaded, they don't need to be loaded again when opening
new windows.



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement and ship: URLSearchParams from array or object

2017-01-17 Thread smaug

So we'll get support for object too in FF53, not only array?

On 01/17/2017 07:11 PM, Andrea Marchesini wrote:

Summary: The URLSearchParams constructor has changed in the latest URL spec.
Now it's possible to create URLSearchParams objects starting from a string,
an array, or an object.

Bugs:
https://bugzilla.mozilla.org/show_bug.cgi?id=1330678

https://bugzilla.mozilla.org/show_bug.cgi?id=1330699
https://bugzilla.mozilla.org/show_bug.cgi?id=1331580



bug 1321623 has nothing to do with this ;)


Link to standard:
https://url.spec.whatwg.org/#interface-urlsearchparams

Platform coverage: all platforms

Estimated or target release: Fx53

Preference behind which this will be implemented: none

DevTools bug: no need

Do other browser engines implement this? I don't know this, yet

Tests: WPT

Security & Privacy Concerns: none



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Deprecating XUL in new UI

2017-01-17 Thread smaug

On 01/18/2017 08:28 AM, smaug wrote:

On 01/17/2017 10:51 PM, J. Ryan Stinnett wrote:

On Mon, Jan 16, 2017 at 3:08 PM, smaug <sm...@welho.com> wrote:

On 01/16/2017 10:43 PM, Dave Townsend wrote:


One of the things I've been investigating since moving back to the desktop
team is how we can remove XUL from the application as much as possible.
The
benefits for doing this are varied, some obvious examples:

* XUL is a proprietary standard and we barely maintain it.
* Shallower learning curve for new contributors who may already know and
use HTML.
* HTML rendering is more optimized in the platform than XUL.


But XUL has a prototype cache which makes parsing faster and opening new
windows even faster, since one just needs to clone from the prototype and not
load anything.
So, be careful with the performance numbers.


I am not very familiar with the details of XUL prototype cache, but my
understanding of bug 1286082[1] is that currently the XUL prototype
cache is not actually working correctly (and possibly has been broken
since bug 592943[2] landed as part of Firefox 7 in 2011).

So, an even more careful investigation / comparison is warranted. The
prototype cache might not currently be doing anything.

[1]: https://bugzilla.mozilla.org/show_bug.cgi?id=1286082
[2]: https://bugzilla.mozilla.org/show_bug.cgi?id=592943

- Ryan



Those look like they are about reading/writing to disk, not about the cache 
itself, which should still help quite a bit with new-window performance.
We did get a significant performance regression when starting to use HTML 
for... hm, was it about:newtab or about:home or what.

And https://bugzilla.mozilla.org/show_bug.cgi?id=1286082 is only about one 
case, not all of them, as far as I see, but let me test.



Just tested and we seem to load for example
chrome://browser/content/browser.xul
chrome://global/content/editMenuOverlay.xul
chrome://browser/content/baseMenuOverlay.xul
chrome://browser/content/places/placesOverlay.xul
loaded chrome://layoutdebug/content/layoutdebug-overlay.xul
chrome://browser/content/report-phishing-overlay.xul
successfully when starting the browser.
And once those are loaded, they don't need to be loaded again when opening new 
windows.

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Deprecating XUL in new UI

2017-01-17 Thread smaug

On 01/17/2017 10:51 PM, J. Ryan Stinnett wrote:

On Mon, Jan 16, 2017 at 3:08 PM, smaug <sm...@welho.com> wrote:

On 01/16/2017 10:43 PM, Dave Townsend wrote:


One of the things I've been investigating since moving back to the desktop
team is how we can remove XUL from the application as much as possible.
The
benefits for doing this are varied, some obvious examples:

* XUL is a proprietary standard and we barely maintain it.
* Shallower learning curve for new contributors who may already know and
use HTML.
* HTML rendering is more optimized in the platform than XUL.


But XUL has a prototype cache which makes parsing faster and opening new
windows even faster, since one just needs to clone from the prototype and not
load anything.
So, be careful with the performance numbers.


I am not very familiar with the details of XUL prototype cache, but my
understanding of bug 1286082[1] is that currently the XUL prototype
cache is not actually working correctly (and possibly has been broken
since bug 592943[2] landed as part of Firefox 7 in 2011).

So, an even more careful investigation / comparison is warranted. The
prototype cache might not currently be doing anything.

[1]: https://bugzilla.mozilla.org/show_bug.cgi?id=1286082
[2]: https://bugzilla.mozilla.org/show_bug.cgi?id=592943

- Ryan



Those look like they are about reading/writing to disk, not about the cache itself, 
which should still help quite a bit with new-window performance.
We did get a significant performance regression when starting to use HTML 
for... hm, was it about:newtab or about:home or what.

And https://bugzilla.mozilla.org/show_bug.cgi?id=1286082 is only about one 
case, not all of them, as far as I see, but let me test.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Deprecating XUL in new UI

2017-01-17 Thread smaug

On 01/17/2017 12:05 AM, Dave Townsend wrote:

Trees! I knew I was forgetting something, thank you. Yeah those are things
we're going to need some sane replacements for.

AS far as XBL goes, while I suspect it works from HTML documents I think we
want to be phasing out use of XBL too for pretty much the same reasons as
XUL. Web components seem like an obvious alternative and I understand that
they are only preffed off right now. If we can have them work in chrome
documents they could be the right replacement.


The Shadow DOM implementation v0 we have in tree will be replaced with v1. 
Those aren't quite compatible at the API level. We must not pref on the 
current Shadow DOM implementation.




On Mon, Jan 16, 2017 at 1:28 PM, Matthew N.  wrote:


Trees are the big thing that comes to mind. I've heard about three non-XUL
re-implementations (IIRC mostly in devtools), which sounds like a
maintainability issue and potential redundancy. I would rather keep using
XUL trees than have even more different tree implementations (though I'd be
fine with one or two simpler replacements, since many uses of a xul:tree
don't need a lot of the features, like nesting), which brings me to my next
point:

What about XBL? Does it just work from XHTML documents? Is our
implementation of Web Components ready to replace it and riding the trains?
I think it would be good to implement drop-in replacements (or as close as
possible) for some simple XBL bindings or native XUL elements to prove that
Web Components are a replacement. Once the Web Component version is working
we can transition to it everywhere. Perhaps something like 
would be a good candidate.

Matthew

On Mon, Jan 16, 2017 at 12:43 PM, Dave Townsend 
wrote:


One of the things I've been investigating since moving back to the
desktop team is how we can remove XUL from the application as much as
possible. The benefits for doing this are varied, some obvious examples:

* XUL is a proprietary standard and we barely maintain it.
* Shallower learning curve for new contributors who may already know and
use HTML.
* HTML rendering is more optimized in the platform than XUL.
* Further integration of Servo code may require dropping XUL altogether.

The necessary first step of reducing XUL use is to stop adding any more
UI that uses XUL and here I'm talking about wholly new UI, additions to
browser.xul and other existing UI are a separate concern. What do folks
think about making this a rule for new projects in the near future?

Of course there are some features missing from HTML that make this
impossible in some cases right now. Some that I've already found:

* HTML has no support for popup panels like XUL does. The devtools team
have been working around this but they are still dependent on XUL right now.
* iframe elements don't have the same capabilities that the XUL browser
element does and we use that in some UI.
* Top level menus on OSX are currently only able to be defined with XUL
elements. This only affects UI that uses top-level windows and most of our
new UI is in-content so may be less of an issue.

What other features do we depend on in XUL that I haven't listed?

___
firefox-dev mailing list
firefox-...@mozilla.org
https://mail.mozilla.org/listinfo/firefox-dev






___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Deprecating XUL in new UI

2017-01-16 Thread smaug

On 01/16/2017 10:43 PM, Dave Townsend wrote:

One of the things I've been investigating since moving back to the desktop
team is how we can remove XUL from the application as much as possible. The
benefits for doing this are varied, some obvious examples:

* XUL is a proprietary standard and we barely maintain it.
* Shallower learning curve for new contributors who may already know and
use HTML.
* HTML rendering is more optimized in the platform than XUL.


But XUL has a prototype cache which makes parsing faster and opening new 
windows even faster, since one just needs to clone from the prototype and not 
load anything.
So, be careful with the performance numbers.



* Further integration of Servo code may require dropping XUL altogether.

The necessary first step of reducing XUL use is to stop adding any more UI
that uses XUL and here I'm talking about wholly new UI, additions to
browser.xul and other existing UI are a separate concern. What do folks
think about making this a rule for new projects in the near future?

Of course there are some features missing from HTML that make this
impossible in some cases right now. Some that I've already found:

* HTML has no support for popup panels like XUL does. The devtools team
have been working around this but they are still dependent on XUL right now.
* iframe elements don't have the same capabilities that the XUL browser
element does and we use that in some UI.
* Top level menus on OSX are currently only able to be defined with XUL
elements. This only affects UI that uses top-level windows and most of our
new UI is in-content so may be less of an issue.

What other features do we depend on in XUL that I haven't listed?


templates? xul:tree? I guess both of those can be implemented in JS too, just 
possibly not as efficiently.



-Olli
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Report about the NetworkInformation meeting

2017-01-08 Thread smaug

On 01/07/2017 02:55 AM, Andrea Marchesini wrote:

Today we had a meeting about the next steps for NetworkInformation API.

The results of this meeting are:

1. we want to keep what we have in m-c. It means NetworkInformation enabled
on fennec, main-thread and workers. It stays disabled on desktop.

2. we need more data about the usage and the use-cases of this API before
taken further steps.
Marcos will probably help with this. After that, we can probably organize
another meeting and/or continue the discussion here.




Has someone contacted, or will someone contact, Google and other browser 
vendors in order to figure out
what kind of API we'd like to see implemented everywhere?


-Olli
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to Experiment: CSS Houdini Paint API Level 1

2017-01-06 Thread smaug

On 01/05/2017 06:00 PM, Jet Villegas wrote:

Spec: https://drafts.css-houdini.org/css-paint-api/

Summary: The CSS Paint API is the first of several Web Rendering proposals
from the CSS Houdini Task Force. The CSS Paint API allows Web authors to
define and register a custom Paint method to be executed by the Layout
engine as other elements are rendered.

Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1302328

Link to standard: https://drafts.css-houdini.org/css-paint-api/

Platform coverage: Android, Desktop

Estimated or target release: TBD

Preference behind which this will be implemented: TBD

DevTools bug: TBD

Tests - TBD

Implementation Details:

CSS Paint API depends on the implementation of the following features:
CSS Houdini Properties & Values - https://bugzilla.mozilla.org/
show_bug.cgi?id=1273706

As you probably know, baku is working on getting support for proper worklets, 
not just a prototype, supporting running on different threads and so on 
(AudioWorklets really need that).
https://bugzilla.mozilla.org/show_bug.cgi?id=1328964
https://bugzilla.mozilla.org/show_bug.cgi?id=1328964




Houdini "Worklets" - https://bugzilla.mozilla.org/show_bug.cgi?id=1290021

The planned implementation in Gecko builds upon the HTML5 Canvas2D API to
provide the rendering surface for CSS Paint.

Risks:

The experimental implementation has progressed enough to warrant this
public Intent to Implement. However, significant risks will need to be
mitigated by careful design and execution if these features are to pass the
experimental stage and ship in Firefox, including:

1. Use Cases. It's not clear that the use cases proposed in the
specification warrant the additional Rendering System complexity. Apart
from conical gradients, we haven't seen many author requests for the other
use cases. If the existing Canvas2D feature set is lacking, what are the
compelling use cases and maximally useful API for such use cases? It's not
clear that the proposed Canvas2D-based API is desirable over a different
API design (eg., WebGL) if most use cases need to directly manipulate
pixels.

2. Rendering Performance. The planned Canvas2D backing store approach may
be too slow for real-world usage of the API. In the future, we may replace
the Canvas2D approach to have the custom paint methods create Layout
(displayList) nodes for direct rendering by the Layout engine, bypassing
the need for a Canvas2D backing store. It's worth noting that the Paint API
isn't directly compatible with existing displayList nodes (e.g., support
for raw paths, funny shapes, & pixel manipulation.)

There may also be other performance issues that arise with the API's usage
in combination with existing CSS features (e.g., CSS Masking, Filters,
etc.) The displayList vs. canvas bitmap implementation would probably look
a good bit different in WebRender. It's also worth noting that multiple
implementations shipping a bitmap-based version can create dependencies
that prevent us from switching to a faster alternative version in the
future.

Couldn't we add some new types of worklets for different APIs?
Or, has having a method to access the worklet been discussed?
So, not window.paintWorklet but
window.getPaintWorklet("2d")
which could be extended later, similar to canvas contexts, to support
different types of paint worklets.





3. Integration with Gecko Architecture. The Quantum Project <
https://wiki.mozilla.org/Quantum> is a major overhaul of the Firefox
Rendering Engine. Implementing the CSS Paint API while that effort is in
progress may add significant impedance. However, a counter-argument is that
we should design Quantum to allow for such extensibility in the future.
Duplication of work ( writing code that would need to be rewritten for
Quantum ) is not desirable and should be avoided.

For WebRender/Quantum, we could initially push this through the same path
that SVG will go through (which will be rasterized on the CPU and then
cached in a GPU texture atlas). It does seem like Houdini Paint could
reduce the amount of acceleration we can do on the GPU (at least in the
short term), but we won't be any worse off than other browsers in that
regard.

4. Dependency on incomplete implementations/specifications. The dependency
creates a chicken/egg scenario where we can't sufficiently evaluate the
dependent specifications (e.g, Worklets, and Properties & Values) without
also implementing an initial key use case (e.g., CSS Paint or Worklets for
Web Audio.) This is somewhat mitigated by getting the implementation far
enough along to formulate informed opinions on all the specifications.

The Houdini Task Force is meeting next week in Seattle to discuss this and
other specifications.



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Introducing mozilla::Result for better error handling

2016-12-23 Thread smaug

On 12/20/2016 03:46 PM, Jan de Mooij wrote:

Hi all,

A few weeks ago we added mozilla::Result to MFBT [0][1]. I was asked
to inform dev-platform about this, so here's a quick overview.
mozilla::Result is based on Rust's Result type [2]. It contains
either a success value of type V or an error value of type E. For example,
a function Foo that returns an `int` on success or some `Error` (enum,
pointer, etc) on failure, would have this signature:

   Result<int, Error> Foo();

mozilla::Ok is an empty struct that can be used when a function doesn't
return anything on success:

   Result<Ok, Error> Bar() { ... return Ok(); }

The MOZ_TRY(expr) macro is similar to Rust's try! macro: if `expr` is an
error it propagates it to the caller, else it continues:

   Result<Ok, Error> Baz() { MOZ_TRY(Bar()); ... }

There's also a MOZ_TRY_VAR macro that can be used when you want to store
the return value on success. Result has isOk(), isErr(), unwrapOk(),
unwrapErr() methods that do what you'd expect. It also has the
MOZ_MUST_USE_TYPE annotation, so the static analysis builds will complain
if you ignore the return value of a function that returns Result.
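
For example, MOZ_TRY_VAR might be used roughly like this (ComputeCount and
UseCount are hypothetical, just to illustrate the description above):

   Result<int, Error> ComputeCount();

   Result<Ok, Error> UseCount()
   {
     int count;
     MOZ_TRY_VAR(count, ComputeCount());  // on error, propagates it to our caller
     // ... use count ...
     return Ok();
   }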


Why is the API not using Mozilla coding style for method naming?
"In C/C++, method names should be capitalized and use CamelCase."
Though it looks like MFBT is an odd beast in coding style.
Waiting for tools to just format everything to use the same style.



Internally, Result uses mozilla::Variant, but there are some cases that can
be stored more efficiently. For instance, Result<Ok, Error*> just stores an
Error* pointer and Ok is represented as nullptr. This is more efficient and
will also make it easier to call functions that return Result from JIT
code. The documentation [3] has more info.

The long-term plan is to use Result in SpiderMonkey, to simplify our
error/OOM handling [4][5].

Many thanks to Jason Orendorff (jorendorff) for doing most of the work by
writing the initial Result patches, and to Jeff Walden (Waldo) for his
thorough code reviews.

Jan

[0] https://bugzilla.mozilla.org/show_bug.cgi?id=1283562
[1] https://searchfox.org/mozilla-central/source/mfbt/Result.h
[2] https://doc.rust-lang.org/std/result/
[3]
https://searchfox.org/mozilla-central/rev/cc2a84852bd4e6f6d8d4d5b17b8382bb5d005749/mfbt/Result.h#148-158
[4] https://bugzilla.mozilla.org/show_bug.cgi?id=1277368
[5] https://searchfox.org/mozilla-central/source/js/public/Result.h



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Introducing mozilla::Result for better error handling

2016-12-23 Thread smaug

On 12/22/2016 08:14 PM, Bobby Holley wrote:

We've had this debate several times already, culminating in the attempt to
ban NS_ENSURE_* macros. It didn't work.


This is a bit different. One of the most common NS_ENSURE_ macros is the SUCCESS 
variant, which explicitly has the return value.
MOZ_TRY doesn't really hint at all, not even in its name, that it is doing an 
early return.

But I've always liked even NS_ENSURE_STATE, so I guess I could get used to
code using Result, even though at first sight it looks rather verbose.



Subjectively, I think the reasons for the failure were:
(a) We want logging/warnings on exceptional return paths, especially the
ones we don't expect.
(b) We want to check every potential failure and propagate error codes
correctly.
(c) Trying to do (a) and (b) consistently without macros bloats every
function call to 5 lines of boilerplate.

If we do want to rehash the early-return macro discussion, we should do
that separately. As it stands, MOZ_TRY is necessary to give mozilla::Result
feature parity with nsresult. We don't want people to stick with nsresult
just because they like NS_ENSURE_SUCCESS.

bholley

On Wed, Dec 21, 2016 at 11:22 PM, Eric Rahm  wrote:


The key point for me is that we're hiding the return. I'm fine with the
more verbose explicitly-return-and-make-it-easier-for-the-reviewer-to-
call-out-issues
form. I understand this is going to have differing opinions -- I think
there's merit to more concise code -- but for the things I look at I much
prefer an explicit return. It makes both the writer and reviewer think
about the consequences. If MOZ_TRY just release asserted on error I
wouldn't have an issue.

On Wed, Dec 21, 2016 at 10:59 PM, Kris Maglione 
wrote:


On Wed, Dec 21, 2016 at 10:53:45PM -0800, Eric Rahm wrote:


I like the idea of pulling in some Rusty concepts, but I'm concerned about
adding yet another early return macro -- absolutely no arguments on the new
type, just the |MOZ_TRY| macro. In practice these have led to security
issues (UAF, etc) and memory leaks in the C++ world (I'm looking at you
NS_ENSURE). These aren't hypothetical issues, I've run into them both
working on memshrink and sec issues in Firefox.


We run into the same issues without those macros. The only real difference
is that when you have to follow every call with:

  if (NS_WARN_IF(NS_FAILED(rv))) {
    return rv;
  }

It's much easier to lose track of the allocation you did four calls ago
that's now 20 lines away.


On Wed, Dec 21, 2016 at 9:53 AM, Ted Mielczarek 

wrote:

On Wed, Dec 21, 2016, at 12:30 PM, Jason Orendorff wrote:

The implicit conversion solves a real problem. Imagine these two operations
have two different error types:

 MOZ_TRY(JS_DoFirstThing()); // JS::Error&
 MOZ_TRY(mozilla::pkix::DoSecondThing()); // pkix::Error

We don't want our error-handling scheme getting in the way of using them
together. So we need a way of unifying the two error types: a shared base
class, perhaps, or a variant.

Experience with Rust says that MOZ_TRY definitely needs to address this
problem somehow. C++ implicit conversion is just one way to go; we can
discuss alternatives in the bug.
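
A rough sketch of the kind of unification being discussed, using simplified
stand-ins for JS::Error& and pkix::Error (illustration only, not code from
the bug):

   struct JSError {};
   struct PkixError {};

   // One way to unify: a common error type both specific errors convert to.
   struct AnyError {
     AnyError(JSError) {}
     AnyError(PkixError) {}
   };

   Result<int, JSError> DoFirstThing();
   Result<Ok, PkixError> DoSecondThing();

   Result<Ok, AnyError> DoBoth()
   {
     auto first = DoFirstThing();
     if (first.isErr()) {
       return Err(AnyError(first.unwrapErr()));  // manual translation today
     }
     auto second = DoSecondThing();
     if (second.isErr()) {
       return Err(AnyError(second.unwrapErr()));
     }
     return Ok();
     // The implicit-conversion proposal is about letting plain
     // MOZ_TRY(DoFirstThing()) do this error translation automatically.
   }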


The `try` macro in Rust will auto-convert error types that implement
`Into`, AIUI, but that's not automatic for all error types. I haven't
tried it, but I have seen multiple recommendations for the `error_chain`
crate to make this smoother:
https://docs.rs/error-chain/0.7.1/error_chain/

It's basically just boilerplate to implement conversions from other
error types. I wouldn't be surprised if something like that percolates
into the Rust standard library at some point.

-Ted
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform

___

dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform



--
Kris Maglione
Firefox Add-ons Engineer
Mozilla Corporation

It's always good to take an orthogonal view of something.  It develops
ideas.
 --Ken Thompson



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to unship: BeforeAfterKeyboardEvent

2016-12-16 Thread smaug

On 12/16/2016 10:36 AM, Masayuki Nakano wrote:

Due to the end of the B2G platform, I'd like to remove mozbrowserbeforekeydown, 
mozbrowserbeforekeyup, mozbrowserafterkeydown and mozbrowserafterkeyup
events and their event classes.

They were implemented in bug 989198 [1] to allow embedded browser elements 
to override keyboard events [2]. These events have never been enabled
on non-B2G platforms, but they make keyboard event propagation complicated, 
especially in PresShell.

I'm working on this in bug 1322736 [3].

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=989198
[2] 
https://wiki.mozilla.org/WebAPI/BrowserAPI/KeyboardEvent#Dispatch_KeyboardEvent_across_BrowserElements
[3] https://bugzilla.mozilla.org/show_bug.cgi?id=1322736




Sounds good.



-Olli
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship: NetworkInformation

2016-12-15 Thread smaug

On 12/15/2016 09:53 PM, Boris Zbarsky wrote:

On 12/15/16 2:39 PM, Ehsan Akhgari wrote:

FWIW I was one of the people who were involved in the discussions around
this for Firefox OS.  From what I remember, the argument for shipping
this API was that web developers have been asking for this for years,
and they are basically happy to know the distinction between cellular
data and other transports, and infer whether the connection "costs
money".


OK.  That does seem like a useful thing to expose, but it's precisely one bit 
of information, right?  Why are we exposing all this other stuff instead?

-Boris



Even "costs money" based on the transport type is very unreliable. Coming from 
a country where basically all the mobile data plans
are unlimited (and cheap), it mostly just annoys when some OSes warn about use of mobile data to download lots of data (like updates) because of its 
possible cost. Rather bad UX.

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: [e10s] Higher priority handling for vsync in child processes

2016-11-14 Thread smaug

Vsync handling usually happens right after a task (per the HTML spec).
Basically we now have two event queues with similar priority, and they are 
handled with the same priority, but since
the other one is used very rarely, events added to it get processed on average 
sooner than the ones added to the normal queue.
(We can't starve the normal queue, so both need to be processed.)

MemoryPressure seems to want to really be processed before any new task, so the 
new system can't be used for that.


Looks like memory pressure works only on some platforms.

On 11/10/2016 03:25 PM, Gabriele Svelto wrote:

On 10/11/2016 00:22, smaug wrote:

Parent process doesn't yet have higher priority handling [2].
Need to fix racy tests first.
Locally using higher priority also in parent process seems to make tab
throbber smoother.


We have an ad-hoc mechanism for scheduling memory pressure event ahead
of all other pending tasks [1]. Do you think we could use this new
functionality to replace it? It would be nice to have this handled via a
generic mechanism too.

  Gabriele

[1]
https://dxr.mozilla.org/mozilla-central/rev/336759fad4621dfcd0a3293840edbed67018accd/xpcom/threads/nsThread.cpp#1444




___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


[e10s] Higher priority handling for vsync in child processes

2016-11-09 Thread smaug

Hi,

the latest nightly has higher priority for 
vsync/refreshDriver/requestAnimationFrame handling in child processes [1], 
which should make
rendering updates smoother, at least in some cases.
If you see any unexpected behavior, please file bugs and CC me, :smaug.

And if there are cases where responsiveness now feels significantly better, I'd 
be curious to know about those too.


Parent process doesn't yet have higher priority handling [2].
Need to fix racy tests first.
Locally using higher priority also in parent process seems to make tab throbber 
smoother.


-Olli



[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1306591
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1315570
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: HTML spec changes about data: URIs and origins

2016-11-08 Thread smaug

On 11/07/2016 10:41 PM, smaug wrote:

Just to get some idea how many tests would be broken:
https://treeherder.mozilla.org/#/jobs?repo=try=28735d0f2e5516c5a6d1f7805a065a6edbd8f28b



The results show that quite a few tests need to be fixed, if we want to change 
data: handling.
Should we start doing that? I think we should since eventually we should become 
compatible with other engines.

(I'm still busy fixing browser chrome tests to work with proper Promise 
scheduling, so can't help here quite yet)


-Olli




On 09/13/2016 03:31 PM, Frederik Braun wrote:

Firefox treats iframes pointing to a data URL as same-origin. This is
all well-known, was part of the HTML spec and has been discussed before
[1,2]

What has changed now is the HTML spec text[3]: Given that EdgeHTML,
Webkit and Blink violated this requirement, the standard now turned
around and assigns them a unique opaque origin.
I'll gladly accept the fact that we are not the violator, given the
security implications [1].

The GitHub related issue[4] included a discussion with some of our DOM
folks, but did not come to a conclusion as to what we plan to do here.

Is back compat the main concern? I'd be happy to add a telemetry probe
and a devtools warning if someone is willing to point me in the right
direction.


Thanks,
Freddy


[1] https://bugzilla.mozilla.org/show_bug.cgi?id=255107
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1018872
[3]
https://github.com/whatwg/html/commit/00769464e80149368672b894b50881134da4602f
[4] https://github.com/whatwg/html/issues/1753





___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform

