Re: New XPIDL attribute: [infallible]

2012-08-24 Thread smaug

On 08/24/2012 02:42 AM, Neil wrote:

Justin Lebar wrote:


So now you can do

 nsCOMPtr<nsIFoo> foo;
 int32_t f = foo->GetFoo();


Why was I expecting this to be Foo()? (Perhaps unreasonably.)


Yeah, it should be Foo().
File a bug?






I rejected the first approach because it meant that every call to GetFoo from 
XPCOM would need to go through two virtual calls: GetFoo(int32_t*) and
then GetFoo().


And also because MSVC would have messed up the vtable.



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: How to be notified when a node gets detached/reparented?

2012-10-11 Thread smaug

On 10/11/2012 02:40 PM, Paul Rouget wrote:

Context: in the firefox devtools, we need to track some nodes and update
different views based on what's happening to this node (show its parents,
show its child, show its attributes, …).

The new Mutation observers are very helpful. But there's one thing I am not
really sure how to handle correctly.

When a node gets detached (parent.removeChild(node)) or reparented, I need to
be notified.

My current idea is to listen to childList mutations from the parent;
then, on such a mutation, check if the node is still among the children of
the parent. If not, check if it has a parent: if so, the node has been
*relocated*, and I need to re-listen for childList mutations from this
new parent; if it has no parent, the node has been *detached*.


Why do you need to re-listen anywhere?
You get the node in a MutationRecord and when the callback is called you check 
where it is.
( node.contains can be useful and certainly faster than anything in JS. )
If the node doesn't have parent, it is detached.
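To illustrate the suggestion in plain code: the callback just inspects where the node is now, with no re-registering of observers. The helper below is a sketch using stand-in objects (plain JS, no DOM) so the logic runs anywhere; in a page you'd perform the same check inside the MutationObserver callback on the real node, using parentNode and node.contains.

```javascript
// Sketch: classify what happened to a tracked node, given the parent we
// last saw it under. Plain objects stand in for DOM nodes here; in devtools
// you'd call this from the MutationObserver callback with real nodes.
function classifyNode(node, lastKnownParent) {
  if (!node.parentNode) {
    return "detached";          // removeChild() with no re-insert
  }
  if (node.parentNode !== lastKnownParent) {
    return "relocated";         // removed and re-appended elsewhere
  }
  return "unchanged";
}

// Stand-in "nodes":
const oldParent = {};
const newParent = {};
console.log(classifyNode({ parentNode: null }, oldParent));      // detached
console.log(classifyNode({ parentNode: newParent }, oldParent)); // relocated
console.log(classifyNode({ parentNode: oldParent }, oldParent)); // unchanged
```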




I was wondering if there was any better way to do that.

Thanks,

-- Paul



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposed W3C Charter: Pointer Events Working Group

2012-10-11 Thread smaug

On 10/11/2012 07:55 PM, L. David Baron wrote:

W3C is proposing a charter for a new Pointer Events
Working Group.  For more details, see:
http://lists.w3.org/Archives/Public/public-new-work/2012Sep/0017.html
http://www.w3.org/2012/pointerevents/charter/charter-proposed.html

Mozilla has the opportunity to send comments or objections through
Thursday, October 25.  Please reply to this thread if you think
there's something we should say.

-David




We should join PEWG. Nicer API than touch API.
The spec needs some work but is a good approach.


-Olli
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


No more cycle collections during shutdown (opt builds)

2012-12-14 Thread smaug

Hi all,

I just landed the patch for https://bugzilla.mozilla.org/show_bug.cgi?id=818739 
in order to speed up
shutdown times. Shutdown cycle collections are still run in debug builds so 
that we can
detect leaks. Also, one can set XPCOM_CC_RUN_DURING_SHUTDOWN env variable to 
enable
shutdown cycle collections in opt builds - can be useful when debugging leaks 
and such.
If you see regressions, please file new bugs and make them block bug 818739.
Regressions probably mean that we have code which is trying to do I/O too late.
Based on https://bugzilla.mozilla.org/show_bug.cgi?id=818739#c17 such I/O 
shouldn't
happen in normal cases.




-Olli
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: No more cycle collections during shutdown (opt builds)

2012-12-15 Thread smaug

On 12/15/2012 12:31 AM, smaug wrote:

Hi all,

I just landed the patch for https://bugzilla.mozilla.org/show_bug.cgi?id=818739 
in order to speed up
shutdown times. Shutdown cycle collections are still run in debug builds so 
that we can
detect leaks. Also, one can set XPCOM_CC_RUN_DURING_SHUTDOWN env variable to 
enable
shutdown cycle collections in opt builds - can be useful when debugging leaks 
and such.
If you see regressions, please file new bugs and make them block bug 818739.
Regressions probably mean that we have code which is trying to do I/O too late.
Based on https://bugzilla.mozilla.org/show_bug.cgi?id=818739#c17 such I/O 
shouldn't
happen in normal cases.




-Olli


In other words, please test shutdown in Nightlies >= 2012-12-15.



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Use of instanceof SomeDOMInterface in chrome and extensions

2012-12-29 Thread smaug

On 12/27/2012 12:18 PM, Boris Zbarsky wrote:

We have a bunch of chrome and extension code that does things like instanceof 
HTMLAnchorElement (and likewise with other DOM interfaces).

The problem is that per WebIDL spec and general ECMAScript sanity this 
shouldn't work: instanceof goes up the proto chain looking for the thing on the
right as a constructor, and chrome's HTMLAnchorElement is not on the proto 
chain of web page elements.

The arguably right way to do the el instanceof HTMLAnchorElement test is:

   el instanceof el.ownerDocument.defaultView.HTMLAnchorElement

Needless to say this sucks.

And it doesn't work for data documents, which don't have a defaultView.
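As a plain-JS sketch of that workaround (and of the data-document pitfall smaug mentions), one might wrap the check like this. FakeAnchor and the mock ownerDocument objects are stand-ins for a real cross-global setup, not actual Gecko API:

```javascript
// Sketch of "look up the constructor in the element's own global".
// Returns false instead of throwing when there is no defaultView
// (the data document case).
function isInstanceInOwnGlobal(el, interfaceName) {
  const doc = el.ownerDocument;
  const win = doc && doc.defaultView;
  if (!win || typeof win[interfaceName] !== "function") {
    return false;  // data document, or interface not exposed in that global
  }
  return el instanceof win[interfaceName];
}

// Mock cross-global setup (stand-ins, not real DOM):
function FakeAnchor() {}
const contentWin = { FakeAnchor };
const el = new FakeAnchor();
el.ownerDocument = { defaultView: contentWin };
console.log(isInstanceInOwnGlobal(el, "FakeAnchor"));        // true

const dataDocEl = new FakeAnchor();
dataDocEl.ownerDocument = {};                                // no defaultView
console.log(isInstanceInOwnGlobal(dataDocEl, "FakeAnchor")); // false
```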




For now we're violating the spec in a few ways and hacking things like the 
above to work, but that involves preserving the nsIDOMHTML* interfaces,
which we'd like to get rid of to reduce memory usage and whatnot.

So the question is how we should make the above work sanely.  I've brought up 
the problem a few times on public-script-coord and whatnot, but there
seems to not be much interest in solving it, so I think we should take the next 
step and propose a specific solution that we've already implemented.

One option is to model this on the Array.isArray method ES has.  We'd have to 
figure out how to name all the methods.


You mean something like Node.is(element, HTMLAnchorElement); ?



Other ideas?

No matter what we'd then need to migrate our chrome and extensions to the new 
setup, so I'd rather not change this more than once.  Especially for
extensions.

-Boris


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: The future of PGO on Windows

2013-01-31 Thread smaug

On 01/31/2013 10:37 AM, Nicholas Nethercote wrote:

On Thu, Jan 31, 2013 at 3:03 PM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:

Given the above, I'd like to propose the following long-term solutions:

1. Disable PGO/LTCG now.

2. Try to delay disabling PGO/LTCG as much as possible.

3. Try to delay disabling PGO/LTCG until the next time that we hit the
limit, and disable PGO/LTCG then once and for all.


In the long run, 1 and 3 are the same.  If we know we're going to turn
it off, why not bite the bullet and do it now?



Because we're still missing plenty of optimizations in our code
to be fast in microbenchmarks. It would be quite a huge PR loss if we suddenly
were 10-20% slower in benchmarks.
But we're getting better (that last spike is because bz managed to effectively 
optimize out one test).
http://graphs.mozilla.org/graph.html#tests=[[73,1,1]]&sel=none&displayrange=365&datatype=running

Has anyone run other than dromaeo? Peacekeeper perhaps?


-Olli



One big advantage of
that is that we'd immediately stop suffering through PGO-only bugs.
(I'm not necessarily advocating this, BTW, just observing that the two
options are basically equivalent.)

Also, stupid question time:  is it possible to build on Windows with
GCC and/or clang?

Nick



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: print preview and documentation

2013-02-04 Thread smaug

On 02/02/2013 11:37 AM, rvj wrote:

It's been a couple of years since I've used the Mozilla libraries and some of the
functionality now seems iffy.

For example, print preview requests now generate an invalid pointer error:

error: [Exception... "Component returned failure code: 0x80004003
(NS_ERROR_INVALID_POINTER) [nsIWebBrowserPrint.printPreview]"  nsresult: "0x80004003
(NS_ERROR_INVALID_POINTER)"  location: "JS frame ::
chrome://testbrowser/content/proofer.xul :: previewprint :: line 119"  data: no]

Tried the FAQ for print preview and, along with all the other FAQ documents,
it seems to be unavailable on MDN.

Has the calling syntax for print preview changed ?

Is this just a general deprecation in support for application development?


The way to use the print preview API changed in December 2009, so if your code is
older, you'll have to update.
You can look at the uses of the printPreview method in MXR. The basic idea is that
you have the original docshell and its document, and you call
print preview so that the document gets cloned into another docshell.

https://bugzilla.mozilla.org/show_bug.cgi?id=487667 has some more information.

-Olli

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Running mousemove events from the refresh driver

2013-02-15 Thread smaug

On 02/14/2013 05:48 AM, Robert O'Callahan wrote:

On Thu, Feb 14, 2013 at 3:21 AM, Benjamin Smedberg benja...@smedbergs.uswrote:


On what OSes? Windows by default coalesces mouse move events. They are
like WM_PAINT events in that they are only delivered when the event queue
is empty. See
http://blogs.msdn.com/b/oldnewthing/archive/2011/12/19/10249000.aspx

This should basically mean that we process mousemove events on windows up
to 100% CPU, but we should never be flooded by them. Although I do wonder
if WM_MOUSEMOVE has priority over WM_PAINT so that if the mouse is moving a
lot, that could affect the latency of WM_PAINT.



We are definitely getting flooded. Here's what I think is happening on the
page I'm looking at, it's pretty simple:
1) nsAppShell::ProcessNextNativeEvent checks for input events, finds a
WM_MOUSE_MOVE, and dispatches it, which takes a little while because this
page's mousemove handler modifies and flushes layout on every mouse move.
2) While that's happening, the mouse keeps moving.
3) After processing that WM_MOUSE_MOVE, ProcessNextNativeEvent calls
PeekMessage again and finds another WM_MOUSE_MOVE is ready. Go to step 1.

Hmm, why do we call PeekMessage at that point and not go to the Gecko event loop?
IIRC we still have the problem at least on OSX that we check native events too 
often.




4) Meanwhile the refresh driver timer has fired and queued an event, but we
don't get around to running it until NATIVE_EVENT_STARVATION_LIMIT has
expired (one second).

I suppose we could try ignoring WM_MOUSE_MOVEs when there's a Gecko event
pending, but that sounds kinda scary. I think deferring DOM mousemove
events to the next refresh driver tick would be safer than that.

Rob
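The "defer DOM mousemove events to the next refresh driver tick" idea amounts to coalescing: remember only the latest event and deliver it once per tick. A generic sketch follows; in a real page the schedule callback would be the refresh driver (or requestAnimationFrame), but here it is a parameter so the logic is self-contained:

```javascript
// Coalesce a flood of move events down to one handler call per tick,
// always delivering only the most recent event.
function makeCoalescer(handler, schedule) {
  let pending = null;
  let scheduled = false;
  return function onMove(event) {
    pending = event;            // keep only the latest event
    if (!scheduled) {
      scheduled = true;
      schedule(() => {          // one flush per tick
        scheduled = false;
        const e = pending;
        pending = null;
        handler(e);
      });
    }
  };
}

// Demo with a manual "tick" queue standing in for the refresh driver:
const delivered = [];
const ticks = [];
const onMove = makeCoalescer(e => delivered.push(e), cb => ticks.push(cb));
onMove("move-1");
onMove("move-2");
onMove("move-3");
ticks.shift()();                 // run one tick
console.log(delivered);          // [ 'move-3' ]
```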



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Soliciting advice on #650960 (replacement for print progress bars)

2013-02-25 Thread smaug

On 02/26/2013 01:18 AM, Daniel Holbert wrote:

On 02/25/2013 01:57 PM, Bobby Holley wrote:

We clone static copies of documents for print preview. We could

potentially

do the same for normal printing, I'd think.


I'm almost certain that we already do. (smaug would know for sure)




We clone documents for printing and print preview.


-Olli
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Soliciting advice on #650960 (replacement for print progress bars)

2013-02-25 Thread smaug

On 02/25/2013 11:28 PM, Benjamin Smedberg wrote:

On 2/25/2013 4:14 PM, Zack Weinberg wrote:

 The current thinking is that we need *some* indication that a print job is in 
progress, because we need to prevent the user from closing the tab or
window until the print job has been completely handed off to the OS.

Why?


IIRC we still use plugins from the original page, in case we're printing pages 
with plugins.




Is the user allowed to interact with the tab contents (potentially modifying 
the DOM)?

--BDS



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Firing events at the window vs. firing them at the chrome event handler

2013-03-05 Thread smaug

On 03/04/2013 08:20 PM, Boris Zbarsky wrote:

On 3/4/13 1:08 PM, Zack Weinberg wrote:

It only needs to be certain of seeing the event despite anything content
can do,


In that case, a capturing handler on the chrome event listener will work fine.

-Boris



or capturing or bubbling event listener in system event group.
http://mxr.mozilla.org/mozilla-central/source/content/events/public/nsIEventListenerService.idl#72
Listeners in system event group are called after the default group, but 
stop*Propagation
is per group. Listeners added by content js are only in the default group.


-Olli
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Revamping touch input on Windows

2013-04-19 Thread smaug

On 04/18/2013 03:50 PM, Jim Mathies wrote:

We have quite a few issues with touch enabled sites on Windows. [1] Our support 
for touch stretches back to when we first implemented MozTouch events
which over time has morphed into a weird combination of W3C touch / simple 
gestures support. It is rather messy to fix, but I'd like to get this
cleaned up now such that we are producing a reliable stream of events on all 
Windows platforms we support touch on. (This includes Win7 and Win8, and
Metro.)

We are constrained by limitations in the way Windows handles touch input on 
desktop and our own implementation. For the desktop browser, there are two
different Windows event sets we can work with which are implemented in Windows 
such that they are mutually exclusive - we can receive one type or the
other, but not both. The two event sets are Gesture and Touch. The switch we 
use to decide which to process is based on a call to nsIWidget's
RegisterTouchWindow.

If RegisterTouchWindow has not been called we consume only Windows Gesture 
events which generate nsIDOMSimpleGestureEvents (rotate, magnify, swipe)
and pixel scroll events.

s/pixel scroll/wheel/ these days, right?



For the specific case of panning content widget queries event state manager's 
DecideGestureEvent to see if the underlying
element wants pixel scroll / pan feedback. [2] Based on the returned 
panDirection we request certain Gesture events from Windows and send pixel 
scroll
accordingly. If the underlying element can't be panned in the direction the 
input is in, we opt out of receiving Gesture events and fall back on
sending simple mouse input. (This is why you'll commonly get selection when 
dragging your finger horizontally across a page.)

On the flip side, if the DOM communicates the window supports touch input 
through RegisterTouchWindow, we bypass all Gesture events and instead
request Touch events from Windows. In this case we do not fire 
nsIDOMSimpleGestureEvents, mouse, or pixel scroll events and instead fire W3C 
compliant
touch input. We do not call DecideGestureEvent, and we do not generate pan 
feedback on the window. You can see this behavior using a good W3C touch
demo. [3]

One of the concerns here is that since we do not differentiate the metro and 
desktop browsers via UA, the two should emulate each other closely. The
browser would appear completely broken to content if the same UA sent two 
different event streams. So we need to take into account how metrofx works
as well.

With metrofx we can differentiate between mouse and touch input when we receive 
input, so we split the two up and fire appropriate events for each.
When receiving mouse input, we fire standard mouse events. When receiving touch 
input, we fire W3C touch input and nsIDOMSimpleGestureEvents. We also
fire mouse down/mouse up (click) events from touch so taps on the screen 
emulate clicking the mouse. Metrofx ignores RegisterTouchWindow, never
queries DecideGestureEvent, and does not fire pixel scroll events. Panning of 
web pages is currently handled in the front end via js in response to
W3C touch events, which I might note is not as performant as desktop's pixel 
scroll handling. In time this front end handling will hopefully be
replaced by async pan zoom which lives down in the layers backend.

Note that the metrofx front end makes very little use of 
nsIDOMSimpleGestureEvents, the only events we use are left/right swipe events 
for navigation.
If we chose we could ignore these and not generate nsIDOMSimpleGestureEvents at 
all. [4]

To clean this up, I'd like to propose the following:

1) abandon generating nsIDOMSimpleGestureEvents on Windows for both backends 
when processing touch input from touch input displays.*

This would mean that if the desktop front end wants to do something with pinch 
or zoom, it would have to process W3C touch events instead. Note that
we could still fire simple gestures from devices like track pads. But for touch 
input displays, we would not support these events.

Sounds ok to me. SimpleGestureEvents were originally for the (OSX) touchpad case
only anyway.




* There's one exception to this in metro, we would continue to fire 
MozEdgeUIGesture. [5]

We should perhaps then call it something other than SimpleGestureEvent.




2) Rework how we process touch events in Windows widget such that:

* Both backends respect RegisterTouchWindow and only fire W3C events when it is 
set.
* If RegisterTouchWindow has been called:
** Send touchstart and the first touchmove and look at the return results.
** If either of these returns eConsumeNoDefault, continue sending W3C events 
only. No mouse or pixel scroll events would be sent.
** If both of these events do not return eConsumeNoDefault:
*** Abandon sending W3C touch events.
*** Generate pixel scroll events in the appropriate direction based on 
DecideGestureEvent, or simple mouse events if DecideGestureEvent indicates
scrolling isn't possible.
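Restated as a small decision function (plain JS; eConsumeNoDefault and DecideGestureEvent are names from the proposal, reduced here to booleans purely for illustration):

```javascript
// Which event stream would the proposed Windows widget logic produce?
function decideEventStream({ registeredTouchWindow, touchstartConsumed,
                             firstTouchmoveConsumed, canScrollInPanDirection }) {
  if (!registeredTouchWindow) {
    // No RegisterTouchWindow call: Gesture events / wheel-scroll path.
    return "gesture/scroll";
  }
  if (touchstartConsumed || firstTouchmoveConsumed) {
    // Content consumed touchstart or the first touchmove: W3C touch only.
    return "w3c-touch";
  }
  // Neither was consumed: fall back per DecideGestureEvent.
  return canScrollInPanDirection ? "scroll" : "mouse";
}

console.log(decideEventStream({ registeredTouchWindow: false }));
// gesture/scroll
console.log(decideEventStream({ registeredTouchWindow: true,
                                touchstartConsumed: true }));
// w3c-touch
```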

Feedback welcome on this 

Re: Accelerating exact rooting work

2013-04-23 Thread smaug

On 04/23/2013 04:07 PM, Tom Schuster wrote:

At the moment it's really just Jono working full time on this, and
terrence and other people reviewing. This stuff is actually quite easy
and you can expect really fast review times from our side.

In some parts of the code rooting could literally just mean to replace
JS::Value to JS::RootedValue and fixing the references to the
variable. It's really easy once you did it a few times.

Here is a list of all files that still have rooting problems:
http://pastebin.mozilla.org/2340241
And the details for each and every problem:
https://people.mozilla.com/~sfink/analysis/browser/rootingHazards.txt

We are using https://bugzilla.mozilla.org/show_bug.cgi?id=831379 to
track the rooting progress, make sure to file every bug as blocking
this one. I would appreciate if every module peer or owner would just
take a look at his/her module and tried to fix some of the issue. If
you are unsure or need help, ask us on #jsapi.

Thanks,
Tom



I found http://mxr.mozilla.org/mozilla-central/source/js/public/RootingAPI.h
quite useful, but there are a few things to
clarify. For example, some code uses HandleObject and some code Handle<JSObject*>,
and having two ways to do the same thing
just makes the code harder to read.



On Tue, Apr 23, 2013 at 3:03 AM, Robert O'Callahan rob...@ocallahan.org wrote:

On Tue, Apr 23, 2013 at 5:36 AM, Terrence Cole tc...@mozilla.com wrote:


Our exact rooting work is at a spot right now where we could easily use
more hands to accelerate the process. The main problem is that the work
is easy and tedious: a hard sell for pretty much any hacker at mozilla.



It sounds worthwhile to encourage developers who aren't currently working
on critical-path projects to pile onto the exact rooting project. Getting
GGC over the line reaps some pretty large benefits and it's an
all-or-nothing project, unlike say pursuing the long tail of WebIDL
conversions.

If that sounds right, put out a call for volunteers (by which I include
paid staff) to help push on exact rooting, with detailed instructions. I
know some people who could probably help.

Rob
--
“If you love those who love you, what credit is that to you? Even sinners
love those who love them. And if you do good to those who are good to you,
what credit is that to you? Even sinners do that.”
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: We should drop MathML

2013-05-06 Thread smaug

On 05/06/2013 05:46 AM, Benoit Jacob wrote:

Let me just reply to a few points to keep this conversation manageable:

2013/5/5 p.krautzber...@gmail.com


Here are a couple of reasons why dropping MathML would be a bad idea.
(While I wrote this others made some of the points as well.)

* MathML is part of HTML5 and epub3.



That MathML is part of epub3 is useful information. It doesn't mean that
MathML is good, but it means that it's more entrenched than I knew.

We don't care about "it is part of HTML5" arguments (or else we would
support all the crazy stuff that flies on public-fx@w3...)


We do care about the stuff that is in the HTML spec.
http://www.whatwg.org/specs/web-apps/current-work/#mathml
(and if there is something we don't care about, it should be removed from the 
spec)





* Gecko has the very best native implementation out there, only a few
constructs short of complete.
* Killing it off means Mozilla gives up a competitive edge against all
other browser engines.
* MathML is widely used. Almost all publishers use XML workflows and in
those MathML for math. Similarly, XML+MathML dominates technical writing.
* In particular, the entire digital textbook market and thus the entire
educational sector comes out of XML/MathML workflows right now.
* MathML is the only format supported by math-capable accessibility tools
right now.
* MathML is just as powerful for typesetting math as TeX is. Publishers
have been converting TeX to XML for over a decade (e.g., Wiley, Springer,
Elsevier). Fun fact: the Math WG and the LaTeX3 group overlap.
* Limitations of browser support do not mean that the standard is
limited.




 From a MathJax point of view

* MathJax uses MathML as its internal format.
* MathJax output is ~5 times slower than native support. This is after 9
years of development of jsmath and MathJax (and javascript engines).



JavaScript performance hasn't stopped improving and is already far better
than 5x slower than native on use cases (like the Unreal Engine 3 demo)
that were a priori much harder for JavaScript.




* The performance issues lie solely with rendering MathML using HTML
constructs.
* Performance is the only reason why Wikipedia continues to uses images.



Then fix performance? With recent JavaScript improvements, if you really
can't get faster than within 5x of native, then you must be running into a
browser bug. The good thing with rendering with general HTML constructs is
that improving performance for such use cases benefits the entire browser.
If you pit browsers against each other on such a benchmark, you should be
able to generate enough competitive pressure between browser vendors to
force them to pay attention.



* JavaScript cannot access font metrics, so MathJax can only use fonts
we're able to teach it to use.



Has that issue been brought up in the right places before (like, on this
very mailing list?) Accessing font metrics sounds like something reasonable
that would benefit multiple applications (like PDF.js).



* While TeX and the basic LaTeX packages are stable, most macro packages
are unreliable. Speaking as a mathematician, it's often hard to compile my
own TeX documents from a few years ago. You can also ask the arXiv folks
how painful it is to do what they do.



I'm also speaking as a (former) mathematician, and I've never had to rely
on TeX packages that aren't found in every sane TeX distribution (when I
stopped using TeX on a daily basis, TexLive was what everybody seemed to be
using).

But that's not relevant to my proposal (or considering a suitable subset of
TeX-plus-some-packages) because we could write this specification in a way
that mandates support for a fixed set of functionality, much like other Web
specifications do.





Personal remarks

MathML still feels a lot like HTML 1 to me. It's only entered the web
natively in 2012. We're lacking a lot of tools, in particular open source
tools (authoring environments, cross-conversion, a11y tools etc).



I'm concerned every time I hear "native" presented as an inherent quality. As I tried
to explain above, if something can be done in browsers without native
support, that's much better. The job of browser vendors is to be picky
gatekeepers to limit the number of different specialized things that
require native support. Whence my specific interest in MathJax here.




But that's a bit like complaining in 1994 that HTML sucks and that there's
TeX which is so much more natural with \chapter and \section and has higher
typesetting quality anyway.



It would have been extremely easy to rebut such arguments as irrelevant and
counter them by much stronger arguments why TeX couldn't do the job that
HTML does.

I am still waiting for the rebuttal of my arguments, in the original email
in this thread, about how TeX is strictly better than MathML for the
particular task of representing equations. As far as I can see, MathML's
only inherent claim to existence is that it's XML, and being XML stopped being
a relevant 

Re: review stop-energy (was 24hour review)

2013-07-10 Thread smaug

On 07/09/2013 03:14 PM, Taras Glek wrote:

Hi,
Browsers are a competitive field. We need to move faster. Eliminating review 
lag is an obvious step in the right direction.

I believe good code review is essential for shipping a good browser.

Conversely, poor code review practices hold us back. I am really frustrated 
with how many excellent developers are held back by poor review practices.
IMHO the single worst practice is not communicating with patch author as to 
when the patch will get reviewed.

Anecdotal evidence suggests that we do best at reviews where the patch in
question lines up with the reviewer's current project. The worst thing that
happens there is rubber-stamping (e.g. reviewing non-trivial 60KB+ patches in
30 min).

Anecdotally, latency correlates inversely with how close the reviewer is to the
patch author, e.g.:

same project < same team < same part of organization < org-wide < random
community member

I think we need to change a couple of things*:

a) Realize that reviewing code is more valuable than writing code as it results 
in higher overall project activity. If you find you can't write code
anymore due to prioritizing reviews over coding, grow more reviewers.

b) Communicate better. If you are an active contributor, you should not leave r?
patches sitting in your queue without feedback: "I will review this
next week because I'm (busy reviewing ___ this week | away at a conference)." I
think Bugzilla could use some improvements there. If you think a patch is
lower priority than your other work, communicate that.

c) If you think saying nothing is better than admitting that you won't get to
the patch for a while**, that's passive aggressiveness
(https://en.wikipedia.org/wiki/Passive-aggressive_behavior). This is not a good 
way to build a happy coding community. Managers, look for instances of
this on your team.

In my experience the main cause of review stop-energy is lack of will to
inconvenience one's own projects by switching gears to go through another person's
work.

I've seen too many amazing, productive people get discouraged by poor review 
throughput. Most of these people would rather not create even more
tension by complaining about this...that's what managers are for :)

Does anyone disagree with my 3 points above? Can we make some derivative of 
these rules into a formal policy(some sort of code of developer conduct)?

Taras

* There are obvious exceptions to the above guidelines (eg deadlines).
** Holding back bad code is a feature, not a bug, do it politely.



In general, +1 to all 3 points. For b) it would be nice if Bugzilla also let the patch author indicate that a patch isn't
urgent. (Or perhaps the last sentence of b) means that. Not sure whether 'you' refers to the reviewer or the patch author :) )



One thing, which has often been brought up, would be to have some automatic coding
style checker other than just Ms2ger.
At least in DOM land we try to follow the coding style rules rather
strictly, and it would ease reviewers' work if
there was some good tool which did the coding style check automatically.



Curious, do we have some recent statistics on how long it takes to get a review?
Hopefully per module.



On 07/09/2013 03:46 PM, Boris Zbarsky wrote:

 * Split mass-changes or mechanical changes into a separate patch from the 
substantive changes.

 * If possible, separate patches into conceptually-separate pieces for review 
purposes (even if you then later collapse them into a single changeset to
 push).  Any time you're requesting review from multiple people on a single 
huge diff, chance are splitting it might have been a good idea.
...

Splitting patches is usually useful, but having a patch containing all the
changes can also be good.
If you have a set of 20-30 patches, but not a patch which contains all the 
changes, it is often hard to see the big picture.
Again, perhaps some tool could help here. Something which can generate the full 
patch from the smaller ones.




-Olli
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Is it necessary to remove message listeners?

2013-07-27 Thread smaug

On 07/27/2013 06:06 AM, Mark Hammond wrote:

On 27/07/2013 2:53 AM, Justin Lebar wrote:
...
 

Whether or not we totally succeed in this endeavor is another question
entirely.  You could instrument your build to count the number of live
nsFrameMessageManager objects and report the number of message
listeners in each -- one way to do this would be with a patch very
similar to this one [1].


Thanks for following up.  I'll add some hacks to count the message listeners as 
you suggest and followup here with what I find.

Just to be clear though, if I find they are *not* all being removed, I should 
open a bug on that rather than just removing the listeners myself and
calling it done?  ie, is it accurate to say that it *should* not be necessary 
to remove these handlers (and, if I verify that is true, that I could
explicitly add a note to this effect on the relevant MDN pages?)


Yes, one shouldn't have to remove message listeners. Message listeners should 
in most cases work like event listeners.
But, as with event listeners, remember that the callback may keep stuff alive. 
So after you have handled the message/event
you're expecting, and don't need to handle more such messages/events, you 
perhaps want to remove the listener manually.





Thanks,

Mark



[1] https://bug893242.bugzilla.mozilla.org/attachment.cgi?id=774978

On Thu, Jul 25, 2013 at 6:51 PM, Mark Hammond mhamm...@skippinet.com.au wrote:

Felipe and I were having a discussion around a patch that uses
nsIMessageManager.  Specifically, we create a browser element, then call
browser.messageManager.addMessageListener() with the requirement that the
listener live for as long as the browser element itself.

The question we had was whether it was necessary to explicitly call
removeMessageListener, or whether we can rely on automatic cleanup when the
browser element dies?  It seems obvious to us that it *should* be safe to
rely on automatic cleanup, but searching both docs and mxr didn't make it
clear, so I figured it was better to ask rather than to cargo-cult the
addition of explicit cleanup code that wasn't necessary.

Thanks,

Mark
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform




___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Refcounting threadsafety assertions are now fatal in opt builds.

2013-09-03 Thread smaug

On 08/30/2013 10:48 PM, Kyle Huey wrote:

The assertions that we have to catch refcounting objects on the wrong
thread are now fatal in opt builds.  This change is scoped to the nightly
channel to avoid performance penalties on builds that are widely used, and
will not propagate to aurora.  See bug 907914 for more details.

The motivation for this change is to catch threadsafety problems in
products where debug builds are not routinely tested (such as B2G).

Please followup to dev-platform if you have questions.

- Kyle



And in order to fix profiling on Nightlies, the plan is to enable assertions
only in non- --enable-profiling builds
(b2g doesn't use --enable-profiling by default)

-Olli
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: DevTools: how to get list of mutation observers for an element

2013-09-04 Thread smaug

On 09/04/2013 09:43 AM, Jan Odvarko wrote:

It's currently possible to get registered event listeners for

specific target (element, window, xhr, etc.)

using nsIEventListenerService.getListenerInfoFor



Is there any API that would allow to get also mutation observers?

no



Should I file a bug for this?

Yes, please. CC me


-Olli :smaug






Honza



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: proposal to use JS GC Handle/Rooted typedefs everywhere

2013-09-18 Thread smaug

On 09/18/2013 10:55 PM, Luke Wagner wrote:

To save typing, the JS engine has typedefs like typedef Handle<JSObject*> 
HandleObject; typedef Rooted<JS::Value> RootedValue; and the official
style is to prefer the HandleX/RootedX typedefs when there is no need to use the 
Handle<X>/Rooted<X> template-ids directly.

This issue was discussed off and on in the JS engine for months, leading to a 
m.d.t.js-engine.internals newsgroup thread
(https://groups.google.com/forum/#!topic/mozilla.dev.tech.js-engine.internals/meWx5yxofYw)
 where it was discussed more (the high occurrence of
Handle/Rooted in the JS engine combined with the relatively insignificant 
difference between the two syntactic forms making a perfect bike shed
storm).

Given that the JS engine has the official style of use the typedefs, it seems 
like a shame for Gecko to use a different style; while the
difference may be insignificant, we do strive for consistency.  So, can we 
agree to use the typedefs all over Gecko?  From the
m.d.t.js-engine.internals thread I think bholley of the kingdom of XPConnect is 
strongly in favor.


I don't care too much whether we use Handle<Foo>/Rooted<Foo> or the typedefs, but 
consistency is always good.
(And except in WebIDL bindings and in XPConnect we should try to use JSAPI 
manually as little as possible.
Bindings should hide JSAPI usage.)


-Olli



(Again, this doesn't have to be an absolute rule, the needs of meta-programming 
and code-generators can override.)

Cheers, Luke



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Poll: What do you need in MXR/DXR?

2013-10-05 Thread smaug

- Clicking on macros seems to lead to some results, but definitely not the one 
I'd expect -
  the definition of the macro.

- Trying to find files is hard. (Still haven't figured out how to get easily 
from the main page to Navigator.cpp on dom/base)

- cycleCollection on the right side may or may not do something useful.
  In most cases it just ignores all the stuff, so it might be better to not 
have it at all.

- How to mark certain range of code on particular revision?



On 10/02/2013 09:33 PM, Erik Rose wrote:

What features do you most use in MXR and DXR?

Over in the recently renamed Web Engineering group, we're working hard to 
retire MXR. It hasn't been maintained for a long time, and there's a lot of 
duplication between it and DXR, which rests upon a more modern foundation and 
has been developing like crazy. However, there are some holes we need to fill 
before we can expect you to make a Big Switch. An obvious one is indexing more 
trees: comm-central, aurora, etc. And we certainly have some bothersome UI bugs 
to squash. But I'd like to hear from you, the actual users, so it's not just me 
and Taras guessing at priorities.

What keeps you off DXR? (What are the MXR things you use constantly? Or the 
things which are seldom-used but vital?)

If you're already using DXR as part of your workflow, what could it do to make 
your work more fun?

Feel free to reply here, or attach a comment to this blog post, which talks 
about some of the things we've done recently and are considering for the future:

https://blog.mozilla.org/webdev/2013/09/30/dxr-gets-faster-hardware-vcs-integration-and-snazzier-indexing/

We'll use your input to build our priorities for Q4, so wish away!

Cheers,
Erik



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Killing the Moz Audio Data API

2013-10-17 Thread smaug

On 10/17/2013 12:09 AM, Ehsan Akhgari wrote:

I'd like to write a patch to kill Moz Audio Data in Firefox 28 in favor of
Web Audio.  We added a deprecation warning for this API in Firefox 23 (bug
855570).  I'm not sure what our usual process for this kind of thing is,
should we just take the patch, and evangelize on the broken websites enough
times so that we're able to remove the feature in a stable build?

Thanks!
--
Ehsan
http://ehsanakhgari.org/




I thought some games/emscripten still relied on the Moz Audio API.


-Olli
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Closure of trunk trees - owners for bugs needed

2013-11-03 Thread smaug

On 11/01/2013 07:55 AM, Nicholas Nethercote wrote:

I have (slightly optimistically) started writing a post-mortem of this
closure, analyzing what went wrong and why, and how we might avoid it
in the future:

   https://etherpad.mozilla.org/mEB0H50ZjX


FWIW, I added the following TL;DR to the document, which reflects my
understanding of the situation.


Win7 M2 and Mbc tests were OOMing frequently at shutdown because too many
DOM windows were open.  This was due to a combination of: (a) multiple badly
written tests, (b) multiple social API leaks, (c) multiple devtool leaks.  Bug 
932898
will improve our shutdown leak detection.  Bug 932900 will (if implemented) 
prevent
some of these leaks(?).


Is there anything else we can do to prevent this from happening again?

Nick




We should add some checks that hiddenWindow doesn't contain anything unexpected 
when closing FF.
The DOM tree and window scope should hopefully look the same as when starting 
FF.


-Olli
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


How to reduce the time m-i is closed?

2013-11-16 Thread smaug

Hi all,


the recent OOM cases have been really annoying. They have slowed down 
development, even for those who
haven't been dealing with the actual issue(s).

Could we handle these kinds of cases differently? Perhaps clone the bad state of 
m-i to
some other repository we're tracking using tbpl, backout stuff from m-i to the 
state where we can
run it, re-open it and do the fixes in the clone.
And then, say in a week, merge the clone back to m-i. If the state is still bad 
(no one has stepped up to fix the
issues), then keep m-i closed until the issues have been fixed.


thoughts?


-Olli
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Is there any reason not to shut down bonsai?

2013-11-21 Thread smaug

On 11/21/2013 09:43 PM, Laura Thomson wrote:

I'll keep it short and to the point. Are there any objections to shutting down 
http://bonsai.mozilla.org/cvsqueryform.cgi ?

If you don't know what that is--and few people do, which is even more reason to 
shut it off--it's a search engine for some of our CVS repositories, of which I 
think none are in active development.

Cheers

Laura Thomson
Mozilla Web Engineering




Don't even think about closing down bonsai. I could perhaps live without
http://bonsai.mozilla.org/cvsqueryform.cgi, but 
http://bonsai.mozilla.org/cvsblame.cgi and http://bonsai.mozilla.org/cvslog.cgi
are super useful.
(I use http://mxr.mozilla.org/seamonkey/ and http://mxr.mozilla.org/mozilla1.8/ 
all the time and they link to bonsai. )



-Olli

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Is there any reason not to shut down bonsai?

2013-11-21 Thread smaug

On 11/21/2013 10:15 PM, Gavin Sharp wrote:

It would be good to explore alternatives to Bonsai.
https://github.com/mozilla/mozilla-central is supposed to have full
CVS history, right?

Some concerns with that alternative:
- I think that repo misses some history from some branches of CVS
- I'm not confident that we've audited that whatever history is there
is complete/correct, and so losing easy access to the canonical source
could be problematic
- I don't think Github has a replacement for e.g.
http://bonsai.mozilla.org/cvsguess.cgi?file=textbox.xml (find a file
in CVS history including since-removed/forked files)
- there's a learning curve to using any new tools



Usability of https://github.com/mozilla/mozilla-central is too far behind 
bonsai.





Any other concerns with using Github as the alternative?

Are there other potential alternative solutions?

Gavin

On Thu, Nov 21, 2013 at 12:09 PM, Boris Zbarsky bzbar...@mit.edu wrote:

On 11/21/13 2:43 PM, Laura Thomson wrote:


it's a search engine for some of our CVS repositories



It's not just a search engine.  It's also the only way to get CVS blame
sanely without doing a local pull of the CVS repository or trying to make
git do something useful for you.  And a lot of our code dates back to that
CVS repository.

-Boris

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


unified build mode and memory consumption

2013-11-29 Thread smaug

Hi all and FYI


unified build mode has increased memory usage of building with gcc 
significantly.
On my laptop (8 gig mem) I started to see some swapping, and because of that
build times with unified mode weren't that much better than before.

But now, finally there is a use case for clang - it uses a lot less memory than 
gcc,
and unified+clang+gold debug clobber build is 16 mins compared to the old 
~30 mins!

(Before using unified mode I didn't have mem usage problems with gcc and there 
was never any significant
difference in build times compared to clang.)





-Olli
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Removing favor-perf-mode

2013-12-08 Thread smaug

Hi all,


I'm planning to land [1] early in the next cycle.
It removes the odd and old favor-perf-mode which is used during page loads.
Removing the mode and forcing timely screen refreshes even on non-omtc platforms
helps significantly in several cases, at least [2], [3], [4].


favor-perf-mode, which is normally active only during page load (unless the user is 
actively interacting with the browser)
and for 2 seconds after that (don't ask why 2s - this is ancient code), causes the 
browser to process main-thread Gecko events in
a tight loop and only occasionally process events from the OS (like user input or 
paint requests).


I do expect some tp regressions but significant tp_responsiveness improvements.


(I'll need to still fix few racy tests before landing)



-smaug


[1] https://bugzilla.mozilla.org/show_bug.cgi?id=930793
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=732621
[3] https://bugzilla.mozilla.org/show_bug.cgi?id=822096
[4] https://bugzilla.mozilla.org/show_bug.cgi?id=880036
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Can we start using C++ STL containers in Mozilla code?

2013-12-10 Thread smaug

On 12/10/2013 11:28 AM, Chris Pearce wrote:

Hi All,

Can we start using C++ STL containers like std::set, std::map, std::queue in 
Mozilla code please? Many of the STL containers are more convenient to
use than our equivalents, and more familiar to new contributors.

I understand that we used to have a policy of not using STL in mozilla code 
since some older compilers we wanted to support didn't have very good
support, but I'd assume that that argument no longer holds since we already build 
and ship a bunch of third party code that uses std containers (angle,
webrtc, chromium IPC, crashreporter), and the sky hasn't fallen.

I'm not proposing a mass rewrite converting nsTArray to std::vector, just that 
we allow STL in new code.

Are there valid reasons why should we not allow C++ STL containers in Mozilla 
code?

Cheers,
Chris P.



std::map/set may not have the performance characteristics people think. They 
are significantly slower than
xpcom hashtables (since they are usually trees), yet people tend to use them as 
HashTables or HashSets.

Someone should compare the performance of std::unordered_map to our hashtables.
(Problem is that the comparison needs to be done on all the platforms).



-Olli




___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: On the usefulness of style guides (Was: style guide proposal)

2013-12-19 Thread smaug

On 12/20/2013 12:11 AM, Ehsan Akhgari wrote:

On 12/19/2013, 12:57 PM, Till Schneidereit wrote:

I think we should do more than encourage: we should back out for all
style guide violations. Period. We could even enforce that during upload
to a review tool, perhaps.

However. This has to be done on a per-module basis (or even more
fine-grained: different parts of, e.g., SpiderMonkey have slightly
different styles). Different modules have vastly different styles,
ranging from where to put braces over how much to indent to how to name
fields/ vars/ arguments. I very, very much doubt we'll ever be able to
reconcile these differences. (Partly because some of the affected people
from different modules sit in the same offices, and would probably get
into fist fights.)


See, that right there is the root problem!  Programmers tend to care too much 
about their favorite styles.  I used to be like that but over the years
I've mostly stopped caring about which style is better, and what I want now is 
consistency,


Exactly. We need consistency since that leads to easier-to-read code. And that is why we have 
https://developer.mozilla.org/En/Mozilla_Coding_Style_Guide and that is what DOM (C++) is following in new code.

(Except that, for some strange reason, WebIDL bindings use an odd mix of JS and normal 
DOM style.)




even if the code looks ugly to *me*.  The projects which

enforce a unified style guideline have this huge benefit that their code looks 
clean and consistent, as if it was all written by the same person.  But
letting each module enforce its own rules leads to the kind of code base which 
we have now where you get a completely different style depending on
which directory and file you're looking at.  I think trying to enforce this 
kind of inconsistency with tools is strictly worse than what we have today.

If we stepped back for a second and agreed to put aside our personal 
preferences and value consistency more, we could use tools to enforce the style
guidelines and we'd end up where other code bases that have done this are right 
now.

That all being said, I'm sure there are people who don't value consistency as 
much as I do and are more interested to have the code they work on the
most look beautiful to them.  I think these two camps are at odds with each other and so 
far the latter camp has won this battle.  :/

Cheers,
Ehsan


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: A proposal to reduce the number of styles in Mozilla code

2014-01-06 Thread smaug

Sounds good, and I'd include also js/* so that we had consistent style 
everywhere.
It is rather painful to hack various non-js/* and js/* (xpconnect in my case)
in the same patch.
(I also happen to think that Mozilla coding style is inherently better than js 
style, since
it has clear rules for naming parameters and static, global and member 
variables in a way
which make them look different to local variables.)


3rd party code shouldn't be touched.


-Olli



On 01/06/2014 04:34 AM, Nicholas Nethercote wrote:

We've had some recent discussions about code style. I have a propasal

For the purpose of this proposal I will assume that there is consensus on the
following ideas.

- Having multiple code styles is bad.

- Therefore, reducing the number of code styles in our code is a win (though
   there are some caveats relating to how we get to that state, which I discuss
   below).

- The standard Mozilla style is good enough. (It's not perfect, and it should
   continue to evolve, but if you have any pet peeves please mention them in a
   different thread to this one.)

With these ideas in mind, a goal is clear: convert non-Mozilla-style code to
Mozilla-style code, within reason.

There are two notions that block this goal.

- Our rule of thumb is to follow existing style in a file. From the style
   guide:

   The following norms should be followed for new code, and for Tower of Babel
   code that needs cleanup. For existing code, use the prevailing style in a
   file or module, or ask the owner if you are on someone else's turf and it's
   not clear what style to use.

   This implies that large-scale changes to convert existing code to standard
   style are discouraged. (I'd be interested to hear if people think this
   implication is incorrect, though in my experience it is not.)

   I propose that we officially remove this implicit discouragement, and even
   encourage changes that convert non-Mozilla-style code to Mozilla-style (with
   some exceptions; see below). When modifying badly-styled code, following
   existing style is still probably best.

   However, large-scale style fixes have the following downsides.

   - They complicate |hg blame|, but plenty of existing refactorings (e.g.
 removing old types) have done likewise, and these are bearable if they
 aren't too common. Therefore, style conversions should do entire files in
 a single patch, where possible, and such patches should not make any
 non-style changes. (However, to ease reviewing, it might be worth
 putting fixes to separate style problems in separate patches. E.g. all
 indentation fixes could be in one patch, separate from other changes.
 These would be combined before landing. See bug 956199 for an example.)

   - They can bitrot patches. This is hard to avoid.

   However, I imagine changes would happen in a piecemeal fashion, e.g. one
   module or directory at a time, or even one file at a time. (Again, see bug
   956199 for an example.) A gigantic change-all-the-code patch seems
   unrealistic.

- There is a semi-official policy that the owner of a module can dictate its
   style. Examples: SpiderMonkey, Storage, MFBT.

   There appears to be no good reason for this and I propose we remove it.
   Possibly with the exception of SpiderMonkey (and XPConnect?), due to it being
   an old and large module with its own well-established style.

   Also, we probably shouldn't change the style of imported third-party code;
   even if we aren't tracking upstream, we might still want to trade patches.
   (Indeed, it might even be worth having some kind of marking at the top of
   files to indicate this, a bit like a modeline?)

Finally, this is a proposal only to reduce the number of styles in our
codebase. There are other ideas floating around, such as using automated tools
to enforce consistency, but I consider them orthogonal to or
follow-ups/refinements of this proposal -- nothing can happen unless we agree
on a direction (fewer styles!) and a way to move in that direction (non-trivial
style changes are ok!)

Nick



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: A proposal to reduce the number of styles in Mozilla code

2014-01-06 Thread smaug

On 01/06/2014 08:06 PM, Bobby Holley wrote:

I think trying to change SpiderMonkey style to Gecko isn't a great use of
effort:
* There are a lot of SpiderMonkey hackers who don't hack on anything else,
and don't consider themselves Gecko hackers. Many of them don't read
dev-platform, and would probably revolt if this were to occur.
* SpiderMonkey is kind of an external module, with an existing community of
embedders who use its style.
* SpiderMonkey has an incredibly detailed style guide - more detailed than
Gecko's: https://wiki.mozilla.org/JavaScript:SpiderMonkey:Coding_Style

I don't see that being any more detailed than Mozilla coding style.
Perhaps more verbal, but not more detailed.



* The stylistic consistency and attention to detail is probably the highest
out of any other similarly-sized module in the tree.

This leaves us with the question about what to do with XPConnect. It used
to have its own (awful) style, and I converted it to SpiderMonkey style 2
years ago (at the then-module-owner's request). It interacts heavily with
the JS engine, so this sorta makes sense, but there's also an argument for
converting it to Gecko style. I could perhaps be persuaded at some point if
someone wants to do the leg work.

bholley


On Mon, Jan 6, 2014 at 6:07 AM, smaug sm...@welho.com wrote:


Sounds good, and I'd include also js/* so that we had consistent style
everywhere.
It is rather painful to hack various non-js/* and js/* (xpconnect in my
case)
in the same patch.
(I also happen to think that Mozilla coding style is inherently better
than js style, since
it has clear rules for naming parameters and static, global and member
variables in a way
which make them look different to local variables.)


3rd party code shouldn't be touched.


-Olli




On 01/06/2014 04:34 AM, Nicholas Nethercote wrote:


We've had some recent discussions about code style. I have a proposal.

For the purpose of this proposal I will assume that there is consensus on
the
following ideas.

- Having multiple code styles is bad.

- Therefore, reducing the number of code styles in our code is a win
(though
there are some caveats relating to how we get to that state, which I
discuss
below).

- The standard Mozilla style is good enough. (It's not perfect, and it
should
continue to evolve, but if you have any pet peeves please mention them
in a
different thread to this one.)

With these ideas in mind, a goal is clear: convert non-Mozilla-style code
to
Mozilla-style code, within reason.

There are two notions that block this goal.

- Our rule of thumb is to follow existing style in a file. From the style
guide:

The following norms should be followed for new code, and for Tower of
Babel
code that needs cleanup. For existing code, use the prevailing style
in a
file or module, or ask the owner if you are on someone else's turf and
it's
not clear what style to use.

This implies that large-scale changes to convert existing code to
standard
style are discouraged. (I'd be interested to hear if people think this
implication is incorrect, though in my experience it is not.)

I propose that we officially remove this implicit discouragement, and
even
encourage changes that convert non-Mozilla-style code to Mozilla-style
(with
some exceptions; see below). When modifying badly-styled code,
following
existing style is still probably best.

However, large-scale style fixes have the following downsides.

- They complicate |hg blame|, but plenty of existing refactorings (e.g.
  removing old types) have done likewise, and these are bearable if
they
  aren't too common. Therefore, style conversions should do entire
files in
  a single patch, where possible, and such patches should not make any
  non-style changes. (However, to ease reviewing, it might be worth
  putting fixes to separate style problems in separate patches. E.g.
all
  indentation fixes could be in one patch, separate from other changes.
  These would be combined before landing. See bug 956199 for an
example.)

- They can bitrot patches. This is hard to avoid.

However, I imagine changes would happen in a piecemeal fashion, e.g.
one
module or directory at a time, or even one file at a time. (Again, see
bug
956199 for an example.) A gigantic change-all-the-code patch seems
unrealistic.

- There is a semi-official policy that the owner of a module can dictate
its
style. Examples: SpiderMonkey, Storage, MFBT.

There appears to be no good reason for this and I propose we remove it.
Possibly with the exception of SpiderMonkey (and XPConnect?), due to
it being
an old and large module with its own well-established style.

Also, we probably shouldn't change the style of imported third-party
code;
even if we aren't tracking upstream, we might still want to trade
patches.
(Indeed, it might even be worth having some kind of marking at the top
of
files

Re: List of deprecated constructs [was Re: A proposal to reduce the number of styles in Mozilla code]

2014-01-06 Thread smaug

On 01/07/2014 01:38 AM, Joshua Cranmer  wrote:

On 1/6/2014 4:27 PM, Robert O'Callahan wrote:

That's just not true, sorry. If some module owner decides to keep using NULL or 
PRUnichar, or invent their own string class, they will be corrected.


Maybe. But we also have a very large number of deprecated or 
effectively-deprecated constructs whose deprecation module owners may not be 
aware of
because their use is somewhat prevalent in code. For example, the NS_ENSURE_* 
macros are apparently now considered officially deprecated.

Since when? NS_ENSURE_ macros are very useful for debugging. (When something is 
going wrong, the warnings in the terminal tend to give strong
hints what/where that something is. Reduces debugging time significantly.)



Our track
record of removing these quickly is poor (nsISupportsArray and nsVoidArray, 
anyone?), and many of the deprecated constructs are macros or things
defined in external project headers (like, say, prtypes.h), which makes using 
__declspec(deprecated) or __attribute__((deprecated)) unfeasible.

Is there any support for setting up a wiki page that lists these deprecated, 
obsolete constructs and provides tracking bugs for actually eliminating
them?




___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Please use NS_WARN_IF instead of NS_ENSURE_SUCCESS

2014-01-06 Thread smaug

On 11/22/2013 10:18 PM, Benjamin Smedberg wrote:

With the landing of bug 672843, the NS_ENSURE_* macros are now considered 
deprecated. If you are writing code that wants to issue warnings when
methods fail, please either use NS_WARNING directly or use the new NS_WARN_IF 
macro.

if (NS_WARN_IF(somethingthatshouldbetrue))
   return NS_ERROR_INVALID_ARG;

if (NS_WARN_IF(NS_FAILED(rv)))
   return rv;

I am working on a script which can be used to automatically convert most of the 
existing NS_ENSURE_* macros, and I will also be updating the coding
style guide to point to the recommended form.

--BDS




Why this deprecation?
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Please use NS_WARN_IF instead of NS_ENSURE_SUCCESS

2014-01-06 Thread smaug

On 01/07/2014 02:58 AM, Karl Tomlinson wrote:

smaug sm...@welho.com writes:


Why this deprecation?


NS_ENSURE_ macros hid return paths.
Also many people didn't understand that they issued warnings, and
so used the macros for expected return paths.

Was there some useful functionality that is not provided by the
replacements?




no, since it is always possible to expand those macros.
However
if (NS_WARN_IF(NS_FAILED(rv))) {
  return rv;
}
is super ugly.

Hopefully something like NS_WARN_IF_FAILED(rv) could be added.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Mozilla style guide issues, from a JS point of view

2014-01-06 Thread smaug

On 01/07/2014 02:46 AM, Jeff Walden wrote:

I'm writing this list, so obviously I'm choosing what I think is on it.  But I 
think there's rough consensus on most of these among JS hackers.

JS widely uses 99ch line lengths (allows a line-wrap character in 100ch 
terminals).  Given C++ symbol names, especially with templates, get pretty
long, it's a huge loss to revert to 80ch because of how much has to wrap.  Is 
there a reason Mozilla couldn't increase to 99 or 100?  Viewability
on-screen seems pretty weak in this era of generally large screens.  
Printability's a better argument, but it's unclear to me files are printed
often enough for this to matter.  I do it one or two times a year, myself, 
these days.


99 or 100 for line lengths sounds good to me. Use of templates has increased 
quite significantly and 80ch isn't enough anymore.



I don't think most JS hackers care for abuse of Hungarian notation for 
scope-based (or const) naming.  Every member/argument having a capital
letter in it surely makes typing slower.  And extra noise in every name but 
locals seems worse for new-contributor readability.

It is rather common to have prefixed variable names (outside Gecko), and it 
increases readability of the code in many cases.
For example with out params it helps significantly when you know which thing is 
actually the return value.
Also with long functions prefixes help to locate the variable definition.


 Personally this
doesn't bother me much (although aCx will always be painful compared to cx 
as two no-cap letters, I'm sure), but others are much more
bothered.





JS people have long worked without bracing single-liners.  With any style 
guide's indentation requirements, they're a visually redundant waste of
space.  Any style checker that checks both indentation and bracing (of course 
we'll have one, right?), will warn twice for the error single-line
bracing prevents.  I think most of us would discount the value of being able to 
add more to a single-line block without changing the condition
line.  So I'm pretty sure we're all dim on this one.

I'd say consistency is good in this case. always {}. No special cases. And it 
improves readability, since {} forces the almost-empty line
after the single-liner.



Skimming the rest of the current list, I don't see anything that would 
obviously, definitely, be on the short list of complaints for SpiderMonkey
hackers.  Other SpiderMonkey hackers should feel free to point out anything 
else they see, that I might have missed.

Jeff



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Please use NS_WARN_IF instead of NS_ENSURE_SUCCESS

2014-01-07 Thread smaug

On 01/07/2014 05:14 PM, smaug wrote:

On 01/07/2014 08:46 AM, Bobby Holley wrote:

On Mon, Jan 6, 2014 at 5:04 PM, smaug sm...@welho.com wrote:


no, since it is always possible to expand those macros.
However
if (NS_WARN_IF(NS_FAILED(rv))) {
   return rv;
}
is super ugly.



Note that there is an explicit stylistic exception that NS_WARN_IF
statements do not require braces. So it's:

No exceptions. always {} with if.





if (NS_WARN_IF(NS_FAILED(rv)))
   return rv;


Also, I agree that we should get NS_WARN_IF_FAILED. Then it becomes:

if (NS_WARN_IF_FAILED(rv))
   return rv;

which is almost as palatable as NS_ENSURE_SUCCESS(rv, rv);

bholley






And it looks like whoever updated the coding style made the examples inconsistent,
since the rule "Always brace controlled statements" has pretty much always been there
(I think since the time the coding style lived somewhere under www.mozilla.org/*),
and it still is. No exceptions. Consistency is more important than style.
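For readers outside Gecko, the pattern under discussion can be sketched with toy stand-ins for the macro and types (the real NS_WARN_IF and nsresult definitions in Gecko differ; this is illustration only):

```cpp
#include <cassert>
#include <cstdint>
#include <cstdio>

// Simplified stand-ins for XPCOM types; not the real Gecko definitions.
using nsresult = uint32_t;
constexpr nsresult NS_OK = 0;
constexpr nsresult NS_ERROR_FAILURE = 0x80004005;
constexpr bool NS_FAILED(nsresult aRv) { return aRv & 0x80000000; }

// Sketch of NS_WARN_IF: logs when the condition holds and evaluates
// to the condition itself, so the caller decides what happens next.
#define NS_WARN_IF(condition) \
  ((condition) ? (std::fprintf(stderr, "WARNING: %s\n", #condition), true) \
               : false)

nsresult DoWork(bool aFail) {
  nsresult rv = aFail ? NS_ERROR_FAILURE : NS_OK;
  // The always-braced form the coding style calls for:
  if (NS_WARN_IF(NS_FAILED(rv))) {
    return rv;
  }
  return NS_OK;
}
```

Unlike NS_ENSURE_SUCCESS, the early return stays visible in the caller rather than being hidden inside the macro.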






Re: Please use NS_WARN_IF instead of NS_ENSURE_SUCCESS

2014-01-07 Thread smaug

On 01/07/2014 08:46 AM, Bobby Holley wrote:

On Mon, Jan 6, 2014 at 5:04 PM, smaug sm...@welho.com wrote:


no, since it is always possible to expand those macros.
However
if (NS_WARN_IF(NS_FAILED(rv))) {
   return rv;
}
is super ugly.



Note that there is an explicit stylistic exception: NS_WARN_IF
statements do not require braces. So it's:

No exceptions. Always {} with if.





if (NS_WARN_IF(NS_FAILED(rv)))
   return rv;


Also, I agree that we should get NS_WARN_IF_FAILED. Then it becomes:

if (NS_WARN_IF_FAILED(rv))
   return rv;

which is almost as palatable as NS_ENSURE_SUCCESS(rv, rv);

bholley






Re: Tracking Docshells

2014-02-13 Thread smaug

On 02/13/2014 12:53 PM, Girish Sharma wrote:

Thank you everyone for your inputs. Since there is no current method of
precisely tracking window creation and removal, how should I proceed to
add such functionality?

What I basically want is that, regardless of BFCache or anything else, I should be
able to track when a new docshell (or the corresponding window) is created
and removed.

In these events, I also want to include the following scenarios:

- A docshell/window loaded from the BFCache should emit a "created" event.
- When the docshell does not change but the window location does, e.g. while
navigating from about:home to any other site, it should emit a corresponding
"removed" and then a "created" event for the new window.
- A docshell/window which is cached into the BFCache should emit a "removed"
event.

Thanks.



A combination of the content-document-global-created, chrome-document-global-created
and dom-window-destroyed notifications, plus pageshow and pagehide listeners, should
work in this case.



-Olli





On Wed, Feb 12, 2014 at 9:28 PM, Boris Zbarsky bzbar...@mit.edu wrote:


On 2/12/14 10:23 AM, Girish Sharma wrote:


I am wondering, is this why google.co.in also does not emit an unload event
on the chrome event handler?



Most likely, yes.


-Boris










Re: Intent to ship: CSS Variables

2014-03-19 Thread smaug

On 03/18/2014 11:26 AM, Cameron McCormack wrote:

CSS Variables is a feature that allows authors to define custom properties that 
cascade and inherit in the same way that regular properties do, and to
reference the values of these custom properties in the values of regular 
properties (and other custom properties too).

   http://dev.w3.org/csswg/css-variables/

I blogged about the feature when it initially landed here:

   http://mcc.id.au/blog/2013/12/variables

It lives behind the layout.css.variables.enabled pref, which is currently 
enabled by default only on Aurora/Nightly.

One thing from the specification that we don't implement is the CSSVariableMap, 
but Tab tells me that this is going to be removed from the spec.

I intend to enable this feature by default this Friday (March 21).  That would 
target Firefox 31.

The spec has been stable for a while, and the CSS Working Group already has a 
resolution to publish the document as a Candidate Recommendation.

There has been a last-minute proposal for a change to the syntax of custom
property declarations, to align with other custom features in CSS that are
coming down the pipeline (such as custom media queries).  There are a couple of proposals
on the table: to replace the "var-" prefix of the name of
the custom property with a "_" prefix, or perhaps to allow it just anywhere within the name.  Similarly for
"--" rather than "_".

Tab wants to discuss this in this week's CSS Working Group telcon, so if the 
decision is made quickly enough we can update our implementation. The CSS
Working Group is aware of our plans to ship.

Bug to enable this feature: https://bugzilla.mozilla.org/show_bug.cgi?id=957833

Blink until recently had an implementation of CSS Variables, behind a flag, but 
not one that supported the fallback syntax (or CSSVariableMap).  They
have recently removed it, saying that they need to rewrite it as it had poor
performance, but they say they are still interested in the feature.

Curious, how much have we tested the performance of our implementation and are 
there some known perf issues?


-Olli



Re: Memory management in features implemented in JS

2014-03-19 Thread smaug

On 03/20/2014 01:39 AM, Kyle Huey wrote:

Followup to dev-platform please.

We are discovering a lot of leaks in JS implemented DOM objects.  The
general pattern seems to be that we have a DOM object that also needs
to listen to events from the message manager or notifications from the
observer service, which usually hold strong references to the
listener.  In C++ we would split the object into two separate
refcounted objects, one for the DOM side and another that interfaces
with these global singletons (which I'll call the proxy).  Between
this pair of objects at least one direction would be weak, allowing
the DOM object's lifetime to be managed appropriately by the garbage
and cycle collectors and in its destructor it could tell the proxy to
unregister itself and die.  But this isn't possible because of the
lack of weak references in JS.

Instead we end up running a bunch of manual cleanup code at
inner-window-destroyed.  This is already bad for many things because
it means that our object now lives for the lifetime of the window
(which might be forever, in the case of the system app window on B2G).
  Also in some cases we forget to remove our inner-window-destroyed
observer so we live forever.  For objects not intended to live for the
lifetime of the window we need to manually perform the same cleanup
when we figure out that we can go away (which can be quite difficult
since we can't usefully answer the question is something in the
content page holding me alive?).  All of this requires a lot of
careful manual memory management which is very easy to get wrong and
is foreign to many JS authors.

Short of not implementing things in JS, what ideas do people have for
fixing these issues?  We have some ideas of how to add helpers to
scope these things to the lifetime of the window (perhaps by adding an
API that returns a promise that is resolved at inner-window-destroyed
to provide a good cleanup hook that is not global) but that doesn't
help with objects intended to have shorter lifetimes.  Is it possible
for us to implement some sort of useful weak reference in JS?


I'm rather strongly against adding weak refs to the web platform.
They expose GC behavior, which leads to odd and hard-to-debug errors,
and implementations may have to use pretty much the same GC.

Internally we have weakref support in WrappedJS, and one can use
weak observers for the ObserverService and weak listeners for the MessageManager.
Aren't those enough for the cases where weakrefs could help anyway?
We could also use WrappedJS in some kind of chrome Promise - a WeakPromise -
which wouldn't keep the callback alive.


-Olli





Re: Memory management in features implemented in JS

2014-03-19 Thread smaug

On 03/20/2014 01:58 AM, smaug wrote:

On 03/20/2014 01:39 AM, Kyle Huey wrote:

Followup to dev-platform please.

We are discovering a lot of leaks in JS implemented DOM objects.  The
general pattern seems to be that we have a DOM object that also needs
to listen to events from the message manager or notifications from the
observer service, which usually hold strong references to the
listener.  In C++ we would split the object into two separate
refcounted objects, one for the DOM side and another that interfaces
with these global singletons (which I'll call the proxy).  Between
this pair of objects at least one direction would be weak, allowing
the DOM object's lifetime to be managed appropriately by the garbage
and cycle collectors and in its destructor it could tell the proxy to
unregister itself and die.  But this isn't possible because of the
lack of weak references in JS.

Instead we end up running a bunch of manual cleanup code at
inner-window-destroyed.  This is already bad for many things because
it means that our object now lives for the lifetime of the window
(which might be forever, in the case of the system app window on B2G).
  Also in some cases we forget to remove our inner-window-destroyed
observer so we live forever.  For objects not intended to live for the
lifetime of the window we need to manually perform the same cleanup
when we figure out that we can go away (which can be quite difficult
since we can't usefully answer the question is something in the
content page holding me alive?).  All of this requires a lot of
careful manual memory management which is very easy to get wrong and
is foreign to many JS authors.

Short of not implementing things in JS, what ideas do people have for
fixing these issues?  We have some ideas of how to add helpers to
scope these things to the lifetime of the window (perhaps by adding an
API that returns a promise that is resolved at inner-window-destroyed
to provide a good cleanup hook that is not global) but that doesn't
help with objects intended to have shorter lifetimes.  Is it possible
for us to implement some sort of useful weak reference in JS?


I'm rather strongly against adding weak refs to the web platform.
They expose GC behavior, which leads to odd and hard-to-debug errors,
and implementations may have to use pretty much the same GC.

Internally we have weakref support in WrappedJS, and one can use
weak observers for the ObserverService and weak listeners for the MessageManager.
Aren't those enough for the cases where weakrefs could help anyway?
We could also use WrappedJS in some kind of chrome Promise - a WeakPromise -
which wouldn't keep the callback alive.


-Olli






And we could add a flag to WrappedJS so that it would call some callback when it is about
to go away. That would let cleanup of a WeakPromise happen asap.
Basically, keep a hashtable wrappedjs -> objects_implementing_callback_interface_foo.
Then, when adding an object to the hashtable, the wrappedjs would get marked with a flag
saying it should look at the hashtable when it is going away, and if there is a value in
the hashtable, call the foo.

This cleanup mechanism could be used with weak observers and weak listeners too.
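A toy sketch of that idea (the names here are illustrative, not the actual nsXPCWrappedJS interface): the wrapper runs registered cleanup callbacks when it goes away, so a weak consumer can tear itself down immediately instead of waiting for inner-window-destroyed:

```cpp
#include <cassert>
#include <functional>
#include <utility>
#include <vector>

// Illustrative stand-in for nsXPCWrappedJS; not the real class.
class WrappedJS {
 public:
  // A weak consumer registers a callback to be told when the
  // wrapper is about to go away.
  void AddDeathCallback(std::function<void()> aCallback) {
    mDeathCallbacks.push_back(std::move(aCallback));
  }

  ~WrappedJS() {
    // "Look at the hashtable when it is going away": notify everyone
    // holding a weak reference so they can clean up asap.
    for (auto& callback : mDeathCallbacks) {
      callback();
    }
  }

 private:
  std::vector<std::function<void()>> mDeathCallbacks;
};
```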


-Olli


Re: Memory management in features implemented in JS

2014-03-19 Thread smaug

On 03/20/2014 02:25 AM, smaug wrote:

On 03/20/2014 01:58 AM, smaug wrote:

On 03/20/2014 01:39 AM, Kyle Huey wrote:

Followup to dev-platform please.

We are discovering a lot of leaks in JS implemented DOM objects.  The
general pattern seems to be that we have a DOM object that also needs
to listen to events from the message manager or notifications from the
observer service, which usually hold strong references to the
listener.  In C++ we would split the object into two separate
refcounted objects, one for the DOM side and another that interfaces
with these global singletons (which I'll call the proxy).  Between
this pair of objects at least one direction would be weak, allowing
the DOM object's lifetime to be managed appropriately by the garbage
and cycle collectors and in its destructor it could tell the proxy to
unregister itself and die.  But this isn't possible because of the
lack of weak references in JS.

Instead we end up running a bunch of manual cleanup code at
inner-window-destroyed.  This is already bad for many things because
it means that our object now lives for the lifetime of the window
(which might be forever, in the case of the system app window on B2G).
  Also in some cases we forget to remove our inner-window-destroyed
observer so we live forever.  For objects not intended to live for the
lifetime of the window we need to manually perform the same cleanup
when we figure out that we can go away (which can be quite difficult
since we can't usefully answer the question is something in the
content page holding me alive?).  All of this requires a lot of
careful manual memory management which is very easy to get wrong and
is foreign to many JS authors.

Short of not implementing things in JS, what ideas do people have for
fixing these issues?  We have some ideas of how to add helpers to
scope these things to the lifetime of the window (perhaps by adding an
API that returns a promise that is resolved at inner-window-destroyed
to provide a good cleanup hook that is not global) but that doesn't
help with objects intended to have shorter lifetimes.  Is it possible
for us to implement some sort of useful weak reference in JS?


I'm rather strongly against adding weak refs to the web platform.
They expose GC behavior, which leads to odd and hard-to-debug errors,
and implementations may have to use pretty much the same GC.

Internally we have weakref support in WrappedJS, and one can use
weak observers for the ObserverService and weak listeners for the MessageManager.
Aren't those enough for the cases where weakrefs could help anyway?
We could also use WrappedJS in some kind of chrome Promise - a WeakPromise -
which wouldn't keep the callback alive.


-Olli






And we could add a flag to WrappedJS so that it would call some callback when it is about
to go away. That would let cleanup of a WeakPromise happen asap.
Basically, keep a hashtable wrappedjs -> objects_implementing_callback_interface_foo.


Or that should be wrappedjs -> array<objects_implementing_callback_interface_foo>



Then, when adding an object to the hashtable, the wrappedjs would get marked with a flag
saying it should look at the hashtable when it is going away, and if there is a value in
the hashtable, call the foo.

This cleanup mechanism could be used with weak observers and weak listeners too.


-Olli




Re: Memory management in features implemented in JS

2014-03-20 Thread smaug

On 03/20/2014 12:37 PM, David Rajchenbach-Teller wrote:

So basically, you want to add a finalizer for JS component?


No. It would be a callback to tell us when the thing (the wrappedJS) we have a weakref to
is going away. And the wrappedJS then keeps a weak ref to the JS object.
As far as I can see, this would be trivial to implement, at least if
we don't mind adding a word to nsXPCWrappedJS.
If we do mind, then some tricks need to be done.

Also, this all could be done outside JS engine, so the native services wouldn't
need to start care about JS API.

(Looks like the current nsXPCWrappedJS fits just perfectly into
jemalloc's 112-byte bucket on 64-bit, but in the 32-bit case we have room for an extra
word.)



Note that we already have a weak (post-mortem) finalization module for
JS, hidden somewhere in mozilla-central. It's not meant to be used for
performance critical code, and it provides no guarantees about cycles,
but if this is necessary, I could rework it in something a bit
faster/more robust.

Cheers,
  David

On 3/20/14 1:25 AM, smaug wrote:

And we could add a flag to WrappedJS so that it would call some callback
when it is about
to go away. That would let cleanup of WeakPromise happen asap.
Basically keep a hashtable wrappedjs ->
objects_implementing_callback_interface_foo.
Then when adding an object to the hashtable, wrappedjs would get marked
with a flag, that
it should look at the hashtable when it is going away, and if there is
value in the hashtable,
call the foo.

This cleanup mechanism could be used with weak observers and weak
listeners too.


-Olli







Re: Graceful Platform Degradation

2014-03-28 Thread smaug

On 03/27/2014 10:26 AM, Nicholas Nethercote wrote:

This sounds like a worthy and interesting idea, but also a very difficult one.


PC games allow the user to turn certain features (mostly graphics
related ones) on and off so that they can find their own level of
acceptable performance/quality.  This doesn't seem like the right
approach for viewing Web content.


Yeah, games are a much easier case. The content is known ahead of time
(so the degradation can be carefully tested), and typically graphics
dominates the hardware requirements. In a browser, the former is
untrue, and the latter is often untrue -- degradation of audiovisual
elements seems tractable, but what if it's JS execution that's causing
the slowness?

Perhaps there could be a way to annotate the HTML/JS/CSS code to
indicate which parts are less important. I.e. let the page author
dictate what is less important. That would facilitate testing -- a web
developer with a powerful machine could turn on the browser's stress
mode and get a good sense of what would change. Whether developers
would bother with it, though, I don't know.

Nick




Perhaps annotating setTimeout/setInterval callbacks and animation frame callbacks with
{ priority: "low" } and processing such callbacks only if we can keep up with 60Hz;
{ priority: "medium" } perhaps when at 30Hz.
But anyhow, keeping separate lists for less-important async stuff might make it simpler for
web devs to opt in to different perf characteristics.
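A rough sketch of what such an opt-in could look like on the implementation side (an entirely hypothetical API, not anything in Gecko): low-priority callbacks run only while the refresh driver is keeping up with 60Hz, medium-priority ones down to 30Hz:

```cpp
#include <cassert>
#include <functional>
#include <utility>
#include <vector>

// Hypothetical priority annotation for async callbacks.
enum class Priority { High, Medium, Low };

class RefreshScheduler {
 public:
  void Post(Priority aPriority, std::function<void()> aCallback) {
    mCallbacks.emplace_back(aPriority, std::move(aCallback));
  }

  // aCurrentHz: the rate the refresh driver is currently sustaining.
  void Tick(int aCurrentHz) {
    for (auto& [priority, callback] : mCallbacks) {
      if (priority == Priority::Low && aCurrentHz < 60) {
        continue;  // only run when we keep up with 60Hz
      }
      if (priority == Priority::Medium && aCurrentHz < 30) {
        continue;  // only run at 30Hz or better
      }
      callback();
    }
    mCallbacks.clear();
  }

 private:
  std::vector<std::pair<Priority, std::function<void()>>> mCallbacks;
};
```

The point of the separate buckets is that degradation becomes a scheduling decision rather than something each page has to reimplement.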


-Olli


Re: Graceful Platform Degradation

2014-03-28 Thread smaug

On 03/28/2014 08:34 PM, Jet Villegas wrote:

Do you think this should require author opt-in? I was thinking that we spec 
what we degrade automagically so it's less of a black box even without opt-in.



We probably need both: opt-in and something that degrades automagically.
We certainly could lower the refresh rate in some cases in order to try to avoid
extra layout flushes etc.




--Jet

- Original Message -
From: smaug sm...@welho.com
To: Nicholas Nethercote n.netherc...@gmail.com, Jet Villegas 
j...@mozilla.com
Sent: Friday, March 28, 2014 11:16:42 AM
Subject: Re: Graceful Platform Degradation


Perhaps annotating setTimeout/setInterval callbacks and animation frame callbacks with
{ priority: "low" } and processing such callbacks only if we can keep up with 60Hz;
{ priority: "medium" } perhaps when at 30Hz.
But anyhow, keeping separate lists for less-important async stuff might make it simpler for
web devs to opt in to different perf characteristics.


-Olli




Re: Promise.jsm and the predefined Promise object

2014-03-31 Thread smaug

On 03/29/2014 02:55 PM, Paolo Amadini wrote:

With bug 988122 landing soon, you'll now find a Promise object
available by default in the global scope of JavaScript modules.

However, this default implementation is still limited, and you're
strongly recommended to import Promise.jsm explicitly in new modules:

   Cu.import(resource://gre/modules/Promise.jsm);

This will give you a number of advantages, among others:
  - Errors don't risk disappearing silently (bug 966452)
  - Tests will fail if errors are accidentally uncaught (bug 976205)
  - You can inspect the current state in the debugger (bug 966471)
  - You can see registered then handlers (bug 966472)
  - You get better performance on long Promise chains

Promise.jsm and Promise are still interoperable from the functional
point of view, the difference is in the above non-functional
characteristics. Promise.jsm also has better performance due to the
fact that it avoids the JavaScript / C++ / JavaScript turnaround
time on chain resolution,

Has this shown up in profiles? If so, could you please give links to the 
profiles, since
we should get fast promise handling to the web platform.



with an optimized resolution loop handling

How is this different from the C++ implementation?
Based on code inspection, both seem to do pretty much the same thing:
post a runnable to the event loop and then process all the callbacks in
one batch.
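The shared pattern can be sketched like this (a simplified model, not either actual implementation): resolving queues the callback and posts at most one runnable per event-loop turn; when that runnable fires, every callback queued in the meantime runs in one batch:

```cpp
#include <cassert>
#include <deque>
#include <functional>
#include <utility>
#include <vector>

using Runnable = std::function<void()>;

// Simplified model of the promise resolution loop; not the real code.
class PromiseCallbacks {
 public:
  explicit PromiseCallbacks(std::deque<Runnable>& aEventLoop)
      : mEventLoop(aEventLoop) {}

  void ScheduleCallback(Runnable aCallback) {
    mPending.push_back(std::move(aCallback));
    if (!mRunnablePosted) {
      // Post a single runnable to the event loop...
      mRunnablePosted = true;
      mEventLoop.push_back([this] { Drain(); });
    }
  }

 private:
  void Drain() {
    // ...and process all the pending callbacks in one batch when it runs.
    mRunnablePosted = false;
    std::vector<Runnable> batch = std::move(mPending);
    mPending.clear();
    for (auto& callback : batch) {
      callback();
    }
  }

  std::deque<Runnable>& mEventLoop;
  std::vector<Runnable> mPending;
  bool mRunnablePosted = false;
};
```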



but I don't think this performance part should prevent us from
migrating to C++ Promises when the other limitations are addressed.

Cheers,
Paolo





Re: Recommendations on source control and code review

2014-04-14 Thread smaug

On 04/14/2014 12:42 AM, Robert O'Callahan wrote:

On Sat, Apr 12, 2014 at 8:29 AM, Gregory Szorc g...@mozilla.com wrote:


I came across the following articles on source control and code review:

* https://secure.phabricator.com/book/phabflavor/article/
recommendations_on_revision_control/
* https://secure.phabricator.com/book/phabflavor/article/
writing_reviewable_code/
* https://secure.phabricator.com/book/phabflavor/article/
recommendations_on_branching/

I think everyone working on Firefox should take the time to read them as
they prescribe what I perceive to be a very rational set of best practices
for working with large and complex code bases.

The articles were written by a (now former) Facebooker and the
recommendations are significantly influenced by Facebook's experiences.
They have many of the same problems we do (size and scale of code base,
hundreds of developers, etc). Some of the pieces on feature development
don't translate easily, but most of the content is relevant.

I would be thrilled if we started adopting some of the recommendations
such as more descriptive commit messages and many, smaller commits over
fewer, complex commits.



As a reviewer one of the first things I do when reviewing a big patch is to
see if I can suggest a reasonable way to split it into smaller patches.



As a reviewer I usually _also_ want to see a patch which contains all the changes.
Otherwise it can be very difficult to see the big picture.
But sure, having large patches split into smaller pieces may help.





Honestly, I think we're already pretty close to most of those
recommendations, most of the time. More descriptive commit messages is
the only recommendation there that is not commonly followed, as far as I
can see.


I always just use the link to the bug, so I haven't found multiline commit messages
useful at all.
Mostly they are just annoying, making logs harder to read.



-Olli




Rob





Re: Getting rid of already_AddRefed?

2014-08-13 Thread smaug

On 08/12/2014 06:23 PM, Aryeh Gregor wrote:

On Tue, Aug 12, 2014 at 6:16 PM, Benoit Jacob jacob.benoi...@gmail.com wrote:

As far as I know, the only downside in replacing already_AddRefed by
nsCOMPtr would be to incur more useless calls to AddRef and Release. In the
case of threadsafe i.e. atomic refcounting, these use atomic instructions,
which might be expensive enough on certain ARM CPUs that this might matter.
So if you're interested, you could take a low-end ARM CPU that we care about
and see if replacing already_AddRefed by nsCOMPtr causes any measurable
performance regression.


Bug 1015114 removes those extra addrefs using C++11 move semantics, so
assuming that lands, it's not an issue.  (IIRC, Boris has previously
said that excessive addref/release is a real performance problem and
needs to be avoided.)




AddRef/Release calls are a performance issue everywhere in hot code paths,
whether or not
there is thread safety involved
(excluding inline non-virtual AddRef/Release).
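The cost being discussed can be made concrete with a toy refcounting pointer (illustrative names, not Gecko's actual RefPtr): a move constructor lets a function hand ownership to its caller with a single AddRef, which is what makes returning a smart pointer by value competitive with already_AddRefed:

```cpp
#include <cassert>

// Toy refcounted object that counts AddRef calls, to make the
// overhead visible. Illustrative only.
struct Foo {
  int mRefCnt = 0;
  static inline int sAddRefCalls = 0;  // C++17 inline variable
  void AddRef() { ++mRefCnt; ++sAddRefCalls; }
  void Release() { --mRefCnt; /* a real impl would delete at 0 */ }
};

// Minimal smart pointer with move support (not Gecko's RefPtr).
template <typename T>
class ToyRefPtr {
 public:
  ToyRefPtr() = default;
  explicit ToyRefPtr(T* aPtr) : mPtr(aPtr) { if (mPtr) mPtr->AddRef(); }
  ToyRefPtr(const ToyRefPtr& aOther) : ToyRefPtr(aOther.mPtr) {}
  // Move steals the reference: no extra AddRef/Release pair.
  ToyRefPtr(ToyRefPtr&& aOther) : mPtr(aOther.mPtr) { aOther.mPtr = nullptr; }
  ~ToyRefPtr() { if (mPtr) mPtr->Release(); }
  T* get() const { return mPtr; }
 private:
  T* mPtr = nullptr;
};

Foo gFoo;

// Returning by value transfers ownership via the move constructor,
// so the whole call costs exactly one AddRef.
ToyRefPtr<Foo> GetFoo() { return ToyRefPtr<Foo>(&gFoo); }
```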

-Olli



Re: Getting rid of already_AddRefed?

2014-08-13 Thread smaug

On 08/13/2014 07:24 PM, Aryeh Gregor wrote:

On Wed, Aug 13, 2014 at 5:44 PM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:

Can't you do the following instead?

unused << MyFunction(); // I know that I'm leaking this ref, but it's ok
somehow


No, because the use-case is where you don't want to leak the ref --
you want it to be released automatically for you.  So for instance,
here's a real-world bit of code from nsWSRunObject:

   if ((aRun->mRightType & WSType::block) &&
       IsBlockNode(nsCOMPtr<nsINode>(GetWSBoundingParent()))) {

GetWSBoundingParent() returns an already_AddRefed<nsINode>, but all we
want is to check if it's a block node and then throw it away (without
leaking it).  We don't want to save it beyond that.  With the proposed
change, this would be (assuming we re-added a .get() method, which
would now be as safe as on nsCOMPtr)

   if ((aRun->mRightType & WSType::block) &&
       IsBlockNode(GetWSBoundingParent().get())) {

which means the caller didn't take ownership, just used it and let the
destructor destroy it.

I'd say it would be a bug to make it easy to use already_AddRefed return values this
way.
(Whether or not it is actually already_AddRefed or somehow hidden doesn't
matter.)
AddRef/Release are meaningful operations, and one should always see when they
actually happen.
They should not be hidden behind some language magic.




-Olli



Similarly, if I would like to pass a string I have to a function that
wants an atom, I would like to be able to do
f(do_getAtom(string).get()) instead of
f(nsCOMPtr<nsIAtom>(do_getAtom(string))).

There are also functions in editor that do something like, say, insert
a new br, and return the element they inserted.  I might not want to
get a reference to the element, just want it created.  Currently I
have to make an nsCOMPtr to store the result and then ignore it.  I
don't see the value in requiring this.


I don't understand this.  You are already able to return raw pointers from
functions.  Returning an already_AddRefed is however a very different use
case, it means, I'm giving up ownership to the object I'm returning.


Yes, but what if the caller wants to use the object once (or not at
all) and then have it released immediately?  Is there value in
requiring explicit creation of an nsCOMPtr in that case?





Re: Intent to implement: Touchpad event

2014-09-11 Thread smaug
What would be the event types for touchpad events?
We must not add yet another type of events to handle pointer-type events.


And besides, the touch event model is rather horrible, so if we for some strange
reason need
totally new events, I'd prefer using something closer to pointer events.


-Olli




On 09/11/2014 09:18 AM, Kershaw Chang wrote:
 Hi All,
 
 Summary:
 Touchpad(trackpad) is a common feature on laptop computers. Currently, the
 finger activities on touchpad are translated to touch event and mouse event.
 However, the coordinates of touch event and mouse event are actually
 associated to display [1]. For some cases, we need to expose the absolute
 coordinates that are associated to touchpad itself to the application.
 That’s why AOSP also defines another input source type for touchpad [2]. The
 x and y coordinates of touchpad event are relative to the size of touchpad.
 
 Use case:
 Handwriting recognition application will be benefited from this touchpad
 event. Currently, OS X supports handwriting input by touchpad [3].
 
 Idea of implementation:
 The webidl of touchpad event is like touch event except that x and y
 coordinates are relative to touchpad rather than display.
 
 --- /dev/null
 +++ b/dom/webidl/Touchpad.webidl
 +
 +[Func=mozilla::dom::Touchpad::PrefEnabled]
 +interface Touchpad {
 +  readonly attribute long identifier;
 +  readonly attribute EventTarget? target;
 +  readonly attribute long touchpadX;
 +  readonly attribute long touchpadY;
 +  readonly attribute long radiusX;
 +  readonly attribute long radiusY;
 +  readonly attribute float rotationAngle;
 +  readonly attribute float force;
 +};
 
 --- /dev/null
 +++ b/dom/webidl/TouchpadEvent.webidl
 +
 +interface WindowProxy;
 +
 +[Func=mozilla::dom::TouchpadEvent::PrefEnabled]
 +interface TouchPadEvent : UIEvent {
 +  readonly attribute TouchpadList touches;
 +  readonly attribute TouchpadList targetTouches;
 +  readonly attribute TouchpadList changedTouches;
 +
 +  readonly attribute short   button;
 +  readonly attribute boolean altKey;
 +  readonly attribute boolean metaKey;
 +  readonly attribute boolean ctrlKey;
 +  readonly attribute boolean shiftKey;
 +
 +  [Throws]
 +  void initTouchpadEvent(DOMString type,
 + boolean canBubble,
 + boolean cancelable,
 + WindowProxy? view,
 + long detail,
 + short button,
 + boolean ctrlKey,
 + boolean altKey,
 + boolean shiftKey,
 + boolean metaKey,
 + TouchPadList? touches,
 + TouchPadList? targetTouches,
 + TouchPadList? changedTouches);
 +};
 
 --- /dev/null
 +++ b/dom/webidl/TouchpadList.webidl
 +
 +[Func=mozilla::dom::TouchpadList::PrefEnabled]
 +interface TouchpadList {
 +  [Pure]
 +  readonly attribute unsigned long length;
 +  getter Touchpad? item(unsigned long index);
 +};
 +
 +/* Mozilla extension. */
 +partial interface TouchpadList {
 +  Touchpad? identifiedTouch(long identifier);
 +};
 
 Platform converge: all
 
 Welcome for any suggestion or feedback.
 Thanks.
 
 [1]
 http://developer.android.com/reference/android/view/InputDevice.html#SOURCE_
 CLASS_POINTER
 [2]
 http://developer.android.com/reference/android/view/InputDevice.html#SOURCE_
 CLASS_POSITION
 [3] http://support.apple.com/kb/HT4288
 
 Best regards,
 Kershaw
 
 



Re: Intent to implement: Touchpad event

2014-09-11 Thread smaug
If we just need new coordinates, couldn't we extend the existing event
interfaces with some new properties?


-Olli


On 09/12/2014 12:52 AM, smaug wrote:
 What would be the event types for touchpad events?
 We must not add yet another types of events to handle pointer type of events.
 
 
 And besides, touch event model is rather horrible, so if we for some strange 
 reason need
 totally new events, I'd prefer using something closer to pointer events.
 
 
 -Olli
 
 
 
 
 On 09/11/2014 09:18 AM, Kershaw Chang wrote:
 Hi All,

 Summary:
 Touchpad(trackpad) is a common feature on laptop computers. Currently, the
 finger activities on touchpad are translated to touch event and mouse event.
 However, the coordinates of touch event and mouse event are actually
 associated to display [1]. For some cases, we need to expose the absolute
 coordinates that are associated to touchpad itself to the application.
 That’s why AOSP also defines another input source type for touchpad [2]. The
 x and y coordinates of touchpad event are relative to the size of touchpad.

 Use case:
 Handwriting recognition application will be benefited from this touchpad
 event. Currently, OS X supports handwriting input by touchpad [3].

 Idea of implementation:
 The webidl of touchpad event is like touch event except that x and y
 coordinates are relative to touchpad rather than display.

 --- /dev/null
 +++ b/dom/webidl/Touchpad.webidl
 +
 +[Func="mozilla::dom::Touchpad::PrefEnabled"]
 +interface Touchpad {
 +  readonly attribute long identifier;
 +  readonly attribute EventTarget? target;
 +  readonly attribute long touchpadX;
 +  readonly attribute long touchpadY;
 +  readonly attribute long radiusX;
 +  readonly attribute long radiusY;
 +  readonly attribute float rotationAngle;
 +  readonly attribute float force;
 +};

 --- /dev/null
 +++ b/dom/webidl/TouchpadEvent.webidl
 +
 +interface WindowProxy;
 +
 +[Func="mozilla::dom::TouchpadEvent::PrefEnabled"]
 +interface TouchpadEvent : UIEvent {
 +  readonly attribute TouchpadList touches;
 +  readonly attribute TouchpadList targetTouches;
 +  readonly attribute TouchpadList changedTouches;
 +
 +  readonly attribute short   button;
 +  readonly attribute boolean altKey;
 +  readonly attribute boolean metaKey;
 +  readonly attribute boolean ctrlKey;
 +  readonly attribute boolean shiftKey;
 +
 +  [Throws]
 +  void initTouchpadEvent(DOMString type,
 + boolean canBubble,
 + boolean cancelable,
 + WindowProxy? view,
 + long detail,
 + short button,
 + boolean ctrlKey,
 + boolean altKey,
 + boolean shiftKey,
 + boolean metaKey,
 + TouchpadList? touches,
 + TouchpadList? targetTouches,
 + TouchpadList? changedTouches);
 +};

 --- /dev/null
 +++ b/dom/webidl/TouchpadList.webidl
 +
 +[Func="mozilla::dom::TouchpadList::PrefEnabled"]
 +interface TouchpadList {
 +  [Pure]
 +  readonly attribute unsigned long length;
 +  getter Touchpad? item(unsigned long index);
 +};
 +
 +/* Mozilla extension. */
 +partial interface TouchpadList {
 +  Touchpad? identifiedTouch(long identifier);
 +};

 Platform coverage: all

 Welcome for any suggestion or feedback.
 Thanks.

 [1]
 http://developer.android.com/reference/android/view/InputDevice.html#SOURCE_
 CLASS_POINTER
 [2]
 http://developer.android.com/reference/android/view/InputDevice.html#SOURCE_
 CLASS_POSITION
 [3] http://support.apple.com/kb/HT4288

 Best regards,
 Kershaw


 

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: Touchpad event

2014-09-11 Thread smaug

On 09/11/2014 08:26 PM, Chris Peterson wrote:

On 9/11/14 3:49 AM, Mounir Lamouri wrote:

On Thu, 11 Sep 2014, at 18:26, Ms2ger wrote:

First of all, you neglected to explain the standardization situation
here. Is this feature being standardized? If not, why not? How do
other browser vendors feel about it?


Where does this stand in the current Touch Events vs Pointer Events
situation, is the intent to re-use of those or create yet another
standard?


AFAIK, Blink does not intend [1] to implement Pointer Events. Should new web 
features avoid extending Pointer Events?




Unclear. We should still push for pointer events if at all possible.
Blink hasn't proposed anything better, and their reasoning for dropping pointer 
events doesn't make sense.
But the situation is a bit tricky atm. The Touch Events API is worse than Pointer 
Events, and web devs seem to prefer the pointer events model, as do I, but if Blink
won't have pointer events...


-Olli





chris

[1] https://code.google.com/p/chromium/issues/detail?id=162757#c64


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship: Web Speech API - Speech Recognition with Pocketsphinx

2014-10-30 Thread smaug

On 10/31/2014 02:21 AM, smaug wrote:

Intent to ship is too strong for this.
We need to first have implementation landed and tested ;)

I wouldn't ship the implementation in desktop FF without plenty of more testing.



But I guess the question is what people think about shipping the pocketsphinx + 
API, even if disabled by default.

Andre, we need some numbers here. How much does Pocketsphinx increase binary 
size, or download size?
When the pref is enabled, how much memory does it use on desktop, and what about 
on B2G?





-Olli


On 10/31/2014 01:18 AM, Andre Natal wrote:

I've been researching speech recognition in Firefox for two years. First
SpeechRTC, then emscripten, and now Web Speech API with CMU pocketsphinx
[1] embedded in Gecko C++ layer, project that I had the luck to develop for
Google Summer of Code with the mentoring of Olli Pettay, Guilherme
Gonçalves, Steven Lee, Randell Jesup plus others and with the management of
Sandip Kamat.

The implementation already works in B2G, Fennec and all FF desktop
 versions, and the first language supported will be English. The API and
implementation are in conformity with W3C standard [2]. The preference to
enable it is: media.webspeech.service.default = pocketsphinx

The required patches for achieve this are:

  - Import pocketsphinx sources in Gecko. Bug 1051146 [3]
  - Embed english models. Bug 1065911 [4]
  - Change SpeechGrammarList to store grammars inside SpeechGrammar objects.
Bug 1088336 [5]
  - Creation of a SpeechRecognitionService for Pocketsphinx. Bug 1051148 [6]


 Also, other important features that we don't have patches for yet:
  - Relax the VAD strategy to be less strict, to avoid stopping in the middle of
 speech when low-volume phonemes are spoken [7]
  - Integrate or develop a grapheme-to-phoneme algorithm for real-time
 generation when compiling grammars [8]
  - Include and build models for other languages [9]
  - Continuous and word-spotting recognition [10]

The wip repo is here [11] and this Air Mozilla video [12] plus this wiki
has more detailed info [13].

At this comment you can see a cpu usage on flame while recognition is
happening [14]

I wish to hear your comments.

Thanks,

Andre Natal

[1] http://cmusphinx.sourceforge.net/
[2] https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html
[3] https://bugzilla.mozilla.org/show_bug.cgi?id=1051146
[4] https://bugzilla.mozilla.org/show_bug.cgi?id=1065911
[5] https://bugzilla.mozilla.org/show_bug.cgi?id=1088336
[6] https://bugzilla.mozilla.org/show_bug.cgi?id=1051148
[7] https://bugzilla.mozilla.org/show_bug.cgi?id=1051604
[8] https://bugzilla.mozilla.org/show_bug.cgi?id=1051554
[9] https://bugzilla.mozilla.org/show_bug.cgi?id=1065904 and
https://bugzilla.mozilla.org/show_bug.cgi?id=1051607
[10] https://bugzilla.mozilla.org/show_bug.cgi?id=967896
[11] https://github.com/andrenatal/gecko-dev
[12] https://air.mozilla.org/mozilla-weekly-project-meeting-20141027/ (Jump
to 12:00)
[13] https://wiki.mozilla.org/SpeechRTC_-_Speech_enabling_the_open_web
[14] https://bugzilla.mozilla.org/show_bug.cgi?id=1051148#c14





___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship: Web Speech API - Speech Recognition with Pocketsphinx

2014-10-30 Thread smaug

Intent to ship is too strong for this.
We need to first have implementation landed and tested ;)

I wouldn't ship the implementation in desktop FF without plenty of more testing.



-Olli


On 10/31/2014 01:18 AM, Andre Natal wrote:

I've been researching speech recognition in Firefox for two years. First
SpeechRTC, then emscripten, and now Web Speech API with CMU pocketsphinx
[1] embedded in Gecko C++ layer, project that I had the luck to develop for
Google Summer of Code with the mentoring of Olli Pettay, Guilherme
Gonçalves, Steven Lee, Randell Jesup plus others and with the management of
Sandip Kamat.

The implementation already works in B2G, Fennec and all FF desktop
versions, and the first language supported will be English. The API and
implementation are in conformity with W3C standard [2]. The preference to
enable it is: media.webspeech.service.default = pocketsphinx

The required patches for achieve this are:

  - Import pocketsphinx sources in Gecko. Bug 1051146 [3]
  - Embed english models. Bug 1065911 [4]
  - Change SpeechGrammarList to store grammars inside SpeechGrammar objects.
Bug 1088336 [5]
  - Creation of a SpeechRecognitionService for Pocketsphinx. Bug 1051148 [6]


Also, other important features that we don't have patches for yet:
  - Relax the VAD strategy to be less strict, to avoid stopping in the middle of
speech when low-volume phonemes are spoken [7]
  - Integrate or develop a grapheme-to-phoneme algorithm for real-time
generation when compiling grammars [8]
  - Include and build models for other languages [9]
  - Continuous and word-spotting recognition [10]

The wip repo is here [11] and this Air Mozilla video [12] plus this wiki
has more detailed info [13].

At this comment you can see a cpu usage on flame while recognition is
happening [14]

I wish to hear your comments.

Thanks,

Andre Natal

[1] http://cmusphinx.sourceforge.net/
[2] https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html
[3] https://bugzilla.mozilla.org/show_bug.cgi?id=1051146
[4] https://bugzilla.mozilla.org/show_bug.cgi?id=1065911
[5] https://bugzilla.mozilla.org/show_bug.cgi?id=1088336
[6] https://bugzilla.mozilla.org/show_bug.cgi?id=1051148
[7] https://bugzilla.mozilla.org/show_bug.cgi?id=1051604
[8] https://bugzilla.mozilla.org/show_bug.cgi?id=1051554
[9] https://bugzilla.mozilla.org/show_bug.cgi?id=1065904 and
https://bugzilla.mozilla.org/show_bug.cgi?id=1051607
[10] https://bugzilla.mozilla.org/show_bug.cgi?id=967896
[11] https://github.com/andrenatal/gecko-dev
[12] https://air.mozilla.org/mozilla-weekly-project-meeting-20141027/ (Jump
to 12:00)
[13] https://wiki.mozilla.org/SpeechRTC_-_Speech_enabling_the_open_web
[14] https://bugzilla.mozilla.org/show_bug.cgi?id=1051148#c14



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Profiling on Linux

2014-11-13 Thread smaug

Hi all,


looks like Zoom profiler[1] is now free.
It has a rather good UI on top of oprofile/rrprofile, making profiling quite easy.
I've found it easier to use than the Gecko profiler, and it gives different kinds of 
views onto the same
data. However, it does lack the JS-specific bits the Gecko profiler has.
Anyhow, anyone hacking Gecko on Linux[2], I suggest you give it a try.




-Olli




[1] http://www.rotateright.com/zoom/
[2] Zoom should run on OSX too, but never tried.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Profiling on Linux

2014-11-13 Thread smaug

On 11/13/2014 08:01 PM, smaug wrote:

Hi all,


looks like Zoom profiler[1] is now free.
It has rather good UI on top of oprofile/rrprofile

perf/oprofile/rrprofile


making profiling quite easy.

I've found it easier to use than Gecko profiler and it gives different kinds of 
views to the same
data. However it does lack the JS specific bits Gecko profiler has.
Anyhow, anyone hacking Gecko on Linux[2], I suggest you give it a try.




-Olli




[1] http://www.rotateright.com/zoom/
[2] Zoom should run on OSX too, but never tried.


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Profiling on Linux

2014-11-13 Thread smaug

On 11/13/2014 10:42 PM, Benoit Girard wrote:

Thanks for pointing this out, there's no single all purpose tool.



Indeed. Obviously for b2g stuff for example Gecko profiler is way more useful
(because you can run it on the device and knowing what js is running can be 
very relevant there).






Just a reminder that we have documentation on how to look into performance
problems here:
https://developer.mozilla.org/en-US/docs/Mozilla/Performance

Zoom already has a page on there. If there's any mozilla specific
information about using Zoom it should probably live here:
https://developer.mozilla.org/en-US/docs/Mozilla/Performance/Profiling_with_Zoom

On Thu, Nov 13, 2014 at 1:03 PM, smaug sm...@welho.com wrote:


On 11/13/2014 08:01 PM, smaug wrote:


Hi all,


looks like Zoom profiler[1] is now free.
It has rather good UI on top of oprofile/rrprofile


perf/oprofile/rrprofile



making profiling quite easy.


I've found it easier to use than Gecko profiler and it gives different
kinds of views to the same
data. However it does lack the JS specific bits Gecko profiler has.
Anyhow, anyone hacking Gecko on Linux[2], I suggest you give it a try.




-Olli




[1] http://www.rotateright.com/zoom/
[2] Zoom should run on OSX too, but never tried.



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: after NPAPI ?

2014-11-25 Thread smaug

On 11/25/2014 05:45 PM, Reuben Morais wrote:

On Nov 25, 2014, at 13:22, Gijs Kruitbosch gijskruitbo...@gmail.com wrote:

On 25/11/2014 14:22, rayna...@gmail.com wrote:

I need to get the audio sample data and do some math on it, then play it through the 
speaker, with minimal latency (around 20ms).

Only the wasapi driver could allow this.


Have you actually tried using getusermedia/web audio for this? Or are you just 
speculating?

What does the wasapi driver provide that the web's APIs don’t?


Low latency. I’m no audio expert, but my understanding is that a 20ms 
round-trip is _very_ fast.


Not really. Nowhere near enough for music production.
One needs to get the round-trip latency to 10ms.





See:

https://wiki.mozilla.org/Media/WebRTC_Audio_Issues#Audio_Latency_--_bug_785584
https://wiki.mozilla.org/Gecko:MediaStreamLatency#Windows

-- reuben



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Getting rid of already_AddRefed?

2014-12-26 Thread smaug

On 12/26/2014 03:08 PM, Aryeh Gregor wrote:

On Mon, Dec 22, 2014 at 11:10 PM, Jeff Muizelaar jmuizel...@mozilla.com wrote:

Possible solutions would be to:
  - remove implicit conversions to T*


If this were done, I think we should change the calling convention for
functions that take pointers to refcounted classes.  The convention is
broken anyway: the convention is supposed to be that the caller holds
a strong reference, but the parameter type is a raw pointer, which
does not enforce this.  This has led to at least one sec-critical that
I know of, IIRC, where the caller did not hold a strong reference
locally, and the callee wound up freeing it.  There have probably been
lots more that I don't know of.

I've thought in the past of a RefParam<T> type, which would be for use
only as a function parameter.  nsRefPtr<T>/nsCOMPtr<T> would
implicitly convert to it, but raw pointers would not implicitly
convert to it.  And it would implicitly convert to a raw pointer,
which is safe as long as the nsRefPtr<T>/nsCOMPtr<T> that it was
initialized with was a local variable (and not a member or global).
Thus to the callee, it would behave exactly like a raw pointer.  The
only difference is the caller is not allowed to pass a raw pointer.

With this change, we wouldn't need to convert from nsRefPtr<T> to T*
often, as far as I can think of.  It would also preserve binary
compatibility with XPCOM AFAICT, because RefParam<T> would have
trivial constructor and destructor and no virtual functions.  It also
would add no addref/release.  It would just help the compiler catch
cases where raw pointers are being passed to functions that expect the
caller to hold a strong reference, which would perhaps allow us to
sanely remove the implicit conversion from nsRefPtr<T> to T*.



How would this setup help with the case when one passes
nsCOMPtr/nsRefPtr member variable as a param? I believe that has been the most 
common issue
with caller-should-keep-the-parameter-alive - one just doesn't remember to 
make sure the
value of the member variable can't be replaced with some other value while 
calling the method.






  - add explicit operator T* and operator T*() && = delete // this will be 
available in GCC 4.8.1 and MSVC 2014 Nov CTP


What would this do?  I see that deleting operator T*() && would prevent
a temporary nsRefPtr<T> from converting unsafely to a raw pointer,
which would address that objection to removing already_AddRefed.  But
what does the first part do?

On Tue, Dec 23, 2014 at 1:21 AM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:

Are there good use cases for having functions accept an nsRefPtr<T>? If
not, we can outlaw them.


Do we have a better convention for an in/out parameter that's a
pointer to a refcounted class?  editor uses this convention in a
number of functions for pass me a node/pointer pair as input, and
I'll update it to the new value while I'm at it.  If there's a good
substitute for this, let me know and I'll use it in the future when
cleaning up editor code.


nsCOMPtr/nsRefPtr is a good option for inouts.
(Though one should avoid inouts when possible.)

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: SpiderMonkey and XPConnect style changing from |T *p| to |T* p|

2015-03-29 Thread smaug

On 03/28/2015 02:32 AM, Nicolas B. Pierron wrote:

On 03/27/2015 11:51 PM, Bobby Holley wrote:

On Fri, Mar 27, 2015 at 2:04 PM, Mats Palmgren m...@mozilla.com wrote:

So let's change the project-wide coding rules instead to allow 99
columns as the hard limit, but keep 80 columns as the recommended
(soft) limit.



I think we should avoid opening up a can of worms on the merits of
different styles, and instead focus on the most pragmatic ways to unify
Gecko and JS style. Under that framework, Mats' proposal makes a lot of
sense.



I do not see the advantages of having huge patches to rewrite an entire project 
just for the benefit of having only one style guide.



With my reviewer's hat on: having just one style speeds up reviewing and makes the 
code easier to read.
So much nicer to look at patches dealing with xpcom/ or docshell/ now that they 
have been converted to use the normal
coding style.

Having the one commit in the blame doesn't really matter. Often one needs to go 
to the first commit of the code anyway.


-Olli





What I see with such patches is pain rebasing patches, pain changing the habits of 
the developers, and security issues, as contributors (including
employees) do not look for the original authors.

 From my point of view, the only time where such patches sounds acceptable, is 
when you are trying to take over a dead project, and as far as I know
SpiderMonkey is far from being a dead project.



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Propose to remove nsAString::AssignLiteral(const char (aStr)[N])

2015-03-02 Thread smaug

On 03/02/2015 02:41 PM, Jeff Muizelaar wrote:

It looks like the current one should already be, as the AssignASCII
will be inlined into the caller and then the strlen can be inlined as
well.


Well AssignLiteral doesn't use strlen at all and that is the whole point of
AssignLiteral.
Can some template magic guarantee that if AssignASCII deals with literals too,
the strlen-less version is used when possible?



The original issue was:
"The method name AssignLiteral can easily make people at the callee side
think it makes the string point to a piece of static data, which has no
runtime penalty. But this is false."
Can we somehow make use of nsDependentString easier? (not that I think it is 
hard. The name is just a bit long)



-Olli




-Jeff

On Sun, Mar 1, 2015 at 7:04 PM, smaug sm...@welho.com wrote:

On 03/02/2015 01:11 AM, Xidorn Quan wrote:


On Mon, Mar 2, 2015 at 9:50 AM, Boris Zbarsky bzbar...@mit.edu wrote:


On 3/1/15 5:04 PM, Xidorn Quan wrote:


Hence I think we should remove this method. All callers should use
either
AssignLiteral(MOZ_UTF16("some string")), or, if they don't want to bloat the
binary, explicitly use AssignASCII("some string").



The latter requires an strlen() that AssignLiteral optimizes out, right?



Yes, so we can add another overload to AssignASCII which does this
optimization


How would you do that?


with a less misleading name.

- Xidorn



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship: MouseEvent.offsetX/Y

2015-03-01 Thread smaug

On 02/28/2015 05:25 AM, Robert O'Callahan wrote:

On Sat, Feb 28, 2015 at 8:30 AM, Jeff Muizelaar jmuizel...@mozilla.com
wrote:


On Fri, Feb 27, 2015 at 2:21 PM, Robert O'Callahan rob...@ocallahan.org
wrote:

Oh, another issue is that I've followed the spec and made offsetX/Y
doubles, whereas Blink is integers, which introduces a small amount

compat

risk.



IE also uses integers. Wouldn't it be better to change the spec to
follow the existing browser's behaviour?



In some ways, yes, although the extra accuracy given by doubles could be
useful in practice.


Haven't changes from integers to doubles caused issues in some cases?
Boris might recall some examples.






Rob



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Propose to remove nsAString::AssignLiteral(const char (aStr)[N])

2015-03-01 Thread smaug

On 03/02/2015 01:11 AM, Xidorn Quan wrote:

On Mon, Mar 2, 2015 at 9:50 AM, Boris Zbarsky bzbar...@mit.edu wrote:


On 3/1/15 5:04 PM, Xidorn Quan wrote:


Hence I think we should remove this method. All callers should use either
AssignLiteral(MOZ_UTF16("some string")), or, if they don't want to bloat the
binary, explicitly use AssignASCII("some string").



The latter requires an strlen() that AssignLiteral optimizes out, right?



Yes, so we can add another overload to AssignASCII which does this
optimization

How would you do that?


with a less misleading name.

- Xidorn



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Treeherder UI performance much worse in Nightly vs Chrome

2015-04-26 Thread smaug

On 04/23/2015 07:43 AM, Ed Morley wrote:

Scrolling fluidity/general app responsiveness of Treeherder is massively
worse in Nightly compared to Chrome. eg try this in both:
https://treeherder.mozilla.org/#/jobs?repo=mozilla-central

The problem is even more noticeable when the "get next 50" button is
pressed at the bottom of the page.

I know last year a few people did some profiling (see deps of bug 1112352)
- however only two of the bugs are still open and the situation vs Chrome
is still pretty bad (if not more extreme now than it was previously).

Could someone who's a platform-profiling ninja do a massive favour and see
if there are any more platform bugs we can find for this? It seems like we
could use Treeherder as a useful real-world testcase for improving complex
webapp performance compared to Chrome - in addition to making devs lives a
little less janky whilst using Treeherder themselves.

If anyone does find anything, please add as deps of:
https://bugzilla.mozilla.org/show_bug.cgi?id=1112352

Many thanks :-)

Ed




Seems to be mostly JS and Reflow (I did some page load profiling, not the 'click 
get next 50' case).
JS seems to execute some JIT'ed stuff, then JS::DoCallFallback, Invoke, 
fun_apply, RunScript, Interpret, EnterBaseline...
Looks a bit suspicious, at least to my non-jit-profiling-educated eyes, that we 
fall out of the JIT code.

Reflow seems to be mostly inline frame reflowing (even in the cases we reflow 
tables).

I don't think these issues are quite the same as any of the bugs 
https://bugzilla.mozilla.org/show_bug.cgi?id=1112352 depend on.
jandem, dbaron, could you profile this a bit to see if there is anything 
obvious?


-Olli
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: W3C Proposed Recommendation: Pointer Events

2015-04-29 Thread smaug

On 01/16/2015 04:31 AM, L. David Baron wrote:

On Tuesday 2015-01-06 15:14 -0800, L. David Baron wrote:

W3C recently published the following proposed recommendation (the
stage before W3C's final stage, Recommendation):

   http://www.w3.org/TR/pointerevents/
   Pointer Events

There's a call for review to W3C member companies (of which Mozilla
is one) open until January 16.

If there are comments you think Mozilla should send as part of the
review, or if you think Mozilla should voice support or opposition
to the specification, please say so in this thread.  (I'd note,
however, that there have been many previous opportunities to make
comments, so it's somewhat bad form to bring up fundamental issues
for the first time at this stage.)


While it's not quite clear what's going to happen to this spec in
the long term given that there's some opposition to the events part
(although not 'touch-action') from Google (related to whether the
API requires certain touch operations to communicate with the main
thread when they don't want it to; a problem other implementors
including us don't seem to have), given that it seems like a better
(than Touch Events) API for developers and we've been involved in
its development, I voted in support.


Note, Google has changed their mind, and are implementing Pointer Events API.
The spec will get some tweaks, sure, but in general folks from MS, Mozilla and 
Google
agree on the API.


-Olli





-David



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Enabling Pointer Events in Firefox (desktop) Nightly

2015-04-29 Thread smaug

On 04/24/2015 08:57 PM, Matt Brubeck wrote:

tl;dr:

We plan to enable Pointer Events for mouse and pen input in Firefox Nightly 
builds within the next few weeks.

Background:

Pointer Events is a W3C recommendation that defines new DOM events for unified 
handling of mouse, touch, and pen input.  It also defines a new
'touch-action' CSS property for declarative control over touch panning and 
zooming behavior:

http://www.w3.org/TR/pointerevents/

The 'touch-action' CSS property is shipping today in both IE11 and Chrome 
stable.  The DOM PointerEvent API is shipping today in IE11, and the
Chrome team plans to ship it soon.

I would correct this. Chrome team plans to implement Pointer Events API soon.





Status:

Implementation of pointer events and 'touch-action' in Gecko has been in 
progress for several months.  Both features can be enabled in Firefox
Nightly with prefs, currently off by default.  When these prefs are turned on:

* Events for mouse input are supported on Windows, Mac, and Linux.
* Events for pen input are supported on Windows.
* Events for multi-touch input, and the 'touch-action' property, are a work in
progress on Windows.  These features depend on e10s, and on Async Pan/Zoom (APZ)
which is currently preffed off by default on desktop.
* PointerEvent and 'touch-action' are not yet implemented on Android or Firefox
OS, though in the long term much of the code will be shared between all
platforms, through the APZ controller.

Plans:

The implementation of Pointer Events should be complete enough to enable in 
desktop Nightly builds within the next few weeks.  This will enable
Pointer Events for mouse and pen input.  (It will also enable Pointer Events 
for multi-touch input on Windows when e10s and APZ are enabled, though
like APZ itself this is still experimental and will not yet be turned on by 
default.)

If no serious problems are found, then we want to consider letting this feature 
ride the train to the Aurora/Dev.Edition channel (but not
further).

For the release and beta channels, we may want to wait until after touch input 
is ready to ship on Windows (depends on e10s + APZ),

Yes, I think we should have proper pointer events support on touch screen 
laptops before enabling on beta/release.



and we might
also want to wait until it is ready to ship on Android and/or Firefox OS at the 
same time or soon after.  When the time is closer, we will send an
Intent to Ship email to this list for discussion.

See also:

This wiki page has some links to tracking bugs and other information: 
https://wiki.mozilla.org/Gecko/Touch



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to ban the usage of refcounted objects inside C++ lambdas in Gecko

2015-04-10 Thread smaug

On 04/10/2015 09:09 PM, Seth Fowler wrote:



On Apr 10, 2015, at 8:46 AM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:

I would like to propose that we should ban the usage of refcounted objects
inside lambdas in Gecko.  Here is the reason:

Consider the following code:

nsINode* myNode;
TakeLambda([&]() {
  myNode->Foo();
});

There is nothing that guarantees that the lambda passed to TakeLambda is
executed before myNode is destroyed somehow.


I agree that this pattern is bad, but I don’t think that means that we should 
ban lambda capture of refcounted objects.

This alternative formulation would work just fine today, AFAIK:


nsCOMPtr<nsINode> myNode;
TakeLambda([=]() {
  myNode->Foo();
});


This captures by value, so we end up with a copy of myNode in the lambda, with 
the refcount incremented appropriately.

Once we have C++14 support everywhere, we can also do this:


nsCOMPtr<nsINode> myNode;
TakeLambda([myNode = Move(myNode)]() {
  myNode->Foo();
});


To capture by move (and avoid the cost of a refcount increment).

Using either of these approaches is enough to make refcounted objects safe to 
use in lambda capture expressions. Lambdas will be much less useful if they 
can’t capture refcounted objects, so I’m strongly against banning that.


I'd say that is rather painful for reviewers, since both Move() (I prefer 
.swap()) and the lambda hide what is actually happening to the refcount.
It is so easy to forget to use nsCOMPtr explicitly there.

We should emphasize easy-to-read-and-understand code over fast-to-write.




-Olli





- Seth



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: The War on Warnings

2015-06-04 Thread smaug

On 06/05/2015 12:06 AM, Daniel Holbert wrote:

On 06/04/2015 01:18 PM, smaug wrote:

More likely we need to change a small number of noisy NS_ENSURE_* macro
users to use something else,
and keep most of the NS_ENSURE_* usage as it is.


I agree -- I posted about switching to something opt-in, like MOZ_LOG,
for some of the spammier layout NS_WARNINGS, too:

https://groups.google.com/forum/?fromgroups#!topic/mozilla.dev.tech.layout/YXauN50HDhI

~Daniel




There is also DEBUG_foo
and then using it --with-debug-label=foo

There are a couple of #ifdef DEBUG_smaug checks, but it isn't really
nice to pollute the code with developer-specific ifdefs.
However for your case DEBUG_layout might work.


-Olli
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Use of 'auto'

2015-06-02 Thread smaug

Hi all,


there was some discussion in #developers about use of 'auto' earlier today.
Some people seem to like it, and some, like I, don't.

The reasons why I try to avoid using it and usually ask to replace it with the 
actual type when I'm
reviewing a patch using it are:
- It makes the code harder to read
  * one needs to explicitly check what kind of type is assigned to the variable
to see how the variable is supposed to be used. Very important for example
when dealing with refcounted objects, and even more important when dealing 
with raw pointers.
- It makes the code possibly error prone if the type is later changed.
  * Say, you have a method nsRefPtr<Foo> Foo(); (I know, silly example, but you 
get the point)
Now auto foo = Foo(); makes sure foo is kept alive.
But then someone decides to change the return value to Foo*.
Everything still compiles just fine, but use of foo becomes risky
and may lead to UAF.



Perhaps my mind is too much on reviewer's side, and less on the code writer's.

So, I'd like to understand why people think 'auto' is a good thing to use.
(bz mentioned it having some use inside the bindings code generator, and sure, I 
can see that being a rather
valid case.)





-Olli
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to remove: support for old drag events

2015-06-09 Thread smaug

On 06/09/2015 05:17 PM, Neil Deakin wrote:

In bug 1162050, we'd like to remove support for the old non-standard drag 
events, which were left in for a period of compatibility.


yes please. Anything we can do to simplify dnd code is good.



-Olli


The 'draggesture' event should be replaced with the 'dragstart' event (such as 
ondragstart)
The 'dragdrop' event should be replaced with the 'drop' event.

If you use these events, they are fired in the same manner as the standard 
events, so it should be a simple matter of searching and replacing the
event names.

The non-standard dragexit event will remain as is, as it has no exact standard 
equivalent.

Firefox does not use these events in its code. I filed bug 1171979 for fixing 
this in Thunderbird and bug 1171980 for fixing this in Seamonkey.
If you use these events anywhere or you are the author of an add-on that uses 
these events, you will need to update your code as described above.

The standard drag and drop API is described at:

https://html.spec.whatwg.org/multipage/interaction.html#dnd
https://developer.mozilla.org/en-US/docs/Web/Guide/HTML/Drag_and_drop

In addition, we would like to remove the long-deprecated nsDragAndDrop.js 
script located in toolkit/content in favour of the standard drag and drop
API, described at the links above. To ease the transition, if necessary, you 
may wish to include this script (
https://dxr.mozilla.org/mozilla-central/source/toolkit/content/nsDragAndDrop.js 
) directly in your project.

Please respond if there are any concerns.



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: The War on Warnings

2015-06-04 Thread smaug

On 06/04/2015 01:07 PM, David Rajchenbach-Teller wrote:

Part of my world domination plans are to turn warnings into something
that causes test to actually fail (see bug 1080457  co). Would you like
to join forces?



Turning warnings into failures sounds odd (but bug 1080457 seems to be actually 
something different).
Warnings aren't anything which should cause tests to fail.
Warnings, especially those generated by NS_ENSURE_* macros, are things which are 
super
useful for debugging - they often point rather exactly where to start looking 
for an issue.
And since there is NS_ENSURE_*, there isn't necessarily any problem or bug, 
just an unusual state, which
is then handled by the NS_ENSURE_* macro.


-Olli




Cheers,
  David

On 04/06/15 03:14, Eric Rahm wrote:

We emit a *lot* of runtime warnings when running debug tests. I inadvertently 
triggered a max log size failure during a landing this week which encouraged me 
to take a look at what all is being logged, and what I found was a ton of 
warnings (sometimes accompanied by stack traces). Most of these should probably 
be removed (of course if they're real issues they should be fixed, but judging 
by the frequency most are probably non-issues).

I'm currently cleaning up some of these, but if you happen to see something in 
the following list and are feeling proactive I would appreciate the help. 
There's even a meta bug for tracking these: 
https://bugzilla.mozilla.org/show_bug.cgi?id=765224

I generated this list by grabbing the logs for a recent m-c linux64 debug run, 
normalizing out PIDs and timestamps and then doing some sort/uniq-fu to get 
counts of unique lines.

This is roughly the top 40 offenders:





___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: The War on Warnings

2015-06-04 Thread smaug

On 06/04/2015 09:52 PM, Jonas Sicking wrote:

On Thu, Jun 4, 2015 at 5:38 AM, Robert O'Callahan rob...@ocallahan.org wrote:

Usually I use NS_WARNING to mean something weird and unexpected is
happening, e.g. a bug in Web page code, but not necessarily a browser bug.
Sometimes I get useful hints from NS_WARNING spew leading up to a serious
failure.


Yup. I think this is a quite common, and quite useful, usage of
NS_WARNING. But testing runs a lot of weird and unexpected things.
That's a good thing because those tend to be things that often
regress. But it also leads to a lot of warnings.


but for those usages could probably be switched to PR_LOG without
losing much.


I think this would mean changing most NS_ENSURE_SUCCESS(rv, rv), and
likely many other NS_ENSURE_* to macros that call PR_LOG instead.

So like I said, we'll have to either change a huge number of
NS_ENSURE_* macros to use something else, or change what NS_ENSURE_*
does.



More likely we need to change a small number of noisy NS_ENSURE_* macro users 
to use something else,
and keep most of the NS_ENSURE_* usage as it is.


-Olli



/ Jonas




Rob
--
oIo otoeololo oyooouo otohoaoto oaonoyooonoeo owohooo oioso oaonogoroyo
owoiotoho oao oboroootohoeoro oooro osoiosotoeoro owoiololo oboeo
osouobojoeocoto otooo ojouodogomoeonoto.o oAogoaoiono,o oaonoyooonoeo
owohooo
osoaoyoso otooo oao oboroootohoeoro oooro osoiosotoeoro,o o'oRoaocoao,o'o
oioso
oaonosowoeoroaoboloeo otooo otohoeo ocooouoroto.o oAonodo oaonoyooonoeo
owohooo
osoaoyoso,o o'oYooouo ofolo!o'o owoiololo oboeo oiono odoaonogoeoro
ooofo
otohoeo ofoioroeo ooofo ohoeololo.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Modifying Element.prototype for all globals

2015-06-18 Thread smaug

On 06/18/2015 03:37 PM, Frederik Braun wrote:

Hi,

I am planning to do a little analysis of FxOS Gaia to identify instances
of innerHTML assignments at runtime[1]. I am hoping this gives me a more
precise number about hot paths (in contrast to just looking at the
source code).


What kind of information would you like to get out of the analysis?
And before even spending too much time with innerHTML, are you sure the possible
performance issues are about it, and not about creating/reflowing layout 
objects for the new elements?
(The innerHTML implementation is quite well optimized in common cases.)


If you end up hacking C++ side of innerHTML, the relevant code lives in
http://hg.mozilla.org/mozilla-central/annotate/a3f280b6f8d5/dom/base/FragmentOrElement.cpp#l2769
(except for style and script elements, implemented in HTMLStyleElement.cpp and 
HTMLScriptElement.cpp, which have different innerHTML behavior.)






In an ideal world I would write a script along the lines of
`Object.defineProperty(Element.prototype, 'innerHTML', …)` and inject
this into every app, or at best run it somewhere so that every Element's
prototype chains back to mine.

I know that I can not modify/inherit prototypes across origins, so I am
wondering if there is something I could do with chrome privileges -
maybe patching shell.js
(https://dxr.mozilla.org/mozilla-central/source/b2g/chrome/content/shell.js),
as it is the main entrypoint from Gecko into Gaia.

Does this sound feasible? Are there any previous experiments that I
could refer to?


Well, there is always profiling.
https://developer.mozilla.org/en-US/docs/Mozilla/Performance/Profiling_with_the_Built-in_Profiler

But this all depends on what kind of data you want to get.



-Olli





Thanks!
Freddy


[1] I intend to run the full test suite, not in production or anything.



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Browser API: iframe.executeScript()

2015-06-16 Thread smaug

What is the context where the scripts would run? In the page or something more 
like a TabChildGlobal (the child side of a message manager)
but without chrome privileges?



On 06/16/2015 06:24 PM, Paul Rouget wrote:

In bug 1174733, I'm proposing a patch to implement the equivalent of
Google's webview.executeScript:

https://developer.chrome.com/apps/tags/webview#method-executeScript

This will be useful to any consumer of the Browser API to access and
manipulate the content.

For some context: the browser.html project needs access to the DOM to
build some sort of tab previews (not a screenshot, something based on
colors, headers and images from the page), and we don't feel like
adding more and more methods to the Browser API to collect all the
information we need. It's just easier to be able to inject a script
and tune the preview algorithm in the system app instead of changing
the API every time we need a new thing. It also doesn't sound like a
terrible thing to do as other vendors do a similar thing (Android's
executeScript, iOS's stringByEvaluatingJavaScriptFromString, and IE's
InvokeScript).

The API is pretty straightforward:


let foo = 42;
iframe.executeScript(`
new Promise((resolve, reject) => {
   setTimeout(() => resolve({foo: ${foo + 1}}), 2000);
})
`).then(rv => {
   console.log(rv);
}, error => {
   console.error(error);
});


Any reason to not do that?
Any security concerns?
Or is there a better way to do that (like a worker)?


-- Paul



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Use of 'auto'

2015-06-18 Thread smaug

On 06/02/2015 11:56 PM, Daniel Holbert wrote:

On 06/02/2015 12:58 PM, smaug wrote:

So, I'd like to understand why people think 'auto' is a good thing to use.
(bz mentioned it having some use inside bindings' codegenerator, and
sure, I can see that being rather
valid case.)


One common auto usage I've seen is for storing the result of a
static_cast.  In this scenario, it lets you avoid repeating yourself
and makes for more concise code.


It still hurts readability.
Whenever a variable is declared using auto as its type, it forces the reader to read 
the part after '='.
So, when reading code below some 'auto foo = ...', in order to check the 
type of foo again, one needs to
re-read the '= ...' part.



 I don't think there's much danger of

fragility in this scenario (unlike your refcounting example), nor is
there any need for a reviewer/code-skimmer to do research to find out
the type -- it's still right there in front of you. (it's just not
repeated twice)

For example:
   auto concretePtr = static_cast<ReallyLongTypeName*>(abstractPtr);

Nice  concise (particularly if the type name is namespaced or otherwise
really long).  Though it perhaps takes a little getting used to.

(I agree that mixing auto with smart pointers sounds like a recipe for
fragility  disaster.)

~Daniel




___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: xulrunner 31, swt 4.5, ZOOM

2015-07-03 Thread smaug

On 07/03/2015 10:27 AM, mihai.slavcov...@gmail.com wrote:

Hi,

I have an SWT application that uses a browser to display HTML pages. Latest SWT 
4.5 now supports XulRunner v31, but something has changed: zoom is not working 
anymore. It was working before with SWT 4.4 and XulRunner v10.

I have no experience with XulRunner or XUL. Did something change between 10 
and 31 regarding zoom?

Here is what I used for zooming:

var winWatcher = 
Components.classes["@mozilla.org/embedcomp/window-watcher;1"].getService(Components.interfaces.nsIWindowWatcher);
var enumerator = winWatcher.getWindowEnumerator();

var win = null;
while (enumerator.hasMoreElements()) {
 var checkWin = enumerator.getNext();
 if (checkWin.document.location == '@{currentLocation}') {
 win = checkWin;
 break;
 }
}
if (win != null) {
 var interfaceRequestor = 
win.QueryInterface(Components.interfaces.nsIInterfaceRequestor);
 var webNavigation = 
interfaceRequestor.getInterface(Components.interfaces.nsIWebNavigation);
 var docShell = 
webNavigation.QueryInterface(Components.interfaces.nsIDocShell);
 var docViewer = 
docShell.contentViewer.QueryInterface(Components.interfaces.nsIMarkupDocumentViewer);
 docViewer.fullZoom = @{zoomLevel};
}



Are you sure you're on v31?
https://bugzilla.mozilla.org/show_bug.cgi?id=1036694 merged 
nsIMarkupDocumentViewer to nsIContentViewer.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Busy indicator API

2015-07-05 Thread smaug

On 07/05/2015 06:11 PM, Anne van Kesteren wrote:

A while back there have been some requests from developers (seconded
by those working on GitHub) to have an API to indicate whether a site
is busy with one thing or another (e.g. networking).

They'd like to use this to avoid having to create their own UI. In
Firefox this could manifest itself by the spinner that replaces the
favicon when loading a tab.

Is there a reason we shouldn't expose a hook for this?





Sounds reasonable. Currently at least in Gecko one can emulate this with some 
dummy iframe and call
iframe.contentDocument.open(); to start the spinner, and 
iframe.contentDocument.close(); to stop it
(assuming of course no other loading is keeping the spinner running).

-Olli
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Mutations from Parser handled after DOMContentLoaded?

2015-07-06 Thread smaug

On 07/06/2015 10:33 PM, Zibi Braniecki wrote:

Hi all,

I have a question about MO behavior.

From what I understand, MutationObserver API is designed in a way that is 
supposed to guarantee two things that I need:

1) That if I have a MO set on the document during readyState=loading, then all 
subsequent elements injected by the Parser into the DOM will go through MO 
before layout
2) And that they will block DOMContentLoaded


I don't know what "have a MO set on the document during readyState=loading" 
actually means.
MutationObserver callback is called at the end of microtask, so end of 
outermost script execution or end of a task in general.
And MutationObserver has nothing to do with DOMContentLoaded.



The first behavior is crucial for client side localization so that we can 
translate the node that is being injected before any frame
creation/layout happens.

I'm still not sure if that's the case and I'm not even sure how to test if our 
implementation guarantees that.

But now, I have more doubts because it seems that we don't do 2).

My test works like this - https://pastebin.mozilla.org/8838694

I first start a MutationObserver inline and register it on document.head. 
Whenever a link with rel=localization is inserted I add a `ready` property to
it.

I don't see any link elements in your example




Then I run a deferred script (in my tests I use external scripts, but I inlined 
them for the testcase) which collects all links with
rel=localization from document.head and operates on their `ready` property 
which should be, according to my logic, always there.

When I test this, I get nondeterministic behavior with `ready` being set in ~80% 
of reloads and in 20% onAddedHeadElement is executed after the
deferred script.

Is that a bug? Because if it's not, it feels like it should be. Or am I wrong?


Oh, you want to ensure MutationObservers are called before some script is 
executed? That is indeed still a bug, Bug 789315.







zb.



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++

2015-07-07 Thread smaug

As someone who spends more than 50% of my working time doing reviews, I'm strongly 
against this proposal.
aFoo helps with readability - the reader knows immediately when the code is dealing 
with arguments.


-Olli


On 07/07/2015 06:12 AM, Jeff Gilbert wrote:

I propose that we stop recommending the universal use of an 'a' prefix for
arguments to functions in C and C++. If the prefix helps with
disambiguation, that's fine. However, use of this prefix should not be
prescribed in general.

`aFoo` does not provide any additional safety that I know of.[1] As a
superfluous prefix, it adds visual noise, reducing immediate readability of
all function declarations and subsequent usage of the variables within the
function definition.

Notable works or style guides [2] which do not recommend `aFoo`: [3]
* Google
* Linux Kernel
* Bjarne Stroustrup
* GCC
* LLVM
* Java Style (Java, non-C)
* PEP 0008 (Python, non-C)
* FreeBSD
* Unreal Engine
* Unity3D (largely C#)
* Spidermonkey
* Daala
* RR
* Rust
* Folly (from Facebook)
* C++ STL entrypoints
* IDL for web specs on W3C and WhatWG
* etc.

Notable works or style guides which *do* recommend `aFoo`:
* Mozilla (except for IDL, Java, and Python)
* ?

3rd-party projects in our tree which do not use `aFoo`:
* Cairo
* Skia
* ANGLE
* HarfBuzz
* ICU
* Chromium IPC
* everything under modules/ that isn't an nsFoo.c/cpp/h
* etc.?

3rd-party projects in our tree which *do* recommend `aFoo`:
* ?

As far as I can tell, the entire industry disagrees with us (as well as a
number of our own projects), which means we should have a good reason or
two for making our choice. No such reason is detailed in the style guide.

I propose we strike the `aFoo` recommendation from the Mozilla style guide.

-

[1]: Maybe it prevents accidental shadowing? No: Either this isn't allowed
by spec, or at least MSVC 2013 errors when compiling this.

[2]: I do not mean this as an endorsement of the listed works and guides,
but rather as illustration on how unusual our choice is.

[3]: I created an Etherpad into which people are welcome to gather other
works, projects, or style guides that I missed:
https://etherpad.mozilla.org/6FcHs9mJYQ



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: MozPromises are now in XPCOM

2015-08-19 Thread smaug

Hi bholley,


looks great, but a question

The example
mProducer->RequestFoo()
 ->Then(mThread, __func__,
[...] (ResolveType aVal) { ... },
[...] (RejectType aVal) { ... });
uses C++ lambdas. Do we have some static analysis or similar in place to ensure 
that lambdas don't refer to raw pointer variables in the enclosing scope
(especially raw pointer variables pointing to ref-counted objects)?
Or does MozPromise have similar setup to bug 1153295 or what?






-Olli


On 08/19/2015 06:17 AM, Bobby Holley wrote:

I gave a lightning talk at Whistler about MozPromise and a few other new
tools to facilitate asynchronous and parallel programming in Gecko. There
was significant interest, and so I spent some time over the past few weeks
untangling them from dom/media and hoisting them into xpcom/.

Bug 1188976 has now landed on mozilla-central, MozPromise (along with
TaskQueue, AbstractThread, SharedThreadPool, and StateMirroring) can now be
used everywhere in Gecko.

I also just published a blog post describing why MozPromises are great and
how they work: http://bholley.net/blog/2015/mozpromise.html

Feedback is welcome. These tools are intended to allow developers to easily
and safely run code on off-main-thread thread pools, which is something we
urgently need to do more of in Gecko. Go forth and write more parallel code!

bholley



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposed W3C Charters: Web Platform and Timed Media Working Groups

2015-08-20 Thread smaug

On 08/15/2015 10:24 AM, Jonas Sicking wrote:

On Sun, Aug 9, 2015 at 11:59 AM, L. David Baron dba...@dbaron.org wrote:

The W3C is proposing revised charters for:

   Web Platform Working Group:
   http://www.w3.org/2015/07/web-platform-wg.html
   https://lists.w3.org/Archives/Public/public-new-work/2015Jul/0020.html

...


The Web Platform Working Group ***replaces the HTML and WebApps
Groups***.


This seems like a terrible idea to me. The WebApps WG is very
functional and has had both a good discussion culture and a good track
record of creating functionality which has been adopted by browsers.

The HTML WG has been extremely dysfunctional. Both with a mailing list
which has attracted lots of noise and little useful discussion, and
has not managed to produce a lot of work which has affected what
browsers implement (most of the HTML5 stuff browsers implemented was
based off of Hixie's work in WHATWG).

Merging the two seems like a very bad idea. It seems very likely
that it will disrupt the work happening in WebApps right now.

I'm very much for trying to find better ways for the work currently
happening in the HTML WG. But let's do that without changing the
WebApps WG for now.

I would personally prefer to put forward a formal objection to having
a merged group at this time.

/ Jonas





Fully agree with this all.



-Olli

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship Notification API on Web Workers

2015-06-30 Thread smaug

yes, please!


On 06/30/2015 06:46 AM, nsm.nik...@gmail.com wrote:

Hello,

Target release: Firefox 41
Implementation and shipping bug: 
https://bugzilla.mozilla.org/buglist.cgi?quicksearch=916893
Specification: https://notifications.spec.whatwg.org/

Gecko already implements support for the Notification API on window behind the 
dom.webnotifications.enabled pref, and this has been enabled by default for at 
least a year. This is the intent to ship the same API on workers, guarded by 
the same pref, so it will be enabled by default.

The patches landed on central on July 29, 2015. These patches implement support 
for the Notification constructor on dedicated and shared workers. This is 
exposed via the Notification constructor.

The Service Worker parts of the Notification API are not shipping yet due to 
breaking some Gaia tests. That implementation is tracked in 
https://bugzilla.mozilla.org/show_bug.cgi?id=1114554

Potential for abuse?
This API allows workers to abuse the user. There are some safeguards in place.
1) Notification.requestPermission() which prompts the user to grant permission 
is only available on window. This means the website cannot secretly acquire 
permission. It is also clear to the user which origin is requesting the 
permission.
2) Each notification displays the origin it came from. The user can revoke 
permission using the standard user agent mechanisms (Page Info in Firefox).

Platforms: All platforms.

Support in other engines:
Blink - shipped - 
https://groups.google.com/a/chromium.org/forum/#!topic/blink-dev/4WNnq8BIydI
Webkit - I don't have a way to try this out, but from the Blink intent to ship, 
it seems it isn't supported.
Edge/Trident: not supported

Developer documentation: 
https://developer.mozilla.org/en-US/docs/Web/API/Notification/Notification, the 
doc has not been updated for worker support yet.





___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Use of 'auto'

2015-08-02 Thread smaug

A new type of error 'auto' seems to cause, now seen on m-i, is
auto foo = new SomeRefCountedFoo();

That hides the fact that foo is a raw pointer, when we should be using nsRefPtr.

So please, consider again when about to use auto. It usually doesn't make the 
code easier to read,
and it occasionally just leads to errors. In this case it clearly made the code 
harder to read so that
whoever reviewed that patch didn't catch the issue.

(So far the only safe case I've seen is using 'auto' on the left side when doing 
a *_cast on the right side.
 And personally I think even in that case auto doesn't help with readability, 
but I can live with use of auto there.)



-Olli


On 06/02/2015 10:58 PM, smaug wrote:

Hi all,


there was some discussion in #developers about use of 'auto' earlier today.
Some people seem to like it, and some, like me, don't.

The reasons why I try to avoid using it and usually ask to replace it with the 
actual type when I'm
reviewing a patch using it are:
- It makes the code harder to read
   * one needs to explicitly check what kind of type is assigned to the variable
 to see how the variable is supposed to be used. Very important for example
 when dealing with refcounted objects, and even more important when dealing 
with raw pointers.
- It makes the code possibly error prone if the type is later changed.
   * Say, you have a method nsRefPtr<Foo> Foo(); (I know, silly example, but 
you get the point)
 Now auto foo = Foo(); makes sure foo is kept alive.
 But then someone decides to change the return value to Foo*.
 Everything still compiles just fine, but use of foo becomes risky
 and may lead to UAF.



Perhaps my mind is too much on reviewer's side, and less on the code writer's.

So, I'd like to understand why people think 'auto' is a good thing to use.
(bz mentioned it having some use inside bindings' codegenerator, and sure, I 
can see that being rather
valid case.)





-Olli


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Use of 'auto'

2015-08-02 Thread smaug

On 08/02/2015 01:47 PM, Xidorn Quan wrote:

On Sun, Aug 2, 2015 at 7:57 PM, Kyle Huey m...@kylehuey.com wrote:

On Sun, Aug 2, 2015 at 2:56 AM, Hubert Figuière h...@mozilla.com wrote:

On 02/08/15 04:55 AM, smaug wrote:

A new type of error 'auto' seems to cause, now seen on m-i, is
auto foo = new SomeRefCountedFoo();

That hides that foo is a raw pointer but we should be using nsRefPtr.

So please, consider again when about to use auto. It usually doesn't
make the code easier to read,
and it occasionally just leads to errors. In this case it clearly made
the code harder to read so that
whoever reviewed that patch didn't catch the issue.


Shouldn't we, instead, ensure that SomeRefCountedFoo() returns a nsRefPtr?


How do you do that with a constructor?


Probably we should generally avoid using constructor directly for
those cases. Instead, use helper functions like MakeUnique() or
MakeAndAddRef(), which is much safer.


MakeAndAddRef would have the same problem as MakeUnique. Doesn't really tell 
what type is returned.
And when you're dealing with lifetime management issues, you really want to 
know what kind of type you're playing with.


I would just limit use of 'auto' to those cases which are certainly safe, given that 
given that
it helps with readability in rather rare cases (this is of course very 
subjective).
So, *_cast and certain forms of iteration, but only in cases when iteration 
is known to not call
any may-change-the-view-of-the-world methods - so no calling to JS, or flushing 
layout or dispatching dom events or...


-Olli
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Use of 'auto'

2015-08-02 Thread smaug

On 08/02/2015 02:34 PM, Hubert Figuière wrote:

On 02/08/15 07:17 AM, smaug wrote:

Probably we should generally avoid using constructor directly for
those cases. Instead, use helper functions like MakeUnique() or
MakeAndAddRef(), which is much safer.


MakeAndAddRef would have the same problem as MakeUnique. Doesn't really
tell what type is returned.


makeSomeRefCountedFoo(), newSomeRefCountedFoo() or
SomeRefCountedFoo::make() returning an nsRefPtr<SomeRefCountedFoo>. It
is a matter of having an enforced convention for naming them.


And when you're dealing with lifetime management issues, you really want
to know what kind of type you're playing with.


This is also part of why I'd suggest having an construction method that
will return a smart pointer - preventing the use of raw pointers. So
that there is no ambiguity in what we deal with and its ownership.



Sure,
static already_AddRefed<ClassFoo> ClassFoo::Create()
would make sense.
(or returning nsRefPtr<ClassFoo>?)
But that has nothing to do with auto.
One should still see in the calling code what the type is in order to
verify lifetime management is ok.





This is probably not something trivial in our codebase.

Hub



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: large memory allocations / resource consumption in crashtests?

2015-07-27 Thread smaug

On 07/27/2015 04:07 PM, Ehsan Akhgari wrote:

On 2015-07-27 5:35 AM, Karl Tomlinson wrote:

Is anything done between crashtests to clean up memory use?


There isn't, AFAIK.


Or can CC be triggered to run during or after a particular crashtest?


You can use SimpleTest.forceGC/CC() as needed.


Do crashtests go into B/F cache on completion, or can we know that
nsGlobalWindow::CleanUp() or FreeInnerObjects() will run on completion?


I'm pretty sure that pages are put into the bf cache if possible as usual.




And you can test that by using a pagehide event listener and checking the .persisted 
property.
If you explicitly want to prevent bfcache for a certain page, just add a dummy 
unload event listener.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement HTMLMediaElement.srcObject partially

2015-07-15 Thread smaug

On 07/15/2015 10:42 PM, Jan-Ivar Bruaroey wrote:

Hi,

We intend to un-prefix HTMLMediaElement.srcObject (it currently exists as 
HTMLMediaElement.mozSrcObject), even though it only supports a subset of the
types mandated in the spec. [1]


It is a bit unfortunate to expose the property without supporting what is in 
the spec atm, but I think it is good enough for now.


-Olli





This means it will support get/set of: MediaStream objects.

This means it will throw TypeError on set of: MediaSource objects, Blob 
objects, and File objects, for now.

The intent is still to support these other types eventually. [2]

The reason for doing this now is that this subset of functionality is believed 
to be stable, and is valuable to use-cases in WebRTC and MediaCapture
and Streams.

Bug: https://bugzil.la/1175523

Links:
[1] 
https://html.spec.whatwg.org/multipage/embedded-content.html#dom-media-srcobject
[2] 
https://html.spec.whatwg.org/multipage/embedded-content.html#media-provider-object

Platform coverage:
All.

Estimated or target release:
ASAP, Q3, 2015


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Busy indicator API

2015-07-13 Thread smaug

On 07/13/2015 01:50 PM, Richard Barnes wrote:

On Sun, Jul 5, 2015 at 5:11 PM, Anne van Kesteren ann...@annevk.nl wrote:


A while back there have been some requests from developers (seconded
by those working on GitHub) to have an API to indicate whether a site
is busy with one thing or another (e.g. networking).

They'd like to use this to avoid having to create their own UI. In
Firefox this could manifest itself by the spinner that replaces the
favicon when loading a tab.

Is there a reason we shouldn't expose a hook for this?



Obligatory: Will this be restricted to secure contexts?


It would be easier to answer that question if it were properly spec'ed somewhere
what "secure context" means ;)



But given that web pages can already achieve something like this using 
document.open()/close(), at least on Gecko,
perhaps exposing the API to certainly-not-secure-contexts wouldn't be too bad.

-Olli







--
https://annevankesteren.nl/





Re: Switch to Google C++ Style Wholesale (was Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++)

2015-07-14 Thread smaug

On 07/14/2015 08:11 PM, Martin Thomson wrote:

On Tue, Jul 14, 2015 at 10:06 AM, Gregory Szorc g...@mozilla.com wrote:

That being said, every other organization in the world is using the same
or similar tools and is faced with similar challenges. Lack of a
commit-skipping feature doesn't hinder other organizations from performing
major refactorings. So while I'd love to make the tools better, I don't
think waiting on the tools should be a blocker to mass reformatting the
tree.


This.  If blame is the only victim, and a temporary one, then that's a
pretty small price to pay.




Couple of work days per year for certain devs?
Perhaps it is a small price.

Also, if we just stick with the current coding style, large parts of Gecko
don't need to be refactored to the new style.



About using the Google coding style, there isn't any evidence that it would
make new contributors more productive, and it might make old contributors less
productive, at least for some time.



But whatever we change, if any - since the current coding style is rather sane 
for C++ -
consistency is what I care about most. It is a mystery to me why we've still
written new code not using the coding style we have had for ages. I guess that
is where we really need tools to enforce some style.



-Olli


Re: sendKeyEvent doesn't support event.key

2015-10-26 Thread smaug

On 10/26/2015 10:21 AM, Amit Zur wrote:

MDN says keyCode is deprecated and web developers should favor `key` instead. 
But sendKeyEvent doesn't support the `key` property on the event.
I found bug #1214993 but the solution there is a workaround for the home button 
for TV.

Can we expect this to be fixed any time soon?




You probably want to use 
http://mxr.mozilla.org/mozilla-central/source/dom/interfaces/base/nsITextInputProcessor.idl



Re: API request: MutationObserver with querySelector

2015-10-09 Thread smaug

On 10/09/2015 03:46 AM, zbranie...@mozilla.com wrote:

We're about to start working on another API for the next Firefox OS, this time 
for DOM Intl, that will operate on `data-intl-format`,
`data-intl-value` and `data-intl-options`.

It would be much easier for us to keep l10n and intl separately and 
independently, but in the current model we will have two MutationObservers
reporting everything that happens on document.body just to fish for elements 
with those attributes. Twice.

So we may have to introduce a single mutation observer that handles that for
both, which will be a bad design decision but will improve performance.

I reported it a month ago and so far have gotten no response. What's my next
step to get this into our platform?

zb.





Let's try to move this forward at the spec level (we can't really implement
anything before there is some specification for this).
I added a question to the spec bug.



-Olli


Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++

2015-07-07 Thread smaug

On 07/07/2015 10:55 PM, Jeff Gilbert wrote:

On Tue, Jul 7, 2015 at 12:36 PM, Honza Bambas hbam...@mozilla.com wrote:


On 7/7/2015 21:27, Jeff Gilbert wrote:


On Tue, Jul 7, 2015 at 4:54 AM, Honza Bambas hbam...@mozilla.com wrote:

  I'm strongly against removing the prefix.  I got used to this and it has
its meaning all the time I inspect code (even my own) and doing reviews.
Recognizing that a variable is an argument is very, very useful.  It's
important to have it and it's good we enforce it!

-hb-



Please expand on this.



Not sure how.  I simply find it useful since I was once forced to obey it
strictly in DOM code.  It simply has its meaning.  It helps to orient.  I
don't know what more you want to hear from me.

I would like to have reasons why 'we' feel it's necessary or helpful when
the rest of the industry (and nearly half our own company) appears to do
fine without it. If we deviate from widespread standards, we should have
reasons to back our deviation.

More acutely, my module does not currently use `aFoo`, and our (few)
contributors do not use or like it.  `aFoo` gets in the way for us.
Recently, there has been pressure to unify the module's style with the
rest of Gecko. The main complaint I have with Gecko style is `aFoo` being
required.

Vague desires for `aFoo` are not compelling. There needs to be solid
reasons. If there are no compelling reasons, the requirement should be
removed. We have deprecated style before, and we can do it again.



readability / easier to follow the dataflow are rather compelling reasons.
I selfishly try to keep the time I spend on reviewing a patch short, and aFoo
helps with that.
Though even more important is consistent coding style everywhere (per 
programming language).

-Olli


Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++

2015-07-07 Thread smaug

On 07/07/2015 11:45 PM, Milan Sreckovic wrote:


Removing the style guide for “prefix function arguments with a” will not 
preclude people from naming a variable aFoo.  At least the current style guide 
precludes people from naming non-function arguments that way, albeit indirectly.

I’m trying to understand the possible outcomes of this particular conversation:

a) Nothing happens.  We leave a prefix in the style guide, some code ignores 
it, some follows it.

until the tools (and poiru) are run and make the code follow Mozilla coding 
style.


b) We change the style guide to remove the a prefix
   1) We wholesale modify the code to remove the prefix, catching scenarios 
where we have a clash
   2) We don’t do a wholesale modification
  i) We get rid of a’s as we modify the code anyway
 ii) We get rid of a’s one file at a time as we see fit
iii) We get rid of a’s one function at a time
c) We change the style guide to prohibit the a prefix
   1) We wholesale modify the code to remove the prefix, catching scenarios 
where we have a clash
   2) We don’t do a wholesale modification
  i) We get rid of a’s as we modify the code anyway
 ii) We get rid of a’s one file at a time as we see fit
iii) We get rid of a’s one function at a time

I can’t imagine the mess of any option that includes “1” and wholesale code
modification, and if you remove those, the rest sort of start looking more or
less the same.

I find a’s useful, but I’ve spent enough time in different codebases that I 
don’t think those types of things are ever worth the level of energy we expend 
on them.  As long as we’re not adding _ in the variable names.  That’s just 
wrong. ;)

—
- Milan



On Jul 7, 2015, at 16:33 , Jeff Gilbert jgilb...@mozilla.com wrote:


...

I have found no other style guide that recommends `aFoo`. Why are we
different? Why do we accept reduced readability for all external
contributors? Why do so many other Mozilla projects not use this alleged
readability aid?






Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++

2015-07-08 Thread smaug

On 07/08/2015 04:05 AM, Milan Sreckovic wrote:


Jeff encouraged me to add more things to this thread, so I’m blaming him.  So, 
some random thoughts.

After getting paid to write code for 20+ years and then showing up at Mozilla, 
and seeing the a prefix, I thought “this is brilliant, how come we
didn’t think of doing that before?!”, as a reasonable balance between nothing 
and the insanity of the full Hungarian.

I find a prefix useful when I’m writing code and when I’m reading it.

I have no trouble reading the code that isn’t using this convention.  I don’t 
think I ran into a situation where only some of the arguments in the
function were using the prefix (and some were not), but I can imagine that 
being the only situation where I’d argue that it’s confusing.

In other words, as weird as it may sound, I find the prefix improving the 
readability, but the lack of it not hindering it.  And it makes no
difference to me when I’m reviewing code, which is a couple of orders of 
magnitude fewer times than for most people on this thread.

If I was writing a new file from scratch, I’d use this convention.  If I was in 
a file that wasn’t using it, it wouldn’t bother me.

I think it would be a bad idea to force this consistency on the whole codebase 
(e.g., either clear it out, or put it everywhere), as I don’t think
it would actually solve anything.  The “consistent is good” can be taken too 
far, and I think this would be taking it too far.

I honestly think the best thing to do here is nothing - remove it from the 
style guide if we don’t want to enforce it, but don’t stop me from using
it.

Removing it from the coding style, yet allowing it to be used would be the 
worst case. Whatever we do, better do it consistently.





Blame Jeff for the above.

— - Milan



On Jul 7, 2015, at 20:41 , Karl Tomlinson mozn...@karlt.net wrote:


Jeff Gilbert writes:


It can be a burden on the hundreds of devs who have to read and understand the 
code in order to write more code.


Some people find the prefix helps readability, because it makes extra 
information immediately available in the code being examined, while you are
indicating that this is a significant burden on readability.

Can you explain why the extra letter is a significant burden?

If the 'a' prefix is a burden then the 'm' prefix must be also, and so we should
be using this->member instead of mMember.


The opinions of a few over-harried reviewers should not hold undue sway over 
the many many devs writing code.


unless people want code to be reviewed. 






Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++

2015-07-08 Thread smaug

Do you actually have any data on what percentage of Gecko devs would prefer
not using aFoo?
I mean it makes no sense to change to foo, if most of the devs prefer aFoo.
Similarly I would stop objecting the change if majority of the devs say
yes, please change this coding style which Mozilla has had forever[1].
(it might mean me doing reviews a bit slower, which tends to lead to fewer
review requests, which might not be such a bad thing ;) )

Right now it feels like there are a couple of devs in favor of aFoo, a couple
of devs in favor of foo, and the rest haven't said anything.



-Olli


[1] Note, the coding style has been there for a long time, but not followed in 
all the modules for some reason.


On 07/07/2015 06:12 AM, Jeff Gilbert wrote:

I propose that we stop recommending the universal use of an 'a' prefix for
arguments to functions in C and C++. If the prefix helps with
disambiguation, that's fine. However, use of this prefix should not be
prescribed in general.

`aFoo` does not provide any additional safety that I know of.[1] As a
superfluous prefix, it adds visual noise, reducing immediate readability of
all function declarations and subsequent usage of the variables within the
function definition.

Notable works or style guides [2] which do not recommend `aFoo`: [3]
* Google
* Linux Kernel
* Bjarne Stroustrup
* GCC
* LLVM
* Java Style (Java, non-C)
* PEP 0008 (Python, non-C)
* FreeBSD
* Unreal Engine
* Unity3D (largely C#)
* Spidermonkey
* Daala
* RR
* Rust
* Folly (from Facebook)
* C++ STL entrypoints
* IDL for web specs on W3C and WhatWG
* etc.

Notable works or style guides which *do* recommend `aFoo`:
* Mozilla (except for IDL, Java, and Python)
* ?

3rd-party projects in our tree which do not use `aFoo`:
* Cairo
* Skia
* ANGLE
* HarfBuzz
* ICU
* Chromium IPC
* everything under modules/ that isn't an nsFoo.c/cpp/h
* etc.?

3rd-party projects in our tree which *do* recommend `aFoo`:
* ?

As far as I can tell, the entire industry disagrees with us (as well as a
number of our own projects), which means we should have a good reason or
two for making our choice. No such reason is detailed in the style guide.

I propose we strike the `aFoo` recommendation from the Mozilla style guide.

-

[1]: Maybe it prevents accidental shadowing? No: Either this isn't allowed
by spec, or at least MSVC 2013 errors when compiling this.

[2]: I do not mean this as an endorsement of the listed works and guides,
but rather as illustration on how unusual our choice is.

[3]: I created an Etherpad into which people are welcome to gather other
works, projects, or style guides that I missed:
https://etherpad.mozilla.org/6FcHs9mJYQ





Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++

2015-07-08 Thread smaug

On 07/07/2015 11:34 PM, Jeff Gilbert wrote:

Outvars are good candidates for having markings in the variable name.
`aFoo` for all arguments is a poor solution for this, though.

On Tue, Jul 7, 2015 at 1:22 PM, smaug opet...@mozilla.com wrote:


On 07/07/2015 11:18 PM, Jeff Gilbert wrote:


On Tue, Jul 7, 2015 at 1:03 PM, smaug opet...@mozilla.com wrote:

  As someone who spends more than 50% of working time doing reviews I'm
strongly against this proposal.
aFoo helps with readability - reader knows immediately when the code is
dealing with arguments.



When and why is this useful to know?




The most common case in Gecko is knowing that one is assigning a value to an
outparam.





Another example where aFoo tends to be rather useful is lifetime management.
If I see aFoo being used somewhere in a method after some unsafe method call
(layout flush, any script callback handling, event dispatch, observer service 
notification etc.),
I know that I need to check that the _caller_ follows COM rules and keeps the aFoo
object alive during the method call.
With non-aFoo variables I know the lifetime is controlled within the method.






Re: Alternative to Bonsai?

2015-09-15 Thread smaug

On 09/15/2015 06:53 PM, Boris Zbarsky wrote:

On 9/15/15 11:11 AM, Ben Hearsum wrote:

I'm pretty sure https://github.com/mozilla/gecko-dev has full history.


Though note that it doesn't have working blame for a lot of files in our source 
tree (and especially the ones you'd _want_ to get blame for, in my
experience), so it's of pretty limited use if you're trying to do the sorts of 
things you used to be able to do with bonsai.

I believe gps is working on standing up a web front end for the CVS repo blame 
to replace bonsai...


I guess that is unofficially http://52.25.115.98/viewvc/main/




-Boris




Re: Proposed W3C Charters: Web Platform and Timed Media Working Groups

2015-09-11 Thread smaug

On 09/11/2015 04:53 AM, L. David Baron wrote:

On Tuesday 2015-09-08 17:33 -0700, Tantek Çelik wrote:

Follow-up on this, since we now have two days remaining to respond to these
proposed charters.

If you still have strong opinions about the proposed Web Platform and Timed
Media Working Groups charters, please reply within 24 hours so we have the
opportunity to integrate your opinions into Mozilla's response to these
charters.


Here are the comments I have so far (Web Platform charter first,
then timed media).

The deadline for comments is in about 2 hours.  I'll submit these
tentatively, but can revise if I get feedback quickly.  (Sorry for
not gathering them sooner.)

-David

=

We are very concerned that the merger of HTML work into the functional
WebApps group might harm the ability of the work happening in WebApps to
continue to make progress as well as it currently does.  While a number
of people within Mozilla think we should formally object to this merger
because of the risk to work within WebApps, I am not making this a
formal objection.  However, I think the proper functioning of this group
needs to be carefully monitored, and the consortium needs to be prepared
to make changes quickly if problems occur.  And I think it would be
helpful if the HTML and WebApps mailing lists are *not* merged.



This sounds good to me.
After chatting with MikeSmith and ArtB I'm not so worried about the merge 
anymore.
(Apparently merge is a bit too strong a word here; it is more like taking the
specification to the WebApps WG while trying not to take the rest of the
baggage from the HTML WG.)


-Olli




A charter that is working on many documents that are primarily developed
at the WHATWG should explicitly mention the WHATWG.  It should explain
how the relationship works, including satisfactorily explaining how
W3C's work on specifications that are rapidly evolving at the WHATWG
will not harm interoperability (presuming that the W3C work isn't just
completely ignored).

In particular, this concerns the following items of chartered work:
   * Quota Management API
   * Web Storage (2nd Edition)
   * DOM4
   * HTML
   * HTML Canvas 2D Context
   * Web Sockets API
   * XHR Level 1
   * Fetching resources
   * Streams API
   * URL
   * Web Workers
and the following items in the specification maintenance section:
   * CORS
   * DOM specifications
   * HTML 5.0
   * Progress Events
   * Server-sent Events
   * Web Storage
   * Web Messaging

One possible approach to this problem would be to duplicate the
technical work happening elsewhere on fewer or none of these
specifications.  However, given that I don't expect that to happen, the
charter still needs to explain the relationship between the technical
work happening at the WHATWG and the technical work (if any) happening
at the W3C.


The group should not be chartered to modularize the entire HTML
specification.  While specific documents that have value in being
separated, active editorship, and implementation interest are worth
separating, chartering a group to do full modularization of the HTML
specification feels both like busywork and like chartering work that is
too speculative and not properly incubated.  It also seems like it will
be harmful to interoperability since it proposes to modularize a
specification whose primary source is maintained elsewhere, at the
WHATWG.


The charter should not include work on HTML Imports.  We don't plan to
implement it for the reasons described in
https://hacks.mozilla.org/2014/12/mozilla-and-web-components/
and believe that it will no longer be needed when JavaScript modules are
available.


The inclusion of "Robust Anchoring API" in the charter is suspicious
given that we haven't heard of it before.  It should probably be in an
incubation process before being a chartered work item.


We also don't think the working group should be chartered to work
on any items related to "Widgets"; this technology is no longer used.



I'm still considering between two different endings:

OPTION 1:

Note that while this response is not a formal objection, many of these
issues are serious concerns and we hope they will be properly
considered.

OPTION 2:

The only part of this response that constitutes a formal objection is
having a reasonable explanation of the relationship between the working
group and the work happening at the WHATWG (rather than ignoring the
existence of the WHATWG).  However, many of the other issues raised are
serious concerns and we hope they will be properly considered.

=

One of the major problems in reaching interoperability for media
standards has been patent licensing of lower-level standards covering
many lower-level media technologies.  The W3C's Patent Policy only helps
with technology that the W3C develops, and not technology that it
references.  Given that, this group's charter should explicitly prefer
referencing technology that can be implemented and used without paying
royalties and 
