Proposal: New BMO Toolkit component for Notifications and Alerts

2014-05-18 Thread Matthew N.

Hello,

I think the various notification features outlined below are big enough 
to deserve their own Toolkit component on bugzilla.mozilla.org. The 
scope of the component would be changes to:
(i) PopupNotifications.jsm – Page-related popup (aka. doorhanger) 
notifications
(ii) notification.xml - Notification Bars (usually application 
notifications)
(iii) toolkit/components/alerts/ – Alert service including fallback XUL 
UI when we don't integrate with a more native service (e.g. OS X 
Notification Center).


(i) and (ii) are currently filed in either Firefox::General or 
Toolkit::General while bugs for (iii) are currently filed in 
Toolkit::XUL Widgets.


I propose the component be named "Notifications and Alerts" in the Toolkit 
product, with the description "Popup/doorhanger notifications, notification 
bars, and alerts".


Thoughts?

Matthew N.
:MattN
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


OMTC on Windows

2014-05-18 Thread Bas Schouten
Hey all,

After quite a lot of waiting, we've switched on OMTC on Windows by default today 
(bug 899785). This is a great step towards moving all our platforms onto OMTC 
(only Linux is left now), and it will allow us to remove a lot of code that we 
currently duplicate. Furthermore, it puts us on track for enabling other desktop 
features like APZ, off-main-thread animations and other improvements.

Having said that, we realize that what we've currently landed and turned on is 
not completely bug free. There are several bugs still open (some more serious 
than others) which we will be addressing in the coming weeks, hopefully before 
the merge to Aurora. The main reason we've switched it on now is that we want 
to get as much data as possible from the Nightly channel and our Nightly user 
base before the Aurora merge, as well as to prevent any new regressions from 
creeping in while we fix the remaining problems. This was extensively discussed 
both internally in the graphics team and externally with other people, and we 
believe we're at a point now where things are sufficiently stabilized for our 
Nightly audience. OMTC is enabled and disabled with a single pref, so if 
unforeseen, serious consequences occur we can disable it quickly at any stage. 
We will inevitably find new bugs in the coming weeks; please link any bugs you 
happen to come across to bug 899785. If anything seems very serious, please let 
us know and we'll attempt to come up with a short-term solution rather than 
disabling OMTC and reducing the amount of feedback we get.

There are also some important notes on performance, which we expect to be 
reported by our automated systems:

- Bug 1000640 is about WebGL. Currently OMTC regresses WebGL performance 
considerably; patches to fix this are underway and it should be fixed in the 
very short term.

- Several of the Talos test suite numbers will change considerably (especially 
with Direct2D enabled). Tscroll, for example, will improve by ~25%, but TART 
will regress by ~20%, and several other suites will regress as well. We've 
investigated this extensively and we believe the majority of these regressions 
are due to the nature of OMTC and the fact that we have to do more work. We see 
no value in holding off OMTC because of these regressions, as we'll have to go 
there anyway. Once the last correctness and stability problems are all solved, 
we will go back to trying to find ways to win back some of the performance 
regressions. We're also planning to move to a system more like tiling on 
desktop, which will change the performance characteristics significantly again, 
so we don't want to sink too much time into optimizing the current situation.

- Memory numbers will increase somewhat. This is unavoidable: off-main-thread 
compositing requires several extra steps (like double-buffering) which 
inherently use more memory.

- On a brighter note: async video is also enabled by these patches. This means 
that when the main thread is busy churning JavaScript, instead of stuttering, 
your video should now happily continue playing!

- There are also some indications of a subjective improvement in scrolling 
performance.


If you have any questions, please feel free to reach out to me or other 
members of the graphics team!


Bas
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Gecko style: Formatting function return type and specifiers

2014-05-18 Thread Birunthan Mohanathas
For top-level function definitions, the recommended style is:

template<typename T>
static inline T
Foo()
{
  // ...
}

However, for function declarations and inline member functions, there
does not seem to be a definitive style. Some use:

int Foo();

class Bar
{
  virtual int Baz()
  {
    // ...
  }
};

... and others use:

int
Foo();

class Bar
{
  virtual int
  Baz()
  {
    // ...
  }
};

Which one should be preferred?
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Gecko style: Line-continuation backslash in macros

2014-05-18 Thread Birunthan Mohanathas
(Note: Use a monospace font to view this message or use
https://gist.github.com/anonymous/5695db5c115f07ac5118.)

The placement of the line-continuation backslash in macros is wildly
inconsistent. Some put the backslash directly after the line:

#define NS_IMPL_CYCLE_COLLECTION_CAN_SKIP_IN_CC_BEGIN(_class) \
  NS_IMETHODIMP_(bool) \
  NS_CYCLE_COLLECTION_CLASSNAME(_class)::CanSkipInCCReal(void *p) \
  { \
    _class *tmp = DowncastCCParticipant<_class>(p);


Others align the backslashes with that of the longest line:

#define NS_IMPL_CYCLE_COLLECTION_CAN_SKIP_IN_CC_BEGIN(_class)     \
  NS_IMETHODIMP_(bool)                                             \
  NS_CYCLE_COLLECTION_CLASSNAME(_class)::CanSkipInCCReal(void *p)  \
  {                                                                \
    _class *tmp = DowncastCCParticipant<_class>(p);


And the rest put the backslashes at the 80th column:

#define NS_IMPL_CYCLE_COLLECTION_CAN_SKIP_IN_CC_BEGIN(_class)                 \
  NS_IMETHODIMP_(bool)                                                        \
  NS_CYCLE_COLLECTION_CLASSNAME(_class)::CanSkipInCCReal(void *p)             \
  {                                                                           \
    _class *tmp = DowncastCCParticipant<_class>(p);


Of these options, I personally dislike the second style as it will
result in unnecessary churn whenever the longest line changes. Is
there a preferred style?
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Gecko style: Braces with enums and unions

2014-05-18 Thread Birunthan Mohanathas
While working on bug 995730, I bumped into a few things that the Gecko
style guide leaves unspecified. I have described one such case below.
I'll create separate threads for the other things in order to maximize
the number of bikeshedding threads (and for readability).

For classes and structs, braces are placed like so:

class Foo
{
  // ...
};

How should they be placed for enums and unions? What about with
anonymous enums and unions?
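
For concreteness, simply mirroring the class/struct convention above would give 
something like the following (this is only an illustration of one option, not 
an established rule):

// Illustration only - mirroring the class/struct brace placement above.
enum Foo
{
  eBar,
  eBaz
};

union Pun
{
  uint32_t mWord;
  uint8_t mBytes[4];
};

// An anonymous enum would presumably keep the same placement:
enum
{
  kDefaultLength = 16
};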
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: OMTC on Windows

2014-05-18 Thread Armen Zambrano G.
What kind of bugs could we expect to see?
Any place you would like us to put focus on testing?

Thanks for all the hard work to get this in.

cheers,
Armen

On 2014-05-18, 3:16 AM, Bas Schouten wrote:
 Hey all,
 
 After quite a lot of waiting we've switched on OMTC on Windows by default 
 today (bug 899785). This is a great move towards moving all our platforms 
 onto OMTC (only linux is left now), and will allow us to remove a lot of code 
 that we've currently been duplicating. Furthermore it puts us on track for 
 enabling other features on desktop like APZ, off main thread animations and 
 other improvements.
 
 Having said that we realize that what we've currently landed and turned on is 
 not completely bug free. There's several bugs still open (some more serious 
 than others) which we will be addressing in the coming weeks, hopefully 
 before the merge to Aurora. The main reason we've switched it on now is that 
 we want to get as much data as possible from the nightly channel and our 
 nightly user base before the aurora merge, as well as wanting to prevent any 
 new regressions from creeping in while we fix the remaining problems. This 
 was extensively discussed both internally in the graphics team and externally 
 with other people and we believe we're at a point now where things are 
 sufficiently stabilized for our nightly audience. OMTC is enabled and 
 disabled with a single pref so if unforeseen, serious consequences occur we 
 can disable it quickly at any stage. We will inevitably find new bugs in the 
 coming weeks, please link any bugs you happen to come across to bug 899785, 
 if anything seems very serious, please let us know, we'll attempt to come up with a solution on the
short-term rather than disabling OMTC and reducing the amount of feedback we 
get.
 
 There's also some important notes to make on performance, which we expect to 
 be reported by our automated systems:
 
 - Bug 1000640 is about WebGL. Currently OMTC regresses WebGL performance 
 considerably, patches to fix this are underway and this should be fixed on 
 the very short term.
 
 - Several of the Talos test suite numbers will change considerably 
 (especially with Direct2D enabled), this means Tscroll for example will 
 improve by ~25%, but tart will regress by ~20%, and several other suites will 
 regress as well. We've investigated this extensively and we believe the 
 majority of these regressions are due to the nature of OMTC and the fact that 
 we have to do more work. We see no value in holding off OMTC because of these 
 regressions as we'll have to go there anyway. Once the last correctness and 
 stability problems are all solved we will go back to trying to find ways to 
 get back some of the performance regressions. We're also planning to move to 
 a system more like tiling in desktop, which will change the performance 
 characteristics significantly again, so we don't want to sink too much time 
 into optimizing the current situation.
 
 - Memory numbers will increase somewhat, this is unavoidable, there's several 
 steps which have to be taken when doing off main thread compositing (like 
 double-buffering), which inherently use more memory.
 
 - On a brighter note: Async video is also enabled by these patches. This 
 means that when the main thread is busy churning JavaScript, instead of 
 stuttering your video should now happily continue playing!
 
 - Also there's some indications that there's a subjective increase in 
 scrolling performance as well.
 
 
 If you have any questions please feel free to reach out to myself or other 
 members of the graphics team!
 
 
 Bas
 


-- 
Zambrano Gasparnian, Armen (armenzg)
Mozilla Senior Release Engineer
https://mozillians.org/en-US/u/armenzg/
http://armenzg.blogspot.ca
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: OMTC on Windows

2014-05-18 Thread Gavin Sharp
 but tart will regress by ~20%, and several other suites will regress as well.
 We've investigated this extensively and we believe the majority of these
 regressions are due to the nature of OMTC and the fact that we have to do
 more work.

Where can I read more about the TART investigations? I'd like to
understand why it is seen as inevitable, and get some of the details
of the regression. OMTC is important, and I'm excited to see it land
on Windows, but the Firefox and Performance teams have just come off a
months-long effort to make significant wins in TART, and the thought
of taking a 20% regression (huge compared to some of the improvements
we fought for) is pretty disheartening.

Gavin

On Sun, May 18, 2014 at 12:16 AM, Bas Schouten bschou...@mozilla.com wrote:
 Hey all,

 After quite a lot of waiting we've switched on OMTC on Windows by default 
 today (bug 899785). This is a great move towards moving all our platforms 
 onto OMTC (only linux is left now), and will allow us to remove a lot of code 
 that we've currently been duplicating. Furthermore it puts us on track for 
 enabling other features on desktop like APZ, off main thread animations and 
 other improvements.

 Having said that we realize that what we've currently landed and turned on is 
 not completely bug free. There's several bugs still open (some more serious 
 than others) which we will be addressing in the coming weeks, hopefully 
 before the merge to Aurora. The main reason we've switched it on now is that 
 we want to get as much data as possible from the nightly channel and our 
 nightly user base before the aurora merge, as well as wanting to prevent any 
 new regressions from creeping in while we fix the remaining problems. This 
 was extensively discussed both internally in the graphics team and externally 
 with other people and we believe we're at a point now where things are 
 sufficiently stabilized for our nightly audience. OMTC is enabled and 
 disabled with a single pref so if unforeseen, serious consequences occur we 
 can disable it quickly at any stage. We will inevitably find new bugs in the 
 coming weeks, please link any bugs you happen to come across to bug 899785, 
 if anything seems very serious, please let us know, we'll attempt to come up with a
 solution on the short-term rather than disabling OMTC and reducing the amount 
 of feedback we get.

 There's also some important notes to make on performance, which we expect to 
 be reported by our automated systems:

 - Bug 1000640 is about WebGL. Currently OMTC regresses WebGL performance 
 considerably, patches to fix this are underway and this should be fixed on 
 the very short term.

 - Several of the Talos test suite numbers will change considerably 
 (especially with Direct2D enabled), this means Tscroll for example will 
 improve by ~25%, but tart will regress by ~20%, and several other suites will 
 regress as well. We've investigated this extensively and we believe the 
 majority of these regressions are due to the nature of OMTC and the fact that 
 we have to do more work. We see no value in holding off OMTC because of these 
 regressions as we'll have to go there anyway. Once the last correctness and 
 stability problems are all solved we will go back to trying to find ways to 
 get back some of the performance regressions. We're also planning to move to 
 a system more like tiling in desktop, which will change the performance 
 characteristics significantly again, so we don't want to sink too much time 
 into optimizing the current situation.

 - Memory numbers will increase somewhat, this is unavoidable, there's several 
 steps which have to be taken when doing off main thread compositing (like 
 double-buffering), which inherently use more memory.

 - On a brighter note: Async video is also enabled by these patches. This 
 means that when the main thread is busy churning JavaScript, instead of 
 stuttering your video should now happily continue playing!

 - Also there's some indications that there's a subjective increase in 
 scrolling performance as well.


 If you have any questions please feel free to reach out to myself or other 
 members of the graphics team!


 Bas
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: OMTC on Windows

2014-05-18 Thread Chris Peterson
That's awesome news, Bas! OMTC on Windows has been one of the major 
dependencies for e10s.


AFAIU, Nightly users on Windows should now be able to test per-window 
e10s without tweaking any prefs or restarting the browser. To open an 
e10s window, open the File (or Hamburger) menu, then choose the 
"New e10s Window" menu item.


(Nightly users on OS X have been able to test per-window e10s for a 
couple months now.)



chris


On 5/18/14, 12:16 AM, Bas Schouten wrote:

Hey all,

After quite a lot of waiting we've switched on OMTC on Windows by default today 
(bug 899785). This is a great move towards moving all our platforms onto OMTC 
(only linux is left now), and will allow us to remove a lot of code that we've 
currently been duplicating. Furthermore it puts us on track for enabling other 
features on desktop like APZ, off main thread animations and other improvements.

Having said that we realize that what we've currently landed and turned on is 
not completely bug free. There's several bugs still open (some more serious 
than others) which we will be addressing in the coming weeks, hopefully before 
the merge to Aurora. The main reason we've switched it on now is that we want 
to get as much data as possible from the nightly channel and our nightly user 
base before the aurora merge, as well as wanting to prevent any new regressions 
from creeping in while we fix the remaining problems. This was extensively 
discussed both internally in the graphics team and externally with other people 
and we believe we're at a point now where things are sufficiently stabilized 
for our nightly audience. OMTC is enabled and disabled with a single pref so if 
unforeseen, serious consequences occur we can disable it quickly at any stage. 
We will inevitably find new bugs in the coming weeks, please link any bugs you 
happen to come across to bug 899785, if anything seems very serious, please let 
us know, we'll attempt to come up with a solution on the short-term rather than 
disabling OMTC and reducing the amount of feedback we get.


There's also some important notes to make on performance, which we expect to be 
reported by our automated systems:

- Bug 1000640 is about WebGL. Currently OMTC regresses WebGL performance 
considerably, patches to fix this are underway and this should be fixed on the 
very short term.

- Several of the Talos test suite numbers will change considerably (especially 
with Direct2D enabled), this means Tscroll for example will improve by ~25%, 
but tart will regress by ~20%, and several other suites will regress as well. 
We've investigated this extensively and we believe the majority of these 
regressions are due to the nature of OMTC and the fact that we have to do more 
work. We see no value in holding off OMTC because of these regressions as we'll 
have to go there anyway. Once the last correctness and stability problems are 
all solved we will go back to trying to find ways to get back some of the 
performance regressions. We're also planning to move to a system more like 
tiling in desktop, which will change the performance characteristics 
significantly again, so we don't want to sink too much time into optimizing the 
current situation.

- Memory numbers will increase somewhat, this is unavoidable, there's several 
steps which have to be taken when doing off main thread compositing (like 
double-buffering), which inherently use more memory.

- On a brighter note: Async video is also enabled by these patches. This means 
that when the main thread is busy churning JavaScript, instead of stuttering 
your video should now happily continue playing!

- Also there's some indications that there's a subjective increase in scrolling 
performance as well.


If you have any questions please feel free to reach out to myself or other 
members of the graphics team!


Bas



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: OMTC on Windows

2014-05-18 Thread Bas Schouten
Hi Gavin,

There have been several e-mails on different lists, and some communication on 
some bugs. Sadly, the story is not available anywhere in a condensed form at 
this point, but I will try to highlight a couple of core points; some of these 
will be updated further as the investigation continues. The official bug is 
bug 946567, but the numbers and the discussion there are far outdated (there's 
no 400% regression ;)):

- What OMTC does to TART scores differs wildly per machine: on some machines we 
saw up to 10% improvements, on others up to 20% regressions. There also seems 
to be somewhat more of a regression on Win7 than there is on Win8. What the 
average is for our users is very hard to say; frankly, I have no idea.
- One core cause of the regression is that we're now dealing with two D3D 
devices when using Direct2D, since we're doing D2D drawing on one thread and 
D3D11 composition on the other. This means we have DXGI locking overhead to 
synchronize the two. This is unavoidable.
- Another cause is that we now have two surfaces in order to do double 
buffering, which means we need to initialize more resources when new layers 
come into play. This, again, is unavoidable.
- Yet another cause is that for some tests we composite 'ASAP' to get 
interesting numbers, but this causes some contention scenarios which are less 
likely to occur in real-life usage. The double buffer might copy the area 
validated in the last frame from the front buffer to the back buffer in order 
to avoid having to redraw much more; if the compositor is compositing all the 
time, this can block the main thread's rasterization. I have some ideas on how 
to improve this, but I don't know how much they'll help TART. In any case, some 
cost here will be unavoidable as a natural additional consequence of double 
buffering.
- The TART numbers story is complicated; sometimes it's hard to know exactly 
what they do and don't measure (which might be different with and without 
OMTC) and how that affects practical performance. I've been told this by Avi 
and it matches my practical experience with the numbers. I don't know the exact 
reasons, and Avi is probably a better person to talk about this than I am :-).

These are the core reasons that we were able to identify from profiling. Other 
than that, the things I said in my previous e-mail still apply. We believe 
we're offering significant UX improvements with async video and are enabling 
more significant improvements in the future. Once we've fixed the obvious 
problems we will continue to see if there's something that can be done, either 
through tiling or through other improvements; particularly for the last point I 
mentioned, there might be some not-too-complex things we can do to offer some 
small improvement.

If we want to have a more detailed discussion we should probably pick a list to 
have this on and try not to spam people too much :-).

Bas

- Original Message -
From: Gavin Sharp ga...@gavinsharp.com
To: Bas Schouten bschou...@mozilla.com
Cc: dev-tree-management dev-tree-managem...@lists.mozilla.org, 
dev-tech-...@lists.mozilla.org, release-drivers 
release-driv...@mozilla.org, mozilla.dev.platform group 
dev-platform@lists.mozilla.org
Sent: Sunday, May 18, 2014 6:23:58 PM
Subject: Re: OMTC on Windows

 but tart will regress by ~20%, and several other suites will regress as well.
 We've investigated this extensively and we believe the majority of these
 regressions are due to the nature of OMTC and the fact that we have to do
 more work.

Where can I read more about the TART investigations? I'd like to
understand why it is seen as inevitable, and get some of the details
of the regression. OMTC is important, and I'm excited to see it land
on Windows, but the Firefox and Performance teams have just come off a
months-long effort to make significant wins in TART, and the thought
of taking a 20% regression (huge compared to some of the improvements
we fought for) is pretty disheartening.

Gavin

On Sun, May 18, 2014 at 12:16 AM, Bas Schouten bschou...@mozilla.com wrote:
 Hey all,

 After quite a lot of waiting we've switched on OMTC on Windows by default 
 today (bug 899785). This is a great move towards moving all our platforms 
 onto OMTC (only linux is left now), and will allow us to remove a lot of code 
 that we've currently been duplicating. Furthermore it puts us on track for 
 enabling other features on desktop like APZ, off main thread animations and 
 other improvements.

 Having said that we realize that what we've currently landed and turned on is 
 not completely bug free. There's several bugs still open (some more serious 
 than others) which we will be addressing in the coming weeks, hopefully 
 before the merge to Aurora. The main reason we've switched it on now is that 
 we want to get as much data as possible from the nightly channel and our 
 nightly user base before the aurora merge, as well as wanting to prevent any 
 new regressions from 

Re: OMTC on Windows

2014-05-18 Thread avihal
Re TART regressions and Gavin's concerns - as always, we should not trust the 
numbers blindly.

The first thing we need is probably to take a few Windows machines with 
different performance characteristics and compare tab animation perf on those 
machines, especially in the cases where TART shows a regression (TART measures 
10 cases of animation - like opening a new tab, closing a tab, in several DPIs, 
etc.), and preferably to listen to subjective assessments from more than one 
person.

We should also remember that tests tend to be more reliable when the stuff they 
measure is in their comfort zone. The more specialized the test is, the more it 
expects the test subject to behave within tighter constraints; and vice versa - 
the higher level the test is, the more it can treat its subject like a black 
box and the less it cares about internal details.

TART happens to be quite specialized, and OMTC is a major shift in graphics 
implementation. Even if TART is already running and providing useful results 
with OMTC on OS X, it could still be out of its comfort zone with OMTC on 
Windows.

This is true of all tests: the more specialized a test is, the more it needs to 
be kept aligned with its test subject, and the less reliable its results can be 
when it isn't.

So, we should take the regression numbers with a grain of salt, first try to 
make sure that the results are still good in practice, and if they are, see 
what we can do about it, etc.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: OMTC on Windows

2014-05-18 Thread Bas Schouten
Hi Armen,

You -could- be seeing all kinds of bugs, but the most likely issues I'd expect 
are ones that occur while window shapes and such are updating (i.e. resizing, 
pop-up windows, the awesomebar, etc.), particularly since those types of 
short-lived compositors are not typical on mobile devices, where we've been 
using OMTC the longest, and their behavior is very platform-specific.

Thanks!
Bas

- Original Message -
From: Armen Zambrano G. arme...@mozilla.com
To: dev-platform@lists.mozilla.org
Sent: Sunday, May 18, 2014 5:50:50 PM
Subject: Re: OMTC on Windows

What kind of bugs could we expect seeing?
Any place you would like us to put focus on testing?

Thanks for all the hard work to get this in.

cheers,
Armen

On 2014-05-18, 3:16 AM, Bas Schouten wrote:
 Hey all,
 
 After quite a lot of waiting we've switched on OMTC on Windows by default 
 today (bug 899785). This is a great move towards moving all our platforms 
 onto OMTC (only linux is left now), and will allow us to remove a lot of code 
 that we've currently been duplicating. Furthermore it puts us on track for 
 enabling other features on desktop like APZ, off main thread animations and 
 other improvements.
 
 Having said that we realize that what we've currently landed and turned on is 
 not completely bug free. There's several bugs still open (some more serious 
 than others) which we will be addressing in the coming weeks, hopefully 
 before the merge to Aurora. The main reason we've switched it on now is that 
 we want to get as much data as possible from the nightly channel and our 
 nightly user base before the aurora merge, as well as wanting to prevent any 
 new regressions from creeping in while we fix the remaining problems. This 
 was extensively discussed both internally in the graphics team and externally 
 with other people and we believe we're at a point now where things are 
 sufficiently stabilized for our nightly audience. OMTC is enabled and 
 disabled with a single pref so if unforeseen, serious consequences occur we 
 can disable it quickly at any stage. We will inevitably find new bugs in the 
 coming weeks, please link any bugs you happen to come across to bug 899785, 
 if anything seems very serious, please let us know, we'll attempt to come up with a solution on the 
short-term rather than disabling OMTC and reducing the amount of feedback we 
get.
 
 There's also some important notes to make on performance, which we expect to 
 be reported by our automated systems:
 
 - Bug 1000640 is about WebGL. Currently OMTC regresses WebGL performance 
 considerably, patches to fix this are underway and this should be fixed on 
 the very short term.
 
 - Several of the Talos test suite numbers will change considerably 
 (especially with Direct2D enabled), this means Tscroll for example will 
 improve by ~25%, but tart will regress by ~20%, and several other suites will 
 regress as well. We've investigated this extensively and we believe the 
 majority of these regressions are due to the nature of OMTC and the fact that 
 we have to do more work. We see no value in holding off OMTC because of these 
 regressions as we'll have to go there anyway. Once the last correctness and 
 stability problems are all solved we will go back to trying to find ways to 
 get back some of the performance regressions. We're also planning to move to 
 a system more like tiling in desktop, which will change the performance 
 characteristics significantly again, so we don't want to sink too much time 
 into optimizing the current situation.
 
 - Memory numbers will increase somewhat, this is unavoidable, there's several 
 steps which have to be taken when doing off main thread compositing (like 
 double-buffering), which inherently use more memory.
 
 - On a brighter note: Async video is also enabled by these patches. This 
 means that when the main thread is busy churning JavaScript, instead of 
 stuttering your video should now happily continue playing!
 
 - Also there's some indications that there's a subjective increase in 
 scrolling performance as well.
 
 
 If you have any questions please feel free to reach out to myself or other 
 members of the graphics team!
 
 
 Bas
 


-- 
Zambrano Gasparnian, Armen (armenzg)
Mozilla Senior Release Engineer
https://mozillians.org/en-US/u/armenzg/
http://armenzg.blogspot.ca
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-18 Thread Rik Cabanier
FYI this attribute landed in WebKit today:
http://trac.webkit.org/changeset/169017


On Thu, May 15, 2014 at 1:26 AM, Rik Cabanier caban...@gmail.com wrote:




 On Wed, May 14, 2014 at 11:39 AM, Ehsan Akhgari 
 ehsan.akhg...@gmail.comwrote:

 On 2014-05-13, 9:01 PM, Rik Cabanier wrote:

 ...

 The problem is that the API doesn't really make it obvious that
 you're not supposed to take the value that the getter returns and
 just spawn N workers.  IOW, the API encourages the wrong behavior by
 design.


 That is simply untrue.


 I'm assuming that the goal of this API is to allow authors to spawn as
 many workers as possible so that they can exhaust all of the cores in the
 interest of finishing their computation faster.


 That is one way of using it but not the only one.
 For instance, let's say that I'm writing on a cooperative game. I might
 want to put all my network logic in a worker and want to make sure that
 worker is scheduled. This worker consumes little (if any) cpu, but I want
 it to be responsive.
 NumCores = 1 - do everything in the main thread and try to make sure the
 network code executes
 NumCores = 2 - spin up a worker for the network code. Everything else in
 the main thread
 NumCores = 3 - spin up a worker for the network code + another one for
 physics and image decompression. Everything else in the main thread


  I have provided reasons why any thread which is running at a higher
 priority on the system busy doing work is going to make this number an over
 approximation, I have given you two examples of higher priority threads
 that we're currently shipping in Firefox (Chrome Workers and the
 MediaStreamGraph thread)


 You're arguing against basic multithreading functionality. I'm unsure how
 ANY thread framework in a browser could fix this since there might be other
 higher priority tasks in the system.
 For your example of Chrome Workers and MediaStreamGraph, I assume those
 don't run at a constant 100% so a webapp that grabs all cores will still
 get more work done.


 and have provided you with experimental evidence of running Eli's test
 cases trying to exhaust as many cores as it can fails to predict the number
 of cores in these situations.


 Eli's code is an approximation. It doesn't prove anything.
 I don't understand your point here.


  If you don't find any of this convincing, I'd respectfully ask us to
 agree to disagree on this point.


 OK.


  For the sake of argument, let's say you are right. How are things worse
 than before?


 I don't think we should necessarily try to find a solution that is just
 not worse than the status quo, I'm more interested in us implementing a
 good solution here (and yes, I'm aware that there is no concrete proposal
 out there that is better at this point.)


 So, worst case, there's no harm.
 Best case, we have a more responsive application.

  ...


 That's fine but we're coming right back to the start: there is no way
 for informed authors to make a decision today.


 Yes, absolutely.


  The let's build something complex that solves everything proposal
 won't be done in a long time. Meanwhile apps can make responsive UI's
 and fluid games.


 That's I think one fundamental issue we're disagreeing on.  I think that
 apps can build responsive UIs and fluid games without this today on the Web.


 Sure. You can build apps that don't tax the system or that are
 specifically tailored to work well on a popular system.


  There were 24,000 hits for java which is on the web and a VM but now you
 say that it's not a vote of popularity?


 We may have a different terminology here, but to me, positive feedback
 from web developers should indicate a large amount of demand from the web
 developer community for us to solve this problem at this point, and also a
 strong positive signal from them on this specific solution with the flaws
 that I have described above in mind.  That simply doesn't map to searching
 for API names on non-Web technologies on github. :-)


 This was not a simple search. Please look over the examples especially the
 node.js ones and see how it's being used.
 This is what we're trying to achieve with this attribute.


 Also, FTR, I strongly disagree that we should implement all popular Java
 APIs just because there is a way to run Java code on the web.  ;-)

  ...

 Can you restate the actual problem? I reread your message but didn't
 find anything that indicates this is a bad idea.


 See above where I re-described why this is not a good technical solution
 to achieve the goal of the API.

 Also, as I've mentioned several times, this API basically ignores the
 fact that there are AMP systems shipping *today* and dies not take the fact
 that future Web engines may try to use as many cores as they can at a
 higher priority (Servo being one example.)


 OK. They're free to do so. This is not a problem (see previous messages)
 It seems like you're arguing against basic multithreading again.


Others 

Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-18 Thread Xidorn Quan
IMO, though we may have a better model in the future, it is at least not
harmful to have such an attribute with some limitation. The WebKit guys think
it is not a fingerprinting concern when limiting the max value to 8. I think it
might also be meaningful to limit the number to a power of 2 (very few people
have 3 or 6 cores, so those values would serve as a fingerprint as well).
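
(To illustrate what I mean by limiting to a power of 2, here is a minimal
sketch - the helper name is hypothetical, and the getter would simply report
the rounded value instead of the raw count:)

#include <stdint.h>

// Hypothetical helper, for illustration only: round the raw core count
// down to a power of two before exposing it, e.g. 6 -> 4 and 3 -> 2.
static uint32_t
RoundDownToPowerOfTwo(uint32_t aCores)
{
  uint32_t rounded = 1;
  while (rounded * 2 <= aCores) {
    rounded *= 2;
  }
  return rounded;
}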

And I think it makes sense to announce that the UA does not guarantee this
value to be constant, so that the UA can return whatever value it feels
comfortable with when the getter is invoked. Maybe in the future, we could even
have an event to notify the script that the number has changed.

In addition, considering that WebKit has landed this feature, and Blink is
also going to implement that, it is not a bad idea for us to have the
attribute as well.


On Mon, May 19, 2014 at 9:23 AM, Rik Cabanier caban...@gmail.com wrote:

 FYI this attribute landed in WebKit today:
 http://trac.webkit.org/changeset/169017


 On Thu, May 15, 2014 at 1:26 AM, Rik Cabanier caban...@gmail.com wrote:

 
 
 
  On Wed, May 14, 2014 at 11:39 AM, Ehsan Akhgari ehsan.akhg...@gmail.com
 wrote:
 
  On 2014-05-13, 9:01 PM, Rik Cabanier wrote:
 
  ...
 
  The problem is that the API doesn't really make it obvious that
  you're not supposed to take the value that the getter returns and
  just spawn N workers.  IOW, the API encourages the wrong behavior
 by
  design.
 
 
  That is simply untrue.
 
 
  I'm assuming that the goal of this API is to allow authors to spawn as
  many workers as possible so that they can exhaust all of the cores in
 the
  interest of finishing their computation faster.
 
 
  That is one way of using it but not the only one.
  For instance, let's say that I'm writing on a cooperative game. I might
  want to put all my network logic in a worker and want to make sure that
  worker is scheduled. This worker consumes little (if any) cpu, but I want
  it to be responsive.
  NumCores = 1 - do everything in the main thread and try to make sure the
  network code executes
  NumCores = 2 - spin up a worker for the network code. Everything else in
  the main thread
  NumCores = 3 - spin up a worker for the network code + another one for
  physics and image decompression. Everything else in the main thread
 
 
   I have provided reasons why any thread which is running at a higher
  priority on the system busy doing work is going to make this number an
 over
  approximation, I have given you two examples of higher priority threads
  that we're currently shipping in Firefox (Chrome Workers and the
  MediaStreamGraph thread)
 
 
  You're arguing against basic multithreading functionality. I'm unsure how
  ANY thread framework in a browser could fix this since there might be
 other
  higher priority tasks in the system.
  For your example of Chrome Workers and MediaStreamGraph, I assume those
  don't run at a constant 100% so a webapp that grabs all cores will still
  get more work done.
 
 
  and have provided you with experimental evidence of running Eli's test
  cases trying to exhaust as many cores as it can fails to predict the
 number
  of cores in these situations.
 
 
  Eli's code is an approximation. It doesn't prove anything.
  I don't understand your point here.
 
 
   If you don't find any of this convincing, I'd respectfully ask us to
  agree to disagree on this point.
 
 
  OK.
 
 
   For the sake of argument, let's say you are right. How are things worse
  than before?
 
 
  I don't think we should necessarily try to find a solution that is just
  not worse than the status quo, I'm more interested in us implementing a
  good solution here (and yes, I'm aware that there is no concrete
 proposal
  out there that is better at this point.)
 
 
  So, worst case, there's no harm.
  Best case, we have a more responsive application.
 
   ...
 
 
  That's fine but we're coming right back to the start: there is no way
  for informed authors to make a decision today.
 
 
  Yes, absolutely.
 
 
   The let's build something complex that solves everything proposal
  won't be done in a long time. Meanwhile apps can make responsive UI's
  and fluid games.
 
 
  That's I think one fundamental issue we're disagreeing on.  I think that
  apps can build responsive UIs and fluid games without this today on the
 Web.
 
 
  Sure. You can build apps that don't tax the system or that are
  specifically tailored to work well on a popular system.
 
 
   There were 24,000 hits for java which is on the web and a VM but now
 you
  say that it's not a vote of popularity?
 
 
  We may have a different terminology here, but to me, positive feedback
  from web developers should indicate a large amount of demand from the
 web
  developer community for us to solve this problem at this point, and
 also a
  strong positive signal from them on this specific solution with the
 flaws
  that I have described above in mind.  That simply doesn't map to
 searching
  for API names on non-Web 

Re: OMTC on Windows

2014-05-18 Thread Boris Zbarsky

On 5/18/14, 2:23 PM, Gavin Sharp wrote:

OMTC is important, and I'm excited to see it land
on Windows, but the Firefox and Performance teams have just come off a
months-long effort to make significant wins in TART, and the thought
of taking a 20% regression (huge compared to some of the improvements
we fought for) is pretty disheartening.


My question here is whether we have data that indicates why there is a 
regression.  Are we painting more, or are we waiting on things more?


In particular, if I understand correctly, TART uses a somewhat bizarre 
configuration: it tries to run refresh drivers at 10,000 frames per second 
(yes, 10 kHz). That may not interact at all well with compositing at 60 Hz, and 
I'm not even sure how well it'll interact with the work to trigger the refresh 
driver off vsync.


In any case, it's entirely possible to get regressions on TART that have 
nothing to do with actual slowdowns at normal frame rates.  That may not 
be the case here, but it's a distinct possibility that it is.  For 
example, on Mac we ended up special-casing the TART configuration and 
doing non-blocking buffer swaps in it (see bug 99) precisely because 
otherwise TART ended up gated on things other than actual rendering 
time.  I would not be terribly surprised if something like that needs to 
be done on Windows too...


-Boris
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Intent to implement: ResourceStats API

2014-05-18 Thread Borting Chen
Summary:
  ResourceStats API supports resource statistics and cost control for network 
usage and power consumption. Resource statistics provide resource usage 
information about the whole system, a system service, or an application. Cost 
control notifies the user when resource usage exceeds a defined threshold.

Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=951976

Link to standard: N/A

Link to public discussion: 
https://groups.google.com/forum/#!topic/mozilla.dev.webapi/tWkgbD1v_Gg

Platform coverage: Firefox OS

Estimated or target release: TBD

Preference behind which this will be implemented: dom.resource_stats.enabled

--
Borting Chen
Intern of Mozilla Taiwan

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-18 Thread Rik Cabanier
On Sun, May 18, 2014 at 4:51 PM, Xidorn Quan quanxunz...@gmail.com wrote:

 IMO, though we may have a better model in the future, it is at least not
 harmful to have such attribute with some limitation. The WebKit guys think
 it is not a fingerprinting when limiting the max value to 8. I think it
 might be meaningful to also limit the number to power of 2 (very few people
 has 3 or 6 cores so they will be fingerprinting as well).


There are CPUs from Intel [1], AMD [2], Samsung [3], and possibly others that
have 6 cores. I'm unsure why we would treat them differently, since they're not
high-value systems.

And I think it makes sense to announce that, UA does not guarantee this
 value a constant, so that UA can return whatever value it feels comfortable
 with when the getter is invoked. Maybe in the future, we can even have an
 event to notify the script that the number has been changed.


Yes, if a user agent wants to return a lower number (i.e. so that a
well-behaved application leaves a CPU free), it's free to do so.
I'm unsure if the event is needed, but that can be addressed later.


 In addition, considering that WebKit has landed this feature, and Blink is
 also going to implement that, it is not a bad idea for us to have the
 attribute as well.


The WebKit patch limits the maximum number to 8. The blink patch currently
does not limit what it returns.
My proposed mozilla patch [4] makes the maximum return value configurable
through a dom.maxHardwareConcurrency preference key. It currently has a
default value of 8.

1: http://ark.intel.com/products/63697 http://ark.intel.com/products/77780
2:
http://products.amd.com/en-us/DesktopCPUDetail.aspx?id=811f1=f2=f3=f4=f5=f6=f7=f8=f9=f10=f11=f12=
3:
http://www.samsung.com/global/business/semiconductor/minisite/Exynos/products5hexa.html
4: https://bugzilla.mozilla.org/show_bug.cgi?id=1008453
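
(For illustration, here is a rough sketch of the clamping described above. It
assumes NSPR's PR_GetNumberOfProcessors() and the Preferences service; the
function name and exact wiring are simplified and not taken verbatim from the
patch:)

#include <algorithm>
#include "prsystem.h"             // PR_GetNumberOfProcessors()
#include "mozilla/Preferences.h"  // mozilla::Preferences::GetUint()

// Sketch only, not the actual patch: report the core count, clamped to the
// dom.maxHardwareConcurrency pref (default 8).
static uint32_t
GetClampedHardwareConcurrency()
{
  int32_t raw = PR_GetNumberOfProcessors();
  uint32_t cores = raw > 0 ? static_cast<uint32_t>(raw) : 1;
  uint32_t max =
    mozilla::Preferences::GetUint("dom.maxHardwareConcurrency", 8);
  return std::min(cores, max);
}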


 On Mon, May 19, 2014 at 9:23 AM, Rik Cabanier caban...@gmail.com wrote:

 FYI this attribute landed in WebKit today:
 http://trac.webkit.org/changeset/169017


 On Thu, May 15, 2014 at 1:26 AM, Rik Cabanier caban...@gmail.com wrote:

 
 
 
  On Wed, May 14, 2014 at 11:39 AM, Ehsan Akhgari 
 ehsan.akhg...@gmail.comwrote:
 
  On 2014-05-13, 9:01 PM, Rik Cabanier wrote:
 
  ...
 
  The problem is that the API doesn't really make it obvious that
  you're not supposed to take the value that the getter returns and
  just spawn N workers.  IOW, the API encourages the wrong behavior
 by
  design.
 
 
  That is simply untrue.
 
 
  I'm assuming that the goal of this API is to allow authors to spawn as
  many workers as possible so that they can exhaust all of the cores in
 the
  interest of finishing their computation faster.
 
 
  That is one way of using it but not the only one.
  For instance, let's say that I'm writing on a cooperative game. I might
  want to put all my network logic in a worker and want to make sure that
  worker is scheduled. This worker consumes little (if any) cpu, but I
 want
  it to be responsive.
  NumCores = 1 - do everything in the main thread and try to make sure
 the
  network code executes
  NumCores = 2 - spin up a worker for the network code. Everything else
 in
  the main thread
  NumCores = 3 - spin up a worker for the network code + another one for
  physics and image decompression. Everything else in the main thread
 
 
   I have provided reasons why any thread which is running at a higher
  priority on the system busy doing work is going to make this number an
 over
  approximation, I have given you two examples of higher priority threads
  that we're currently shipping in Firefox (Chrome Workers and the
  MediaStreamGraph thread)
 
 
  You're arguing against basic multithreading functionality. I'm unsure
 how
  ANY thread framework in a browser could fix this since there might be
 other
  higher priority tasks in the system.
  For your example of Chrome Workers and MediaStreamGraph, I assume those
  don't run at a constant 100% so a webapp that grabs all cores will still
  get more work done.
 
 
  and have provided you with experimental evidence of running Eli's test
  cases trying to exhaust as many cores as it can fails to predict the
 number
  of cores in these situations.
 
 
  Eli's code is an approximation. It doesn't prove anything.
  I don't understand your point here.
 
 
   If you don't find any of this convincing, I'd respectfully ask us to
  agree to disagree on this point.
 
 
  OK.
 
 
   For the sake of argument, let's say you are right. How are things
 worse
  than before?
 
 
  I don't think we should necessarily try to find a solution that is just
  not worse than the status quo, I'm more interested in us implementing a
  good solution here (and yes, I'm aware that there is no concrete
 proposal
  out there that is better at this point.)
 
 
  So, worst case, there's no harm.
  Best case, we have a more responsive application.
 
   ...
 
 
  That's fine but we're coming right back to the start: there is no way

Re: Gecko style: Formatting function return type and specifiers

2014-05-18 Thread Dave Hylands
I've seen both styles used, although I think that 

int Foo(); 

is the most common style when declaring a function prototype in a class header. 

Generally speaking, it's more important to be consistent with the rest of the 
file. 

Dave Hylands 

- Original Message -

 From: Birunthan Mohanathas birunt...@mohanathas.com
 To: dev-platform dev-platform@lists.mozilla.org
 Sent: Sunday, May 18, 2014 4:27:07 AM
 Subject: Gecko style: Formatting function return type and specifiers

 For top-level function definitions, the recommended style is:

 template<typename T>
 static inline T
 Foo()
 {
 // ...
 }

 However, for function declarations and inline member functions, there
 does not seem to be a definitive style. Some use:

 int Foo();

 class Bar
 {
 virtual int Baz()
 {
 // ...
 }
 };

 ... and others use:

 int
 Foo();

 class Bar
 {
 virtual int
 Baz()
 {
 // ...
 }
 };

 Which one should be preferred?
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Gecko style: Formatting function return type and specifiers

2014-05-18 Thread Karl Tomlinson
Birunthan Mohanathas writes:

 For top-level function definitions, the recommended style is:

 template<typename T>
 static inline T
 Foo()
 {
   // ...
 }

The main reason for having the function name at the start of a new
line, I assume, was to help some tools (including diff) that look there
for function names.

 However, for function declarations and inline member functions, there
 does not seem to be a definitive style.

For indented declarations within a class, AFAIK the only reason
for a line break would be if there is a limit on the number of
characters in a line.  (diff and other tools are not going to
find the function name anyway.)  Then it comes down to judgement
re whether to split the line before the function name, or between
parameters, etc.
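
For example (illustrative only; the exact types don't matter), a declaration
that exceeds the limit could be split either before the function name:

  // Illustration only:
  virtual already_AddRefed<nsIFoo>
  CreateFooForDocument(nsIDocument* aDocument, uint32_t aFlags);

or between the parameters:

  // Illustration only:
  virtual already_AddRefed<nsIFoo> CreateFooForDocument(nsIDocument* aDocument,
                                                        uint32_t aFlags);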

If there are no parameters, then there is usually no need to break
the line.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform