Re: Off-main-thread compositing on Linux

2015-03-12 Thread Nicolas Silva
Tiling is in a usable state (modulo some reftest failures), but I
haven't tried to run talos with tiling enabled yet. We'll probably see
the benefit of tiling when we enable APZ (which I don't know the state
of on Linux). We can also enable OMTA but I haven't tried to run tests
with it or dogfood it in a while.

Nical


On Wed, Mar 11, 2015 at 5:02 PM, Jet Villegas jville...@mozilla.com wrote:

 Nice! This is big news. With Linux getting OMTC, we can also enable
 off-main-thread animations (OMTA) on all our supported platforms.

 Is tiling + OMTC on Linux in a testable state? How do we score on the
 scrolling performance test in Talos?

 --Jet


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Project Silk on Desktop

2015-03-12 Thread Mike de Boer
Congratulations to all!

This sounds like a massive improvement to our rendering pipeline, definitely 
worthy of some PR effort! Is that being considered?

Is this expected to have an impact on Talos numbers? I’d expect them to 
improve, but that probably also depends on the hardware used for the Talos 
infrastructure.

Cheers,

Mike.

 On 12 Mar 2015, at 02:12, Mason Chang mch...@mozilla.com wrote:
 
 Hi all,
 
 Project Silk (http://www.masonchang.com/blog/2015/1/22/project-silk), which 
 aligns rendering to vsync, will be landing over the next couple of weeks (bug 
 1071275). You should expect smoother animations and scrolling while browsing 
 the web. It'll land in 4 parts, with the vsync compositor on OS X landing 
 today. We'll start landing the vsync compositor on Windows a week or two from 
 now, then the vsync refresh drivers on OS X and Windows a week or two after 
 the vsync compositor. If you have any issues, please file bugs and make them 
 block bug 1071275.
 
 Thanks to Jerry Shih, Benoit Girard, Kartikaya Gupta, Jeff Muizelaar, and 
 Markus Stange for helping get this on desktop!
 
 Thanks,
 Mason
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: RFC: what's the correct behavior for Visual Studio projects and unified builds?

2015-03-12 Thread Boris Zbarsky

On 3/12/15 5:39 AM, Chris Pearce wrote:

Breaking VisualStudio Intellisense also broke most of the code
navigation things that make VisualStudio awesome.


Chris,

So just to make sure the actual question in Nathan's mail is answered:

1)  You're saying Intellisense _does_ work in a reasonable way for you 
with unified builds without the patch for bug 1122812?


2)  You're saying Intellisense does _not_ work in a reasonable way for 
you on trunk right now?



We should backout whatever we have to to fix this.


I think Nathan is trying to figure out what that whatever is and is 
looking for help from people who use Visual Studio, because he's not in 
a position to evaluate its behavior himself.


-Boris
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: persistent permissions over HTTP

2015-03-12 Thread Adam Roach

On 3/12/15 12:26, Aryeh Gregor wrote:

Because unless things have changed a lot in the last three years or
so, HTTPS is a pain for a few reasons:

1) It requires time and effort to set up.  Network admins have better
things to do.  Most of them are volunteers, work part-time, don't have
computers as their primary job responsibility, are overworked, etc.

2) It adds an additional point of failure.  It's easy to misconfigure,
and you have to keep the certificate up-to-date.  If you mess up,
browsers will helpfully go berserk and tell your users that your site
is trying to hack their computer (or that's what users will infer from
the terrifying bright-red warnings).  This is not a simple problem to
solve -- for a long time, https://amazon.com would give a cert error,
and I'm pretty sure I once saw an error on a Google property too.  I
think I saw one on a Microsoft property once as well.

3) Last I checked, if you want a cert that works in all browsers, you
need to pay money.  This is a big psychological hurdle for some
people, and may be unreasonable for people who manage a lot of small
domains.

4) It adds round-trips, which is a big deal for people on high-latency
connections.  I remember Google was trying to cut it down to one extra
round-trip on the first connection and none on subsequent connections,
but I don't know if that's actually made it into all the major
browsers yet.

These issues seem all basically fixable within a few years


As an aside, the first three are not just fixable, but actually fixed 
within the next few months: https://letsencrypt.org/



--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: persistent permissions over HTTP

2015-03-12 Thread Boris Zbarsky

On 3/12/15 6:28 AM, Anne van Kesteren wrote:

It does seem like there are some improvements we could make here. E.g.
not allow an iframe to request certain permissions, insofar as we
haven't already.


That doesn't help much; the page can just navigate itself to the attack 
site instead of loading it in a subframe.  Combined with fullscreen 
spoofing to make it look like it's still the old page...


-Boris

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: persistent permissions over HTTP

2015-03-12 Thread Ehsan Akhgari

On 2015-03-12 8:26 AM, Aryeh Gregor wrote:

Aha, that makes a lot more sense.  Thanks.  Yes, that does seem like a
more realistic attack.  A few points come to mind:

1) The page has no way to know whether it has persisted permissions
without just trying, right?  If so, the user will notice something is
weird when he gets strange permissions requests, which makes the
attack less attractive.


FWIW there are attempts to add features to the Web platform which would 
let web pages query for the permissions that they have without asking 
for the permission.  See 
https://docs.google.com/document/d/12xnZ_8P6rTpcGxBHiDPPCe7AUyCar-ndg8lh2KwMYkM/edit#.
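
To make that concrete, here is a quick sketch of the sort of query being 
proposed (the shape is taken from the draft above; treat the details as 
provisional):

 navigator.permissions.query({ name: "geolocation" }).then((status) => {
   // status.state is "granted", "denied", or "prompt" -- no prompt is shown
   console.log(`geolocation: ${status.state}`);
 });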



2) If the only common real-world MITM threat is via a compromise
adjacent to the client (e.g., wireless), there's no reason to restrict
geolocation, because the attacker already knows the user's location
fairly precisely.


I don't think that is the only common real-world attack.  Other types 
include your traffic being intercepted by your ISP, and/or your government.



3) Is there any reason to not persist permissions for as long as the
user remains on the same network (assuming we can figure that out
reliably)?  If not, the proposal would be much less annoying, because
in many common cases the permission would be persisted for a long time
anyway.  Better yet, can we ask the OS whether the network is
classified as home/work/public and only restrict the persistence for
public networks?


That would have been a good idea if wifi attacks were the only ones.


4) Feasible though the attack may be, I'm not sure how likely
attackers are to try it.  Is there some plausible profit motive here?
Script kiddies will set up websites and portscan with botnets just for
lulz, but a malicious wireless router requires physical presence,
which is much riskier for the attacker.  If I compromised a public
wireless router, I would try passively sniffing for credit card info
in people's unencrypted webmail, or steal their login info.  Why would
I blow my cover by trying to take pictures of them?


There have been documented cases of webcam spying victims committing 
suicide.  And I wouldn't be surprised if there are or will be businesses 
based on selling people's webcam feeds.  Protecting people's physical 
privacy is just as important as protecting their digital privacy.


Cheers,
Ehsan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: persistent permissions over HTTP

2015-03-12 Thread Ehsan Akhgari

On 2015-03-12 9:45 AM, Boris Zbarsky wrote:

On 3/12/15 6:28 AM, Anne van Kesteren wrote:

It does seem like there are some improvements we could make here. E.g.
not allow an iframe to request certain permissions, insofar as we
haven't already.


That doesn't help much; the page can just navigate itself to the attack
site instead of loading it in a subframe.  Combined with fullscreen
spoofing to make it look like it's still the old page...


Well, top level navigation cancels the fullscreen mode, right?

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: persistent permissions over HTTP

2015-03-12 Thread Boris Zbarsky

On 3/12/15 10:26 AM, Ehsan Akhgari wrote:

Well, top level navigation cancels the fullscreen mode, right?


The attack scenario I'm thinking is:

1) User loads http://a.com
2) Attacker immediately sets location to http://b.com
3) Attacker's hacked-up b.com goes fullscreen, pretending to still be 
a.com to the user by spoofing browser chrome, while also turning on the 
camera because the user granted permission to b.com to do that at some 
point.


That sort of thing.

-Boris
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: persistent permissions over HTTP

2015-03-12 Thread Ehsan Akhgari

On 2015-03-12 11:24 AM, Boris Zbarsky wrote:

On 3/12/15 10:26 AM, Ehsan Akhgari wrote:

Well, top level navigation cancels the fullscreen mode, right?


The attack scenario I'm thinking is:

1) User loads http://a.com
2) Attacker immediately sets location to http://b.com
3) Attacker's hacked-up b.com goes fullscreen, pretending to still be
a.com to the user by spoofing browser chrome, while also turning on the
camera because the user granted permission to b.com to do that at some
point.


Do you mean that after (2), the user somehow interacts with the site but 
doesn't realize that the site has gone full screen?  (Note that the 
fullscreen API cannot be used outside of user generated event handlers.)
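
For example, a rough sketch of the constraint (Gecko ships this as the 
prefixed mozRequestFullScreen; I'm using the unprefixed name here):

 // Honored: the request happens inside a user-generated event handler.
 document.getElementById("go")!.addEventListener("click", () => {
   document.documentElement.requestFullscreen();
 });

 // Denied: no user gesture on the stack.
 setTimeout(() => {
   document.documentElement.requestFullscreen();
 }, 0);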


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: persistent permissions over HTTP

2015-03-12 Thread Boris Zbarsky

On 3/12/15 12:19 PM, Ehsan Akhgari wrote:

(Note that the
fullscreen API cannot be used outside of user generated event handlers.)


Oh, good point.  That helps a lot, yes.

-Boris



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: persistent permissions over HTTP

2015-03-12 Thread Ehsan Akhgari

On 2015-03-12 12:57 PM, Boris Zbarsky wrote:

On 3/12/15 12:19 PM, Ehsan Akhgari wrote:

(Note that the
fullscreen API cannot be used outside of user generated event handlers.)


Oh, good point.  That helps a lot, yes.


So do you think it makes sense to restrict iframes requesting certain 
permissions?


The downside is that there are probably legit use cases for iframes 
requesting some permissions too, for example it's very common for an 
iframe to request fullscreen (e.g. the vimeo video embedding iframes.) 
One could envision map widgets implemented as iframes which may want to 
geolocate, or Google Hangout/Firefox Hello widgets that let you embed a 
video chat service in your website.


Another concern with persisting permissions requested from iframes is 
that it's possible to conceive of a TLS website (such as 
https://geolocator.com) hosting a widget that for example geolocates you 
and window.parent.postMessage()'s the info to the embedder.  If 
http://goodguy.com embeds this kind of widget in a real mapping app and 
the user chooses to grant geolocator.com a persistent permission to 
geolocate anywhere (presumably because they trust goodguy.com), then 
evil.com can come around and embed the same widget in a possibly 
invisible iframe and profit.  Although I'm not sure how realistic this 
attack is...
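
To make the scenario concrete, the hypothetical geolocator.com frame 
script would look roughly like this (made-up names, purely illustrative):

 // Inside the https://geolocator.com widget iframe: geolocate, then
 // hand the result to whichever page embedded the widget.
 navigator.geolocation.getCurrentPosition((pos) => {
   window.parent.postMessage(
     { lat: pos.coords.latitude, lon: pos.coords.longitude },
     "*" // any embedder can receive this, which is the problem
   );
 });

Nothing in that code distinguishes goodguy.com's embedding from 
evil.com's; a persistent grant to geolocator.com serves both equally.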

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Helping the DOM Team

2015-03-12 Thread Anthony Hughes

Hello dev-platform,

I recently joined the DOM team as an embedded QA member. One of the 
things I've been working on is establishing and documenting QA 
processes. I have two goals with this effort:


1. To make it clear how volunteers can help
2. To make it clear how developers and QA can help each other

If you go to MDN today, you'll find a Helping the DOM Team document[1] 
which focuses on explaining bug triage. I plan to continue building this 
out to include things such as feature ownership and test automation. My 
hope in sharing this with dev-platform is to improve engagement with 
developers.


I welcome any feedback you want to share.

Thank you.

--
Anthony Hughes
Senior Quality Engineer
Mozilla Corporation

1. https://developer.mozilla.org/en-US/docs/Mozilla/QA/Helping_the_DOM_team

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: persistent permissions over HTTP

2015-03-12 Thread Eric Rescorla
On Thu, Mar 12, 2015 at 12:31 PM, Aryeh Gregor a...@aryeh.name wrote:

 On Thu, Mar 12, 2015 at 4:34 PM, Ehsan Akhgari ehsan.akhg...@gmail.com
 wrote:
  2) If the only common real-world MITM threat is via a compromise
  adjacent to the client (e.g., wireless), there's no reason to restrict
  geolocation, because the attacker already knows the user's location
  fairly precisely.
 
 
  I don't think that is the only common real-world attack.  Other types
  include your traffic being intercepted by your ISP, and/or your
 government.

 I guess it's hard to say how common those are in practice, or how much
 of a concern they are.  I agree that for an API that allows taking
 pictures without the user's case-by-case permission, it would pay to
 err far on the safe side.

 I'm actually rather disturbed that such an API even exists.  Even if
 the site is HTTPS, it could be hacked, or it could be spoofed, or the
 operators could just be abusive.  Every site must be assumed possibly
 malicious, no matter how many permissions dialogs the user clicks
 through, and HTTPS can be assumed to be only modestly safer than HTTP.
 Why isn't the user prompted before every picture is taken?  Is there
 really a use-case for allowing a site to take pictures without the
 user's case-by-case permission that outweighs the privacy issues?


Yes. User consent failure represents a large fraction of failures on
video conferencing sites. Also, continually prompting users for
permissions weakens protections against users granting consent
to malicious sites.

See also Adam Barth's
"Prompting the User Is a Security Failure" at
http://rtc-web.alvestrand.com/home/papers

-Ekr


 As for geolocation, I'm still not convinced that it's worth worrying
 about here.  The ISP and government probably have better ways of
 tracking down the user's location.  The ISP generally knows where the
 Internet connection goes regardless, and the government can probably
 get the info from the ISP (after all, it was able to install a MITM).

  There have been documented cases of webcam spying victims committing
  suicide.  And I wouldn't be surprised if there are or will be businesses
  based on selling people's webcam feeds.  Protecting people's physical
  privacy is just as important as protecting their digital privacy.

 Then why only focus on attacks that are foiled by HTTPS?  We should be
 equally concerned with attacks that HTTPS doesn't prevent, which I
 think are probably much more common.

 On Thu, Mar 12, 2015 at 5:24 PM, Boris Zbarsky bzbar...@mit.edu wrote:
  The attack scenario I'm thinking is:
 
  1) User loads http://a.com
  2) Attacker immediately sets location to http://b.com
  3) Attacker's hacked-up b.com goes fullscreen, pretending to still be
 a.com
  to the user by spoofing browser chrome, while also turning on the camera
  because the user granted permission to b.com to do that at some point.

 How about:

 1) User loads http://a.com
 2) Attacker opens a background tab and navigates it to http://b.com (I
 can't think of a JavaScript way to do this, but if there isn't one,
 making a big <a href="b.com" target="_blank"> that covers the whole page
 would work well enough)
 3) http://b.com loads in 10 ms because it's really being served by the
 MITM, uses the permission, and closes itself
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Project Silk on Desktop

2015-03-12 Thread Mason Chang
Hi Mike,

 This sounds like a massive improvement to our rendering pipeline, definitely 
 worthy of some PR effort! Is that being considered?

We’ve had a PR effort before when Silk landed on b2g. It hit hacker news and 
received over 10K views IIRC on mozilla hacks, so I’m not keen on doing another 
one.

 Is this expected to have an impact on Talos numbers? I’d expect them to 
 improve, but that probably also depends on the hardware used for the Talos 
 infrastructure.

There are a few cases where talos numbers changed, but I don’t expect a big 
change. Software timers were already going at the same rate as hardware vsync, 
just not as accurately. Talos usually measures how fast we do something. Since 
we technically should be rendering at the same rate, just more precisely, the 
numbers should be more stable, but not necessarily better. We don’t have a lot 
of talos numbers on smoothness yet, but separate tests which specifically 
measure smoothness do improve.

Mason

 On Mar 12, 2015, at 2:27 AM, Mike de Boer mdeb...@mozilla.com wrote:
 
 Congratulations to all!
 
 This sounds like a massive improvement to our rendering pipeline, definitely 
 worthy of some PR effort! Is that being considered?
 
 Is this expected to have an impact on Talos numbers? I’d expect them to 
 improve, but that probably also depends on the hardware used for the Talos 
 infrastructure.
 
 Cheers,
 
 Mike.
 
 On 12 Mar 2015, at 02:12, Mason Chang mch...@mozilla.com wrote:
 
 Hi all,
 
 Project Silk (http://www.masonchang.com/blog/2015/1/22/project-silk), which 
 aligns rendering to vsync, will be landing over the next couple of weeks 
 (bug 1071275). You should expect smoother animations and scrolling while 
 browsing the web. It'll land in 4 parts, with the vsync compositor on OS X 
 landing today. We'll start landing the vsync compositor on Windows a week or 
 two from now, then the vsync refresh drivers on OS X and Windows a week or 
 two after the vsync compositor. If you have any issues, please file bugs and 
 make them block bug 1071275.
 
 Thanks to Jerry Shih, Benoit Girard, Kartikaya Gupta, Jeff Muizelaar, and 
 Markus Stange for helping get this on desktop!
 
 Thanks,
 Mason
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform
 

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: persistent permissions over HTTP

2015-03-12 Thread Boris Zbarsky

On 3/12/15 3:31 PM, Aryeh Gregor wrote:

2) Attacker opens a background tab and navigates it to http://b.com (I
can't think of a JavaScript way to do this, but if there isn't one,
making a big a href=b.com target=_blank that covers the whole page
would work well enough)


This is presuming user interaction.  I agree that attacks that rely on 
user interaction are also a problem here, but I'm _really_ scared by the 
potential of no-interaction needed attacks, which can happen when the 
user is not even actively using the computer.  Maybe it's just me.


-Boris
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Project Silk on Desktop

2015-03-12 Thread Gervase Markham
On 11/03/15 18:12, Mason Chang wrote:
 Project Silk (http://www.masonchang.com/blog/2015/1/22/project-silk),
 which aligns rendering to vsync, will be landing over the next couple
 of weeks (bug 1071275). You should expect smoother animations and
 scrolling while browsing the web. It'll land in 4 parts, with the
 vsync compositor on OS X landing today. We'll start landing the vsync
 compositor on Windows a week or two from now, then the vsync refresh
 drivers on OS X and Windows a week or two after the vsync compositor.
 If you have any issues, please file bugs and make them block bug
 1071275.

ObQuestion: what about Linux? :-)

Gerv
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: persistent permissions over HTTP

2015-03-12 Thread Aryeh Gregor
On Thu, Mar 12, 2015 at 4:34 PM, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:
 2) If the only common real-world MITM threat is via a compromise
 adjacent to the client (e.g., wireless), there's no reason to restrict
 geolocation, because the attacker already knows the user's location
 fairly precisely.


 I don't think that is the only common real-world attack.  Other types
 include your traffic being intercepted by your ISP, and/or your government.

I guess it's hard to say how common those are in practice, or how much
of a concern they are.  I agree that for an API that allows taking
pictures without the user's case-by-case permission, it would pay to
err far on the safe side.

I'm actually rather disturbed that such an API even exists.  Even if
the site is HTTPS, it could be hacked, or it could be spoofed, or the
operators could just be abusive.  Every site must be assumed possibly
malicious, no matter how many permissions dialogs the user clicks
through, and HTTPS can be assumed to be only modestly safer than HTTP.
Why isn't the user prompted before every picture is taken?  Is there
really a use-case for allowing a site to take pictures without the
user's case-by-case permission that outweighs the privacy issues?

As for geolocation, I'm still not convinced that it's worth worrying
about here.  The ISP and government probably have better ways of
tracking down the user's location.  The ISP generally knows where the
Internet connection goes regardless, and the government can probably
get the info from the ISP (after all, it was able to install a MITM).

 There have been documented cases of webcam spying victims committing
 suicide.  And I wouldn't be surprised if there are or will be businesses
 based on selling people's webcam feeds.  Protecting people's physical
 privacy is just as important as protecting their digital privacy.

Then why only focus on attacks that are foiled by HTTPS?  We should be
equally concerned with attacks that HTTPS doesn't prevent, which I
think are probably much more common.

On Thu, Mar 12, 2015 at 5:24 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 The attack scenario I'm thinking is:

 1) User loads http://a.com
 2) Attacker immediately sets location to http://b.com
 3) Attacker's hacked-up b.com goes fullscreen, pretending to still be a.com
 to the user by spoofing browser chrome, while also turning on the camera
 because the user granted permission to b.com to do that at some point.

How about:

1) User loads http://a.com
2) Attacker opens a background tab and navigates it to http://b.com (I
can't think of a JavaScript way to do this, but if there isn't one,
making a big <a href="b.com" target="_blank"> that covers the whole page
would work well enough)
3) http://b.com loads in 10 ms because it's really being served by the
MITM, uses the permission, and closes itself
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: persistent permissions over HTTP

2015-03-12 Thread Anne van Kesteren
On Thu, Mar 12, 2015 at 2:56 PM, Adam Roach a...@mozilla.com wrote:
 As an aside, the first three are not just fixable, but actually fixed within
 the next few months: https://letsencrypt.org/

Indeed, and for performance concerns there's a good read here:
https://istlsfastyet.com/ It's no longer an issue, unless you have an
extremely specialized setup that's non-trivial to migrate from
(Netflix).


-- 
https://annevankesteren.nl/
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: persistent permissions over HTTP

2015-03-12 Thread Boris Zbarsky

On 3/12/15 1:28 PM, Ehsan Akhgari wrote:

Another concern with persisting permissions requested from iframes


Can we persist them for the pair (origin of iframe, origin of toplevel 
page) or something?
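
Something like this, conceptually (just a sketch of the keying, not a 
claim about how the permission manager is actually structured):

 // Persist grants per (embedded origin, top-level origin) pair, so a
 // grant made while embedded in goodguy.com doesn't carry over when
 // evil.com embeds the same widget.
 const grants = new Map<string, boolean>();

 function keyFor(frameOrigin: string, topOrigin: string): string {
   return `${frameOrigin} ^ ${topOrigin}`;
 }

 grants.set(keyFor("https://geolocator.com", "https://goodguy.com"), true);

 grants.get(keyFor("https://geolocator.com", "https://goodguy.com")); // true
 grants.get(keyFor("https://geolocator.com", "https://evil.com"));    // undefined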


-Boris
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Using rr with test infrastructure

2015-03-12 Thread Robert O'Callahan
On Fri, Mar 13, 2015 at 12:34 PM, Seth Fowler s...@mozilla.com wrote:

 I guess (but don’t know for sure) that recording RR data for every test
 that gets run might be too expensive.


It probably wouldn't be too expensive. The runtime overhead is low; the
main cost is trace storage, but we can delete traces that don't reproduce
bugs.

AFAIK the limiting factors for running rr in our test infrastructure are:
1) rr only supports x86(64) Linux, so for bugs on other platforms you'd be
out of luck. Hopefully we'll have ARM support at some point, but non-Linux
isn't going to happen.
2) Our Linux tests run on EC2 and Amazon doesn't enable perfcounter
virtualization for EC2. This could change anytime (it's a configuration
change), but probably won't.
3) Some bugs might not reproduce when run under rr.
4) We'd need to figure out how to make rr results available to developers. It
would probably work to move the traces to an identically-configured VM
running on the same CPU microarchitecture and run rr replay there.
So I don't think we're going to have rr in our real test infrastructure
anytime soon.

To work around these issues, I would like to have a dedicated machine that
continuously downloads builds and runs tests under rr. Ideally it would
reenable tests that have been disabled-for-orange. When it finds failures,
we would match failures to bugs and notify in the bug that an rr trace is
available. Developers could then ssh into the box to get a debugging
session. This should be reasonably easy to set up, especially if we start
by focusing on the simpler test suites and manually update bugs.

Rob
-- 
oIo otoeololo oyooouo otohoaoto oaonoyooonoeo owohooo oioso oaonogoroyo
owoiotoho oao oboroootohoeoro oooro osoiosotoeoro owoiololo oboeo
osouobojoeocoto otooo ojouodogomoeonoto.o oAogoaoiono,o oaonoyooonoeo
owohooo
osoaoyoso otooo oao oboroootohoeoro oooro osoiosotoeoro,o o‘oRoaocoao,o’o
oioso
oaonosowoeoroaoboloeo otooo otohoeo ocooouoroto.o oAonodo oaonoyooonoeo
owohooo
osoaoyoso,o o‘oYooouo ofolo!o’o owoiololo oboeo oiono odoaonogoeoro
ooofo
otohoeo ofoioroeo ooofo ohoeololo.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


PSA: mozilla::Pair is now a little more flexible

2015-03-12 Thread Seth Fowler
I thought I’d let everyone know that bug 1142366 and bug 1142376 have added 
some handy new features to mozilla::Pair. In particular:

- Pair objects are now movable. (It’s now a requirement that the underlying 
types be movable too. Every existing use satisfied this requirement.)

- Pair objects are now copyable if the underlying types are copyable.

- We now have an equivalent of std::make_pair, mozilla::MakePair. This lets you 
construct a Pair object with type inference. So this code:

 Pair<Foo, Bar> GetPair() {
   return Pair<Foo, Bar>(Foo(), Bar());
 }

Becomes:

 Pair<Foo, Bar> GetPair() {
   return MakePair(Foo(), Bar());
 }

Nice! This can really make a big difference for long type names or types which 
have their own template parameters.

These changes should make Pair a little more practical to use. Enjoy!

- Seth
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Chrome removed support for multipart/x-mixed-replace documents. We should too.

2015-03-12 Thread Seth Fowler
Chrome removed support for multipart/x-mixed-replace main resources in this 
issue:

https://code.google.com/p/chromium/issues/detail?id=249132 
https://code.google.com/p/chromium/issues/detail?id=249132

Here’s their explanation:

 This feature is extremely rarely used by web sites and is the source of a lot 
 of complexity and security bugs in the loader.  UseCounter stats from the 
 stable channel indicate that the feature is used for less than 0.1% of 
 page loads.


They made main resources that use multipart/x-mixed-replace trigger downloads 
instead of being displayed.

The observation that multipart/x-mixed-replace support introduces a lot of 
complexity is absolutely true for us as well. It’s a huge mess.
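
For anyone who hasn't run into the format: here's a rough sketch of a 
multipart/x-mixed-replace response as a toy Node-style server (TypeScript, 
illustrative only). Each part replaces the previously rendered one, which 
is what drags the loader into supporting mid-load document replacement:

 import { createServer } from "http";

 const BOUNDARY = "replaceme";

 createServer((_req, res) => {
   res.writeHead(200, {
     "Content-Type": `multipart/x-mixed-replace; boundary=${BOUNDARY}`,
   });
   let n = 0;
   const timer = setInterval(() => {
     n += 1;
     res.write(`--${BOUNDARY}\r\n`);
     res.write("Content-Type: text/html\r\n\r\n");
     res.write(`<p>Replacement document #${n}</p>\r\n`);
     if (n === 3) {
       clearInterval(timer);
       res.end(`--${BOUNDARY}--\r\n`); // terminating boundary ends the stream
     }
   }, 1000);
 }).listen(8080);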

Looks like this patch landed in Chromium on June 13, 2013 and has stuck since 
then, so removing it has not resulted in a disaster for Chrome. With so few 
people using multipart/x-mixed-replace, and since now both IE and Chrome do not 
support it, I suggest that we remove support for it from the docloader as well.

- Seth
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: RFC: what's the correct behavior for Visual Studio projects and unified builds?

2015-03-12 Thread cpearce

Intellisense and other code navigation things all work now in the generated 
project file. This is awesome, thanks!

Chris Pearce.


On Friday, March 13, 2015 at 10:13:55 AM UTC+13, Nathan Froyd wrote:
 On Thu, Mar 12, 2015 at 5:39 AM, Chris Pearce cpea...@mozilla.com wrote:
 
  Breaking VisualStudio Intellisense also broke most of the code navigation
  things that make VisualStudio awesome. I don't build with VisualStudio, I
  build with the command line because I like to build & run, and I like to
  pipe the output to grep, file, or set environment variables more easily.
 
  I use the Visual Studio projects solely for the awesome code
  navigation they enable. Not having that makes me much less productive.
  We should backout whatever we have to to fix this. I can't even configure
  with --disable-unified-compilation to generate a project file that works.
 
  Unless someone complains (and I bet they won't given the lack of response
  in this thread), we should backout bug 1122812. There are others on my team
  affected by this too.
 
 
 The problematic bit of bug 1122812 has been fixed in bug 1138250, which has
 landed on central and should be getting merged around soonish.
 
 -Nathan

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Project Silk on Desktop

2015-03-12 Thread Mason Chang
Yeah it is, but I don’t really want to do another PR run when lots of people 
have already read about Silk on b2g. Feels spammy to me to do another one just 
a month after the previous one, but that’s my 2 cents.

Mason

 On Mar 12, 2015, at 3:17 PM, Robert O'Callahan rob...@ocallahan.org wrote:
 
 On Fri, Mar 13, 2015 at 5:28 AM, Mason Chang mch...@mozilla.com 
 mailto:mch...@mozilla.com wrote:
 Hi Mike,
 
  This sounds like a massive improvement to our rendering pipeline, 
  definitely worth of some PR effort! Is that being considered?
 
 We’ve had a PR effort before when Silk landed on b2g. It hit hacker news and 
 received over 10K views IIRC on mozilla hacks, so I’m not keen on doing 
 another one.
 
 Isn't hackernews and 10K views on h.m.o a *good* thing?
 
 Rob
 -- 
 oIo otoeololo oyooouo otohoaoto oaonoyooonoeo owohooo oioso oaonogoroyo
 owoiotoho oao oboroootohoeoro oooro osoiosotoeoro owoiololo oboeo
 osouobojoeocoto otooo ojouodogomoeonoto.o oAogoaoiono,o oaonoyooonoeo owohooo
 osoaoyoso otooo oao oboroootohoeoro oooro osoiosotoeoro,o o‘oRoaocoao,o’o 
 oioso
 oaonosowoeoroaoboloeo otooo otohoeo ocooouoroto.o oAonodo oaonoyooonoeo 
 owohooo
 osoaoyoso,o o‘oYooouo ofolo!o’o owoiololo oboeo oiono odoaonogoeoro ooofo
 otohoeo ofoioroeo ooofo ohoeololo.

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


What are your pain points when running unittests?

2015-03-12 Thread Jonathan Griffin
The A-Team is embarking on a project to improve the developer experience
when running unittests locally.  This project will address the following
frequently-heard complaints:

* Locally developers often use mach to run tests, but tests in CI use
mozharness, which can result in different behaviors.
* It's hard to reproduce a try job because it's hard to set up the test
environment and difficult to figure out which command-line arguments to use.
* It's difficult to run tests from a tests.zip package if you don't have a
build on that machine and thus can't use mach.
* It's difficult to run tests through a debugger using a downloaded build.

The quintessential use case here is making it easy to reproduce a try run
locally, without a local build, using a syntax something like:

* runtests --try 2844bc3a9227

Ideally, this would download the appropriate build and tests.zip package,
bootstrap the test environment, and run the tests using the exact same
arguments as are used on try, optionally running it through an appropriate
debugger.  You would be able to substitute a local build and/or local
tests.zip package if desired.  You would be able to override command-line
arguments used in CI if you wanted to, otherwise the tests would be run
using the same args as in CI.

What other use cases would you like us to address, which aren't derivatives
of the above issues?

Thanks for your input,

Jonathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Chrome removed support for multipart/x-mixed-replace documents. We should too.

2015-03-12 Thread Kyle Huey
I've been meaning to rip out the putative support for this from XHR (and
all of the complexity that it introduces) for months now.  This would be
great.

- Kyle

On Thu, Mar 12, 2015 at 3:37 PM, Seth Fowler s...@mozilla.com wrote:

 Chrome removed support for multipart/x-mixed-replace main resources in
 this issue:

 https://code.google.com/p/chromium/issues/detail?id=249132 
 https://code.google.com/p/chromium/issues/detail?id=249132

 Here’s their explanation:

  This feature is extremely rarely used by web sites and is the source of
 a lot of complexity and security bugs in the loader.  UseCounter stats from
 the stable channel indicate that the feature is used for less than 0.1%
 of page loads.


 They made main resources that use multipart/x-mixed-replace trigger
 downloads instead of being displayed.

 The observation that multipart/x-mixed-replace support introduces a lot of
 complexity is absolutely true for us as well. It’s a huge mess.

 Looks like this patch landed in Chromium on June 13, 2013 and has stuck
 since then, so removing it has not resulted in a disaster for Chrome. With
 so few people using multipart/x-mixed-replace, and since now both IE and
 Chrome do not support it, I suggest that we remove support for it from the
 docloader as well.

 - Seth
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Project Silk on Desktop

2015-03-12 Thread Robert O'Callahan
On Fri, Mar 13, 2015 at 11:59 AM, Mason Chang mch...@mozilla.com wrote:

 Yeah it is, but I don’t really want to do another PR run when lots of
 people have already read about Silk on b2g. Feels spammy to me to do
 another one just a month after the previous one, but that’s my 2 cents.


I see, makes sense.

Rob
-- 
oIo otoeololo oyooouo otohoaoto oaonoyooonoeo owohooo oioso oaonogoroyo
owoiotoho oao oboroootohoeoro oooro osoiosotoeoro owoiololo oboeo
osouobojoeocoto otooo ojouodogomoeonoto.o oAogoaoiono,o oaonoyooonoeo
owohooo
osoaoyoso otooo oao oboroootohoeoro oooro osoiosotoeoro,o o‘oRoaocoao,o’o
oioso
oaonosowoeoroaoboloeo otooo otohoeo ocooouoroto.o oAonodo oaonoyooonoeo
owohooo
osoaoyoso,o o‘oYooouo ofolo!o’o owoiololo oboeo oiono odoaonogoeoro
ooofo
otohoeo ofoioroeo ooofo ohoeololo.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: persistent permissions over HTTP

2015-03-12 Thread Aryeh Gregor
On Tue, Mar 10, 2015 at 5:00 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 The mitigation applies in this situation:

 1)  User connects to a MITMed network (e.g. wireless at the airport or
 coffeeshop or whatever) which I will henceforth call the attacker.
 2)  No matter what site the user loads, the attacker injects a hidden
 iframe claiming to be from hostname X that the user has granted a
 persistent permissions grant to.
 3)  The attacker now turns on the camera/microphone/whatever.

Aha, that makes a lot more sense.  Thanks.  Yes, that does seem like a
more realistic attack.  A few points come to mind:

1) The page has no way to know whether it has persisted permissions
without just trying, right?  If so, the user will notice something is
weird when he gets strange permissions requests, which makes the
attack less attractive.

2) If the only common real-world MITM threat is via a compromise
adjacent to the client (e.g., wireless), there's no reason to restrict
geolocation, because the attacker already knows the user's location
fairly precisely.

3) Is there any reason to not persist permissions for as long as the
user remains on the same network (assuming we can figure that out
reliably)?  If not, the proposal would be much less annoying, because
in many common cases the permission would be persisted for a long time
anyway.  Better yet, can we ask the OS whether the network is
classified as home/work/public and only restrict the persistence for
public networks?

4) Feasible though the attack may be, I'm not sure how likely
attackers are to try it.  Is there some plausible profit motive here?
Script kiddies will set up websites and portscan with botnets just for
lulz, but a malicious wireless router requires physical presence,
which is much riskier for the attacker.  If I compromised a public
wireless router, I would try passively sniffing for credit card info
in people's unencrypted webmail, or steal their login info.  Why would
I blow my cover by trying to take pictures of them?

 Right, and only work if the user loads such a site themselves on that
 network.  If I load cnn.com and get a popup asking whether Google Hangouts
 can turn on my camera, I'd get a bit suspicious... (though I bet a lot of
 people would just click through anyway).

Especially because it says Google Hangouts wants the permission.  Why
wouldn't I give permission to Google Hangouts, if I use it regularly?
Maybe it's a bit puzzling that it's asking me right now, but computers
are weird, it probably has some reason.  If it was some site I didn't
recognize I might say no, but not if it's a site I use all the time.

I'm not convinced that the proposal increases real-world security
enough to warrant any reduction at all in user convenience.

 Switch to HTTPS is not a reasonable solution.


 Why not?

Because unless things have changed a lot in the last three years or
so, HTTPS is a pain for a few reasons:

1) It requires time and effort to set up.  Network admins have better
things to do.  Most of them are volunteers, work part-time, don't have
computers as their primary job responsibility, are overworked, etc.

2) It adds an additional point of failure.  It's easy to misconfigure,
and you have to keep the certificate up-to-date.  If you mess up,
browsers will helpfully go berserk and tell your users that your site
is trying to hack their computer (or that's what users will infer from
the terrifying bright-red warnings).  This is not a simple problem to
solve -- for a long time, https://amazon.com would give a cert error,
and I'm pretty sure I once saw an error on a Google property too.  I
think I saw one on a Microsoft property once as well.

3) Last I checked, if you want a cert that works in all browsers, you
need to pay money.  This is a big psychological hurdle for some
people, and may be unreasonable for people who manage a lot of small
domains.

4) It adds round-trips, which is a big deal for people on high-latency
connections.  I remember Google was trying to cut it down to one extra
round-trip on the first connection and none on subsequent connections,
but I don't know if that's actually made it into all the major
browsers yet.

These issues seem all basically fixable within a few years, if the
major stakeholders were on board.  But until they're fixed, there are
good reasons for sysadmins to be reluctant to use SSL.  Ideally,
setting up SSL would look something like this: the webserver
automatically generates a key pair, submits the public key to its
nameserver to be put into its domain's DNSSEC CERT record, queries the
resulting DNSSEC record, and serves it to browsers as its certificate;
and of course automatically re-queries the record periodically so it
doesn't expire.  The nameserver can verify the server's IP address
matches the A record to to ensure that it's the right one, unless
someone has compromised the backbone or the nameserver's local
network.  In theory you don't need DNSSEC, CACert or whatever would
work too.  You would 

Re: What are your pain points when running unittests?

2015-03-12 Thread Gijs Kruitbosch
IME the issue is not so much about not running tests identical to the 
ones on CI, but the OS environment which doesn't match, and then 
reproducing intermittent failures.


If a failure happens once in 100 builds, it is very annoying for the 
sheriffs (happens multiple times a day) and needs fixing or backing out 
- but running-by-dir, say, mochitest-browser for 
browser/base/content/test/general/ 100 times takes way too long, and OS 
settings / screen sizes / machine speed / (...) differences mean you 
might not be able to reproduce anyway (or in the worst case, that you 
get completely different failures).


It'd be better if we could more easily get more information about 
failures as they happened on infra (replay debugging stuff a la what roc 
has worked on, or better logs, or somehow making it possible to 
remote-debug the infra machines as/when they fail).


~ Gijs

On 12/03/2015 22:51, Jonathan Griffin wrote:

The A-Team is embarking on a project to improve the developer experience
when running unittests locally.  This project will address the following
frequently-heard complaints:

* Locally developers often use mach to run tests, but tests in CI use
mozharness, which can result in different behaviors.
* It's hard to reproduce a try job because it's hard to set up the test
environment and difficult to figure out which command-line arguments to use.
* It's difficult to run tests from a tests.zip package if you don't have a
build on that machine and thus can't use mach.
* It's difficult to run tests through a debugger using a downloaded build.

The quintessential use case here is making it easy to reproduce a try run
locally, without a local build, using a syntax something like:

* runtests --try 2844bc3a9227

Ideally, this would download the appropriate build and tests.zip package,
bootstrap the test environment, and run the tests using the exact same
arguments as are used on try, optionally running it through an appropriate
debugger.  You would be able to substitute a local build and/or local
tests.zip package if desired.  You would be able to override command-line
arguments used in CI if you wanted to, otherwise the tests would be run
using the same args as in CI.

What other use cases would you like us to address, which aren't derivatives
of the above issues?

Thanks for your input,

Jonathan



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Chrome removed support for multipart/x-mixed-replace documents. We should too.

2015-03-12 Thread Boris Zbarsky

On 3/12/15 7:04 PM, Seth Fowler wrote:

It looks like it doesn’t anymore, because it works fine in Chrome.


Iirc, bugzilla sniffs server-side and sends different things to 
different browsers.  Worth testing in Firefox with multipart/x-mixed 
support disabled.


-Boris

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Chrome removed support for multipart/x-mixed-replace documents. We should too.

2015-03-12 Thread Ehsan Akhgari

On 2015-03-12 6:54 PM, Kyle Huey wrote:

I've been meaning to rip out the putative support for this from XHR (and
all of the complexity that it introduces) for months now.  This would be
great.


Henri beat you by two years.  ;-)

https://bugzilla.mozilla.org/show_bug.cgi?id=843508

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: What are your pain points when running unittests?

2015-03-12 Thread Ehsan Akhgari
Every time I break something that doesn't have a mach command (such as 
various Gaia tests for example) I shiver in fear, as I need to download 
the log, read it pretty much line by line and try to retrace 
mozharness's steps along the way.  Is the runtests tool also going to 
help with those types of unit tests (i.e., everything except for 
mochitest-* and reftest/crashtest/xpcshell)?


Thanks!
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: What are your pain points when running unittests?

2015-03-12 Thread Xidorn Quan
I wonder if it is possible to trigger a particular single unittest on which
we observe intermittent failures, instead of the whole test set. I guess it
would save time. I sometimes disable all tests I do not need to check
before pushing to try to make it end faster.

- Xidorn

On Fri, Mar 13, 2015 at 10:34 AM, Seth Fowler s...@mozilla.com wrote:


  On Mar 12, 2015, at 4:17 PM, Gijs Kruitbosch gijskruitbo...@gmail.com
 wrote:
  It'd be better if we could more easily get more information about
 failures as they happened on infra (replay debugging stuff a la what roc
 has worked on, or better logs, or somehow making it possible to
 remote-debug the infra machines as/when they fail).
 
  ~ Gijs

 Yes!

 I guess (but don’t know for sure) that recording RR data for every test
 that gets run might be too expensive. But it’d be nice if we were able to
  request that RR data be recorded for *particular* tests in the test
 manifest. Probably we should make it possible to record it unconditionally,
 and to record it only if the test failed. I can see using this in two ways:

 - When I’m investigating a failure, I can push a commit that records RR
 data for the test I’m concerned about.

 - We can annotate rare intermittent failures to record RR data whenever
 they fail. This obviously means that the tests must be run under RR all the
 time, but at least we won’t have to waste storage space on RR data for the
 common, passing case. This would be a huge help for some hard-to-reproduce
 intermittent failures.

 I’m sure there are a lot of gotchas here, but in an ideal world these
 features would be a huge help.

 - Seth
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Chrome removed support for multipart/x-mixed-replace documents. We should too.

2015-03-12 Thread Boris Zbarsky

On 3/12/15 6:37 PM, Seth Fowler wrote:

They made main resources that use multipart/x-mixed-replace trigger downloads 
instead of being displayed.


So what gets downloaded is the entire mixed stream, right?


The observation that multipart/x-mixed-replace support introduces a lot of 
complexity is absolutely true for us as well.


Does this really introduce a lot of complexity in the loader?  There's 
the actual stream converter, and the fact that people have to worry 
about headers on part channels, but apart from that I don't recall much 
complexity.



so removing it has not resulted in a disaster for Chrome. With so few people 
using multipart/x-mixed-replace


We should use a use counter here, because as far as I know sites decide 
via server-side sniffing whether to do this.  :(


-Boris


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: What are your pain points when running unittests?

2015-03-12 Thread Brian Birtles

On 2015/03/13 7:51, Jonathan Griffin wrote:

The A-Team is embarking on a project to improve the developer experience
when running unittests locally.


Is this about C++ unittests or about mochitests etc.?

If it's the latter, most of my pain points would be around debugging B2G 
failures. Similar to what Gijs wrote, in particular I find:


* Running a single test often behaves differently or doesn't work (e.g. 
bug 927889 for B2G) so I sometimes edit the mochitest.ini etc. to remove 
all the other tests.


* Running mochitests on B2G desktop on Windows (with mach 
mochitest-b2g-desktop) seems to size the window to 0x0 for me. (I've 
never looked into why but I've just sometimes worked around this by 
using setTimeout so I can manually resize the window before the test 
continues.)


* No sooner do I fix an intermittent failure on B2G desktop, than one 
pops up on B2G emulator which often means a lot of time rebuilding (and 
I always have trouble getting mochitests to run on emulator--it worked 
for a while, now it doesn't, some marionette error).


* Setting up a custom gaia with a custom gecko and remembering how to 
run gaia tests again often takes me a while.


Thanks,

Brian
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: What are your pain points when running unittests?

2015-03-12 Thread Boris Zbarsky

On 3/12/15 6:51 PM, Jonathan Griffin wrote:

What other use cases would you like us to address, which aren't derivatives
of the above issues?


I ran into a problem just yesterday: I wanted to run mochitest-browser 
locally, to debug an error that happened very early in the test run startup.


So I did:

  mach mochitest-browser --debugger=gdb

and hit my breakpoint and so forth... then quit the debugger.

Then the test harness respawned another debugger to run more tests.  And 
then another.


When run outside the debugger, I couldn't get the test harness to stop 
at all (e.g. via hitting Ctrl-C a bunch of times).


The only way I found to stop it was to kill the mach process itself. 
That left ssltunnel and various other processes still hanging out that 
had to be killed one by one so I could try running the tests again.


Having some reliable way to kill a mochitest-browser run would be really 
helpful.


-Boris
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: What are your pain points when running unittests?

2015-03-12 Thread Shu-yu Guo
To build off this idea, I'd like a run-until-failure mode (with an upper
limit, of course) on try itself. I don't want to spend N+ hours spinning my
CPU locally to repro an intermittent. I also don't want to wait until a
build is done to press the retrigger button 40 times.

My blue-sky wish would be to push a run-until-failure with RR try job, and
download the failing replays the next day to debug locally.

On Thu, Mar 12, 2015 at 4:34 PM, Seth Fowler s...@mozilla.com wrote:


  On Mar 12, 2015, at 4:17 PM, Gijs Kruitbosch gijskruitbo...@gmail.com
 wrote:
  It'd be better if we could more easily get more information about
 failures as they happened on infra (replay debugging stuff a la what roc
 has worked on, or better logs, or somehow making it possible to
 remote-debug the infra machines as/when they fail).
 
  ~ Gijs

 Yes!

 I guess (but don’t know for sure) that recording RR data for every test
 that gets run might be too expensive. But it’d be nice if we were able to
  request that RR data be recorded for *particular* tests in the test
 manifest. Probably we should make it possible to record it unconditionally,
 and to record it only if the test failed. I can see using this in two ways:

 - When I’m investigating a failure, I can push a commit that records RR
 data for the test I’m concerned about.

 - We can annotate rare intermittent failures to record RR data whenever
 they fail. This obviously means that the tests must be run under RR all the
 time, but at least we won’t have to waste storage space on RR data for the
 common, passing case. This would be a huge help for some hard-to-reproduce
 intermittent failures.

 I’m sure there are a lot of gotchas here, but in an ideal world these
 features would be a huge help.

 - Seth
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform




-- 
   shu
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Project Silk on Desktop

2015-03-12 Thread Jared Wein
Within the small circle of Mozilla contributors it may feel spammy or
repetitive, but I wouldn't be surprised if people outside of the Mozilla
project think of b2g and Firefox desktop as separate userbases with
separate impacts.

On Thu, Mar 12, 2015 at 6:59 PM, Mason Chang mch...@mozilla.com wrote:

 Yeah it is, but I don’t really want to do another PR run when lots of
 people have already read about Silk on b2g. Feels spammy to me to do
 another one just a month after the previous one, but that’s my 2 cents.

 Mason

  On Mar 12, 2015, at 3:17 PM, Robert O'Callahan rob...@ocallahan.org
 wrote:
 
  On Fri, Mar 13, 2015 at 5:28 AM, Mason Chang mch...@mozilla.com
 mailto:mch...@mozilla.com wrote:
  Hi Mike,
 
   This sounds like a massive improvement to our rendering pipeline,
  definitely worthy of some PR effort! Is that being considered?
 
  We’ve had a PR effort before when Silk landed on b2g. It hit hacker news
 and received over 10K views IIRC on mozilla hacks, so I’m not keen on doing
 another one.
 
  Isn't hackernews and 10K views on h.m.o a *good* thing?
 
  Rob
  --
  oIo otoeololo oyooouo otohoaoto oaonoyooonoeo owohooo oioso oaonogoroyo
  owoiotoho oao oboroootohoeoro oooro osoiosotoeoro owoiololo oboeo
  osouobojoeocoto otooo ojouodogomoeonoto.o oAogoaoiono,o oaonoyooonoeo
 owohooo
  osoaoyoso otooo oao oboroootohoeoro oooro osoiosotoeoro,o
 o‘oRoaocoao,o’o oioso
  oaonosowoeoroaoboloeo otooo otohoeo ocooouoroto.o oAonodo oaonoyooonoeo
 owohooo
  osoaoyoso,o o‘oYooouo ofolo!o’o owoiololo oboeo oiono odoaonogoeoro
 ooofo
  otohoeo ofoioroeo ooofo ohoeololo.

 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Using rr with test infrastructure

2015-03-12 Thread Mike Hommey
On Fri, Mar 13, 2015 at 01:50:33PM +1300, Robert O'Callahan wrote:
 On Fri, Mar 13, 2015 at 12:34 PM, Seth Fowler s...@mozilla.com wrote:
 
  I guess (but don’t know for sure) that recording RR data for every test
  that gets run might be too expensive.
 
 
 It probably wouldn't be too expensive. The runtime overhead is low; the
 main cost is trace storage, but we can delete traces that don't reproduce
 bugs.
 
 AFAIK the limiting factors for running rr in our test infrastructure are:
 1) rr only supports x86(64) Linux, so for bugs on other platforms you'd be
 out of luck. Hopefully we'll have ARM support at some point, but non-Linux
 isn't going to happen.
 2) Our Linux tests run on EC2 and Amazon doesn't enable perfcounter
 virtualization for EC2. This could change anytime (it's a configuration
 change), but probably won't.
 3) Some bugs might not reproduce when run under rr.
 4) We'd need to figure out how to make rr results available to developers. It
 would probably work to move the traces to an identically-configured VM
 running on the same CPU microarchitecture and run rr replay there.
 So I don't think we're going to have rr in our real test infrastructure
 anytime soon.

5) Not all EC2 hosts have the right CPU microarchitecture.

FWIW.

Mike
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to not fix: Building with gcc-4.6 for Fx38+

2015-03-12 Thread bowen
On Tuesday, March 10, 2015 at 2:38:43 PM UTC, Ehsan Akhgari wrote:
 Have you tested bumping the gcc min version here 
 http://mxr.mozilla.org/mozilla-central/source/build/autoconf/toolchain.m4#104
  
 to see if there are any builders that still use gcc 4.6?

I haven't, no.
I assume you mean by pushing to try.
Here's a push in case the bugs don't exist in certain builds:
https://treeherder.mozilla.org/#/jobs?repo=try&revision=044e896fc6fa

Thanks.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform