Re: Making proposal for API exposure official

2013-06-25 Thread Henri Sivonen
On Tue, Jun 25, 2013 at 6:08 AM, Brian Smith bsm...@mozilla.com wrote:
 At the same time, I doubt such a policy is necessary or helpful for the 
 modules
 that I am owner/peer of (PSM/Necko), at least at this time. In fact, though I
 haven't thought about it deeply, most of the recent evidence I've observed
 indicates that such a policy would be very harmful if applied to network and
 cryptographic protocol design and deployment, at least.

It seems to me that HTTP headers at least could use the policy. Consider:
X-Content-Security-Policy
Content-Security-Policy
X-WebKit-CSP
:-(

In retrospect, it should have been Content-Security-Policy from the
moment it shipped on by default on the release channel and the X-
variants should never have existed.

Also: https://tools.ietf.org/html/rfc6648
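The duplication RFC 6648 warns about can be made concrete. A minimal sketch (the header names are the real ones from this thread; the policy string and the plain-object "response headers" are illustrative, not a recommended configuration):

```javascript
// During the prefix era, a server wanting CSP coverage across engines
// had to emit three header names for one feature; with the unprefixed
// name from day one, a single line suffices.
const policy = "default-src 'self'";

const prefixedEra = {
  "Content-Security-Policy": policy,   // the standard name
  "X-Content-Security-Policy": policy, // old Firefox/IE experiment
  "X-WebKit-CSP": policy,              // old Chrome/Safari experiment
};

const unprefixed = {
  "Content-Security-Policy": policy,
};

console.log(Object.keys(prefixedEra).length); // 3 names for one feature
console.log(Object.keys(unprefixed).length);  // 1
```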

-- 
Henri Sivonen
hsivo...@hsivonen.fi
http://hsivonen.iki.fi/
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Making proposal for API exposure official

2013-06-25 Thread Henri Sivonen
On Fri, Jun 21, 2013 at 11:45 PM, Andrew Overholt overh...@mozilla.com wrote:
   https://wiki.mozilla.org/User:Overholt/APIExposurePolicy

 I'd appreciate your review feedback.  Thanks.

Thank you for putting this together.

In general, I'd like the "no prefixing" point to be made more
forcefully and clearly.

The quotes below are from the doc.

 Standardization

It's a bit unclear if this section is trying to establish criteria for
shipping a feature on the release channel or criteria for starting an
implementation in Gecko without shipping it right away.

 the relevant standards body declares it ready for implementation

Since the next two points talk about implementations but this one
doesn't, I'm reading this as a standards body declaration happening
without other implementations in place.

For the purpose of shipping a feature on the release channel, I think
a declaration by a standards body isn't good enough. If no one else
wants to ship, we'd still end up with a single-vendor feature even if
we somehow managed to get a standards body to bless it. (And the W3C
blesses all sorts of things, as evidenced by enterprise XML stuff and
capital-S Semantic Web stuff.)

For the purpose of starting an implementation in Gecko, the point in
time where the W3C declares a spec ready for implementation is
woefully late from the point of view of when you should have started
implementing. I think for the purpose of starting implementation we
should have a less precise criterion about when a spec at a standards
body seems stable enough to start implementation.

 at least two other browser vendors ship a compatible implementation of this 
 API

This formulation has two problems: First, it talks about browser
vendors instead of talking about browser engines. Google and Opera
shipping the same code from the Blink repo should not count as two
independent implementations. Second, it talks about shipping. If
everyone adopted these rules, we'd be deadlocked on shipping. Instead
of shipping, I think we should talk about a compatible implementation
in the nightly builds (or higher) of at least two other browser
engines when there is a reasonable assumption that the feature is
genuinely on track to graduating from nightly builds to release
eventually.

 at least one other browser vendor ships -- or publicly states their intention 
 to ship -- a compatible implementation of this API and there is a 
 specification that is no longer at risk of significant changes, on track to 
 become a standard with an relevant standards body, and acceptable to a number 
 of applicable parties

I think this point should be similarly tightened to talk about browser
engines instead of vendors and relaxed to allow other engines' nightly
builds to be considered when considering whether we can move a feature
to release.

 Exceptions

Is the intention that APIs shipped under the exceptions to the rules
still be unprefixed? (I think it's better to squat on the namespace
and not prefix than to risk a prefixed feature getting popular enough
that we need to consider unprefixing later.)

 Shipping

It seems to me that it should be clarified that shipping means
turning the feature on by default on the release channel, and that
it's okay to have code in m-c and even enabled in nightlies before that.

 Cleanup

window.navigator seems like a poor example of something to be cleaned
up. Surely it is now such a part of the platform that it mustn't be
changed.

-- 
Henri Sivonen
hsivo...@hsivonen.fi
http://hsivonen.iki.fi/


Re: Making proposal for API exposure official

2013-06-25 Thread Brian Smith
Robert O'Callahan wrote:
 On Tue, Jun 25, 2013 at 3:08 PM, Brian Smith bsm...@mozilla.com wrote:
 
  At the same time, I doubt such a policy is necessary or helpful for the
  modules that I am owner/peer of (PSM/Necko), at least at this time. In
  fact, though I haven't thought about it deeply, most of the recent evidence
  I've observed indicates that such a policy would be very harmful if applied
  to network and cryptographic protocol design and deployment, at least.
 
 
 I think you should elaborate, because I think we should have consistent
 policy across products and modules.

I don't think that you or I should try to block this proposal on the grounds 
that it must be reworked to be sensible to apply to all modules, especially 
when the document already says that that is a non-goal and already explicitly 
calls out some modules to which it does not apply: "Note that at this time, we 
are specifically focusing on new JS APIs and not on CSS, WebGL, WebRTC, or 
other existing features/properties."

Somebody clarified privately that many DOM/JS APIs don't live in the DOM 
module. So, let me rework my request a little bit. In the document, instead of 
creating a blacklist of web technologies to which the new policy would not 
apply (CSS, WebGL, WebRTC, etc.), please list the modules to which the policy 
would apply.

It seems (from the subject line on this thread, the title of the proposal, and 
the text of the proposal) that the things I work on are probably intended to be 
out of scope of the proposal. That's the thing I want clarification on. If it 
is intended that the stuff I work on (networking protocols, security protocols, 
and network security protocols) be covered by the policy, then I will 
reluctantly debate that after the end of the quarter. (I have many things to 
finish this week for Q2 goals.)

Cheers,
Brian


Re: Making proposal for API exposure official

2013-06-25 Thread Brian Smith
Henri Sivonen wrote:
 On Tue, Jun 25, 2013 at 6:08 AM, Brian Smith bsm...@mozilla.com wrote:
  At the same time, I doubt such a policy is necessary or helpful for the
  modules that I am owner/peer of (PSM/Necko), at least at this time.
  In fact, though I haven't thought about it deeply, most of the recent
  evidence I've observed indicates that such a policy would be very
  harmful if applied to network and cryptographic protocol design and
  deployment, at least.
 
 It seems to me that HTTP headers at least could use the policy. Consider:
 X-Content-Security-Policy
 Content-Security-Policy
 X-WebKit-CSP
 :-(
 
 In retrospect, it should have been Content-Security-Policy from the
 moment it shipped on by default on the release channel and the X-
 variants should never have existed.
 
 Also: https://tools.ietf.org/html/rfc6648

I understand how X-Content-Security-Policy, et al., seems concerning to people, 
especially people who have had to deal with the horrors of web API prefixing 
and CSS prefixing. If people are concerned about HTTP header prefixes then we 
can handle policy for that separately from Andrew's proposal, in a much more 
lightweight fashion. For example, we could just put "Let's all follow the advice 
of RFC 6648 whenever practical." on https://wiki.mozilla.org/Networking. Problem 
solved.

I am less concerned about the policy of prefixing or not prefixing HTTP headers 
and similar small things, than I am about the potential for this proposal to 
restrict the creation and development of new networking protocols like SPDY and 
the things that will come after SPDY.

Cheers,
Brian


Re: Making proposal for API exposure official

2013-06-25 Thread Patrick McManus
I don't really think there is a controversy here network-wise. Mostly,
applicability is a case of "I know it when I see it", and the emphasis here
on things that are exposed at the webdev level is the right thing.
Sometimes that's markup, sometimes that's header names, which can touch on
core protocol topics (but usually don't)... and just because
Content-Security-Policy probably is one of those things, it doesn't mean
every other header is too. That's a fine clarification that I think is
already in sync with the proposal.

Brian and I deal with a lot of things that can be negotiated on a protocol
level and therefore don't have to stand the test of time because they
aren't baked into markup or semantics that need to live forever... when
done well they have built in fallbacks that allow iteration and real
experience before baking them into standards. That's awesome for the
Internet and I don't sense anybody on this thread trying to disrupt that.

So while we should be experimenting with 50 different implementations to
make your network traffic faster and more secure, we shouldn't be as freely
messing with the semantics of that. And I think we're decent at that... an
interesting case in point is SPDY. I have a stack of requests from folks
doing sophisticated Real User Monitoring/RUM (e.g. Akamai) who want to be
able to track whether or not a page was loaded with some version of SPDY
and therefore need some kind of content-JS-accessible indicator. It's
totally reasonable. But, while I have totally rearranged everything about
the network transfer with SPDY and will probably be doing it again shortly,
I've been hesitant to add a small bit of markup to the DOM that might
fragment markup and JavaScript without some effort at standardization.
(Chrome has a mechanism, if anybody is interested in taking that topic up,
fwiw.)
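To illustrate the situation RUM scripts are in: with no standard indicator, detection degenerates into per-engine probing. A hedged sketch (the Chrome accessor is the nonstandard one alluded to above; the helper name is hypothetical, and the mocks exist only to make the sketch self-contained):

```javascript
// Best-effort probe for "was this page fetched via SPDY?". Returns
// true/false when an engine-specific indicator exists, null otherwise.
function wasLoadedViaSpdy(win) {
  // Chrome's nonstandard indicator, if present.
  if (win.chrome && typeof win.chrome.loadTimes === "function") {
    return !!win.chrome.loadTimes().wasFetchedViaSpdy;
  }
  return null; // unknown: no standard, cross-engine indicator exists
}

// Exercised against mock "window" objects so this runs anywhere.
console.log(wasLoadedViaSpdy({
  chrome: { loadTimes: () => ({ wasFetchedViaSpdy: true }) },
})); // true
console.log(wasLoadedViaSpdy({})); // null in engines without an indicator
```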



On Tue, Jun 25, 2013 at 10:11 AM, Brian Smith bsm...@mozilla.com wrote:

 Robert O'Callahan wrote:
  On Tue, Jun 25, 2013 at 3:08 PM, Brian Smith bsm...@mozilla.com wrote:
 
   At the same time, I doubt such a policy is necessary or helpful for the
   modules that I am owner/peer of (PSM/Necko), at least at this time. In
   fact, though I haven't thought about it deeply, most of the recent
 evidence
   I've observed indicates that such a policy would be very harmful if
 applied
   to network and cryptographic protocol design and deployment, at least.
  
 
  I think you should elaborate, because I think we should have consistent
  policy across products and modules.

 I don't think that you or I should try to block this proposal on the
 grounds that it must be reworked to be sensible to apply to all modules,
 especially when the document already says that that is a non-goal and
 already explicitly calls out some modules to which it does not apply: Note
 that at this time, we are specifically focusing on new JS APIs and not on
 CSS, WebGL, WebRTC, or other existing features/properties.

 Somebody clarified privately that many DOM/JS APIs don't live in the DOM
 module. So, let me rework my request a little bit. In the document, instead
 of creating a blacklist of web technologies to which the new policy would
 not apply (CSS, WebGL, WebRTC, etc.), please list the modules to which the
 policy would apply.

 It seems (from the subject line on this thread, the title of the proposal,
 and the text of the proposal) that the things I work on are probably
 intended to be out of scope of the proposal. That's the thing I want
 clarification on. If it is intended that the stuff I work on (networking
 protocols, security protocols, and network security protocols) be covered
 by the policy, then I will reluctantly debate that after the end of the
 quarter. (I have many things to finish this week for Q2 goals.)

 Cheers,
 Brian



Re: Making proposal for API exposure official

2013-06-25 Thread Anne van Kesteren
On Tue, Jun 25, 2013 at 11:11 PM, Brian Smith bsm...@mozilla.com wrote:
 If it is intended that the stuff I work on (networking protocols, security 
 protocols, and network security protocols) be covered by the policy, then I 
 will reluctantly debate that after the end of the quarter. (I have many 
 things to finish this week to Q2 goals.)

I think eventually this should cover all web-exposed features,
including networking. I'm not sure how that would hinder stuff such as
SPDY; multiple vendors have an interest in doing that, so it's fine
per the policy.


--
http://annevankesteren.nl/


Intent to implement: nsIDOMMozIccManager.getCardLockRetryCount

2013-06-25 Thread Thomas Zimmermann

Hi,

I intend to implement an extension to nsIDOMMozIccManager.

When unlocking a SIM card, there is a maximum number of remaining tries. 
The new interface 'getCardLockRetryCount' will allow for reading the 
number of remaining tries for a specific lock. During the unlock 
process, an app can display this information to the user, probably in 
conjunction with a descriptive message (e.g, 'You have 3 tries left.').


interface nsIDOMMozIccManager {
  ...
  nsIDOMDOMRequest getCardLockRetryCount(in DOMString lockType);
  ...
};

The parameter 'lockType' is a string that names the lock. The result is 
a DOM request. On success, it returns the number of retries in its 
result property; on error, it returns an error name. An error happens if 
the given lock type is not known or not supported by the implementation.
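A hedged usage sketch of the DOMRequest flow described above. The real object only exists in Gecko, so it is exercised here against a minimal mock of the manager; the lock-type name ("pin"), the `retryCount` result shape, and the error name are taken from the description and bug comments, not from a final interface:

```javascript
// Minimal mock standing in for nsIDOMMozIccManager, delivering results
// asynchronously the way a DOMRequest would.
function mockIccManager(retries) {
  return {
    getCardLockRetryCount(lockType) {
      const req = {};
      setTimeout(() => {
        if (lockType in retries) {
          req.result = { retryCount: retries[lockType] };
          if (req.onsuccess) req.onsuccess();
        } else {
          req.error = { name: "GenericFailure" }; // unknown/unsupported lock
          if (req.onerror) req.onerror();
        }
      }, 0);
      return req;
    },
  };
}

// App-side usage: display remaining tries during the unlock flow.
const icc = mockIccManager({ pin: 3 });
const request = icc.getCardLockRetryCount("pin");
request.onsuccess = () => {
  console.log("You have " + request.result.retryCount + " tries left.");
};
request.onerror = () => {
  console.log("Error: " + request.error.name);
};
```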


There is more detailed documentation in bug 875710, comments 17 and 18 [1].

Any feedback and suggestions are welcome.

Best regards
Thomas

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=875710#c17


Re: Making proposal for API exposure official

2013-06-25 Thread Mounir Lamouri
On 21/06/13 21:45, Andrew Overholt wrote:
 I'd appreciate your review feedback.  Thanks.

Thank you for putting this together.

I am going to quote some parts of the document to give some context to
my comments.

  Note that at this time, we are specifically focusing on new JS APIs
 and not on CSS, WebGL, WebRTC, or other existing features/properties.

I think "JS APIs" here is unclear; saying "Web APIs" would be more
appropriate, assuming this is what you meant.
Also, I do not understand why we are excluding CSS, WebGL, and WebRTC.
We should definitely not apply this policy retroactively, so existing
features should not be affected, but if someone wants to add a new CSS
property, it is not clear why it shouldn't go through this process.

 1. Is the API standardized or on its way to being so?
 2. Declaration of intent
 3. API review
 4. Implementation
 5. Shipping

I think 2) should be "Declaration of intent to implement", and we should
add an item between 4) and 5) named "Declaration of intent to ship".
I believe those two distinct items are important.

 2. at least two other browser vendors ship a compatible
 implementation of this API

I will join Henri: "browser vendors" should be "browser engines", and
"ship" is too restrictive. If a feature is implemented and available in
some version (even behind a flag) with a clear intent to ship it at some
point, this should be enough for us to follow.

 3.  at least one other browser vendor ships -- or publicly states
 their intention to ship -- a compatible implementation of this API
 and there is a specification that is no longer at risk of significant
 changes, on track to become a standard with an relevant standards
 body, and acceptable to a number of applicable parties

Actually, with this point, point 2) sounds barely useful. The only
situation in which we could have 2) applying but not 3) would be if two
engines implemented a feature that has no specification.

 2. ecosystem- and hardware-specific APIs that are not standard or of
 interest to the broader web at that time (or ever) may be shipped in
 a  way to limit their harm of the broader web (ex. only on a device
 or only in specific builds with clear disclaimers about applicability
 of exposed APIs). An example of this is the FM Radio API for Firefox
 OS.

When I read this, I read "It is okay to have Mozilla ship a phone with
proprietary APIs." That means we are okay with Mozilla creating the
situation Apple created on mobile, a situation that Mozilla has been
criticising a lot. Shipping proprietary APIs on a specific device harms
the broader Web if that device happens to be one of the most used
devices out there...

 3. APIs solving use cases which no browser vendor shipping an engine
 other than Gecko is interested in at that time. In cases such as this,
 Mozilla will solicit feedback from as many relevant parties as
 possible, begin the standardization process with a relevant standards
 body, and create a test suite as part of the standards process. An
 example of this is the Push Notifications API.

I am not a big fan of that exception. Given how fast paced the Web is
nowadays, we could easily put a lot of APIs in that category. Actually,
if we ask other vendors what they think about most Firefox OS APIs, we
will very likely get no answer. Does that mean that those APIs are good
to go?

 Declaring Intent
 API review
 Implementation
 Shipping

I think some clarifications are needed in those areas. First of all, we
should define two kinds of "Declaration of intent", one for
implementation and one for shipping. "Implementation" could be a Public
Service Announcement aimed at finding out whether anyone strongly
objects. In other words, if I want to implement feature foo, I would
have to start a thread, explain what I want to do and in which context
(platforms), and ask if anyone objects.
Then should come the "intent to ship" email, which would be more formal.

In my opinion, the reviewing part should be simplified. Intent emails
should be reviewed by dev-platform, not an API review team. In other
words, anyone should be able to express their opinion, and opinions will
be weighed based on their technical value, not based on affiliation.
The code review should follow the usual rules, and the IDL should have a
super-review as the current Mozilla rules require. Adding a commit
hook to dom/webidl/ to make sure that commits have an sr+ would be a
great way to enforce that rule, even though it wouldn't prevent all
problems.

The issue with having dev-platform find a consensus on intent
emails is that we might end up in an infinite debate. In that case, we
should use the module system and have the module owner(s) of the
associated area of code make the decision. If the module owner(s) can't
make this decision, we could go upward and ask Brendan to make it.

It would be great to have a list of required information for intent
emails. A template could be quite helpful.

 1. should we have a joint mailing list with Blink for intent to
 implement/ship?

Re: Making proposal for API exposure official

2013-06-25 Thread Robert O'Callahan
On Wed, Jun 26, 2013 at 4:15 AM, Mounir Lamouri mou...@lamouri.fr wrote:

  3. APIs solving use cases which no browser vendor shipping an engine
  other than Gecko is interested in at that time. In cases such as this,
  Mozilla will solicit feedback from as many relevant parties as
  possible, begin the standardization process with a relevant standards
  body, and create a test suite as part of the standards process. An
  example of this is the Push Notifications API.

 I am not a big fan of that exception. Given how fast paced the Web is
 nowadays, we could easily put a lot of APIs in that category.


I don't see this. Can you give some examples?


 Actually,
 if we ask other vendors what they think about most Firefox OS APIs, we
 will very likely get no answer. Does that mean that those APIs are good
 to go?


If "no answer" means "we don't care about your use cases", then we can't
let that block our progress, because there are always going to be use
cases we need to solve that no other vendor is currently interested in.

Rob
-- 
Jtehsauts  tshaei dS,o n Wohfy  Mdaon  yhoaus  eanuttehrotraiitny  eovni
le atrhtohu gthot sf oirng iyvoeu rs ihnesa.rt sS?o  Whhei csha iids  teoa
stiheer :p atroa lsyazye,d  'mYaonu,r  sGients  uapr,e  tfaokreg iyvoeunr,
'm aotr  atnod  sgaoy ,h o'mGee.t  uTph eann dt hwea lmka'n?  gBoutt  uIp
waanndt  wyeonut  thoo mken.o w  *
*


Re: Code coverage take 2, and other code hygiene tools

2013-06-25 Thread Ehsan Akhgari

On 2013-06-25 1:42 PM, Clint Talbert wrote:

On 6/24/2013 8:02 PM, Justin Lebar wrote:


Under what circumstances would you expect the code coverage build to
break but all our other builds to remain green?


Sorry, I should have been more clear. For builds, I think it would be
pretty unusual for them to break on code coverage and yet remain green
on non-coverage builds.  But I've seen tests do wild things with code
coverage enabled because timing changes so much.  The most worrisome
issues are when tests cause crashes on coverage enabled builds, and
that's what I'm looking for help in tracking down and fixing. Oranges on
coverage enabled builds I can live with (because they don't change
coverage numbers in large ways and can even point us at timing dependent
tests, which could be a good thing in the long run), but crashes
effectively prevent us from measuring coverage for that test
suite/chunk. Test crashes were one of the big issues with the old system
-- we could never get eyes on the crashes to debug what had happened and
get it fixed.


If such crashes are due to changes in the timings of things happening 
inside the tests, then they're probably crashes that our users might 
also experience, and I would expect them to be treated with the same 
attention as other such crashes.


Cheers,
Ehsan



Changes to file purging during builds

2013-06-25 Thread Gregory Szorc

tl;dr: You may experience a change in build performance

In bug 884587, we just changed how file purging occurs during builds.

Due to historical reasons (notably bad build dependencies and to prevent 
old files from leaking into the distribution package), we wipe out large 
parts of objdir/dist at the beginning of a top-level build [1].


Until recently, we simply did a |rm -r| because there was no way for the 
build system to know which files should or shouldn't be retained.
The moz.build transition has given us a "whole world" view that now 
allows us to identify which files should or should not belong. Instead 
of performing |rm -r|, we now read in the manifest files, scan the 
directories being purged, and selectively delete the files that are 
unaccounted for.


An unfortunate side-effect of this change is that we need to walk parts 
of objdir at the top of the build, which may be slow, especially if this 
directory isn't in the filesystem/stat cache. This may result in a 
multi-second jank at the beginning of the build. Hopefully, this jank 
is offset by the build performing less work (deleting fewer files at the 
top of the build means fewer files need to be restored during the build).


If you notice unacceptable jank during purging and you aren't building 
on an SSD, please buy an SSD (if you can). I'll say it again: *please 
build Firefox on an SSD if you can*. If you have an SSD, I will be very 
interested in investigating further. File a new bug that depends on bug 
884587 and we'll look into things.


[1] 
https://hg.mozilla.org/integration/mozilla-inbound/rev/8d90527c22c6#l1.12



Re: Code coverage take 2, and other code hygiene tools

2013-06-25 Thread L. David Baron
On Monday 2013-06-24 18:50 -0700, Clint Talbert wrote:
 So, the key things I want to know:
 * Will you support code coverage? Would it be useful to your work to
 have a regularly scheduled code coverage build  test run?
 * Would you want to additionally consider using something like
 JS-Lint for our codebase?

For what it's worth, I found the old code coverage data useful.  It
was useful to me to browse through it for code that I was
responsible for, to see:
 * what code was being executed during our test runs and how that
   matched with what I thought was being tested (it didn't always
   match, it turns out)
 * what areas might be in need of better tests
When I was looking at it, I was mostly focusing on the mochitests in
layout/style/test/.

(I worry I might have been one of a very small number of people
doing this, though.)

I think using code coverage tools separately on standards-compliance
test suites might also be interesting, e.g., to see what sort of
coverage the test suite for a particular specification gives us, and
whether there are tests we could contribute to improve it.

-David

-- 
L. David Baron http://dbaron.org/
Mozilla   http://www.mozilla.org/


Re: Changes to file purging during builds

2013-06-25 Thread Ehsan Akhgari

On 2013-06-25 2:52 PM, Gregory Szorc wrote:

tl;dr: You may experience a change in build performance

In bug 884587, we just changed how file purging occurs during builds.


FWIW this has been backed out for now.

Cheers,
Ehsan



Re: Code coverage take 2, and other code hygiene tools

2013-06-25 Thread Jed Davis
On Mon, Jun 24, 2013 at 08:02:26PM -0700, Justin Lebar wrote:
 Under what circumstances would you expect the code coverage build to break
 but all our other builds to remain green?

Anywhere you're using -Werror.  I ran into this in a past life with
GCC's may-use-uninitialized warning; if it's still an issue, I'd suggest
doing coverage builds with -Wno-error.
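Concretely, a coverage build along these lines might use a mozconfig fragment like the one below. This is a sketch with the usual GCC gcov flags, not the exact configuration of any tinderbox:

```sh
# Illustrative mozconfig fragment for a gcov-instrumented build.
# --coverage implies -fprofile-arcs -ftest-coverage; -Wno-error keeps
# coverage-only warnings (e.g. may-use-uninitialized) from breaking
# an otherwise-green build.
export CFLAGS="--coverage -Wno-error"
export CXXFLAGS="--coverage -Wno-error"
export LDFLAGS="--coverage"
ac_add_options --disable-optimize
```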

If you have any coverage-specific code, then also changes to anything
it depends on -- but this might not be an issue for hosted programs,
which don't have to bring their own runtime.

And there are some assorted weirdnesses that might indirectly break
something: multiprocessor performance falls off a cliff because the
counters are global (unless things have changed significantly since I
last dealt with this), and there are internally generated references to
coverage runtime functions that ignore the symbol visibility pragma,
and possibly other things.  But Gecko might not be doing enough
interesting low-level things to care.

--Jed



Re: Code coverage take 2, and other code hygiene tools

2013-06-25 Thread Kyle Huey
On Tue, Jun 25, 2013 at 1:40 PM, L. David Baron dba...@dbaron.org wrote:

 On Monday 2013-06-24 18:50 -0700, Clint Talbert wrote:
  So, the key things I want to know:
  * Will you support code coverage? Would it be useful to your work to
  have a regularly scheduled code coverage build  test run?
  * Would you want to additionally consider using something like
  JS-Lint for our codebase?

 For what it's worth, I found the old code coverage data useful.  It
 was useful to me to browse through it for code that I was
 responsible for, to see:
  * what code was being executed during our test runs and how that
matched with what I thought was being tested (it didn't always
match, it turns out)
  * what areas might be in need of better tests
 When I was looking at it, I was mostly focusing on the mochitests in
 layout/style/test/.

 (I worry I might have been one of a very small number of people
 doing this, though.)

 I think using code coverage tools separately on standards-compliance
 test suites might also be interesting, e.g., to see what sort of
 coverage the test suite for a particular specification gives us, and
 whether there are tests we could contribute to improve it.

 -David

 --
 L. David Baron http://dbaron.org/
 Mozilla   http://www.mozilla.org/


I've also found decoder's code coverage data to be useful in checking what
is exercised by our test suite, what might be dead code (this works better
for finding counterexamples, of course), etc.  I suspect the audience would
be small without an effort to integrate it into our engineering processes,
though.  I'd like to see reviewers insist on reasonable coverage of all
newly added code by the test suite.

- Kyle


Fwd: Bugzilla Keyword Standardization Proposal

2013-06-25 Thread Marc Schifer

Posted this to dev.planning as well. 


We in QA have been discussing ways to help improve our workflows and provide 
some standardization across the teams in how we use Bugzilla. As part of this 
process we have come up with a proposal to make a small change to the 
definition of the qawanted keyword and to add a new keyword, qaurgent.

Please read the proposal linked below and give us your feedback.

https://wiki.mozilla.org/QA/qawanted_proposal

Marc S.



Re: Making proposal for API exposure official

2013-06-25 Thread Jonas Sicking
"during the first 12 months of development of new user-facing
products, APIs that have not yet been embraced by other vendors or
thoroughly discussed by standards bodies may be shipped only as a part
of this product, thus clearly indicating their lack of standardization
and limiting any market share they may attain"

I think this needs some refinements.

In B2G, we haven't even been able to approach standardizing a few of
the APIs because we've been too swamped with actually getting stuff
shipped. This despite having worked on shipping for something like two
years. I think we need to base any timelines here on shipping, not
starting of development.

Additionally, there are lots of pieces that are new and not part of
the normal web. We are definitely going to standardize them all, but
we have to do it in an order that makes sense.

Additionally, it's unclear whether the above is saying that unless we
can get something to pass the rules in the Standardization section
within 12 months, we should remove it from later versions of the
product. I don't think such a requirement would ever be implemented.

I don't have a lot of great answers here. The best thing I can think
of is something like:

For new products, APIs that have not yet been embraced by other
vendors or thoroughly discussed by standards bodies may be shipped
only as a part of this product. Standardization must however start
within X months after shipping initial version of the product.

In B2G we additionally have the extra fun of not being able to even
start standardizing a lot of the APIs because they have dependencies
on the runtime and security model, which are still being worked on. But
I think it's clear that there can be dependencies that need to be
resolved in the right order.

I also think we should explicitly define a policy for what to do with
APIs before we are able to standardize them, i.e. should we ship
them prefixed, under a window.mozilla namespace, or using a normal
name? I tend to disagree with Henri here and think that we should stay
out of the way of whatever name we think the final standard should
have, at least in cases where we expect to need to change the API a
lot as we improve it (as is the case with most B2G APIs).
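To make the cost of these naming choices concrete, here is a rough
sketch (all API names are hypothetical, and a plain dict stands in for
the browser's window object) of the probing dance that web content ends
up doing once the same feature has shipped under prefixed, namespaced,
and unprefixed names:

```python
# Illustrative only: every API name below is hypothetical, and a dict
# stands in for the browser's `window` object.
VENDOR_PREFIXES = ("moz", "webkit", "ms", "o")

def resolve_api(window, name):
    """Probe for an API that may ship unprefixed, vendor-prefixed,
    or under a vendor namespace such as window.mozilla."""
    # 1. The eventual unprefixed, standard name.
    if name in window:
        return window[name]
    # 2. Vendor-prefixed variants ("fmRadio" -> "mozFmRadio", ...).
    for prefix in VENDOR_PREFIXES:
        prefixed = prefix + name[0].upper() + name[1:]
        if prefixed in window:
            return window[prefixed]
    # 3. A vendor namespace object (e.g. a window.mozilla property).
    return window.get("mozilla", {}).get(name)
```

Each of the naming options in the thread maps to one of the three
probes above; once content starts shipping with these fallbacks, it
tends to carry all of them indefinitely.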


I don't see the need for the second exception.

 ecosystem- and hardware-specific APIs that are not standard or of
 interest to the broader web at that time (or ever) may be shipped in a
 way to limit their harm to the broader web (ex. only on a device or
 only in specific builds with clear disclaimers about applicability of
 exposed APIs). An example of this is the FM Radio API for Firefox OS.

I can't think of any APIs that fall into this category. FM Radio
should absolutely be standardized. Our current implementation
definitely falls under the "new product" exception, but eventually we
should fix it up and get it standardized.

/ Jonas

On Fri, Jun 21, 2013 at 10:45 PM, Andrew Overholt overh...@mozilla.com wrote:
 Back in November, Henri Sivonen started a thread here entitled Proposal:
 Not shipping prefixed APIs on the release channel [1].  The policy of not
 shipping moz-prefixed APIs in releases was accepted AFAICT.

 I've incorporated that policy into a broader one regarding web API exposure.
 I'd like to see us document this so everyone can easily find our stance in
 this area, similar to how they can with Blink [2].

 I've put a draft here:

   https://wiki.mozilla.org/User:Overholt/APIExposurePolicy

 I'd appreciate your review feedback.  Thanks.

 [1]
 https://groups.google.com/d/msg/mozilla.dev.platform/34JfwyEh5e4/emA1LAoVEVQJ

 [2]
 http://www.chromium.org/blink#new-features


Re: Changes to file purging during builds

2013-06-25 Thread Mike Hommey
On Tue, Jun 25, 2013 at 11:52:03AM -0700, Gregory Szorc wrote:
 tl;dr You may experience a change in build behavior and performance
 
 In bug 884587, we just changed how file purging occurs during builds.
 
 Due to historical reasons (notably bad build dependencies and to
 prevent old files from leaking into the distribution package), we
 wipe out large parts of objdir/dist at the beginning of a top-level
 build [1].
 
 Until recently, we simply did a |rm -r| because there was no way for
 the build system to know which files should or shouldn't be
 retained.
 The moz.build transition has given us a whole world view that now
 allows us to identify which files should or should not belong.
 Instead of performing |rm -r|, we now read in manifest files, scan
 directories being purged, and selectively delete files that are
 unaccounted for.
 
 An unfortunate side-effect of this change is that we need to walk
 parts of objdir at the top of the build, which may be slow,
 especially if this directory isn't in the filesystem/stat cache.
 This may result in a multi-second jank at the beginning of the
 build. Hopefully, this jank is offset by the build performing less
 work (deleting fewer files at the top of the build means fewer files
 need to be restored during the build).
 
 If you notice unacceptable jank during purging and you aren't
 building on an SSD, please buy an SSD (if you can). I'll say it
 again: *please build Firefox on an SSD if you can*. If you have an
 SSD, I will be very interested in investigating further. File a new
 bug that depends on bug 884587 and we'll look into things.

Alternatively, if you have plenty of RAM, build in a RAM disk.

Mike


Test failures seemingly caused by plugin click-to-play code

2013-06-25 Thread Ehsan Akhgari
Hello fellow hackers,

There are plugin-related test failures on inbound and aurora. I've filed
bug 887094 about this and have consequently closed central, inbound, birch
and aurora.

I don't know where to start to help with this myself.  Hopefully someone in
the know will jump on board soon.

Cheers,
--
Ehsan
http://ehsanakhgari.org/