Retina display support
Hi, I have two different applications, both of which use Gecko SDK version 2.0 for their embedded browser. On a Retina machine, one of the applications shows crisp, Retina-rendered text while the embedded browser in the other application shows blurred text. I am not sure what is missing that leads to these two different behaviors. Any idea on this would really help. ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Retina display support
On 30.05.14 08:38, bhargava.animes...@gmail.com wrote: [...] one of the applications shows clear Retina-supported text while the other shows blurred text on a Retina machine. [...] You need to make sure that your app's Info.plist (located directly inside the .app bundle) has an NSPrincipalClass entry. The value doesn't matter that much; NSApplication or GeckoNSApplication are both suitable choices. -Markus
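For reference, a minimal Info.plist carrying that entry might look like this (a sketch only; a real plist will contain many other keys, which are omitted here):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Per Markus's note, the value matters less than the key being present;
         GeckoNSApplication is an equally suitable choice. -->
    <key>NSPrincipalClass</key>
    <string>NSApplication</string>
</dict>
</plist>
```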
Re: OMTC on Windows
On Friday, May 30, 2014 8:22:25 AM UTC+3, Matt Woodrow wrote: Thanks Avi! I can reproduce a regression like this (~100% slower on iconFade-close-DPIcurrent.all) with my machine forced to use the Intel GPU, but not with the Nvidia one. Indeed, and it's not the first time we've noticed that Firefox performs much worse with Intel iGPUs than with nvidia. This comment: https://bugzilla.mozilla.org/show_bug.cgi?id=894128#c30 compares scrolling performance on a Wikipedia page, on a different system than the one I used to produce these OMTC numbers with. It suggests that we were already doing badly enough with Intel iGPUs even before OMTC (about 300% worse and much noisier intervals than nvidia on a Wikipedia page, if we're to believe those numbers), and it looks as if with OMTC the regression compared to nvidia increased even more (in relative terms). In comment 1 of the same bug 894128, I also compared the performance of Chrome and IE on the same pages. FWIW, IE is able to maintain 100% smooth scrolling on some really complex pages even on a _very_ low end Atom system (Intel iGPU), while Firefox doesn't come anywhere near it. While scroll and tab animations are possibly different things, I do think there's a line which connects these dots, and it's that for whatever reason, Firefox does really badly on Intel iGPUs. Which is unfortunate, because on many, many systems these are the only available GPUs, and they're already considered good enough that the majority of users don't need a dedicated GPU. - avih
Re: OMTC on Windows
On Friday, May 30, 2014 1:25:33 PM UTC+3, avi...@gmail.com wrote: FWIW, IE is able to maintain 100% smooth scrolling on some really complex pages even on a _very_ low end Atom system (Intel iGPU), while Firefox doesn't come anywhere near it. Of course, I'm hoping that APZ and maybe tiling will be able to improve greatly on the scrolling case. Yet, this has been the case for us for a very long time now. - avih
Re: B2G, email, and SSL/TLS certificate exceptions for invalid certificates
On (2014年05月29日 23:01), Mike Hoye wrote: On 2014-05-28, 9:07 PM, Joshua Cranmer wrote: Two more possible rationales: 1. The administrator is unwilling to pay for an SSL certificate and unaware of low-cost or free SSL certificate providers. 2. The administrator has philosophical beliefs about CAs, or the CA trust model in general, and is unwilling to participate in it. Neglecting the fact that encouraging click-through behavior of users can only weaken the trust model. 3. The administrator doesn't actually believe SSL certs protect you from any real harm, and is generating a cert using the least effort possible to make a user-facing dialog box go away. It's become clear in the last few months that the overwhelmingly most frequent users of MITM attacks are state actors with privileged network positions either obtaining or coercing keys from CAs, using attacks that the CA model effectively endorses, using tech you can buy off the shelf. In that light, it's not super-obvious what SSL certs protect you from apart from some jackass sniffing the coffeeshop wifi. - mhoye I am using a self-signed certificate, with full knowledge of the pros and cons, on a server of my own. (BTW, I have found it odd to call a self-signed certificate an INVALID certificate. It is a certificate OUTSIDE the widely known CA hierarchies, but not INVALID IMHO. Invalid connotes something like reversed from and to dates, an already expired validity period, an empty revocation URL, etc., but I digress.) I would like to add another possibility: 4. A clueless user/administrator: the manual of a useful tool suggests that a self-signed certificate be used (or does not mention explicitly, in a clear manner, to use a non-self-signed certificate). Case in point: ownCloud 5. ownCloud is a very useful open-source do-it-yourself Dropbox replacement (well, almost). You can configure your own server to act like a Dropbox look-alike. 
I have been using it for a few months and it is very useful (basically there is no space limit, and it is free as in beer, aside from the cost of the network connection, PC maintenance, and electric power). Now, the ownCloud server runs as a PHP script invoked by a web server: in my case, Apache. For https: access it needs an SSL certificate, and in my case I chose the default installation, which leads to the use of the default certificate (a self-signed certificate). I mentioned possibility 4 above because the ownCloud version 5 manual never mentioned the demerits of a self-signed certificate. It makes sense: if a user stores his/her data on a server operated by the operator of an ownCloud service, then trusting that (self-signed) certificate from the operator is a no-brainer. Surely the first time you access the server you are warned by the browser (and the ownCloud sync client), but after checking the fingerprint, etc., you can accept it and everything should be OK. [Now, whether the self-signed certificate is signed with a key that has leaked to government snoops is another story, but again it is up to the user to decide how much trust to place in the operator of the ownCloud service. He/she may add a layer of local encryption before a file is stored on the remote server.] Again, if SSL is used only for end-to-end encryption among (ALREADY) mutually trusting parties that do not share a common key in advance, a self-signed cert for SSL is OK. When I checked how to use an SSL certificate with Apache under Debian, the Debian GNU/Linux documents were rather scant on the philosophical issues of self-signed certs. The following is what I checked early during setup of the ownCloud server. --- quote from my local copy of /usr/share/doc/ssl-cert/README This is a quicky package to enable unattended installs of software that need to create ssl certificates. Basically, it's just a wrapper for openssl req that feeds it the correct user variables to create self-signed certificates. 
--- end quote The Debian Wiki page https://wiki.debian.org/Self-Signed_Certificate does not mention any demerits at this level either. It explains the proper openssl command line to produce a self-signed cert. (Maybe pages at a higher level discuss the pros and cons of self-signed certs vs. certs signed by widely known CAs, but a Google search finds the above on the first page.) So I suppose someone who was interested in ownCloud, uses Debian GNU/Linux, had never bothered to set up Apache for services that require https:, and is unfamiliar with cryptography may opt to use the defaults suggested by these packages and documentation, and end up with a self-signed cert by following the commands and instructions. To clear the name of the ownCloud developer community, I hasten to add that the ownCloud version 6 (six) manual (released in the last month or so) clearly mentions the demerits of a self-signed cert and a way to obtain a free SSL certificate. --- begin quote Apache Configuration Enabling SSL An Apache installed under Ubuntu comes already set-up with a simple self-signed certificate. All you
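For reference, the kind of openssl command line such documentation recommends boils down to roughly this shape (the subject CN here is a placeholder, and the exact flags on the wiki page may differ):

```shell
# Generate a private key and a self-signed certificate valid for one year.
# -nodes leaves the key unencrypted so Apache can read it without a passphrase.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=example.invalid" \
  -keyout server.key -out server.crt
```

Apache is then pointed at the resulting pair via SSLCertificateFile/SSLCertificateKeyFile; browsers will warn on first contact precisely because no known CA vouches for the cert.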
Re: PSA: Refcounted classes should have a non-public destructor should be MOZ_FINAL where possible
On Thu, May 29, 2014 at 12:27 AM, Daniel Holbert dholb...@mozilla.com wrote: For now, our code isn't clean enough for this sort of static_assert to be doable. :-/ And we have at least one instance of a refcounted class that's semi-intentionally (albeit carefully) declared on the stack: gfxContext, per https://bugzilla.mozilla.org/show_bug.cgi?id=742100 Still, the static_assert could be a good way of finding (in a local build) all the existing refcounted classes that want a non-public destructor, I suppose. You could also specifically list the exceptions in the static_assert expression, so that it would catch any new additions unless they were added to the list of exceptions. (It would be good hygiene to also static_assert that the exceptions *are* destructible, so that if that's ever fixed they can be removed from the exception list.)
Re: OMTC on Windows
There are likely two causes here. First, until we have APZ enabled, it's very unlikely that we can ever maintain high-frame-rate scrolling on low-end hardware. OMTC is a prerequisite for APZ (async pan/zoom). Low-end hardware is simply not fast enough to repaint and buffer-rotate at 60 FPS. Now, for Intel hardware being slow there could be a couple of reasons, and APZ might actually fix them. If I remember correctly, Atom GPUs are PowerVR based, which is a tile-based rendering architecture. It splits the frame buffer into small tiles and renders those. To do this efficiently it defers rendering for as long as possible: other GPUs start rendering as soon as possible, whereas PowerVR waits until the entire frame is ready and only then renders it. We do a couple of operations while rendering that might force a pipeline flush, which likely forces PowerVR to render right away, and that is very bad for PowerVR's particular rendering model. If you can point us to some specific hardware we really suck on, we can definitely look into this. Andreas On May 30, 2014, at 6:25 AM, avi...@gmail.com wrote: [...] it's not the first time we notice that Firefox performs much worse with Intel iGPUs compared to nvidia. [...] for whatever reason, Firefox does really badly on Intel iGPUs. [...] - avih
Re: OMTC on Windows
On 30/05/2014 14:19, Andreas Gal wrote: [...] If I remember correctly Atom GPUs are PowerVR based, which is a tile based rendering architecture. [...] If you can point us to some specific hardware we really suck on we can definitely look into this. The test in https://bugzilla.mozilla.org/show_bug.cgi?id=894128#c30 is using an HD4000 iGPU, which is an internal Intel design and not PowerVR-based. Has anybody tried using Intel's Graphics Performance Analyzer [1] tools to see if we're hitting a slow path in the driver or some other suboptimal scenario? Gabriele [1] https://software.intel.com/en-us/vcsource/tools/intel-gpa
Re: OMTC on Windows
On Friday, May 30, 2014 5:06:52 PM UTC+3, Gabriele Svelto wrote: On 30/05/2014 14:19, Andreas Gal wrote: If you can point us to some specific hardware we really suck on we can definitely look into this. Sure, and the hardware specs are also available in bug 894128 comment 0. 100% smooth scrolling with IE was observed on the following systems: - Acer Iconia W510 tablet, which has a Clover Trail Atom Z2760, which indeed includes a PowerVR SGX 545 GPU. - Asus T100 laptop/tablet, which has a newer and considerably stronger Bay Trail Atom Z3740, AFAIK with HD Graphics technology similar to the HD4000. - Asus N56VZ with an i7-3630QM and an HD4000 GPU (it also has an nvidia GT650M with Optimus, and comment 30 compares the intel/nvidia performance on this system). On all these systems Firefox is far behind IE on this front, but with the margin getting smaller as the systems get stronger (i.e., in order of presentation: old Atom, new Atom, i7+HD4000). - avih
Re: OMTC on Windows
On 30.05.2014 07:28, Matt Woodrow wrote: I definitely agree with this, but we also need OMTAnimations to be finished and enabled before any of the interesting parts of the UI can be converted. Given that, I don't think we can have this conversation at the expense of trying to fix the current set of regressions from OMTC. Even if off-main-thread animations worked and we somehow re-designed and re-implemented the tab strip today, this still wouldn't wipe away the gist of the regressions, which really isn't about the tab strip. The tab strip uses web technology or a derivative thereof (XUL flexbox, but I guess that's not at fault here...). Telling web developers that they should only ever animate transforms/opacity or use canvas is a flawed strategy when Gecko performs worse than it used to and/or worse than other engines on animations involving reflows.
Re: OMTC on Windows
On Friday, May 30, 2014 5:48:26 PM UTC+3, avi...@gmail.com wrote: On all these systems, Firefox is far behind IE on this front, but with margins getting lower as the systems get stronger (i.e. in order of presentation - old atom, new atom, i7+hd4000). And just to complete the picture, the margin disappears for most practical concerns on the last system when using the i7 with the nvidia GT650M GPU. I.e., on that fast system, scrolling is mostly smooth as silk, to a level similar to what is observed with IE.
Intent to land: Voice/video client (Loop)
Summary: The Loop project aims to create a user-visible real-time communications service for existing Mozilla products, leveraging the WebRTC platform. One version of the client will be integrated with Firefox Desktop. It is intended to be interoperable with a Firefox OS application (not part of Gaia) that will be shipped by a third party on the Firefox OS 2.0 platform. The implementation of the client has already reached a proof-of-concept stage in a set of GitHub repositories. This announcement of intent is being sent in advance of integration into the mozilla-central repositories. This integration will be a two-step process. The first step, which has already taken place, is to land the existing implementation in the Elm tree to validate proper integration with the RelEng systems. After testing, the code will be merged into m-c, and subsequent development will take place in the m-i/m-c trees. The code is currently controlled by the MOZ_LOOP preprocessor definition, which is only enabled for Nightly builds. The feature will iterate on Nightly until it is considered complete enough to ride the trains. For more details: https://wiki.mozilla.org/Loop https://wiki.mozilla.org/Media/WebRTC https://blog.mozilla.org/futurereleases/2014/05/29/experimenting-with-webrtc-in-firefox-nightly/ Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=loop_mlp Link to standard: N/A Platform coverage: Firefox on Desktop. Estimated or target release: Firefox 33 or 34 (33 is a stretch goal for the team, 34 is a committed date). Preference behind which this will be implemented: For the initial landing on Nightly, none. It will be behind loop.enabled before riding the trains. -- Adam Roach Principal Platform Engineer a...@mozilla.com +1 650 903 0800 x863
Re: Intent to land: Voice/video client (Loop)
On Fri, May 30, 2014 at 5:03 PM, Adam Roach a...@mozilla.com wrote: Link to standard: N/A I take it this means there's no web-exposed API? -- http://annevankesteren.nl/
Re: Intent to land: Voice/video client (Loop)
On 5/30/14 10:14, Anne van Kesteren wrote: On Fri, May 30, 2014 at 5:03 PM, Adam Roach a...@mozilla.com wrote: Link to standard: N/A I take it this means there's no web-exposed API? That is correct. This is a browser feature, not accessible from content. -- Adam Roach Principal Platform Engineer a...@mozilla.com +1 650 903 0800 x863
Re: OMTC on Windows
Please read my email again. This kind of animation cannot be rendered at a high FPS by any engine. It's simply conceptually expensive and inefficient for the DOM rendering model. We will work on matching other engines if we are slightly slower than we could be, but you will never reach solid performance on low-end hardware with the current approach. While we work on squeezing out a few more FPS, please work on implementing a tab strip that can be rendered efficiently. Andreas Sent from Mobile. On May 30, 2014, at 10:46, Dao d...@design-noir.de wrote: [...] Telling web developers that they should only ever animate transforms / opacity or use canvas is a flawed strategy when Gecko performs worse than it used to and/or worse than other engines on animations involving reflows.
Re: Update on sheriff-assisted checkin-needed bugs
Just as a quick follow-up to this - we're already seeing much lower checkin-needed backout rates since this change went into effect, so thank you all for your help! -Ryan - Original Message - From: Ryan VanderMeulen rvandermeu...@mozilla.com To: dev-b2g dev-...@lists.mozilla.org, dev.platform dev-platform@lists.mozilla.org, dev-g...@lists.mozilla.org Cc: Sheriffs sheri...@mozilla.com Sent: Friday, May 16, 2014 4:54:29 PM Subject: Update on sheriff-assisted checkin-needed bugs As many of you are aware, the sheriff team has been assisting with landing checkin-needed bugs for some time now. However, we've also had to deal with the fallout of a higher-than-average bustage frequency from them. As much as we enjoy shooting ourselves in the foot, our team has decided that we need to tweak our process a bit to avoid tree closures and wasted time and energy. Therefore, our team has decided that we will now require that a link to a recent Try run be provided when requesting checkin before we will land the patch. To be clear, this *ONLY* affects checkin-needed bugs where we're assisting with the landing. We have no desire to police what other developers do before pushing. As has always been the case, developers are expected to ensure that their patches have received adequate testing prior to pushing, whether they are receiving our assistance or not. Our team is also not going to dictate which specific builds/tests are required. We're not experts in your code and we'll defer to your judgment as to what counts as sufficient testing. As mentioned earlier today in another post, if in doubt, we do have a set of general best practices for Try that can be used as a guide [1]. We just want to ensure that patches have at least received some baseline level of testing before being pushed to production. We've been testing the water with this policy for the past couple of weeks and have already seen a reduction in the number of backouts needed. 
For those of you mentoring bugs for new contributors, please also keep this in mind in order to keep patches from being held up in landing. And consider vouching for Level 1 commit access to further empower those contributors! Thanks! -Ryan [1] https://wiki.mozilla.org/Sheriffing/How:To:Recommended_Try_Practices
Re: OMTC on Windows
On Friday, May 30, 2014 6:16:53 PM UTC+3, andre...@gmail.com wrote: Please read my email again. It was provided as objective data and subjective assessment - not as an opinion - in reply to your request for more info on systems where we suck, if I understood your request correctly. FWIW, I'm aware of the systems in place and the plans, and I already mentioned that I hope APZ and tiling could improve the scrolling case.
Re: Intent to Implement: Encrypted Media Extensions
On 27/05/14 19:44, Chris Pearce wrote: Encrypted Media Extensions specifies a JavaScript interface for interacting with plugins that can be used to facilitate playback of DRM-protected media content. We will also be implementing the plugin interface itself. We will be working in partnership with Adobe, who are developing a compatible DRM plugin, the Adobe Access CDM. Is now the time to have the UX discussion? If not, when and where will that be happening? Gerv
Re: B2G, email, and SSL/TLS certificate exceptions for invalid certificates
On 29/05/14 07:01, Mike Hoye wrote: It's become clear in the last few months that the overwhelmingly most frequent users of MITM attacks are state actors with privileged network positions either obtaining or coercing keys from CAs, I don't think that's clear at all. Citation needed. I think it's more likely that they are intercepting SSL using crypto or implementation vulnerabilities, without explicit CA cooperation. using attacks that the CA model effectively endorses, using tech you can buy off the shelf. In that light, it's not super-obvious what SSL certs protect you from apart from some jackass sniffing the coffeeshop wifi. Even if you are right, the answer is still everyone apart from the US government. Gerv
Re: B2G, email, and SSL/TLS certificate exceptions for invalid certificates
On 28/05/14 17:49, Joshua Cranmer wrote: * Insufficiently secure certificate (e.g., certificates that violate CA/Browser Forum rules or the like. I don't know if we actually consider this a failure right now, but it's a reasonably distinct failure class IMHO) We would refuse e.g. a cert with an MD5 signature. In the future, we hope to refuse certs of insufficient bit length. It seems to me that some of these are more tolerable than others. There is a much different risk profile to accepting a certificate that expired two days ago versus one that fails an OCSP validation check. Actually, no. Because as soon as a certificate expires, the CA has no obligation to keep revocation information available for that cert. So the two are actually equivalent. That is to say, if a cert is expired, then you may not receive an OCSP response for it. And you can't make any assumptions about what that response might have been - it might have been revoked. We have an excellent chance to try to rethink CA infrastructure in this process beyond the notion of a trusted third-party CA system (which is already more or less broken, but that's beside the point). My own view on this matter is that the most effective notion of trust is some sort of key pinning: using a different key is a better indicator of an attack than having a valid certificate; under this model the CA system is largely information about how to trust a key you've never seen before. There is a minor gloss point here in that there are legitimate reasons to need to re-key servers (e.g., Heartbleed or the Debian OpenSSL entropy issue), and I don't personally have the security experience to be able to suggest a solution here. Forgive me, but that sounds like I'm going to propose a solution with one glaring flaw that has always sunk it in the past, and then gloss over that flaw by saying 'I don't have the security experience - someone else fix it'. 
Doesn't the EFF's SSL Observatory already track SSL certificates to indicate potential MITMs? The SSL Observatory's available data is a one-off dump from several years ago. They are collecting more data as they go along, but it's not public. 1. Any solution should try to only permit the easy certificate override during account configuration. This minimizes the scope for potential MITM attacks. That sounds like a reasonable idea, actually; by analogy with Bluetooth pairing. Gerv
Re: Intent to implement: CSSOM-View scroll-behavior property
On Tuesday, 27 May 2014 15:12:56 UTC-7, Robert O'Callahan wrote: On Wed, May 28, 2014 at 6:14 AM, kgil...@mozilla.com wrote: Is this behavior acceptable, or would it be more desirable to always return the actual scroll position in DOM methods? All DOM methods that depend on the scroll position (not just scrollLeft/scrollTop but getBoundingClientRect etc. too) should always use the most up-to-date value of the actual scroll position that the main thread has. That's how our existing smooth scrolling behaves. I.e., we don't lie :-). I'm pretty sure the CSSOM spec requires that for the ScrollBehavior smooth value, too. I agree, and I have added a comment to the bug (Bug 101538 / Comment 17) to indicate that the property should not have the write-only effects that I described earlier. - Kip
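For readers following along, the property under discussion opts a scroll container into smooth behavior for programmatic scrolls (a minimal sketch of the syntax from the CSSOM-View draft; the class name is made up):

```css
/* Scrolls triggered by script (element.scrollTop = ..., scrollIntoView(),
   fragment navigation) animate smoothly instead of jumping. */
.scroller {
  overflow-y: scroll;
  scroll-behavior: smooth;
}
```

Roc's point above is that while such an animation is in flight, scrollTop and friends report the real, current position at each step rather than the final destination.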
Re: B2G, email, and SSL/TLS certificate exceptions for invalid certificates
On 5/30/2014 12:00 PM, Gervase Markham wrote: On 28/05/14 17:49, Joshua Cranmer wrote: We have an excellent chance to try to rethink CA infrastructure in this process beyond the notion of a trusted third-party CA system (which is already more or less broken, but that's beside the point). My own views on this matter is that the most effective notion of trust is some sort of key pinning [...] Forgive me, but that sounds like I'm going to propose a solution with one glaring flaw that has always sunk it in the past, and then gloss over that flaw by saying 'I don't have the security experience - someone else fix it'. Actually, that is essentially what I'm saying. I know other people at Mozilla have good security backgrounds and can discuss the issue, and I was hoping that they could weigh in with suggestions on this thread. I acknowledge that re-keying is a difficult issue, but I also don't have the time to do the research myself on this topic, since I'm way backed up on a myriad of other obligations. Doesn't the EFF's SSL Observatory already track the SSL certificates to indicate potential MITMs? The SSL Observatory's available data is a one-off dump from several years ago. They are collecting more data as they go along, but it's not public. The EFF does things that aren't public?! :) More seriously, are they actively attempting to detect potential MITMs, and would they announce it if they did detect one? 
Andrew had in his proposal a note that reporting of fingerprints could be used to detect MITMs, and I was implying that this would duplicate work others are already doing. -- Joshua Cranmer Thunderbird and DXR developer Source code archæologist
Intent to implement: New HTMLInputElement.autocomplete values
Summary: I am implementing support for @autocomplete values other than off/on for HTMLInputElement.autocomplete. This allows web developers to indicate how UAs should autocomplete/auto-fill values in form fields (if they choose to do so), so that UAs don't have to use heuristics to guess what data is expected. Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1009935 Link to standard: http://www.whatwg.org/specs/web-apps/current-work/#attr-fe-autocomplete Platform coverage: New values/tokens will be preffed on in the future for platforms which make use of the associated values, so that websites can do feature detection. Estimated or target release: The main consumer is requestAutocomplete [1], although some of the values may be used by the form manager and password manager before that point. Preference behind which this will be implemented: dom.forms.autocomplete.experimental We may use additional preferences or change the name once we know which values we will initially support for requestAutocomplete and the password manager. FYI: The various attribute values have been in the web-apps spec for over a year, and Chrome is already making use of them for requestAutocomplete and, I believe, inline form auto-fill. [1] https://groups.google.com/d/topic/mozilla.dev.platform/F2mMPBme40I/discussion -- MattN
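As an illustration of how such markup looks (the field names here are made up; the token values are ones defined in the WHATWG spec):

```html
<form>
  <!-- Each token tells the UA which datum the field expects,
       instead of leaving it to guess from name/label heuristics. -->
  <input name="fname" autocomplete="given-name">
  <input name="addr"  autocomplete="shipping street-address">
  <input name="cc"    autocomplete="cc-number">
</form>
```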
Intent to implement: DOMMatrix
Primary eng emails: caban...@adobe.com, dschu...@adobe.com

*Proposal*
http://dev.w3.org/fxtf/geometry/#DOMMatrix

*Summary*
Expose new global objects named 'DOMMatrix' and 'DOMMatrixReadOnly' that offer a matrix abstraction.

*Motivation*
The DOMMatrix and DOMMatrixReadOnly interfaces represent a mathematical matrix with the purpose of describing transformations in a graphical context. The following sections describe the details of the interface. The DOMMatrix and DOMMatrixReadOnly interfaces replace the SVGMatrix interface from SVG. In addition, DOMMatrix will be part of CSSOM, where it will simplify getting and setting CSS transforms.

*Mozilla bug*
https://bugzilla.mozilla.org/show_bug.cgi?id=1018497
I will implement this behind the flag: layout.css.DOMMatrix

*Concerns*
None. Mozilla already implemented DOMPoint and DOMQuad.

*Compatibility Risk*
Blink: unknown
WebKit: in development [1]
Internet Explorer: no public signals
Web developers: unknown

1: https://bugs.webkit.org/show_bug.cgi?id=110001
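For readers unfamiliar with the shape of the proposed abstraction, here is a rough plain-JavaScript sketch of its 2D affine subset (the a..f components familiar from SVGMatrix and CSS matrix()). The `Matrix2D` class is purely illustrative; it is not the proposed DOMMatrix API or Gecko code:

```javascript
// Illustrative 2D affine matrix [a c e; b d f; 0 0 1], SVGMatrix-style.
class Matrix2D {
  constructor(a = 1, b = 0, c = 0, d = 1, e = 0, f = 0) {
    Object.assign(this, { a, b, c, d, e, f });
  }
  // Returns this * other (post-multiplication, as in SVGMatrix.multiply).
  multiply(o) {
    return new Matrix2D(
      this.a * o.a + this.c * o.b,
      this.b * o.a + this.d * o.b,
      this.a * o.c + this.c * o.d,
      this.b * o.c + this.d * o.d,
      this.a * o.e + this.c * o.f + this.e,
      this.b * o.e + this.d * o.f + this.f
    );
  }
  translate(tx, ty) {
    return this.multiply(new Matrix2D(1, 0, 0, 1, tx, ty));
  }
  scale(s) {
    return this.multiply(new Matrix2D(s, 0, 0, s, 0, 0));
  }
  transformPoint(x, y) {
    return { x: this.a * x + this.c * y + this.e,
             y: this.b * x + this.d * y + this.f };
  }
}

// Compose "translate by (10, 0), then scale the incoming point by 2":
const m = new Matrix2D().translate(10, 0).scale(2);
console.log(m.transformPoint(1, 1)); // { x: 12, y: 2 }
```

The real proposal layers a 4x4 projective-3D representation on top of this 2D case; that dual nature is what the discussion below turns on.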
Re: Intent to implement: DOMMatrix
I'll defer to the layout folks for this one.

/ Jonas

On Fri, May 30, 2014 at 5:02 PM, Rik Cabanier caban...@gmail.com wrote:
> [snip]
Re: Intent to implement: DOMMatrix
I'm all for it! :-)

Rob
--
Jtehsauts tshaei dS,o n Wohfy Mdaon yhoaus eanuttehrotraiitny eovni le atrhtohu gthot sf oirng iyvoeu rs ihnesa.rt sS?o Whhei csha iids teoa stiheer :p atroa lsyazye,d 'mYaonu,r sGients uapr,e tfaokreg iyvoeunr, 'm aotr atnod sgaoy ,h o'mGee.t uTph eann dt hwea lmka'n? gBoutt uIp waanndt wyeonut thoo mken.o w
Re: Intent to implement: DOMMatrix
I never seem to be able to discourage people from dragging the W3C into specialist topics that are outside its area of expertise. Let me try again.

Objection #1: The skew* methods are out of place there because, contrary to the rest, they are not geometric transformations; they are just arithmetic on matrix coefficients whose geometric impact depends entirely on the choice of a coordinate system. I'm afraid that leaving them there will propagate widespread confusion about skews --- see e.g. the authors of http://dev.w3.org/csswg/css-transforms/#matrix-interpolation who seemed to think that decomposing a matrix into a product of things including a skew would have geometric significance, leading to clearly unwanted behavior as demonstrated in http://people.mozilla.org/~bjacob/transform-animation-not-covariant.html

Objection #2: This DOMMatrix interface tries to be simultaneously about 4x4 matrices representing projective 3D transformations and about 2x3 matrices representing affine 2D transformations; this mode switch corresponds to the is2D() getter. I have a long list of objections to this mode switch:
- I believe that, being based on exact floating-point comparisons, it is going to be fragile. For example, people will assert !is2D() when they expect a 3D transformation, and that will intermittently fail when, for whatever reason, their 3D matrix is exactly 2D.
- I believe that these two classes of transformations (projective 3D and affine 2D) should be separate classes entirely, that this will make the API simpler and more efficiently implementable, and that forcing authors to think about that choice more explicitly is doing them a favor.
- I believe that this feature set, with this choice of two classes of transformations (projective 3D and affine 2D), is arbitrary and inconsistent. Why not support affine 3D or projective 2D, for instance?

Objection #3: I dislike the way that this API exposes multiplication order. It's not obvious enough which of A.multiply(B) and A.multiplyBy(B) is doing A=A*B and which is doing A=B*A.

Objection #4: By exposing an inverse() method but no solve() method, this API will encourage people who have to solve linear systems to do so by doing matrix.inverse().transformPoint(...), which is inefficient and can be numerically unstable. But then, of course, once we open the Pandora's box of exposing solvers, the API grows a lot more. My point is not to suggest growing the API more. My point is to discourage you and the W3C from getting into the matrix API design business. Matrix APIs are bound to either grow big or be useless. I believe that the only appropriate Matrix interface at the Web API level is a plain storage class with minimal getters (basically a thin wrapper around a typed array, without any nontrivial arithmetic built in).

Benoit

2014-05-30 20:02 GMT-04:00 Rik Cabanier caban...@gmail.com:
> [snip]
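The multiplication-order concern in Objection #3 is easy to see concretely: matrix multiplication is not commutative, so whether a method computes A*B or B*A is observable to authors. Below is a self-contained sketch with plain arrays; the `mul` helper and the column-vector convention are assumptions for illustration, not part of the proposal:

```javascript
// 2D affine matrices as flat arrays [a, b, c, d, e, f], i.e. the matrix
// [a c e; b d f; 0 0 1]. mul(m, n) computes m * n under the column-vector
// convention, so n is applied to the point first.
function mul(m, n) {
  return [
    m[0] * n[0] + m[2] * n[1],
    m[1] * n[0] + m[3] * n[1],
    m[0] * n[2] + m[2] * n[3],
    m[1] * n[2] + m[3] * n[3],
    m[0] * n[4] + m[2] * n[5] + m[4],
    m[1] * n[4] + m[3] * n[5] + m[5],
  ];
}

const translate10 = [1, 0, 0, 1, 10, 0]; // translate x by 10
const scale2 = [2, 0, 0, 2, 0, 0];       // uniform scale by 2

// A*B and B*A differ in the translation component, so an author who
// guesses the wrong order for multiply()/multiplyBy() gets wrong results:
console.log(mul(translate10, scale2)); // [2, 0, 0, 2, 10, 0]
console.log(mul(scale2, translate10)); // [2, 0, 0, 2, 20, 0]
```

This is why the naming question matters: the two orders silently produce different transforms rather than failing loudly.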
Re: Intent to implement: DOMMatrix
Since DOMMatrix is replacing SVGMatrix, I don't see a way to implement it behind a flag. Should I wait to make that change and leave both SVGMatrix and DOMMatrix in the code for now?

On Fri, May 30, 2014 at 8:53 PM, Robert O'Callahan rob...@ocallahan.org wrote:
> I'm all for it! :-)
>
> Rob
Re: Intent to implement: DOMMatrix
On Fri, May 30, 2014 at 9:00 PM, Benoit Jacob jacob.benoi...@gmail.com wrote:

> Objection #1: The skew* methods are out of place there, because, contrary to the rest, they are not geometric transformations, they are just arithmetic on matrix coefficients whose geometric impact depends entirely on the choice of a coordinate system. [...]

Many people think that the skew* methods were a mistake. However, DOMMatrix is meant as a drop-in replacement for SVGMatrix, which unfortunately has these methods: http://www.w3.org/TR/SVG11/coords.html#InterfaceSVGMatrix
I would note, though, that skewing is very popular among animators, so I would object to their removal.

> Objection #2: This DOMMatrix interface tries to be simultaneously about 4x4 matrices representing projective 3D transformations and about 2x3 matrices representing affine 2D transformations; this mode switch corresponds to the is2D() getter. I have a long list of objections to this mode switch: [...]

These objections sound valid. However, WebKit, Blink and Microsoft already expose CSSMatrix, which combines a 4x4 and 2x3 matrix: https://developer.apple.com/library/safari/documentation/AudioVideo/Reference/WebKitCSSMatrixClassReference/WebKitCSSMatrix/WebKitCSSMatrix.html and it is used extensively by authors. The spec is standardizing that existing class so we can remove the prefix.

> Objection #3: I dislike the way that this API exposes multiplication order. It's not obvious enough which of A.multiply(B) and A.multiplyBy(B) is doing A=A*B and which is doing A=B*A.

The "By" methods do the transformation in place. In this case, both are A = A * B. Maybe you're thinking of preMultiply?

> Objection #4: By exposing an inverse() method but no solve() method, this API will encourage people who have to solve linear systems to do so by doing matrix.inverse().transformPoint(...), which is inefficient and can be numerically unstable. [...] I believe that the only appropriate Matrix interface at the Web API level is a plain storage class, with minimal getters (basically a thin wrapper around a typed array without any nontrivial arithmetic built in).

We already went over this at length about a year ago. Dirk's been asking for feedback on this interface on www-style and public-fx, so can you raise your concerns there? Just keep in mind that we have to support the SVGMatrix and CSSMatrix interfaces.

> 2014-05-30 20:02 GMT-04:00 Rik Cabanier caban...@gmail.com:
> > [snip]
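The floating-point fragility point under Objection #2 can also be demonstrated concretely. The sketch below uses a plain 16-element array and hypothetical helper names (`is2D`, `rotateZ`, `rotateX`); it is not the DOMMatrix implementation, just an illustration of why exact-equality 2D detection is brittle:

```javascript
// Matrices as flat 16-element arrays in column-major order, like CSS
// matrix3d(): indices 0..15 are m11, m12, m13, m14, m21, ... m44.
function identity() {
  return [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1];
}

// Hypothetical "is this matrix 2D?" check: exact floating-point equality
// on every entry that only a 3D transform can perturb.
function is2D(m) {
  return m[2] === 0 && m[3] === 0 && m[6] === 0 && m[7] === 0 &&
         m[8] === 0 && m[9] === 0 && m[10] === 1 && m[11] === 0 &&
         m[14] === 0;
}

function rotateZ(deg) {
  const r = deg * Math.PI / 180, c = Math.cos(r), s = Math.sin(r);
  const m = identity();
  m[0] = c; m[1] = s; m[4] = -s; m[5] = c;
  return m;
}

function rotateX(deg) {
  const r = deg * Math.PI / 180, c = Math.cos(r), s = Math.sin(r);
  const m = identity();
  m[5] = c; m[6] = s; m[9] = -s; m[10] = c;
  return m;
}

// An author thinking in 3D who writes rotateZ still gets a "2D" matrix,
// because a z-axis rotation leaves all 3D-only entries at identity:
console.log(is2D(rotateZ(30)));  // true
// ...while rotateX(360), a full turn conceptually "back to 2D", stays
// non-2D because Math.sin(2 * Math.PI) is a tiny nonzero value:
console.log(is2D(rotateX(360))); // false
```

Both directions of surprise come from the same root cause: the 2D/3D mode is inferred from exact comparisons on rounded values rather than declared by the author.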