Re: Oculus VR support somewhat-non-free code in the tree
On Tue, Apr 15, 2014 at 1:41 AM, Vladimir Vukicevic vladim...@gmail.com wrote: 1. Check in the LibOVR sources as-is, in other-licenses/oculus. Add a configure flag, maybe --disable-non-free, that disables building it. Build and ship it as normal in our builds. I think this would a) set a terrible precedent that companies that do something sufficiently cool can get Mozilla to add their non-Free code to Firefox b) lessen Oculus' incentive to work with us on option #2 below. So I'm opposed to this. 2. Contact Oculus with our concerns about the license, and see if they would be willing to relicense to something more standard. I think we should pursue this. The MPL might actually fit their needs pretty well Yes. Also worth noting about the special health-related limitation: Sun had an anti-nuclear facility restriction in its Java license for years. Yet, the sky did not fall when Sun relicensed Java under GPLv2, which doesn't have field-of-use restrictions. Any objections to the above, or alternative suggestions? This is a departure in our current license policy, but not a huge one. How is turning Firefox into non-Free software not a huge departure? -- Henri Sivonen hsivo...@hsivonen.fi https://hsivonen.fi/ ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Is there any replacement for Domain Policy in CAPS (Bug 913734)
On Tue, Apr 15, 2014 at 4:06, Bobby Holley wrote: On Mon, Apr 14, 2014 at 11:12 AM, xunxun xunxun1...@gmail.com wrote: I want to use a configurable way like https://developer.mozilla.org/en-US/docs/Midas/Security_preferences, which can solve some strange issues with a single js; NoScript is too big for the problem. For example, I use this policy by default on my custom build: pref("capability.policy.policynames", "pcxnojs"); pref("capability.policy.pcxnojs.sites", "http://nsclick.baidu.com"); pref("capability.policy.pcxnojs.javascript.enabled", "noAccess"); nsclick.baidu.com can cause Firefox tab closing to take too much time, so I use the policy to avoid it. NoScript has the capability to do exactly that. You're also welcome to write an extension that uses the same API, though you should know that only one extension may use the API at a given time (so your users would not be able to install NoScript). See the API here: http://mxr.mozilla.org/mozilla-central/source/caps/idl/nsIScriptSecurityManager.idl#221 Thanks. I haven't read how to write a Firefox extension yet, and I also want to find a method that does not depend on another extension. If I revert the bug 913734 changes, will this cause other issues? -- Best Regards, xunxun
Re: Is there any replacement for Domain Policy in CAPS (Bug 913734)
On Tue, Apr 15, 2014 at 6:46, Neil wrote: xunxun wrote: For example, I use this policy by default on my custom build: pref("capability.policy.policynames", "pcxnojs"); pref("capability.policy.pcxnojs.sites", "http://nsclick.baidu.com"); pref("capability.policy.pcxnojs.javascript.enabled", "noAccess"); nsclick.baidu.com can cause Firefox tab closing to take too much time, so I use the policy to avoid it. Can you not use the content blocker to block scripts from nsclick.baidu.com? (I don't know what UI Firefox has for it; I normally use the Data Manager. I've successfully blocked Facebook and Twitter, for example.) I don't know what the Data Manager is; is it this: https://addons.mozilla.org/en-US/firefox/addon/data-manager/ ? But I don't want to solve the problem using an extension, because I only want to block nsclick.baidu.com --- only one domain. -- Best Regards, xunxun
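As an aside for readers following along: the effect of a capability.policy block like the one quoted above can be sketched in a few lines of Python. This is an illustrative stand-in only, not Gecko's CAPS implementation; the function and set names below are invented.

```python
from urllib.parse import urlsplit

# Hypothetical stand-in for the pcxnojs policy from this thread: deny
# javascript for exactly one host, allow it everywhere else.
POLICY_SITES = {"nsclick.baidu.com"}  # from capability.policy.pcxnojs.sites

def javascript_enabled(url):
    """Return False for hosts covered by the noAccess policy."""
    return urlsplit(url).hostname not in POLICY_SITES

print(javascript_enabled("http://nsclick.baidu.com/pixel.js"))  # -> False
print(javascript_enabled("http://www.baidu.com/"))              # -> True
```

The point of the sketch is just that the policy is a per-host lookup, which is why it can be this narrow: one domain blocked, everything else untouched.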
Re: B2G emulator issues
I ran crashtest/reftest/marionette/xpcshell/mochitest on emulator-x86-kk, filed related bugs, and made them block bug 753928. Basically: 1) need to carry --emulator x86 automatically (bug 996443) 2) to add x86 emulator for xpcshell tests (bug 996473) 3) PROCESS-CRASH at the end of reftest/crashtest (bug 996449) With some temporary solutions to the above, all the test variants run on emulator-x86-kk and are about six times faster than ARM emulators. Best regards, Vicamo On 4/9/14, 2:55 AM, Jonathan Griffin wrote: On 4/8/2014 1:05 AM, Thomas Zimmermann wrote: There are tests that instruct the emulator to trigger certain HW events. We can't run them on actual phones. To me, the idea of switching to an x86-based emulator seems to be the most promising solution. What would be necessary? Best regards Thomas We'd need these things: 1 - a consensus that we want to move to x86-based emulators, which presumes that architecture-specific problems aren't likely or important enough to warrant continued use of ARM-based emulators 2 - RelEng would need to stand up x86-based KitKat emulator builds 3 - The A*Team would need to get all of the tests running against these builds 4 - The A*Team and developers would have to work on fixing the inevitable test failures that occur when standing up any new platform I'll bring this topic up at the next B2G Engineering Meeting. Jonathan
Re: Oculus VR support somewhat-non-free code in the tree
I’d like to add my voice to Henri’s opinion here, because it’s important. The pivotal part of this discussion is the precedent this establishes and the long-term repercussions it will have. On 15 Apr 2014, at 02:35, Vladimir Vukicevic vladim...@gmail.com wrote: Yes -- perhaps unsurprisingly, I disagree with Gerv on some of the particulars here. Gerv's opinions are his own, and are not official Mozilla policy. That post, I'm sure, came out of a discussion regarding this very issue. In particular, my stance is that we build open source software because we believe there is value in that, and that it is the best way to build innovative, secure, and meaningful software. We don't build open source software for the sake of building open source. Sounds like you already had this - possibly heated - discussion with Gerv; I _think_ that your last one-liner came from that frustration. Do I have to remind you that most of our contributors are here, helping us out in so many wonderful ways, because we’re doing things for the sake of open source? This is a _very_ slippery slope you’re treading. Even though Gerv’s post is not official Mozilla policy at all, I do like to pass it on as a great guideline for our various contributors to browse through. (Until it becomes policy in some form, heh!) One thing I don’t understand, Vladimir, is the practical reasoning behind your effort to ‘rush’ this into the tree. We’ve seen a similar ‘hype’ for connecting an innovative device to the web with the Kinect. A community of hackers promptly hacked together an open source driver, and we saw many YouTube videos that demoed exciting practical applications. Soon thereafter Microsoft released the official, open source driver. Apart from the fact that this particular hype didn’t become mainstream, I can see the parallel with the Oculus VR: we need to have a free, reverse-engineered driver or a fully open source official driver.
I think Oculus will understand this IF they want a hacker community as large as can be to drive adoption (and sales). I know two of those Kinect hackers who could roll an OpenVR driver as contractors or something; I’m quite sure of that. If an OpenVR driver is needed at all. Why not use the LibOVR SDK to produce the mind-blowing YouTube demo videos and develop the OpenVR driver in parallel? Mike. On 15 Apr 2014, at 09:08, Henri Sivonen hsivo...@hsivonen.fi wrote: On Tue, Apr 15, 2014 at 1:41 AM, Vladimir Vukicevic vladim...@gmail.com wrote: 1. Check in the LibOVR sources as-is, in other-licenses/oculus. Add a configure flag, maybe --disable-non-free, that disables building it. Build and ship it as normal in our builds. I think this would a) set a terrible precedent that companies that do something sufficiently cool can get Mozilla to add their non-Free code to Firefox b) lessen Oculus' incentive to work with us on option #2 below. So I'm opposed to this. 2. Contact Oculus with our concerns about the license, and see if they would be willing to relicense to something more standard. I think we should pursue this. The MPL might actually fit their needs pretty well Yes. Also worth noting about the special health-related limitation: Sun had an anti-nuclear facility restriction in its Java license for years. Yet, the sky did not fall when Sun relicensed Java under GPLv2, which doesn't have field-of-use restrictions. Any objections to the above, or alternative suggestions? This is a departure in our current license policy, but not a huge one. How is turning Firefox into non-Free software not a huge departure? -- Henri Sivonen hsivo...@hsivonen.fi https://hsivonen.fi/
Re: Is XPath still a thing?
On Mon, Apr 14, 2014 at 10:54 PM, Jorge Villalobos jo...@mozilla.com wrote: FWIW, many add-ons use XPath. If there's anything we should be recommending add-on developers to migrate to, please let me know. It would be interesting to know what they use it for and why querySelector() et al don't meet their needs, but there's no immediate threat of what we have going away so it's not that important either. -- http://annevankesteren.nl/ ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Is XPath still a thing?
On 15.04.2014 12:17, Anne van Kesteren wrote: On Mon, Apr 14, 2014 at 10:54 PM, Jorge Villalobos jo...@mozilla.com wrote: FWIW, many add-ons use XPath. If there's anything we should be recommending add-on developers to migrate to, please let me know. It would be interesting to know what they use it for and why querySelector() et al don't meet their needs, but there's no immediate threat of what we have going away so it's not that important either. As mentioned, we use it for the user-configurable, HTML-based View/Print in our Reminderfox. I guess querySelector() will not do that job at all. If you are interested in how that works, I would be glad to discuss it (though I think not via the newsgroup). Guenter
Re: Oculus VR support somewhat-non-free code in the tree
On Tue, Apr 15, 2014 at 10:41 AM, Vladimir Vukicevic vladim...@gmail.com wrote: I have a prototype of VR display and sensor integration with the web, along with an implementation for the Oculus VR. Despite there really being only one vendor right now, there is a lot of interest in VR. I'd like to add the web and Firefox to that flurry of activity... especially given our successes and leadership position on games and asm.js. Hurrah! I'd like to get this checked in so that we can either have it enabled by default in nightlies (and nightlies only), or at least allow it to be enabled via a pref. However, there's one issue -- the LibOVR library has a not-fully-free-software license [1]. It's compatible with our licenses, but it is not fully free. I don't think their Human-Readable Summary is a very good summary. -- Section 2 says that modifications to the SDK code are not just shared with Oculus but that Oculus actually owns the copyright to such modifications. If someone patches a Mozilla-tree copy of the code, say to add FreeBSD support, that would kick in. Until now, contributors have always retained copyright over their Mozilla contributions. -- Section 3.1 prohibits use of the SDK with any hardware not approved by Oculus VR. I guess there isn't any hardware in that category right now, but if someone builds compatible hardware, we'd be in a bad situation if we were depending on this code. Have we had any contact with Oculus at all over these issues? Rob
Re: Oculus VR support somewhat-non-free code in the tree
Hi 1. Check in the LibOVR sources as-is, in other-licenses/oculus. Add a configure flag, maybe --disable-non-free, that disables building it. Build and ship it as normal in our builds. '--with-non-free' But actually I'd support the option of keeping the SDK separated and dlopen'ing the lib when needed. That is better from a legal perspective and keeps the tree free from non-free software. The latter is especially important as VR support is currently just a toy and not something we have to provide to be competitive. How many public interfaces does libovr contain? Maybe there could be a compatible, but free, library in the tree that provides a hardware-independent interface. And internally, it would use the SDK if that's available. Best regards Thomas 2. Contact Oculus with our concerns about the license, and see if they would be willing to relicense to something more standard. The MPL might actually fit their needs pretty well, though we are effectively asking them to relicense their SDK code. There is no specific driver for the OVR; it shows up as a USB HID device, and LibOVR knows how to interpret the data stream coming from it. This gets them easy compat with all operating systems, and the support I'd add would be for Windows, Mac, and Linux. 3. Start investigating Open VR, with the intent being to replace the Oculus-specific library with a more standard one before we standardize and ship the API more broadly than to nightly users. The goal would be to remove LibOVR before we ship (or keep it in assuming it gets relicensed, if appropriate), and replace it with a standard Open VR library. There are a few other options that are worse: 1. We could ship the VR glue in nightly, but the Oculus support packaged as an addon. This is doable, but it requires significant rework in changing the interfaces to use XPCOM, to do category-based registration of the Oculus provider, in building and packaging the addon, etc. 
It also requires a separate install step for developers/users. 2. We could ship the VR integration as a plugin. vr.js does this already. But we are trying to move away from plugins, and there's no reason why the Oculus can't function in places where plugins are nonexistent, such as mobile. Delivering this to developers via a plugin would be admitting that we can't actually deliver innovative features without the plugin API, which is untrue and pretty silly. 3. Require developers to install the SDK themselves, and deploy it to all of the build machines so that we can build it. This is IMO a very non-pragmatic option; it requires a ton more fragile work (everyone needs to get and keep the SDK updated; releng needs to do the same on build machines) and sacrifices developer engagement (additional SDKs suck -- see the DirectX SDK that we're working on eliminating the need for) in order to try to preserve some form of purity. 3. We do nothing. This option won't happen: I'm tired of not having Gecko and Firefox at the forefront of web technology in all aspects. Any objections to the above, or alternative suggestions? This is a departure in our current license policy, but not a huge one. There were some concerns expressed about that, but I'm hoping that we can take a pragmatic path here. - Vlad [1] https://developer.oculusvr.com/license
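For what it's worth, the dlopen-style approach suggested earlier in this thread (keep the non-free SDK out of the tree and load it at runtime, degrading gracefully when it is absent) looks roughly like the sketch below. It uses Python's ctypes for brevity; a real implementation would be C++ inside Gecko, and the library name "OVR" is a placeholder, not necessarily what the SDK actually ships.

```python
import ctypes
import ctypes.util

def load_optional_library(name):
    """dlopen() an optional shared library at runtime, returning None
    instead of failing hard when the library is not installed."""
    path = ctypes.util.find_library(name)
    if path is None:
        return None
    try:
        return ctypes.CDLL(path)
    except OSError:
        # Present on disk but not loadable; treat as absent.
        return None

# "OVR" is a hypothetical name for illustration only.
libovr = load_optional_library("OVR")
if libovr is None:
    print("LibOVR not installed; VR support stays disabled")
```

The design point is that the tree itself stays free: the build carries no SDK sources, and the feature simply reports itself unavailable when the proprietary library is missing.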
Re: Is XPath still a thing?
On 15/04/2014 11:17, Anne van Kesteren wrote: On Mon, Apr 14, 2014 at 10:54 PM, Jorge Villalobos jo...@mozilla.com wrote: FWIW, many add-ons use XPath. If there's anything we should be recommending add-on developers to migrate to, please let me know. It would be interesting to know what they use it for and why querySelector() et al don't meet their needs, but there's no immediate threat of what we have going away so it's not that important either. There are a few use cases XPath caters for that CSS selectors can't do. The two predominant ones that spring to mind are finding an element based on its text content, and finding any element relative to its parent (or ancestor).
Re: Policy for disabling tests which run on TBPL
I want to express my thanks to everyone who contributed to this thread. We have a lot of passionate and smart people who care about this topic -- thanks again for weighing in so far. Below is a slightly updated policy from the original, and following that is an attempt to summarize the thread and turn what makes sense into actionable items. = Policy for handling intermittent oranges = This policy will define an escalation path for when a single test case is identified to be leaking or failing and is causing enough disruption on the trees. Disruption is defined as: 1) Test case is on the list of top 20 intermittent failures on Orange Factor (http://brasstacks.mozilla.com/orangefactor/index.html) 2) It is causing oranges >=8% of the time 3) We have 100 instances of this failure in the bug in the last 30 days Escalation is a responsibility of all developers, although the majority will fall on the sheriffs. Escalation path: 1) Ensure we have a bug on file, with the test author, reviewer, module owner, and any other interested parties, links to logs, etc. 2) We need to needinfo? and expect a response within 2 business days; this should be clear in a comment. 3) In the case we don't get a response, request a needinfo? from the module owner with the expectation of 2 days for a response and getting someone to take action. 4) In the case we go another 2 days with no response from a module owner, we will disable the test. Ideally we will work with the test author to either get the test fixed or disabled, depending on available time or difficulty in fixing the test. If a bug has activity and work is being done to address the issue, it is reasonable to expect the test will not be disabled. Inactivity in the bug is the main cause for escalation. This is intended to respect the time of the original test authors by not throwing emergencies in their lap, but also to strike a balance with keeping the trees manageable.
Exceptions: 1) If this test has landed (or been modified) in the last 48 hours, we will most likely back out the patch with the test 2) If a test is failing at least 30% of the time, we will file a bug and disable the test first 3) When we are bringing a new platform online (Android 2.3, B2G, etc.), many tests will need to be disabled prior to getting the tests on TBPL. 4) In the rare case we are disabling the majority of the tests (either at once or slowly over time) for a given feature, we need to get the module owner to sign off on the current state of the tests. = Documentation = We have thousands of tests disabled; many are disabled for different build configurations or platforms. This can be dangerous, as we slowly reduce our coverage. By running a daily report (bug 996183) to outline the total tests available vs. each configuration (B2G, debug, OS X, e10s, etc.), we can bring visibility to the state of each platform and whether we are disabling more than we fix. We need to have a clear guide on how to run the tests, how to write a test, how to debug a test, and how to use metadata to indicate whether we have looked at a test and when. When an intermittent bug is filed, we need to clearly outline what information will aid the most in reproducing and fixing the bug. Without a documented process for fixing oranges, this falls on the shoulders of the original test authors and a few determined hackers. = General Policy = I have adjusted the above policy to mention backing out new tests which are not stable, working to identify a regression in the code or tests, and adding protection so we do not disable coverage for a specific feature completely. In addition, I added a clearer definition of what is a disruptive test and clarified the expectations around communicating in the bug vs. escalating. What is more important is the culture we have around committing patches to Mozilla repositories.
We need to decide as an organization if we care about zero oranges (or insert acceptable percentage). We also need to decide what is acceptable coverage levels and what our general policy is for test reviews (at checkin time and in the future). These need to be answered outside of this policy- but the sooner we answer these questions, the better we can all move forward towards the same goal. = Tools = Much of the discussion was around tools. As a member of the Automation and Tools team, I should be advocating for more tools, in this case I am leaning more towards less tools and better process. One common problem is dealing with the noise around infrastructure and changing environments and test harnesses. Is this documented, how can we filter that out? Having our tools support ways to detect this and annotate changes unrelated to tests or builds will go a long way. Related is updating our harnesses and the way we run tests so they are more repeatable. I have filed bug 996504 to track work on this. Another problem we can look at with tooling is annotating the expected outcome
Re: Is XPath still a thing?
On 15/04/2014 14:21, Andreas Tolfsen wrote: On 15/04/2014 11:17, Anne van Kesteren wrote: On Mon, Apr 14, 2014 at 10:54 PM, Jorge Villalobos jo...@mozilla.com wrote: FWIW, many add-ons use XPath. If there's anything we should be recommending add-on developers to migrate to, please let me know. It would be interesting to know what they use it for and why querySelector() et al don't meet their needs, but there's no immediate threat of what we have going away so it's not that important either. There are a few use cases XPath caters for that CSS selectors can't do. The two predominant ones that spring to mind are finding an element based on its text content, and finding any element relative to its parent (or ancestor). I'm not sure what you mean here, but I was under the impression (nay, I just tested and verified) that document.querySelector("#foo #bar") works for finding an element based on its ancestor (or parent, if you use a child rather than a descendant selector). ~ Gijs
Re: Policy for disabling tests which run on TBPL
On Tue, Apr 15, 2014 at 6:21 AM, jmaher joel.ma...@gmail.com wrote: This policy will define an escalation path for when a single test case is identified to be leaking or failing and is causing enough disruption on the trees. Disruption is defined as: 1) Test case is on the list of top 20 intermittent failures on Orange Factor (http://brasstacks.mozilla.com/orangefactor/index.html) 2) It is causing oranges >=8% of the time 3) We have 100 instances of this failure in the bug in the last 30 days Are these conditions joined by 'and' or by 'or'? If 'or', there will always be at least 20 tests meeting this set of criteria ... - Kyle
Re: B2G emulator issues
I ran crashtest/reftest/marionette/xpcshell/mochitest on emulator-x86-kk, filed related bugs, and made them block bug 753928. Basically: 1) need to carry --emulator x86 automatically (bug 996443) 2) to add x86 emulator for xpcshell tests (bug 996473) 3) PROCESS-CRASH at the end of reftest/crashtest (bug 996449) With some temporary solutions to the above, all the test variants run on emulator-x86-kk and are about six times faster than ARM emulators. 6x is good, if everything works and the tools are all in place - though it means you're not running the real code used on devices, which could be a problem. On 4/9/14, 2:55 AM, Jonathan Griffin wrote: On 4/8/2014 1:05 AM, Thomas Zimmermann wrote: There are tests that instruct the emulator to trigger certain HW events. We can't run them on actual phones. To me, the idea of switching to an x86-based emulator seems to be the most promising solution. What would be necessary? I don't think the *fundamental* problem is that the emulator is slow; I think it's that the emulator doesn't simulate the environment very well, and because of that, being slow (and running slow debug code) makes things break. Before worrying about an x86 emulator (or going *too* far down the "run it on faster hardware" road), we should verify that faster hardware will produce fewer spurious oranges. Manually standing up a few testers and letting them run the mochitest load (even by hand) until we have enough data would show what moving the tests will do. I do think that with the current emulator, running it on faster hardware *will* help wallpaper over the fundamental problems. I base this on my experience with the M10 media tests that began this thread - they ran fine on a 2.5-year-old Xeon (~10s, no timeouts) and took hundreds of seconds (and timed out) on the AWS testers. So moving the media tests will likely be a large win.
But this (or x86) doesn't address the fundamental problem, which is that the emulator clearly isn't emulating the underlying environment well, in particular timers (see some of the discussion in this thread). If we can address the fundamental problem (even crudely), the need for high-perf testers may decline or even go away. -- Randell Jesup, Mozilla Corp remove news for personal email
Re: Policy for disabling tests which run on TBPL
On Tuesday, April 15, 2014 9:42:25 AM UTC-4, Kyle Huey wrote: On Tue, Apr 15, 2014 at 6:21 AM, jmaher joel.ma...@gmail.com wrote: This policy will define an escalation path for when a single test case is identified to be leaking or failing and is causing enough disruption on the trees. Disruption is defined as: 1) Test case is on the list of top 20 intermittent failures on Orange Factor (http://brasstacks.mozilla.com/orangefactor/index.html) 2) It is causing oranges >=8% of the time 3) We have 100 instances of this failure in the bug in the last 30 days Are these conditions joined by 'and' or by 'or'? If 'or', there will always be at least 20 tests meeting this set of criteria ... - Kyle Great question, Kyle. The top 20 doesn't always include specific tests; sometimes it is infrastructure, hardware/VM, test harness, mozharness, etc. related. If a test meets any of the above criteria and is escalated, then we should expect to follow some basic criteria about either working on fixing it or disabling it, as spelled out in the escalation path. For the large majority of cases, bugs filed for specific test cases will meet all 3 conditions. We have had some cases where we have thousands of stars over years, but the test isn't on the top 20 list all the time. Likewise, when we have 10 infra bugs, a frequent orange on the trees won't be in the top 20. -Joel
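Joel's reading of the criteria (any one of the three can trigger escalation, with the 8% figure as a lower bound) can be written out as a small predicate. This is only a sketch of the proposed policy text, not a tool that exists; the function name and the thresholds-as-code are illustrative.

```python
def is_disruptive(orange_factor_rank, orange_rate, failures_last_30_days):
    """Sketch of the proposed 'disruption' test, with the three criteria
    joined by 'or': meeting any single one is enough to start escalation."""
    in_top_20 = orange_factor_rank is not None and orange_factor_rank <= 20
    return (
        in_top_20                        # 1) on Orange Factor's top-20 list
        or orange_rate >= 0.08           # 2) oranges at least 8% of the time
        or failures_last_30_days >= 100  # 3) 100+ instances in 30 days
    )

print(is_disruptive(None, 0.02, 150))  # frequent enough on its own -> True
print(is_disruptive(None, 0.02, 12))   # meets no criterion -> False
```

Writing it down this way also makes Kyle's observation concrete: with 'or', criterion 1 alone guarantees the top-20 tests always qualify.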
Re: Is XPath still a thing?
On 15/04/2014 14:33, Gijs Kruitbosch wrote: On 15/04/2014 14:21, Andreas Tolfsen wrote: On 15/04/2014 11:17, Anne van Kesteren wrote: It would be interesting to know what they use it for and why querySelector() et al don't meet their needs, but there's no immediate threat of what we have going away so it's not that important either. There are a few use cases XPath caters for that CSS selectors can't do. The two predominant ones that spring to mind are finding an element based on its text content, and finding any element relative to its parent (or ancestor). I'm not sure what you mean here, but I was under the impression (nay, I just tested and verified) that document.querySelector("#foo #bar") works for finding an element based on its ancestor (or parent, if you use a child rather than a descendant selector). Consider something like //table/tr/td[contains(., 'foo')]/../td[2]/input which will find the table cell containing the text 'foo', then find the second cell in the same row with an input element. A few other limitations I've been informed of are: - Complex conditional statements, such as looking up all elements with class A, but not A combined with B or A with C. - Subqueries such as //li[.//a], meaning all li elements that have an a element as a child or descendant. Although possibly this is addressed by ! in CSS4?
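For readers who want to try the parent-axis and text-content points above, here is a runnable miniature of the same table lookup using the XPath subset in Python's standard library. ElementTree does not support contains(), so this sketch matches exact text instead; the document is invented for illustration.

```python
import xml.etree.ElementTree as ET

# A tiny version of the table from the thread's example.
doc = ET.fromstring(
    "<table>"
    "<tr><td>foo</td><td><input name='a'/></td></tr>"
    "<tr><td>bar</td><td><input name='b'/></td></tr>"
    "</table>"
)

# Match a cell by its text content, step *up* to the enclosing row, then
# take the input in the second cell -- moves CSS selectors cannot express.
row = doc.find(".//td[.='foo']/..")
field = row.find("./td[2]/input")
print(field.get("name"))  # -> a
```

In a browser the equivalent would go through document.evaluate() with the full XPath language, including contains(); the stdlib version just makes the axis argument easy to verify locally.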
Re: Oculus VR support somewhat-non-free code in the tree
On 2014-04-15, 12:47 AM, Andreas Gal wrote: Vlad asked a specific question in the first email. Are we comfortable using another open (albeit not open enough for MPL) license on trunk while we rewrite the library? Can we compromise on trunk in order to innovate faster and only ship to GA once the code is MPL friendly via re-licensing or re-writing? What is our view on this narrow question? While I support the pragmatic approach here - if we're rewriting the while building things on top of it, what is the fallback position if the library rewrite ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: Oculus VR support somewhat-non-free code in the tree
On 2014-04-15, 10:25 AM, Mike Hoye wrote: On 2014-04-15, 12:47 AM, Andreas Gal wrote: Vlad asked a specific question in the first email. Are we comfortable using another open (albeit not open enough for MPL) license on trunk while we rewrite the library? Can we compromise on trunk in order to innovate faster and only ship to GA once the code is MPL friendly via re-licensing or re-writing? What is our view on this narrow question? While I support the pragmatic approach here - if we're rewriting the [library - mh] while building things on top of it, what is the fallback position if the library rewrite ... goes sideways for technical or legal reasons? A plan B we can live with seems like a deciding factor here. Also, the window-close button being right next to the send button in Thunderbird turns into a real problem after your third coffee, let me tell you. - mhoye ___ dev-platform mailing list dev-platform@lists.mozilla.org https://lists.mozilla.org/listinfo/dev-platform
Re: B2G emulator issues
On 4/15/14, 9:42 PM, Randell Jesup wrote: I ran crashtest/reftest/marionette/xpcshell/mochitest on emulator-x86-kk, filed related bugs, and made them block bug 753928. Basically: 1) need to carry --emulator x86 automatically (bug 996443) 2) to add x86 emulator for xpcshell tests (bug 996473) Patch available, in review. 3) PROCESS-CRASH at the end of reftest/crashtest (bug 996449) Actually we have more trouble than this, but I think that can be improved with time. At the top of the list should be the lack of gdb/gdbserver and maybe other debugging tools for x86 emulators. Rebuilding the AOSP toolchain doesn't seem to be a trivial task. :( With some temporary solutions to the above, all the test variants run on emulator-x86-kk and are about six times faster than ARM emulators. 6x is good, if everything works and the tools are all in place - though it means you're not running the real code used on devices, which could be a problem. However, the ARM emulator is also not the real code used on devices. ;) On 4/9/14, 2:55 AM, Jonathan Griffin wrote: On 4/8/2014 1:05 AM, Thomas Zimmermann wrote: There are tests that instruct the emulator to trigger certain HW events. We can't run them on actual phones. To me, the idea of switching to an x86-based emulator seems to be the most promising solution. What would be necessary? I don't think the *fundamental* problem is that the emulator is slow; I think it's that the emulator doesn't simulate the environment very well, and because of that, being slow (and running slow debug code) makes things break. Before worrying about an x86 emulator (or going *too* far down the "run it on faster hardware" road), we should verify that faster hardware will produce fewer spurious oranges. Manually standing up a few testers and letting them run the mochitest load (even by hand) until we have enough data would show what moving the tests will do. I do think that with the current emulator, running it on faster hardware *will* help wallpaper over the fundamental problems.
I base this on my experience with the M10 media tests that began this thread - they ran fine on a 2.5-year-old Xeon (~10s, no timeouts) and took hundreds of seconds (and timed out) on the AWS testers. So moving the media tests will likely be a large win. But this (or x86) doesn't address the fundamental problem, which is that the emulator clearly isn't emulating the underlying environment well, in particular timers (see some of the discussion in this thread). If we can address the fundamental problem (even crudely), the need for high-perf testers may decline or even go away.
Re: Oculus VR support somehwat-non-free code in the tree
2014-04-14 18:41 GMT-04:00 Vladimir Vukicevic vladim...@gmail.com: 3. We do nothing. This option won't happen: I'm tired of not having Gecko and Firefox at the forefront of web technology in all aspects. Is VR already Web technology, i.e. is another browser vendor already exposing this, or would we be the first to? If VR is not yet a thing on the Web, could you elaborate on why you think it should be? I'm asking because the Web has so far mostly been a common-denominator, conservative platform. For example, WebGL stays at a distance behind the forefront of OpenGL innovation. I thought of that as being intentional. Is VR a departure from this, or is it already much more mainstream than I thought it was? Benoit
Re: Is there any replacement for Domain Policy in CAPS ( Bug 913734 )
On Tue, Apr 15, 2014 at 1:01 AM, xunxun xunxun1...@gmail.com wrote: Thanks. I haven't read how to write a Firefox extension yet, and I also want to find a method that does not depend on another extension. That may or may not be possible. You could try Neil's suggestion. If I revert the bug 913734 changes, will this cause other issues? If you try to revert bug 913734, you're going to have a bad time. ;-)
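For future readers of this thread: the API Bobby linked replaces the old capability.policy prefs with a programmatic domain policy on nsIScriptSecurityManager. The sketch below is untested pseudocode written from a reading of that IDL - the names (activateDomainPolicy, blacklist, deactivate) are assumptions to verify against nsIScriptSecurityManager.idl, and as noted above only one active policy may exist at a time:

```js
// Privileged (chrome) code only -- this is a Gecko-internal API, not web-exposed.
// All names here are assumptions from the IDL; verify before relying on them.
var ssm = Components.classes["@mozilla.org/scriptsecuritymanager;1"]
                    .getService(Components.interfaces.nsIScriptSecurityManager);
var policy = ssm.activateDomainPolicy();  // throws if another policy is already active
var uri = Services.io.newURI("http://nsclick.baidu.com", null, null);
policy.blacklist.add(uri);                // block script for just this one origin
// When finished: policy.deactivate();
```

If the sketch holds up, this does for a single domain what the pcxnojs policy prefs used to do, without installing NoScript.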
Re: Oculus VR support somehwat-non-free code in the tree
On 2014-04-15, 6:52 AM, Robert O'Callahan wrote: I'd like to get this checked in so that we can either have it enabled by default in nightlies (and nightlies only), or at least allow it to be enabled via a pref. However, there's one issue -- the LibOVR library has a not-fully-free-software license [1]. It's compatible with our licenses, but it is not fully free. I don't think their Human-Readable Summary is a very good summary. -- Section 2 says that modifications to the SDK code are not just shared with Oculus but Oculus actually owns the copyright to such modifications. If someone patches a Mozilla-tree copy of the code, say to add FreeBSD support, that would kick in. Until now, contributors have always retained copyright over their Mozilla contributions. -- Section 3.1 prohibits use of the SDK with any hardware not approved by Oculus VR. I guess there isn't any hardware in that category right now, but if someone builds compatible hardware, we'd be in a bad situation if we were depending on this code. I think the former is actually going to be a problem for us very soon after we import this code. We should really try to get them to relicense their SDK if at all possible. Cheers, Ehsan
Re: Recommendations on source control and code review
On 4/14/14, 10:31 AM, smaug wrote: As a reviewer I usually want to see _also_ a patch which contains all the changes. Otherwise it can be very difficult to see the big picture. But sure, having large patches split into smaller pieces may help. btw, if you have opinions about code review tools, see the mozilla-code-review list for discussions on the integration of Review Board, BMO, and hg: https://groups.google.com/forum/#!forum/mozilla-code-review chris
Re: Enhancing product security with CSP for internal pages
On 15.04.2014 00:43, Neil wrote: Frederik Braun wrote: A few months ago I had the idea to add a Content Security Policy (CSP) to our internal pages, like about:newtab for example. So this just applies to about: pages? Primarily yes. I think some people are already working on other bits and pieces. Of course this also only makes sense for privileged about: pages. This would mean about:home can stay as is.
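For a sense of what such a policy could look like: a privileged about: page that only loads its own chrome: resources might ship something like the following. This is an illustrative policy, not one taken from any actual bug; real pages would need the directives tuned per page:

```
Content-Security-Policy: default-src chrome:; object-src 'none'
```

Since 'unsafe-inline' is absent, inline script - the main injection vector CSP guards against - is refused even on the privileged page, and only chrome: URLs may be loaded at all.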
Re: Is there any replacement for Domain Policy in CAPS ( Bug 913734 )
xunxun wrote: On Tue, 2014/4/15 6:46, Neil wrote: xunxun wrote: For example, I use the policy by default on my custom build: pref("capability.policy.policynames", "pcxnojs"); pref("capability.policy.pcxnojs.sites", "http://nsclick.baidu.com"); pref("capability.policy.pcxnojs.javascript.enabled", "noAccess"); nsclick.baidu.com can make closing a Firefox tab take too much time, so I use the policy to avoid it. Can you not use the content blocker to block scripts from nsclick.baidu.com? (I don't know what UI Firefox has for it; I normally use the Data Manager. I've successfully blocked Facebook and Twitter, for example.) I don't know what the Data Manager is; is it this: https://addons.mozilla.org/en-US/firefox/addon/data-manager/ ? But I don't want to solve the problem using an extension, because I only want to block nsclick.baidu.com --- only one domain. That extension is the only way I know offhand of configuring the content blocker, but that doesn't mean that it's the only way, and you can also uninstall the extension when you've finished configuring, as the setting is stored separately. -- Warning: May contain traces of nuts.
Re: Enhancing product security with CSP for internal pages
Frederik Braun wrote: On 15.04.2014 00:43, Neil wrote: Frederik Braun wrote: A few months ago I had the idea to add a Content Security Policy (CSP) to our internal pages, like about:newtab for example. So this just applies to about: pages? Primarily yes. I think some people are already working on other bits and pieces. Of course this also only makes sense for privileged about: pages. This would mean about:home can stay as is. So about:certError and about:blocked stay as is too?
Re: Oculus VR support somehwat-non-free code in the tree
Mike Hoye wrote: the window-close button being right next to the send button in Thunderbird turns into a real problem after your third coffee, let me tell you. Don't you mean before your third coffee?
Re: Enhancing product security with CSP for internal pages
On 15.04.2014 22:45, Neil wrote: So about:certError and about:blocked stay as is too? Yep
Re: Oculus VR support somehwat-non-free code in the tree
Arguably if you wait for other vendors to expose VR before you do it, you'll end up having to implement a sub-standard proprietary API like you did with Web Audio. If you're first to the market (even with a prototype that's preffed off), you can exert a lot more pressure on how things turn out, IMO. As long as the web-facing part of the API isn't tied to the open-but-not-free Oculus API, you can always swap out the underlying parts later while continuing to reap the benefits of being able to take design leadership on VR. As for whether VR is actually relevant, that's a tough question, but the fact that Facebook was willing to drop an absurd amount of money on them tells you that they certainly think it will be important.
Re: Oculus VR support somehwat-non-free code in the tree
On Wed, Apr 16, 2014 at 3:14 AM, Benoit Jacob jacob.benoi...@gmail.com wrote: If VR is not yet a thing on the Web, could you elaborate on why you think it should be? I'm asking because the Web has so far mostly been a common denominator, conservative platform. For example, WebGL stays at a distance behind the forefront of OpenGL innovation. I thought of that as being intentional. That is not intentional. There are historical and pragmatic reasons why the Web operates well in fast follow mode, but there's no reason why we can't lead as well. If the Web is going to be a strong platform it can't always be the last to get shiny things. And if Firefox is going to be strong we need to lead on some shiny things. So we need to solve Vlad's problem. Rob -- Jtehsauts tshaei dS,o n Wohfy Mdaon yhoaus eanuttehrotraiitny eovni le atrhtohu gthot sf oirng iyvoeu rs ihnesa.rt sS?o Whhei csha iids teoa stiheer :p atroa lsyazye,d 'mYaonu,r sGients uapr,e tfaokreg iyvoeunr, 'm aotr atnod sgaoy ,h o'mGee.t uTph eann dt hwea lmka'n? gBoutt uIp waanndt wyeonut thoo mken.o w
Re: Oculus VR support somehwat-non-free code in the tree
On 15/04/2014 22:34, K. Gadd wrote: Arguably if you wait for other vendors to expose VR before you do it, you'll end up having to implement a sub-standard proprietary API like you did with Web Audio. We had an alternative implementation + API ( https://wiki.mozilla.org/Audio_Data_API ). I don't know if it exactly predated Web Audio, but I'm fairly sure qualifying us as "waiting for other vendors" in that domain is not fair, which also casts doubt on your assertion that "If you're first to the market (even with a prototype that's preffed off), you can exert a lot more pressure on how things turn out." ~ Gijs
Re: Oculus VR support somehwat-non-free code in the tree
On 2014-04-15, 5:34 PM, K. Gadd wrote: Arguably if you wait for other vendors to expose VR before you do it, you'll end up having to implement a sub-standard proprietary API like you did with Web Audio. If you're first to the market (even with a prototype that's preffed off), you can exert a lot more pressure on how things turn out, IMO. The crappiness of Web Audio as an API was partly because it was designed based on what a good sane C API would look like, and marketed heavily by Google before anyone had looked at the API to decide whether it was good or not. Being first doesn't always imply being right. I trust that we'll try to do the right thing about the API, but that is not what Vlad is asking help for yet. :-) Ehsan
Re: Oculus VR support somehwat-non-free code in the tree
On 2014-04-15, 5:58 PM, Gijs Kruitbosch wrote: We had an alternative implementation + API ( https://wiki.mozilla.org/Audio_Data_API ). The Audio Data API does predate Web Audio but it's actually much worse, and I'm glad that we did not end up with that as _the_ solution for audio on the Web. :-) Now, let's get back to solving Vlad's problem. Cheers, Ehsan
Re: Oculus VR support somehwat-non-free code in the tree
You can’t beat the competition by fast following the competition. Our competition is native, closed, proprietary ecosystems. To beat them, the Web has to be on the bleeding edge of technology. I would love to see VR support in the Web platform before it’s available as a builtin capability in any major native platform. Andreas
Re: Oculus VR support somehwat-non-free code in the tree
On Tue, Apr 15, 2014 at 10:33:51AM -0400, Mike Hoye wrote: While I support the pragmatic approach here - if we're rewriting the [library - mh] while building things on top of it, what is the fallback position if the library rewrite ... goes sideways for technical or legal reasons? A plan B we can live with seems like a deciding factor here. Is the risk of a rewrite not working greater than epsilon? I don't know where legal thinks the line is (IANAL, of course), but AFAIK clean-rooming something, especially given the source code, isn't hard, just time-consuming. Trev
Using rr to track down intermittent test failures
We just released rr 1.2 and I think this would be a good time for people to try to use it for one of the tasks it was designed for: debugging intermittent test failures. Consult http://rr-project.org for more information, but the gist is this: you can use rr to record the execution of Firefox test suites (with 32-bit Linux Firefox builds). If you see a test failure occur while recording with rr, you will be able to replay that execution and use gdb to debug the replay. The replay is exact, so the same failure will reoccur every time you replay, and in exactly the same way, including addresses of objects etc. This means that once you've recorded a test failure, it's just a matter of time before you figure out the bug.

The steps to get started doing this are roughly as follows:
1) Install rr on a Westmere-or-later Linux system (or VM with performance counters virtualized), build 32-bit Firefox (opt or debug) and verify that recording Firefox works for you. If it doesn't, please file a github issue, or check https://github.com/mozilla/rr/issues/973 if your system is Haswell-based.
2) (Optional) Enable chaos mode by editing mfbt/ChaosMode.h to return true for isActive.
3) (Optional) Enable shuffling of mochitests and reftests (sorry, I have a patch that needs to land for the latter).
4) Create a script somewhere called rr-record that does exec ~/rr/bin/rr record $*
5) Run your testsuite using mach, and pass the options --debugger rr-record
6) If necessary, run step 5 in a loop until you see test failures you want to fix, clearing out ~/.rr periodically so your disk doesn't fill.
7) Run rr replay -a to verify that the replay works. If it doesn't, report an rr bug.
8) Run rr replay to debug with gdb. Consult the rr documentation for more information about that.

cjones and I are usually around on #research if you have any issues with rr, and we'd love to hear from you. 
Rob
Re: Oculus VR support somehwat-non-free code in the tree
On Tuesday, April 15, 2014 5:57:13 PM UTC-4, Robert O'Callahan wrote: If the Web is going to be a strong platform it can't always be the last to get shiny things. And if Firefox is going to be strong we need to lead on some shiny things. So we need to solve Vlad's problem. It's very much a question of pragmatism, and where we draw the line. There are many options that avoid having to consider almost-open or almost-free licenses, or difficulties such as not being able to accept contributions for this one chunk of code. But they all result in the end result being weaker; developers or, worse, users have to go through extra steps and barriers to access the functionality. I think that putting up those barriers dogmatically doesn't really serve our goals well; instead, we need to find a way to be fast and scrappy while still staying within the spirit of our mission. Note that for purposes of this discussion, VR support is minimal: some properties to read to get some info about the output device (resolution, eye distance, distortion characteristics, etc.) and some more to get the orientation of the device. This is not a highly involved API, nor is it specific to Oculus; it's more of a first cut based on hardware that's easily available. I also briefly suggested an entirely separate non-free repository -- you can clone non-free into the top-level mozilla-central directory, or create it in other ways, and configure can figure things out based on what's present or not. 
That's an option, and it might be a way to avoid some of these issues. - Vlad
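To make the "minimal" shape concrete: the surface Vlad describes amounts to a handful of read-only device properties plus an orientation. The following is a hypothetical illustration only - none of these names, fields, or numbers come from the actual proposal or from LibOVR:

```javascript
// Hypothetical VR device-info shape (invented for illustration; not a real API).
function makeDeviceInfo(opts) {
  return {
    hResolution: opts.hResolution,     // per-eye horizontal pixels
    vResolution: opts.vResolution,     // per-eye vertical pixels
    eyeDistanceMm: opts.eyeDistanceMm, // interpupillary distance
    distortionK: opts.distortionK,     // lens distortion coefficients
  };
}

// Example consumer: the per-eye aspect ratio a renderer would feed into its
// projection matrix -- note it depends only on the data, not on any vendor SDK.
function perEyeAspect(info) {
  return info.hResolution / info.vResolution;
}

var device = makeDeviceInfo({
  hResolution: 640,   // illustrative numbers only
  vResolution: 800,
  eyeDistanceMm: 63.5,
  distortionK: [1.0, 0.22, 0.24, 0.0],
});
console.log(perEyeAspect(device)); // 0.8
```

The point of keeping the web-facing surface this small is exactly the one made earlier in the thread: the non-free SDK stays an implementation detail that can be swapped out underneath.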
Re: Using rr to track down intermittent test failures
On 16/04/2014 00:05, Robert O'Callahan wrote: We just released rr 1.2 and I think this would be a good time for people to try to use it for one of the tasks it was designed for: debugging intermittent test failures. This is awesome! Three questions: 1) Is anyone working on something similar that works for frontend code (particularly, chrome JS)? I realize we have a JS debugger, but sometimes activating the debugger at the wrong time makes the bug go away, and then there are timing issues, and the debugger doesn't stop all the event loops, so stopping at a breakpoint sometimes still has other code execute in the same context... AIUI your post, because the replay will replay the same Firefox actions, firing up the JS debugger is impossible because you can't make the process do anything. 2) Is anyone working on making this available on our TBPL infra for try pushes? 3) Is support for platforms other than Linux/gdb (thinking Mac/lldb particularly) planned? ~ Gijs
Re: Oculus VR support somehwat-non-free code in the tree
2014-04-15 18:28 GMT-04:00 Andreas Gal andreas@gmail.com: You can’t beat the competition by fast following the competition. Our competition are native, closed, proprietary ecosystems. To beat them, the Web has to be on the bleeding edge of technology. I would love to see VR support in the Web platform before its available as a builtin capability in any major native platform. Can't we? (referring to: You can’t beat the competition by fast following the competition.) The Web has a huge advantage over the competition (native, closed, proprietary ecosystems): The web only needs to be good enough. Look at all the wins that we're currently scoring with Web games. (I mention games because that's relevant to this thread). My understanding of this year's GDC announcements is that we're winning. To achieve that, we didn't really give the web any technical superiority over other platforms; in fact, we didn't even need to achieve parity. We merely made it good enough. For example, the competition is innovating with a completely new platform to run native code on the web, but with asm.js and emscripten we're showing that javascript is in fact good enough, so we end up winning anyway. What we need to ensure to keep winning is 1) that the Web remains good enough and 2) that it remains true, that the Web only needs to be good enough. In this respect, more innovation is not necessarily better, and in fact, the cost of innovating in the wrong direction could be particularly high for the Web compared to other platforms. We need to understand the above 2) point and make sure that we don't regress it. 2) probably has something to do with the fact that the Web is the one write once, run anywhere platform and, on top of that, also offers run forever. Indeed, compared to other platforms, we care much more about portability and we are much more serious about committing to long-term platform stability. Now my point is that we can only do that by being picky with what we support. 
There's no magic here; we don't get the above 2) point for free. Benoit
Re: Oculus VR support somehwat-non-free code in the tree
The following is all my opinion, of course: Arguably many of the wins currently being seen in web games were only possible because your biggest competitor (Google, with NaCl) basically ceded the market by failing to ship promptly and failing to support their developers. I don't think the asm.js games situation acts as a good example, given that. Game developers were super enthused about flascc/alchemy before, including Unity, and only abandoned it once it was clear Adobe was going to ruin it through mismanagement. Game developers were super enthused about NaCl, including Unity, and only abandoned it once it was clear Google would never make it actually usable for delivering products to customers. Only now do we see Unity and Unreal targeting asm.js, after those two options are dead (though Mozilla certainly worked hard to get them on board, and that is a huge success!) This is not to undermine the value of asm.js or to dismiss the hard work done on it, but you have to consider the context before treating it as a model for future decisions. Had things gone differently, asm.js would have been thoroughly beaten to market and might not have been able to build up as much momentum. Maybe there's an argument to be made that Mozilla's proprietary competitors will always stumble and fall, though. It does seem to happen a lot.
Re: Using rr to track down intermittent test failures
On Wed, Apr 16, 2014 at 11:05 AM, Robert O'Callahan rob...@ocallahan.org wrote: 4) Create a script somewhere called rr-record that does exec ~/rr/bin/rr record $* Sorry; this script, of course, needs to exec rr from wherever it was installed. Rob
Re: Oculus VR support somehwat-non-free code in the tree
On Apr 15, 2014, at 4:17 PM, Benoit Jacob jacob.benoi...@gmail.com wrote: Can't we? (referring to: “You can’t beat the competition by fast following the competition.”) Yes, we can. Look at some of the performance characteristics of FFOS on low-end hardware. We beat Android and other native systems on a regular basis on key performance metrics like startup performance by leveraging architectural advantages of the Web stack (lazy loading, etc). Or compare opening the App Store app on Mac OS X with going to a marketplace website like amazon.com. We load a rich content experience faster over the net than my high-end Mac loads the App Store from its SSD, because the Web has evolved to a place where it has better capabilities for these tasks than native. The Web has a huge advantage over the competition (native, closed, proprietary ecosystems): The web only needs to be good enough. Aiming low is always wrong. Always. It is true that the Web has massive reach, but that's not an excuse to be stagnant and reach for the “lowest common denominator” as you are proposing. The massive reach of the Web helps us to get innovation to people faster. It doesn’t remove the need to innovate. Look at all the wins that we're currently scoring with Web games. (I mention games because that's relevant to this thread). My understanding of this year's GDC announcements is that we're winning. To achieve that, we didn't really give the web any technical superiority over other platforms; in fact, we didn't even need to achieve parity. We merely made it good enough. 
For example, the competition is innovating with a completely new platform to run native code on the web, but with asm.js and emscripten we're showing that javascript is in fact good enough, so we end up winning anyway. We aren’t winning just yet. We barely got the foundation laid for Web gaming (even though I agree that we likely have tipped the scale now). In any case, we got here through technical excellence and innovation. asm.js is not merely “good enough” as you are claiming. It is the fastest, most widely available way to deliver portable game code to devices, with performance rivaling native performance. That's very different from “let's just trail the market and do as little as we need to.” What we need to ensure to keep winning is 1) that the Web remains good enough and 2) that it remains true that the Web only needs to be good enough. In this respect, more innovation is not necessarily better, and in fact, the cost of innovating in the wrong direction could be particularly high for the Web compared to other platforms. We need to understand the above 2) point and make sure that we don't regress it. 2) probably has something to do with the fact that the Web is the one write once, run anywhere platform and, on top of that, also offers run forever. Indeed, compared to other platforms, we care much more about portability and we are much more serious about committing to long-term platform stability. Now my point is that we can only do that by being picky with what we support. There's no magic here; we don't get the above 2) point for free. I think you have the history of the Web all wrong. The Web has always been and will always be like the Wild West. Innovation happens all over the place, and we iterate towards a stable, standardized point after innovation has happened. This is the biggest strength of the Web. It's not governed by a committee approving and managing the pace of innovation (or worse, by a single company controlling the ecosystem like Google or Apple). 
Nobody owns the Web and nobody can stop innovation. Of the 4 or so major browser vendors, if 2 move in some direction the other 2 have to follow suit or suffer the consequences of not being competitive on some characteristics. At the same time, nobody can go it alone and fork the Web, because nobody has enough market share to force a standard on their own. This is why Google’s proprietary extensions like NaCl and Dart are failing to get traction. Innovation is the lifeblood of the Web and we need heretics like Vlad to push its boundaries. I remember when Vlad first started pushing for WebGL. A lot of people felt it was crazy talk to expose GL to the Web, and today we can’t imagine a Web without it. Knowing Vlad and his track record, we will think the same about WebVR in a few years. Let's clear the roadblocks for him to take us there. Andreas
Re: Using rr to track down intermittent test failures
On 2014-04-15, 7:14 PM, Gijs Kruitbosch wrote: On 16/04/2014 00:05, Robert O'Callahan wrote: We just released rr 1.2 and I think this would be a good time for people to try to use it for one of the tasks it was designed for: debugging intermittent test failures. This is awesome! Three questions: 1) Is anyone working on something similar that works for frontend code (particularly, chrome JS)? I realize we have a JS debugger, but sometimes activating the debugger at the wrong time makes the bug go away, and then there are timing issues, and the debugger doesn't stop all the event loops, so stopping at a breakpoint sometimes still has other code execute in the same context... AIUI your post, because the replay will replay the same Firefox actions, firing up the JS debugger is impossible because you can't make the process do anything. I think you want to consult the SpiderMonkey hackers to see how feasible that will be... 2) Is anyone working on making this available on our TBPL infra for try pushes? Bug 996910. :-) 3) Is support for other platforms than Linux/gdb (thinking Mac/lldb particularly) planned? rr relies heavily on Linux. I think an x86-64 port is feasible, but I don't think we can ever get it to work on a non-FOSS OS. Cheers, Ehsan
Re: Using rr to track down intermittent test failures
On Wed, Apr 16, 2014 at 11:05 AM, Robert O'Callahan rob...@ocallahan.org wrote: 1) Install rr on a Westmere-or-later Linux system (or VM with performance counters virtualized), build 32-bit Firefox (opt or debug) and verify that recording Firefox works for you. If it doesn't, please file a github issue or check https://github.com/mozilla/rr/issues/973 if your system is Haswell-based. As a rule of thumb, anything Core i3/i5/i7 or later will work, and anything Core 2 or older will not work. BTW our support starts with Nehalem, which is actually older than Westmere. See https://software.intel.com/en-us/articles/intel-architecture-and-processor-identification-with-cpuid-model-and-family-numbers Rob
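Step 1's "verify that recording Firefox works for you" boils down to a record-then-replay round trip. A minimal sketch, assuming a local build at a hypothetical objdir path; it falls back to printing the intended commands when rr is not on PATH:

```shell
# Record a Firefox session, then replay the identical execution under gdb.
# The objdir path below is an assumption; point it at your own build.
record_and_replay() {
  if command -v rr >/dev/null 2>&1; then
    rr record "$1" -no-remote &&   # run until the bug reproduces, then quit
    rr replay                      # replays the latest recording under gdb
  else
    echo "rr not installed: would run 'rr record $1' then 'rr replay'"
  fi
}
record_and_replay ./obj-ff-opt/dist/bin/firefox
```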
Re: Policy for disabling tests which run on TBPL
Thank you for putting this together. It is important. jmaher writes: This policy will define an escalation path for when a single test case is identified to be leaking or failing and is causing enough disruption on the trees. Exceptions: 1) If this test has landed (or been modified) in the last 48 hours, we will most likely back out the patch with the test 2) If a test is failing at least 30% of the time, we will file a bug and disable the test first I have adjusted the above policy to mention backing out new tests which are not stable, working to identify a regression in the code or tests. I see the exception for regressions from test changes, but I didn't notice mention of regressions from code. If a test has started failing intermittently and is failing at least 30% of the time, then I expect it is not difficult to identify the trigger for the regression. It is much more difficult to identify an increase in frequency of failure. Could we make it clear that the preferred solution is to back out the cause of the regression? Either something in the opening paragraph, or perhaps add the exception: If a regressing push or series of pushes can be identified, then the changesets in those pushes are reverted. We won't always have a single push because other errors preventing tests from running are often not backed out until after several subsequent pushes.
Re: Oculus VR support somewhat-non-free code in the tree
On 2014-04-15, 7:14 PM, Vladimir Vukicevic wrote: On Tuesday, April 15, 2014 5:57:13 PM UTC-4, Robert O'Callahan wrote: On Wed, Apr 16, 2014 at 3:14 AM, Benoit Jacob jacob.benoi...@gmail.com wrote: I'm asking because the Web has so far mostly been a common denominator, conservative platform. For example, WebGL stays at a distance behind the forefront of OpenGL innovation. I thought of that as being intentional. That is not intentional. There are historical and pragmatic reasons why the Web operates well in fast follow mode, but there's no reason why we can't lead as well. If the Web is going to be a strong platform it can't always be the last to get shiny things. And if Firefox is going to be strong we need to lead on some shiny things. So we need to solve Vlad's problem. It's very much a question of pragmatism, and where we draw the line. There are many options that we can pursue that avoid having to consider almost-open or almost-free licenses, or difficulties such as not being able to accept contributions for this one chunk of code. But they all result in the end result being weaker; developers or, worse, users have to go through extra steps and barriers to access the functionality. I think that putting up those barriers dogmatically doesn't really serve our goals well; instead, we need to find a way to be fast and scrappy while still staying within the spirit of our mission. Note that for purposes of this discussion, VR support is minimal: some properties to read to get some info about the output device (resolution, eye distance, distortion characteristics, etc.) and some more to get the orientation of the device. This is not a highly involved API, nor is it specific to Oculus, but more of a first pass based on hardware that's easily available. 
I also briefly suggested an entirely separate non-free repository -- you can clone non-free into the top level mozilla-central directory, or create it in other ways, and configure can figure things out based on what's present or not. That's an option, and it might be a way to avoid some of these issues. Hmm, that might actually work very easily. I _think_ cloning that repo into extensions/ under a name such as oculus and then building with --enable-extension=oculus would just make the build system traverse that directory, right? Then we can add our Gecko code on top of it based on build system conditions such as |if 'oculus' in CONFIG['MOZ_EXTENSIONS']| in moz.build files, so if you have the oculus repo cloned under extensions/ and have the right mozconfig entry, everything will work out of the box. Does that seem useful? Cheers, Ehsan
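Ehsan's proposed layout can be sketched concretely. The repo name "oculus" and the --enable-extension flag mirror his message; the directory creation below just stands in for cloning the hypothetical non-free repo:

```shell
# Stand-in for: cd mozilla-central/extensions && hg clone <non-free-repo> oculus
mkdir -p mozilla-central/extensions/oculus

# Opt in via mozconfig; a tree without the clone (or without this line)
# simply builds with no Oculus code at all.
cat > mozilla-central/mozconfig <<'EOF'
ac_add_options --enable-extension=oculus
EOF
```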
Re: Oculus VR support somewhat-non-free code in the tree
I know I'm lobbing in from the sidelines on what is essentially a licensing debate internal to Mozilla... but wouldn't any VR implementation like the one Vlad described be best done as an extension of existing open web standards where possible? The 6dof data that comes from the Oculus Rift is essentially the same as the DeviceOrientation API. When Oculus add the new translation tracking to their API then that would really just be similar to the DeviceMotion API. And the stereo scene rendering is already easily done with WebGL shaders, so there's nothing new required there as far as I can see. We've also put together a plugin for our open source awe.js framework that uses getUserMedia() to turn the Rift into a video-see-thru AR device too. And for the 6dof tracking we just use the open source oculus-bridge app that makes this data available via a WebSocket, which is enough for this type of proof of concept. http://www.youtube.com/watch?v=kIHih4Cc1ag&feature=youtu.be Of course if that just turned up as the DeviceOrientation API when you plugged in the Rift then that would be even better. But once you guys have finished your licensing discussion I think it would also be great to have the discussion about using/extending existing open web standards rather than re-inventing some just for VR. AR and VR are just on one mixed reality continuum and they are both already starting to have an impact on what the next web will look and feel like. NOTE: If you combine all of this with the Depth Stream Extension we're working on for gUM too then this creates some amazing hybrid opportunities for web based VR that is also aware of the environment/scene around you. http://www.w3.org/wiki/Media_Capture_Depth_Stream_Extension Also note that Chrome have already implemented support for Project Tango too, and hopefully this will end up being upgraded to support the new Depth Stream Extension when that's ready. 
On a slightly related note we've also implemented Kinect support that exposes the OpenNI Skeleton data via a WebSocket. This allows you to use the Kinect to project your body into a WebGL scene. This is great for VR and is definitely a new area where no existing open web standard is already working. roBman On 16/04/14 9:43 AM, Andreas Gal wrote: [snip]
Re: Using rr to track down intermittent test failures
On Wed, Apr 16, 2014 at 11:14 AM, Gijs Kruitbosch gijskruitbo...@gmail.com wrote: 1) Is anyone working on something similar that works for frontend code (particularly, chrome JS)? I realize we have a JS debugger, but sometimes activating the debugger at the wrong time makes the bug go away, and then there's timing issues, and that the debugger doesn't stop all the event loops and so stopping at a breakpoint sometimes still has other code execute in the same context... AIUI your post, because the replay will replay the same Firefox actions, firing up the JS debugger is impossible because you can't make the process do anything. Your understanding is correct. Implementing some kind of proper JS debugging on top of rr is technically feasible, and would actually be super awesome for our front-end contributors and also Web developers --- but it would be a lot of work. It's possible, but rather painful, to figure out what JS is doing using gdb. More and better gdb helper scripts would improve that situation. So rr isn't a great solution for JS developers at this time ... unless you're really desperate. Looking at browser-chrome, maybe we should be desperate :-). If you are desperate, try using rr to record and replay bugs that matter to you. If that works, it may be worth investing to make JS debugging through rr+gdb a bit more palatable. 2) Is anyone working on making this available on our TBPL infra for try pushes? Having a set of rr-enabled tests running on TBPL would be good but there are a few issues: -- Some bugs might not reproduce at all under rr. So it would be unwise to stop running our not-under-rr tests. -- The test slaves running rr would need to meet rr's specs and be bare-metal or VMs with perf counter virtualization enabled. -- We would need a story for debugging test failures recorded by rr in our test farm. ssh into the box is the easiest, technically. 
We haven't done any work to make traces portable and generally that's going to be either fragile or slow to replay. 3) Is support for other platforms than Linux/gdb (thinking Mac/lldb particularly) planned? Mac might be feasible but would be a massive amount of work. Other projects would be more valuable (e.g. x86-64, Android, improved debugging features). So, no. Rob
Re: Using rr to track down intermittent test failures
On 4/15/2014 7:05 PM, Robert O'Callahan wrote: The steps to get started doing this are roughly as follows: 1) Install rr on a Westmere-or-later Linux system (or VM with performance counters virtualized), build 32-bit Firefox (opt or debug) and verify that recording Firefox works for you. If it doesn't, please file a github issue or check https://github.com/mozilla/rr/issues/973 if your system is Haswell-based. Do you have any idea on a timeframe for x86-64 support? I have a 64-bit Ubuntu install, and historically it's a bit of a pain to get 32-bit Firefox running. (Alternately, if someone wants to figure out instructions for getting a 32-bit Firefox built and running on Ubuntu, that would also be helpful.) -Ted
Re: Using rr to track down intermittent test failures
- Original Message - Do you have any idea on a timeframe for x86-64 support? I have a 64-bit Ubuntu install, and historically it's a bit of a pain to get 32-bit Firefox running. (Alternately, if someone wants to figure out instructions for getting a 32-bit Firefox built and running on Ubuntu, that would also be helpful.) Your wish is granted: https://github.com/padenot/fx-32-on-64.sh -Nathan
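For reference, the usual mozconfig approach to a 32-bit build on a 64-bit Ubuntu host looks roughly like this. A sketch only: padenot's script is the maintained recipe, and the package names and flags here are assumptions that varied across Ubuntu releases.

```shell
# Prerequisite (not run here): sudo apt-get install gcc-multilib g++-multilib
# Then aim the build at a 32-bit target via mozconfig:
cat > mozconfig-32 <<'EOF'
ac_add_options --target=i686-pc-linux-gnu
export CC="gcc -m32"
export CXX="g++ -m32"
EOF
```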
Re: Using rr to track down intermittent test failures
On Wed, Apr 16, 2014 at 12:24 PM, Ted Mielczarek t...@mielczarek.org wrote: Do you have any idea on a timeframe for x86-64 support? It's technically not that hard, but it's a reasonably large project so it's not going to happen right away. Rob
Re: Oculus VR support somewhat-non-free code in the tree
On Apr 15, 2014, at 9:00 PM, Robert O'Callahan rob...@ocallahan.org wrote: On Wed, Apr 16, 2014 at 11:14 AM, Vladimir Vukicevic vladim...@gmail.com wrote: Note that for purposes of this discussion, VR support is minimal: some properties to read to get some info about the output device (resolution, eye distance, distortion characteristics, etc.) and some more to get the orientation of the device. This is not a highly involved API, nor is it specific to Oculus, but more of a first pass based on hardware that's easily available. A couple of related questions that might matter: How much code are we talking about? (I'm too lazy to register to find out) https://github.com/jdarpinian/LibOVR It's really not a lot of code. There is some signal processing math to do sensor filtering and fusion, but it's a few hundred lines only. The rest is standard USB HID device access glue for osx/linux/win. Andreas Any ideas on how Oculus might evolve their code in the future? Will it add new functionality we want? Rob