Re: Request Feedback - Submitting Canvas Frames, WebVR Compositor

2016-05-19 Thread Vladimir Vukicevic
This looks good to me in general -- for Gecko, this combined with offscreen
canvas and canvas/webgl in workers is going to be the best way to get
performant WebGL-based VR.  This is likely going to be the better way to
solve the custom-vsync for VR issue; while the large patch queue that I
have does work, it adds significant complexity to Gecko's vsync, and is
unlikely to get used by any other system.  You'll want to make sure that
the gfx folks weigh in on this as well.

Some comments -- if you want direct front-buffer rendering from canvas,
this will be tricky; you'll have to add support to canvas itself, because
right now the rendering context always allocates its own buffers.  That is,
you won't be able to render directly to the texture that goes in to an
Oculus compositor layer textureset, for example, even though that's what
you really want to do.  But I'd get the core working first and then work on
eliminating that copy and sharing the textureset surfaces with webgl canvas.

Same thing with support for Oculus Home as well as allowing for HTML
layers; those should probably be later steps (HTML/2D layers will need to
be rendered on the main thread and submitted from there, so timing them
between worker-offscreen-canvas layers and the main thread could be tricky).

- Vlad

On Tue, May 10, 2016 at 6:18 PM Kearwood "Kip" Gilbert wrote:

> Hello All,
>
> In order to support features in the WebVR 1.0 API (
> https://mozvr.com/webvr-spec/) and to improve performance for WebVR, I
> would like to implement an optimized path for submitting Canvas and
> OffscreenCanvas frames to VR headsets.  The WebVR 1.0 API introduces "VR
> Layers", explicit frame submission, and presenting different content to the
> head mounted display independently of the output to the regular 2d monitor.  I
> would like some feedback on a proposed “VR Compositor” concept that would
> enable this.
>
> *What would be common between the “VR Compositor” and the regular “2d
> Compositor”?*
> - TextureHost and TextureChild would be used to transfer texture data
> across processes.
> - When content processes crash, the VR Compositor would continue to run.
> - There is a parallel between regular layers created by layout and “VR
> Layers”.
> - There would be one VR Compositor serving multiple content processes.
> - The VR Compositor would not allow unprivileged content to read back
> frames submitted by other content and chrome UX.
> - Both compositors would exist in the “Compositor” process, but in
> different threads.
>
> *What is different about the “VR Compositor”?*
> - The VR Compositor would extend the PVRManager protocol to include VR
> Layer updates.
> - The VR Compositor will not obscure the main 2d output window or require
> entering full screen to activate a VR headset.
> - In most cases, there will be no visible window created by the VR
> Compositor as the VR frames are presented using VR-specific APIs that
> bypass the OS-level window manager.
> - The VR Compositor will not run synchronously with a refresh driver as it
> can simultaneously present content with mixed frame rates.
> - Texture updates submitted for VR Layers would be rendered as soon as
> possible, often asynchronously with other VR Layer updates.
> - VR Layer textures will be pushed from both Canvas elements and
> OffscreenCanvas objects, enabling WebVR in WebWorkers.
> - The VR compositor will guarantee perfect frame uniformity, with each
> frame associated with a VR headset pose frame explicitly passed into
> VRDisplay.SubmitFrame.  No frames will be dropped, even if multiple frames
> are sent within a single hardware vsync.
> - For most devices (e.g. Oculus and HTC Vive), the VR Compositor will
> perform front-buffer rendering.
> - VR Layers asynchronously move with the user’s HMD pose between VR Layer
> texture updates if given geometry and a position within space.
> - The VR Compositor implements latency hiding effects such as Asynchronous
> Time Warp and Pose Prediction.
> - The VR Compositor will be as minimal as possible.  In most cases, the VR
> Compositor will offload the actual compositing to the VR device runtimes.
>  (Both Oculus and HTC Vive include a VR compositor)
> - When the VR device runtime does not supply a VR Compositor, we will
> emulate this functionality.  (e.g. for Cardboard VR)
> - All VR hardware API calls will be made exclusively from the VR
> Compositor’s thread.
> - The VR Compositor will implement focus handling, window management, and
> other functionality required for Firefox to be launched within environments
> such as Oculus Home and SteamVR.
> - To support backwards compatibility and fall-back views of 2d web content
> within the VR headset, the VR compositor could provide an nsWidget /
> nsWindow interface to the 2d compositor.  The 2d compositor output would be
> projected onto the geometry of a VR Layer and updated asynchronously with
> HMD poses.
> - The VR Compositor will not allocate unnecessary 
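[Editor's note: for reference, the explicit frame-submission model the proposal builds on looks roughly like this from the content side. This is a minimal sketch against the WebVR 1.0 draft API; `drawScene` is a hypothetical app-defined WebGL renderer, and a real page would also handle errors and presentation loss.]

```javascript
// Minimal sketch of WebVR 1.0 explicit frame submission, assuming the
// draft API shapes (VRDisplay.requestPresent / getPose / submitFrame).
// The display is passed in so the wiring can be exercised with a stub.
function startPresenting(vrDisplay, canvas, drawScene) {
  // Ask the display to present frames sourced from this canvas ("VR Layer").
  return vrDisplay.requestPresent([{ source: canvas }]).then(function () {
    function onVRFrame() {
      vrDisplay.requestAnimationFrame(onVRFrame); // display-rate, not monitor-rate
      var pose = vrDisplay.getPose();  // the pose this frame is rendered against
      drawScene(pose);                 // app-defined WebGL rendering
      vrDisplay.submitFrame(pose);     // hand the frame, with its pose, to the VR compositor
    }
    vrDisplay.requestAnimationFrame(onVRFrame);
  });
}
```

Tying each submitted frame to the pose it was rendered against is what lets the VR compositor apply latency-hiding effects such as time warp, per the list above.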

crash stats query tools

2016-04-21 Thread vladimir
Hi all,

I wrote some tools a while back intended to make it possible to do complex 
crash stats queries locally, using downloaded crash stats data.  They can run 
queries written in a mongodb-like query language, and even function-based 
queries (running a function on each crash to decide whether it should be 
included or not).  You can use these queries/buckets to create custom top-crash 
lists, or otherwise pull out data from crash stats.

They're node.js tools; you can find the repository and some instructions here: 
https://github.com/vvuk/crystalball  You'll need an API key from crash stats, 
and be aware that the initial data download is expensive on the server; you can 
copy the cache files to multiple machines instead of re-downloading (they're 
static; all the data for a given day is downloaded).
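[Editor's note: to make the idea concrete, a mongodb-style query of the kind described above might be evaluated roughly like this. This is illustrative only — the tools' actual query syntax and operators are documented in the repository.]

```javascript
// Illustrative matcher for mongodb-style crash queries: plain values must be
// equal, RegExps are pattern-tested, functions act as predicates, and one
// operator object ($gt) is shown as an example. Not the tools' actual code.
function matches(crash, query) {
  return Object.keys(query).every(function (key) {
    var cond = query[key];
    if (typeof cond === "function") return cond(crash[key]);  // function-based query
    if (cond instanceof RegExp) return cond.test(crash[key]); // pattern match
    if (cond !== null && typeof cond === "object" && "$gt" in cond)
      return crash[key] > cond.$gt;                           // operator example
    return crash[key] === cond;                               // exact match
  });
}

// A custom "top crash" bucket: Firefox OOM signatures after 60s of uptime.
var topCrashQuery = { product: "Firefox", signature: /OOM/, uptime: { $gt: 60 } };
```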

Let me know if anyone finds this useful, or if there are features you'd like to 
see added (pull requests accepted as well).

- Vlad
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship: WebVR

2015-10-29 Thread vladimir
On Monday, October 26, 2015 at 9:39:57 PM UTC-4, Ehsan Akhgari wrote:
> First things first, congratulations on getting this close!
> 
> What's the status of the specification?  I just had a quick skim and it 
> seems extremely light on details.

The spec is still a draft, and the API is expected to change significantly 
(specifically the fullscreen window integration is going to change).  The 
intent to ship here is a bit premature; the intent is to pref it on in nightly 
& aurora, not ship it all the way to release.

> There is quite a bit of detail missing.  The security model is 
> essentially blank, and the descriptions in section 4 seem to be 
> high-level overviews of what the DOM interfaces do, rather than detailed 
> descriptions that can be used in order to implement the specification.

Yep.

> Also some things that I was expecting to see in the API seem to be 
> missing.  For example, what should happen if the VR device is 
> disconnected as the application is running?  It seems like right now the 
> application can't even tell that happened.

Also something that's coming in an upcoming revision of the API.

> Another question: do you know if Chrome is planning to ship this feature 
> at some point?  Has there been interoperability tests?

They are currently in the same boat as us, shipping it in dev or one-off 
builds.  We're working with them on the specification, and we're generally 
interoperable currently.

- Vlad

> On 2015-10-26 3:19 PM, Kearwood "Kip" Gilbert wrote:
> > As of Oct 29, 2015 I intend to turn WebVR on by default for all
> > platforms. It has been developed behind the dom.vr.enabled preference.
> > A compatible API has been implemented (but not yet shipped) in Chromium
> > and Blink.
> >
> > Bug to turn on by default:
> > https://bugzilla.mozilla.org/show_bug.cgi?id=1218482
> >
> > Link to standard: https://mozvr.github.io/webvr-spec/webvr.html



Re: Intent to ship: WebVR

2015-10-29 Thread vladimir
On Wednesday, October 28, 2015 at 11:38:26 AM UTC-4, Gervase Markham wrote:
> On 26/10/15 19:19, Kearwood "Kip" Gilbert wrote:
> > As of Oct 29, 2015 I intend to turn WebVR on by default for all
> > platforms. It has been developed behind the dom.vr.enabled preference. 
> > A compatible API has been implemented (but not yet shipped) in Chromium
> > and Blink.
> 
> At one point, integrating with available hardware required us to use
> proprietary code. Is shipping proprietary code in Firefox any part of
> this plan, or not?

No.

 - Vlad


landing soon: core APIs for VR

2014-11-19 Thread Vladimir Vukicevic
Hi all,

We've had a lot of excitement around our VR efforts and the MozVR site, and we 
want to capitalize on this momentum.  Very soon, I'll be landing the early 
support code for VR in mozilla-central, pref'd off by default.  This includes 
adding the core VR interfaces, display item and layers functionality for VR 
rendering, as well as supporting code such as extensions to the Fullscreen API.

Core VRDevices API:
https://bugzilla.mozilla.org/show_bug.cgi?id=1036604

Layers/gfx pieces:
https://bugzilla.mozilla.org/show_bug.cgi?id=1036602

Fullscreen API extensions:
https://bugzilla.mozilla.org/show_bug.cgi?id=1036606
https://bugzilla.mozilla.org/show_bug.cgi?id=1036597

This code is sufficient to perform WebGL-based VR rendering with an output 
going to an Oculus Rift.  None of the CSS-based VR code is ready to land yet.  
Additionally, this won't work out of the box, even if the pref is flipped, 
until we ship the Oculus runtime pieces (but initially, instructions will be 
available on where to get the relevant DLL and where to put it).
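[Editor's note: a feature check for this early, pref'd-off API might look like the sketch below. The `getVRDevices` entry point follows the bugs linked above, but the API shape changed repeatedly in this period, so treat every name here as an assumption rather than a reference.]

```javascript
// Hypothetical feature-detect for the early VRDevices API: resolve with the
// first device if the API exists, or null when the pref is off / unsupported.
function findFirstVRDevice() {
  if (!navigator.getVRDevices) return Promise.resolve(null); // pref off or no support
  return navigator.getVRDevices().then(function (devices) {
    return devices.length ? devices[0] : null;
  });
}
```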

The following things need to take place to pref this code on by default for 
nightly/aurora builds (not for release):

- Figure out how to ship/package/download/etc. the Oculus runtime pieces.
- Add support for Linux
- Add basic tests for VRDevice APIs

Beyond that there is a lot of work left to be done, both around supporting 
additional headset and input devices (Cardboard, etc.) as well as platforms 
(Android, FxOS), but that can be done directly in the tree instead of needing 
to maintain separate one-off builds for this work.

- Vlad


PSA: Windows builds will default to XPCOM_DEBUG_BREAK=warn

2014-08-12 Thread Vladimir Vukicevic
I'm about to land bug 1046222, which changes the default behaviour of 
XPCOM_DEBUG_BREAK on Windows to warn, instead of trap.  In debug builds, when 
run under a debugger, trap causes the debugger to stop as if it had hit a 
breakpoint.

This would be useful behaviour if we didn't still have a whole ton of 
assertions, but as it is it's an unnecessary papercut for Windows developers, 
or people doing testing/debugging on Windows -- some of whom may not know that 
they should set XPCOM_DEBUG_BREAK in the debugger, and are instead clicking 
continue through tons of assertions until they get to what they care about!

The change will bring Windows assertion behaviour in line with other platforms.

- Vlad


Re: Getting rid of already_AddRefed?

2014-08-12 Thread Vladimir Vukicevic
On Tuesday, August 12, 2014 11:22:05 AM UTC-4, Aryeh Gregor wrote:
> For refcounted types, isn't a raw pointer in a local variable a red
> flag to reviewers to begin with?  If GetT() returns a raw pointer
> today, like nsINode::GetFirstChild() or something, storing the result
> in a raw pointer is a potential use-after-free, and that definitely
> has happened already.  Reviewers need to make sure that refcounted
> types aren't ever kept in raw pointers in local variables, unless
> perhaps it's very clear from the code that nothing can possibly call
> Release() (although it still makes me nervous).

Putting the burden on reviewers when something can be automatically checked 
doesn't seem like a good idea -- it requires reviewers to know that GetT() 
*does* return a refcounted type, for example.  As dbaron pointed out, there are 
cases where we do actually return and keep things around as bare pointers.

It's unfortunate that we can't create an nsCOMPtr that will disallow 
assignment to a bare pointer without an explicit .get(), but will still allow 
conversion to a bare pointer for arg-passing purposes.  (Or can we? I admit my 
C++-fu is not that strong in this area...)  It would definitely be nice to get 
rid of already_AddRefed (not least because the spelling of Refed always 
grates when I see it :).

- Vlad


Re: OMTC on Windows

2014-05-28 Thread Vladimir Vukicevic
(Note: I have not looked into the details of CART/TART and their interaction 
with OMTC)

It's entirely possible that (b) is true *now* -- the test may have been good 
and proper for the previous environment, but the environment characteristics 
have changed such that the test now needs tweaks.  Empirically, I have not seen 
any regressions on any of my Windows machines (which is basically all of them); 
things like tab animations have started feeling smoother even after a 
long-running browser session with many tabs.  I realize this is not the same as 
cold hard numbers, but it does suggest to me that we need to take another look 
at the tests now.

- Vlad

- Original Message -
> From: Gijs Kruitbosch gijskruitbo...@gmail.com
> To: Bas Schouten bschou...@mozilla.com, Gavin Sharp ga...@gavinsharp.com
> Cc: dev-tech-...@lists.mozilla.org, mozilla.dev.platform group 
> dev-platform@lists.mozilla.org, release-drivers release-driv...@mozilla.org
> Sent: Thursday, May 22, 2014 4:46:29 AM
> Subject: Re: OMTC on Windows
>
> Looking on from m.d.tree-management, on Fx-Team, the merge from this
> change caused a 40% CART regression, too, which wasn't listed in the
> original email. Was this unforeseen, and if not, why was this
> considered acceptable?
>
> As gavin noted, considering how hard we fought for 2% improvements (one
> of the Australis folks said yesterday "1% was like Christmas!") despite
> our reasons of why things were really OK because of some of the same
> reasons you gave (e.g. running in ASAP mode isn't realistic, TART is
> complicated, ...), this hurts - it makes it seem like (a) our
> (sometimes extremely hacky) work was done for no good reason, or (b) the
> test is fundamentally flawed and we're better off without it, or (c)
> when the gfx team decides it's OK to regress it, it's fine, but not when
> it happens to other people, quite irrespective of reasons given.
>
> All/any of those being true would give me the sad feelings. Certainly it
> feels to me like (b) is true if this is really meant to be a net
> perceived improvement despite causing a 40% performance regression in
> our automated tests.
>
> ~ Gijs
>
> On 18/05/2014 19:47, Bas Schouten wrote:
> > Hi Gavin,
> >
> > There have been several e-mails on different lists, and some communication
> > on some bugs. Sadly the story is at this point not anywhere in a condensed
> > form, but I will try to highlight a couple of core points; some of these
> > will be updated further as the investigation continues. The official bug
> > is bug 946567, but the numbers and the discussion there are far outdated
> > (there's no 400% regression ;)):
> >
> > - What OMTC does to TART scores differs wildly per machine: on some
> > machines we saw up to 10% improvements, on others up to 20% regressions.
> > There also seems to be somewhat more of a regression on Win7 than there is
> > on Win8. What the average is for our users is very hard to say; frankly, I
> > have no idea.
> > - One core cause of the regression is that we're now dealing with two D3D
> > devices when using Direct2D, since we're doing D2D drawing on one thread
> > and D3D11 composition on the other. This means we have DXGI locking
> > overhead to synchronize the two. This is unavoidable.
> > - Another cause is that we now have two surfaces in order to do double
> > buffering; this means we need to initialize more resources when new layers
> > come into play. This, again, is unavoidable.
> > - Yet another cause is that for some tests we composite 'ASAP' to get
> > interesting numbers, but this causes some contention scenarios which are
> > less likely to occur in real-life usage. The double buffer might copy the
> > area validated in the last frame from the front buffer to the backbuffer
> > in order to prevent having to redraw much more; if the compositor is
> > compositing all the time, this can block the main thread's rasterization.
> > I have some ideas on how to improve this, but I don't know how much
> > they'll help TART. In any case, some cost here will be unavoidable as a
> > natural additional consequence of double buffering.
> > - The TART number story is complicated; sometimes it's hard to know what
> > exactly they do and don't measure (which might be different with and
> > without OMTC) and how that affects practical performance. I've been told
> > this by Avi and it matches my practical experience with the numbers. I
> > don't know the exact reasons and Avi is probably a better person to talk
> > about this than I am :-).
> >
> > These are the core reasons that we were able to identify from profiling.
> > Other than that, the things I said in my previous e-mail still apply. We
> > believe we're offering significant UX improvements with async video and
> > are enabling more significant improvements in the future. Once we've fixed
> > the obvious problems we will continue to see if there's something that can
> > be done, either through tiling or through other improvements; particularly
> > in the last point I mentioned there might be some, not 'too' complex

Re: Intent to implement: WebGL 2.0

2014-05-08 Thread Vladimir Vukicevic
On Thursday, May 8, 2014 5:25:49 AM UTC-4, Henri Sivonen wrote:
> Making the Web little-endian may indeed have been the right thing.
> Still, at least from the outside, it looks like the WebGL group didn't
> make an intentional wise decision to make the Web little-endian but
> instead made a naive decision that, coupled with the general Web
> developer behavior and the dominance of little-endian hardware,
> resulted in the Web becoming little-endian.
>
> http://www.khronos.org/registry/typedarray/specs/latest/#2.1 still
> says "The typed array view types operate with the endianness of the
> host computer." instead of saying "The typed array view types operate
> in the little-endian byte order. Don't build big endian systems
> anymore."
>
> *Maybe* that's cunning politics to get a deliberate
> little-endianization pass without more objection, but from the spec
> and a glance at the list archives it sure looks like the WebGL group
> thought that it's reasonable to let Web developers deal with the API
> behavior differing on big-endian and little-endian computers, which
> isn't at all a reasonable expectation given everything we know about
> Web developers.

This is a digression, and I'm happy to discuss the endianness of typed 
arrays/webgl in a separate thread, but this decision was made because it made 
the most sense, both from a technical perspective (even for big endian 
machines!) and from an implementation perspective.
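[Editor's note: the behavior under discussion is easy to see in a few lines. A sketch; the stated value for the host-order view holds only on a little-endian machine.]

```javascript
// DataView reads/writes with an explicit byte order; multi-byte typed-array
// views (Uint32Array etc.) use the host's byte order, per the spec text
// quoted above.
var buf = new ArrayBuffer(4);
new DataView(buf).setUint32(0, 0x11223344, /* littleEndian = */ true);
var bytes = new Uint8Array(buf);     // 0x44, 0x33, 0x22, 0x11 on every host
var hostView = new Uint32Array(buf); // 0x11223344 only on a little-endian host
```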

You seem to have a really low opinion of Web developers.  That's unfortunate, 
but it's your opinion.  It's not one that I share.  The Web is a complex 
platform.  It lets you do simple things simply, and it makes complex/difficult 
things possible.  You need to have some development skill to do the 
complex/difficult things.  I'd rather have that than make those things 
impossible.

- Vlad


Re: Standards side of VR

2014-04-16 Thread Vladimir Vukicevic
Yep, my plan was to not let this get beyond Nightly, maybe Aurora, but
not further until the functionality and standards were firmer.

- Vlad


On Wed, Apr 16, 2014 at 12:08 PM, Anne van Kesteren ann...@annevk.nl wrote:

> On Wed, Apr 16, 2014 at 4:59 PM, Ehsan Akhgari ehsan.akhg...@gmail.com
> wrote:
> > I think a great way to deal with that is to keep features on the beta
> > channel and continue to make breaking changes to them before we feel
> > ready to ship them.  The reality is that once we ship an API our
> > ability to make any backwards incompatible changes to it will be
> > severely diminished if websites that our users depend on break because
> > of that (albeit in the case of a very new technology such as VR which
> > will not be useful on all Web applications, the trade-off might be
> > different.)
>
> Yeah, enabling on beta combined with talks with our competitors about
> shipping might be a good way to go.
>
>
> --
> http://annevankesteren.nl/



Re: Oculus VR support somewhat-non-free code in the tree

2014-04-15 Thread Vladimir Vukicevic
On Tuesday, April 15, 2014 5:57:13 PM UTC-4, Robert O'Callahan wrote:
> On Wed, Apr 16, 2014 at 3:14 AM, Benoit Jacob jacob.benoi...@gmail.com wrote:
> >
> > I'm asking because the Web has so far mostly been a common denominator,
> > conservative platform. For example, WebGL stays at a distance behind the
> > forefront of OpenGL innovation. I thought of that as being intentional.
>
> That is not intentional. There are historical and pragmatic reasons why the
> Web operates well in "fast follow" mode, but there's no reason why we can't
> lead as well. If the Web is going to be a strong platform it can't always
> be the last to get shiny things. And if Firefox is going to be strong we
> need to lead on some shiny things.
>
> So we need to solve Vlad's problem.

It's very much a question of pragmatism, and where we draw the line.  There are 
many options that we can do that avoid having to consider almost-open or 
almost-free licenses, or difficulties such as not being able to accept 
contributions for this one chunk of code.  But they all result in the end 
result being weaker; developers or worse, users have to go through extra steps 
and barriers to access the functionality.  I think that putting up those 
barriers dogmatically doesn't really serve our goals well; instead, we need to 
find a way to be fast and scrappy while still staying within the spirit of our 
mission.

Note that for purposes of this discussion, VR support is minimal... some 
properties to read to get some info about the output device (resolution, eye 
distance, distortion characteristics, etc.) and some more to get the orientation 
of the device.  This is not a highly involved API, nor is it specific to Oculus; 
it's more of a first pass based on hardware that's easily available.

I also briefly suggested an entirely separate non-free repository -- you can 
clone non-free into the top-level mozilla-central directory, or create it in 
other ways, and configure can figure things out based on what's present or not. 
That's an option, and it might be a way to avoid some of these issues.

- Vlad


Oculus VR support somewhat-non-free code in the tree

2014-04-14 Thread Vladimir Vukicevic
Hey all,

I have a prototype of VR display and sensor integration with the web, along 
with an implementation for the Oculus VR.  Despite there really being only one 
vendor right now, there is a lot of interest in VR.  I'd like to add the web 
and Firefox to that flurry of activity... especially given our successes and 
leadership position on games and asm.js.

I'd like to get this checked in so that we can either have it enabled by 
default in nightlies (and nightlies only), or at least allow it enabled via a 
pref.  However, there's one issue -- the LibOVR library has a 
not-fully-free-software license [1].  It's compatible with our licenses, but it 
is not fully free.

There are a couple of paths forward, many of which can take place 
simultaneously.  I'd like to suggest that we do all of the following:

1. Check in the LibOVR sources as-is, in other-licenses/oculus.  Add a 
configure flag, maybe --disable-non-free, that disables building it.  Build and 
ship it as normal in our builds.

2. Contact Oculus with our concerns about the license, and see if they would be 
willing to relicense to something more standard.  The MPL might actually fit 
their needs pretty well, though we are effectively asking them to relicense 
their SDK code.  There is no specific driver for the OVR; it shows up as a USB 
HID device, and LibOVR knows how to interpret the data stream coming from it.  
This gets them easy compat with all operating systems, and the support I'd add 
would be for Windows, Mac, and Linux.

3. Start investigating Open VR, with the intent being to replace the 
Oculus-specific library with a more standard one before we standardize and ship 
the API more broadly than to nightly users.

The goal would be to remove LibOVR before we ship (or keep it in assuming it 
gets relicensed, if appropriate), and replace it with a standard Open VR 
library.

There are a few other options that are worse:

1. We could ship the VR glue in nightly, but the Oculus support packaged as an 
addon.  This is doable, but it requires significant rework in changing the 
interfaces to use XPCOM, to do category-based registration of the Oculus 
provider, in building and packaging the addon, etc.  It also requires a 
separate install step for developers/users.

2. We could ship the VR integration as a plugin.  vr.js does this already.  But 
we are trying to move away from plugins, and there's no reason why the Oculus 
can't function in places where plugins are nonexistent, such as mobile.  
Delivering this to developers via a plugin would be admitting that we can't 
actually deliver innovative features without the plugin API, which is untrue 
and pretty silly.

3. Require developers to install the SDK themselves, and deploy it to all of 
the build machines so that we can build it.  This is IMO a very non-pragmatic 
option; it requires a ton more fragile work (everyone needs to get and keep the 
SDK updated; releng needs to do the same on build machines) and sacrifices 
developer engagement (additional SDKs suck -- see the DirectX SDK that we're 
working on eliminating the need for) in order to try to preserve some form of 
purity.

4. We do nothing.  This option won't happen: I'm tired of not having Gecko and 
Firefox at the forefront of web technology in all aspects.

Any objections to the above, or alternative suggestions?  This is a departure 
in our current license policy, but not a huge one.  There were some concerns 
expressed about that, but I'm hoping that we can take a pragmatic path here.

   - Vlad

[1] https://developer.oculusvr.com/license


Re: Oculus VR support somewhat-non-free code in the tree

2014-04-14 Thread Vladimir Vukicevic
On Monday, April 14, 2014 7:29:43 PM UTC-4, Ralph Giles wrote:
> > The goal would be to remove LibOVR before we ship (or keep it in assuming
> > it gets relicensed, if appropriate), and replace it with a standard Open
> > VR library.
>
> Can you dlopen the sdk, so it doesn't have to be in-tree? That still
> leaves the problem of how to get it on a user's system, but perhaps an
> add-on can do that part while the interface code is in-tree.

Unfortunately, no -- the interface is all C++, and the headers are licensed 
under the same license.  A C layer could be written, but then we're back to 
having to ship it separately via addon or plugin anyway.

> Finally, did you see Gerv's post at
>
> http://blog.gerv.net/2014/03/mozilla-and-proprietary-software/

Yes -- perhaps unsurprisingly, I disagree with Gerv on some of the particulars 
here.  Gerv's opinions are his own, and are not official Mozilla policy.  That 
post I'm sure came out of a discussion regarding this very issue here.  In 
particular, my stance is that we build open source software because we believe 
there is value in that, and that it is the best way to build innovative, 
secure, and meaningful software.  We don't build open source software for the 
sake of building open source.

- Vlad


Re: UNIFIED_SOURCES breaks breakpoints in LLDB (Was: Unified builds)

2013-11-20 Thread Vladimir Vukicevic
I just did a unified and non-unified build on my Windows desktop -- non-SSD.  
VS2012, using mozmake.  Full clobber. (mozmake -s -j8)

Unified: 20 min
Non-Unified: 36 min

This is huge!  I was curious about the cost for incremental builds...

touch gfx/2d/Factory.cpp (part of a unified file), rebuild using binaries 
target:

Unified: 53s
Non-Unified: 58s

touch gfx/thebes/gfxPlatform.cpp (note: this dir/file is not unified), rebuild 
using binaries target:

Unified: 56s
Non-Unified: 56s

(I need to rerun this on my computer with an SSD; I had a single-file binaries 
rebuild down to 10s there)

... and was very surprised to see no real difference, often non-unified taking 
slightly longer.  So.  Big win, thanks guys!

- Vlad


Re: Proposal to remove the function timer code

2012-09-19 Thread Vladimir Vukicevic

On 9/19/2012 12:04 AM, Ehsan Akhgari wrote:

> A while ago (I think more than a couple of years ago now), Vlad
> implemented FunctionTimer, which is a facility to time how much each
> function exactly takes to run.  Then, I went ahead and instrumented a
> whole bunch of code which was triggered throughout our startup path to
> get a sense of what things are expensive there and what we can do about
> that.  That code is hidden behind the NS_FUNCTION_TIMER build-time flag,
> turned on by passing --enable-functiontimer.
>
> This dates back to the infancy of our profiling tools, and we have a
> much better built-in profiler these days which is usable without needing
> a build-time option.  I don't even have the scripts I used to parse the
> output and the crude UI I used to view the log around any more.  I've
> stopped building with --enable-functiontimer for a long time now, and I
> won't be surprised if that flag would even break the builds these days.
>
> So, I'd like to propose that we should remove all of that code.  Is
> anybody using function timers these days?  (I'll take silence as
> consent! :-)


Yep, sounds fine to me -- though we don't have equivalent functionality right 
now (e.g. we don't quite have the ability to time/measure regions), if it's 
not being maintained it's not useful.


- Vlad

