Re: PSA: RIP MOZ_ASSUME_UNREACHABLE

2014-09-22 Thread Benoit Jacob
Great work Chris! Thanks for linking to the study; the link gives me a 400
error (GitHub links are tricky):

2014-09-22 4:06 GMT-04:00 Chris Peterson cpeter...@mozilla.com:

 [1] https://raw.githubusercontent.com/bjacob/builtin-unreachable-study


Repo link: https://github.com/bjacob/builtin-unreachable-study
Notes file:
https://raw.githubusercontent.com/bjacob/builtin-unreachable-study/master/notes

Benoit




Re: Getting rid of already_AddRefed?

2014-08-12 Thread Benoit Jacob
As far as I know, the only downside of replacing already_AddRefed by
nsCOMPtr would be to incur more useless calls to AddRef and Release. In the
case of threadsafe (i.e. atomic) refcounting, these use atomic
instructions, which might be expensive enough on certain ARM CPUs that this
could matter. So if you're interested, you could take a low-end ARM CPU
that we care about and see whether replacing already_AddRefed by nsCOMPtr
causes any measurable performance regression.
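
If someone wants a rough sense of that cost before running a real Gecko
experiment, a standalone micro-benchmark along these lines (just a sketch of
an atomic vs. non-atomic refcount bump, not Gecko code) can be compiled and
run on the target CPU:

  #include <atomic>
  #include <chrono>
  #include <cstdio>

  int main() {
    const int kIters = 100 * 1000 * 1000;
    std::atomic<int> atomicCount(0);
    volatile int plainCount = 0;  // volatile so the plain loop isn't optimized away

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < kIters; ++i) {
      atomicCount.fetch_add(1, std::memory_order_relaxed);  // threadsafe AddRef-style bump
    }
    auto t1 = std::chrono::steady_clock::now();
    for (int i = 0; i < kIters; ++i) {
      plainCount = plainCount + 1;  // non-threadsafe bump, for comparison
    }
    auto t2 = std::chrono::steady_clock::now();

    auto ms = [](auto a, auto b) {
      return std::chrono::duration_cast<std::chrono::milliseconds>(b - a).count();
    };
    std::printf("atomic: %lld ms, plain: %lld ms\n",
                (long long)ms(t0, t1), (long long)ms(t1, t2));
  }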

Benoit


2014-08-12 10:59 GMT-04:00 Benjamin Smedberg benja...@smedbergs.us:

 Just reading bug 1052477, and I'm wondering what our intentions are for
 already_AddRefed.

 In that bug it's proposed to change the return type of NS_NewAtom from
 already_AddRefed to nsCOMPtr. I don't think that actually saves any
 addref/release pairs if done properly, since you'd typically .forget() into
 the return value anyway. But it does make it slightly safer at callsites,
 because the compiler will guarantee that the return value is always
 released instead of us relying on every already_AddRefed being saved into a
 nsCOMPtr.

 But now that nsCOMPtr/nsRefPtr support proper move constructors, is there
 any reason for already_AddRefed to exist at all in our codebase? Could we
 replace every already_AddRefed return value with a nsCOMPtr?

 --BDS


Re: Building with a RAM disk

2014-07-18 Thread Benoit Jacob
What OS are we talking about?

(On Linux, ramdisks are mountpoints like any other, so that would be
trivial; but then again, on Linux the kernel is good enough at using spare
RAM as disk cache that you get most of the benefits of a RAM disk
automatically.)

Benoit


2014-07-18 22:39 GMT-04:00 Geoff Lankow ge...@darktrojan.net:

 Today I tried to build Firefox on a RAM disk for the first time, and
 although I succeeded through trial and error, it occurs to me that there
 are probably things I could do better. Could someone who regularly does
 this make a blog post or an MDN page about their workflow and some tips and
 tricks? I think it'd be useful to many people but I (read: Google) couldn't
 find anything helpful.

 Thanks!
 GL


Re: PSA: DebugOnly fields aren't zero-sized in non-DEBUG builds

2014-07-16 Thread Benoit Jacob
That sounds like a good idea, if possible.


2014-07-16 14:41 GMT-04:00 Ehsan Akhgari ehsan.akhg...@gmail.com:

 Should we make DebugOnly MOZ_STACK_CLASS?


 On 2014-07-15, 9:21 PM, Nicholas Nethercote wrote:

 Hi,

 The comment at the top of mfbt/DebugOnly.h includes this text:

   * Note that DebugOnly instances still take up one byte of space, plus
 padding,
   * when used as members of structs.

 I'm in the process of making js::HashTable (a very common class)
 smaller by converting some DebugOnly fields to instead be guarded by
 |#ifdef DEBUG| (bug 1038601).

 Below is a list of remaining DebugOnly members that I found using
 grep. People who are familiar with them should inspect them to see if
 they belong to classes that are commonly instantiated, and thus if
 some space savings could be made.

 Thanks.

 Nick


 uriloader/exthandler/ExternalHelperAppParent.h:  DebugOnly<bool> mDiverted;
 layout/style/CSSVariableResolver.h:  DebugOnly<bool> mResolved;
 layout/base/DisplayListClipState.h:  DebugOnly<bool> mClipUsed;
 layout/base/DisplayListClipState.h:  DebugOnly<bool> mRestored;
 layout/base/DisplayListClipState.h:  DebugOnly<bool> mExtraClipUsed;
 gfx/layers/Layers.h:  DebugOnly<uint32_t> mDebugColorIndex;
 ipc/glue/FileDescriptor.h:  mutable DebugOnly<bool> mHandleCreatedByOtherProcessWasUsed;
 ipc/glue/MessageChannel.cpp:    DebugOnly<bool> mMoved;
 ipc/glue/BackgroundImpl.cpp:  DebugOnly<bool> mActorDestroyed;
 content/media/MediaDecoderStateMachine.h:  DebugOnly<bool> mInRunningStateMachine;
 dom/indexedDB/ipc/IndexedDBParent.h:  DebugOnly<RequestType> mRequestType;
 dom/indexedDB/ipc/IndexedDBParent.h:  DebugOnly<RequestType> mRequestType;
 dom/indexedDB/ipc/IndexedDBParent.h:  DebugOnly<RequestType> mRequestType;
 dom/indexedDB/ipc/IndexedDBChild.h:  DebugOnly<RequestType> mRequestType;
 dom/indexedDB/ipc/IndexedDBChild.h:  DebugOnly<RequestType> mRequestType;
 dom/indexedDB/ipc/IndexedDBChild.h:  DebugOnly<RequestType> mRequestType;


Re: PSA: DebugOnly fields aren't zero-sized in non-DEBUG builds

2014-07-15 Thread Benoit Jacob
It may be worth reminding people that this is not specific to DebugOnly but
general to all C++ classes: in C++, there is no such thing as a class with
size 0. So expecting DebugOnly<T> to be of size 0 is not misunderstanding
DebugOnly, it is misunderstanding C++. The only way to have empty classes
behave as if they had size 0 is to inherit from them instead of having
them as the types of members. That's called the Empty Base Class
Optimization:
http://en.wikibooks.org/wiki/More_C%2B%2B_Idioms/Empty_Base_Optimization
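
A minimal standalone sketch of that point (Empty here is just a stand-in for
what a DebugOnly<T> member boils down to in a non-DEBUG build):

  #include <cstdio>

  struct Empty {};  // stand-in for an empty helper class such as non-DEBUG DebugOnly<T>

  struct HasMember { void* p; Empty e; };  // empty member: costs at least 1 byte + padding
  struct HasBase : Empty { void* p; };     // empty base: costs nothing, thanks to the EBCO

  int main() {
    // Typically prints 16 and 8 on a 64-bit system.
    std::printf("HasMember: %zu bytes, HasBase: %zu bytes\n",
                sizeof(HasMember), sizeof(HasBase));
  }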

Since DebugOnly<T> incurs a size overhead in non-debug builds, maybe we
should officially consider it bad practice to have any DebugOnly<T> class
members. Having to guard them in #ifdef DEBUG takes away much of the point
of DebugOnly<T>, doesn't it?
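
For comparison, this is roughly the shape of the conversion Nicholas
describes in bug 1038601 (hypothetical class and member names):

  class SomeEntry {
    void* mKey;
    // A DebugOnly member costs space in every build:
    //   mozilla::DebugOnly<bool> mChecked;
    // An #ifdef DEBUG guard makes the field disappear entirely in non-DEBUG builds:
  #ifdef DEBUG
    bool mChecked;
  #endif
  };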

Benoit


2014-07-15 21:21 GMT-04:00 Nicholas Nethercote n.netherc...@gmail.com:

 Hi,

 The comment at the top of mfbt/DebugOnly.h includes this text:

  * Note that DebugOnly instances still take up one byte of space, plus
 padding,
  * when used as members of structs.

 I'm in the process of making js::HashTable (a very common class)
 smaller by converting some DebugOnly fields to instead be guarded by
 |#ifdef DEBUG| (bug 1038601).

 Below is a list of remaining DebugOnly members that I found using
 grep. People who are familiar with them should inspect them to see if
 they belong to classes that are commonly instantiated, and thus if
 some space savings could be made.

 Thanks.

 Nick


 uriloader/exthandler/ExternalHelperAppParent.h:  DebugOnly<bool> mDiverted;
 layout/style/CSSVariableResolver.h:  DebugOnly<bool> mResolved;
 layout/base/DisplayListClipState.h:  DebugOnly<bool> mClipUsed;
 layout/base/DisplayListClipState.h:  DebugOnly<bool> mRestored;
 layout/base/DisplayListClipState.h:  DebugOnly<bool> mExtraClipUsed;
 gfx/layers/Layers.h:  DebugOnly<uint32_t> mDebugColorIndex;
 ipc/glue/FileDescriptor.h:  mutable DebugOnly<bool> mHandleCreatedByOtherProcessWasUsed;
 ipc/glue/MessageChannel.cpp:    DebugOnly<bool> mMoved;
 ipc/glue/BackgroundImpl.cpp:  DebugOnly<bool> mActorDestroyed;
 content/media/MediaDecoderStateMachine.h:  DebugOnly<bool> mInRunningStateMachine;
 dom/indexedDB/ipc/IndexedDBParent.h:  DebugOnly<RequestType> mRequestType;
 dom/indexedDB/ipc/IndexedDBParent.h:  DebugOnly<RequestType> mRequestType;
 dom/indexedDB/ipc/IndexedDBParent.h:  DebugOnly<RequestType> mRequestType;
 dom/indexedDB/ipc/IndexedDBChild.h:  DebugOnly<RequestType> mRequestType;
 dom/indexedDB/ipc/IndexedDBChild.h:  DebugOnly<RequestType> mRequestType;
 dom/indexedDB/ipc/IndexedDBChild.h:  DebugOnly<RequestType> mRequestType;


Re: Firefox heap-textures usage

2014-07-03 Thread Benoit Jacob
Please file a bug on Bugzilla (product: Core, component: Graphics) and CC
bas.schouten. This about:memory report says you have 4 GB of textures,
which seems like too much; and the fact that 'explicit' is above 4 GB
suggests that this is real and not just a bug in this counter-based memory
reporter.

Benoit


2014-07-03 15:13 GMT-04:00 Wesley Hardman whardma...@gmail.com:

 I was wondering why I was running low on memory, then noticed Firefox.

 Heap textures seems rather large (can't drill down any).  I don't have
 that many tabs open (Window1 3 + Window2 2) for a total of 5.  CC/GC didn't
 help any.  I also closed the only tab that might be heavy on the graphics.
  I did have the dev tools open for a while with Log Request and Response
 Bodies turned on.

 Any ideas?

 Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:30.0) Gecko/20100101
 Firefox/30.0

 4,783.33 MB (100.0%) -- explicit
 ├──4,073.86 MB (85.17%) -- gfx
 │  ├──4,073.18 MB (85.15%) ── heap-textures
 │  └──────0.68 MB (00.01%) ++ (5 tiny)
 ├────185.48 MB (03.88%) -- js-non-window
 │    ├──145.96 MB (03.05%) -- zones
 │    │  ├──102.89 MB (02.15%) ++ zone(0x76cb800)
 │    │  └───43.07 MB (00.90%) ++ (31 tiny)
 │    └───39.52 MB (00.83%) ++ (2 tiny)
 ├────182.37 MB (03.81%) -- window-objects
 │    ├───67.81 MB (01.42%) -- top(none)/detached
 │    │   ├──63.47 MB (01.33%) ++ window(chrome://browser/content/browser.xul)
 │    │   └───4.34 MB (00.09%) ++ (5 tiny)
 │    ├───60.18 MB (01.26%) ++ (10 tiny)
 │    └───54.39 MB (01.14%) ++ top([URL], id=1954)
 ├────134.19 MB (02.81%) -- heap-overhead
 │    ├──131.69 MB (02.75%) ── waste
 │    └────2.50 MB (00.05%) ++ (2 tiny)
 ├────114.44 MB (02.39%) ++ (21 tiny)
 └─────92.99 MB (01.94%) ── heap-unclassified


 0.22 MB ── canvas-2d-pixels
 0.00 MB ── gfx-d2d-surface-cache
 4.00 MB ── gfx-d2d-surface-vram
   208.90 MB ── gfx-d2d-vram-draw-target
 5.34 MB ── gfx-d2d-vram-source-surface
 1.38 MB ── gfx-surface-win32
 0.00 MB ── gfx-textures
   0 ── ghost-windows
 4,394.14 MB ── heap-allocated
 4,528.33 MB ── heap-committed
   3.05% ── heap-overhead-ratio
   0 ── host-object-urls
 0.00 MB ── imagelib-surface-cache
24.58 MB ── js-main-runtime-temporary-peak
 5,082.21 MB ── private
 5,221.40 MB ── resident
 6,651.32 MB ── vsize
 8,376,692.50 MB ── vsize-max-contiguous


Re: What are the most important new APIs to document in Q3/Q4?

2014-06-26 Thread Benoit Jacob
2014-06-26 9:09 GMT-04:00 Eric Shepherd esheph...@mozilla.com:

 Hi! The docs team is trying to build our schedule for the next quarter or
 two, and part of that is deciding which APIs to spend lots of our time
 writing about. I'd like to know what y'all think the most important APIs
 are for docs attention in the next few months.

 Here are a few possibilities we've heard of. I'd like your opinions on
 which of these are the most important -- for Mozilla, the open Web, and of
 course for Firefox OS. PLEASE feel free to suggest others. I'm sure there
 are APIs we don't know about at all, or aren't on this list.

 DO NOT ASSUME WE KNOW YOUR API EXISTS. Not even if it should be obvious.
 Especially not then. :)

 * WebRTC
 * WebGL (our current docs are very weak and out of date)


Not expressing any opinion on whether WebGL should be prioritized, but I
recently had to teach some WebGL and, not being fully satisfied with
existing tutorials, I put together a code-only tutorial consisting of 12
increasingly involved WebGL examples,

http://bjacob.github.io/webgl-tutorial/

and I would be happy to work a bit with someone to turn it into proper
documentation.

Benoit



 * Service Workers
 * Shared Workers
 * Web Activities
 * ??

 What are your top five APIs that you think need documentation attention?
 For the purposes of this discussion, consider any that you know aren't
 already documented (you don't have to search MDN -- if there happen to be
 any you're annoyed by lack of/sucky docs, list 'em). Also consider any that
 will land in Q3 or Q4.

 We will collate the input we get to build our plan for the next quarter
 and to start a rough sketch for Q4!

 Thanks in advance!

 --
 Eric Shepherd
 Developer Documentation Lead
 Mozilla
 Blog: http://www.bitstampede.com/
 Twitter: @sheppy



Re: PSA: Refcounted classes should have a non-public destructor should be MOZ_FINAL where possible

2014-06-20 Thread Benoit Jacob
Here's an update on this front.

In Bug 1027251 <https://bugzilla.mozilla.org/show_bug.cgi?id=1027251> we
added a static_assert as discussed in this thread, which discovered all
remaining instances, and we fixed the easy ones, which were the majority.

The harder ones have been temporarily whitelisted. See
HasDangerousPublicDestructor<T>.

There are 11 such classes. Follow-up bugs have been filed for each of them,
blocking the tracking Bug 1028132
<https://bugzilla.mozilla.org/show_bug.cgi?id=1028132>.

Help is very welcome to fix these 11 classes! I won't have more time to
work on this for now.

The trickiest one is probably going to be mozilla::ipc::SharedMemory, which
is refcounted but to which IPDL-generated code takes nsAutoPtr's... so if
you have data that you care about, don't put it in a SharedMemory, for
now... the bug for this one is Bug 1028148
<https://bugzilla.mozilla.org/show_bug.cgi?id=1028148>.

This is only about nsISupportsImpl.h refcounting. We considered doing the
same for MFBT RefCounted (Bug 1028122
<https://bugzilla.mozilla.org/show_bug.cgi?id=1028122>) but we can't,
because as C++ base classes have no access to protected derived class
members, RefCounted inherently forces making destructors public (unless we
befriend everywhere), which is also the reason why we had concluded earlier
that RefCounted is a bad idea.
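
A simplified sketch of the problem (this is not the MFBT implementation,
just a CRTP base class in the same spirit):

  template <class Derived>
  class RefCountedBase {
   public:
    void AddRef() { ++mRefCnt; }
    void Release() {
      if (--mRefCnt == 0) {
        // The base has to destroy the derived object; this fails to compile
        // if ~Derived() is protected or private (unless the base is
        // befriended), which is what forces destructors to be public.
        delete static_cast<Derived*>(this);
      }
    }
   protected:
    ~RefCountedBase() {}
   private:
    int mRefCnt = 0;
  };

  class Widget : public RefCountedBase<Widget> {
   public:
    ~Widget() {}  // has to stay public for RefCountedBase<Widget>::Release() to compile
  };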

We haven't started checking final-ness yet. It's an open question AFAIK how
we would enforce that, as there are legitimate and widespread uses for
non-final refcounting. We would probably have to offer separate _NONFINAL
refcounting macros, or something like that.

Thanks,
Benoit




2014-05-28 16:24 GMT-04:00 Daniel Holbert dholb...@mozilla.com:

 Hi dev-platform,

 PSA: if you are adding a concrete class with AddRef/Release
 implementations (e.g. via NS_INLINE_DECL_REFCOUNTING), please be aware
 of the following best-practices:

  (a) Your class should have an explicitly-declared non-public
 destructor. (should be 'private' or 'protected')

  (b) Your class should be labeled as MOZ_FINAL (or, see below).


 WHY THIS IS A GOOD IDEA
 ===
 We'd like to ensure that refcounted objects are *only* deleted via their
 ::Release() methods.  Otherwise, we're potentially susceptible to
 double-free bugs.

 We can go a long way towards enforcing this rule at compile-time by
 giving these classes non-public destructors.  This prevents a whole
 category of double-free bugs.

 In particular: if your class has a public destructor (the default), then
 it's easy for you or someone else to accidentally declare an instance on
 the stack or as a member-variable in another class, like so:
 MyClass foo;
 This is *extremely* dangerous. If any code wraps 'foo' in a nsRefPtr
 (say, if some function that we pass 'foo' or '&foo' into declares a
 nsRefPtr to it for some reason), then we'll get a double-free. The
 object will be freed when the nsRefPtr goes out of scope, and then again
 when the MyClass instance goes out of scope. But if we give MyClass a
 non-public destructor, then it'll make it a compile error (in most code)
 to declare a MyClass instance on the stack or as a member-variable.  So
 we'd catch this bug immediately, at compile-time.

 So, that explains why a non-public destructor is a good idea. But why
 MOZ_FINAL?  If your class isn't MOZ_FINAL, then that opens up another
 route to trigger the same sort of bug -- someone can come along and add
 a subclass, perhaps not realizing that they're subclassing a refcounted
 class, and the subclass will (by default) have a public destructor,
 which means then that anyone can declare
   MySubclass foo;
 and run into the exact same problem with the subclass.  A MOZ_FINAL
 annotation will prevent that by keeping people from naively adding
 subclasses.
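
 Putting this together, a minimal sketch of the pattern (hand-rolled
 AddRef/Release here instead of the real NS_INLINE_DECL_REFCOUNTING macro,
 just to keep the example self-contained; MOZ_FINAL corresponds to C++11
 'final'):

   class MyClass final {
    public:
     void AddRef() { ++mRefCnt; }
     void Release() { if (--mRefCnt == 0) delete this; }
    private:
     ~MyClass() {}   // non-public: "MyClass foo;" on the stack no longer compiles
     int mRefCnt = 0;
   };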

 BUT WHAT IF I NEED SUBCLASSES
 =
 First, if your class is abstract, then it shouldn't have AddRef/Release
 implementations to begin with.  Those belong on the concrete subclasses
 -- not on your abstract base class.

 But if your class is concrete and refcounted and needs to have
 subclasses, then:
  - Your base class *and each of its subclasses* should have virtual,
 protected destructors, to prevent the MySubclass foo; problem
 mentioned above.
  - Your subclasses themselves should also probably be declared as
 MOZ_FINAL, to keep someone from naively adding another subclass
 without recognizing the above.
  - Your subclasses should definitely *not* declare their own
 AddRef/Release methods. (They should share the base class's methods &
 refcount.)

 For more information, see
 https://bugzilla.mozilla.org/show_bug.cgi?id=984786 , where I've fixed
 this sort of thing in a bunch of existing classes.  I definitely didn't
 catch everything there, so please feel encouraged to continue this work
 in other bugs. (And if you catch any cases that look like potential
 double-frees, mark them as security-sensitive.)

 Thanks!
 ~Daniel
 

Re: C++ standards proposals of potential interest, and upcoming committee meeting

2014-06-09 Thread Benoit Jacob
2014-06-09 15:31 GMT-04:00 Botond Ballo bba...@mozilla.com:

 Cairo-based 2D drawing API (latest revision):
   http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4021.pdf


I would like the C++ committee's attention to be drawn to the dangers, for
a committee, of trying to make decisions outside of its domain of
expertise. I see more potential for harm than for good in having the C++
committee join the ranks of non-graphics specialists who think they know
how to do graphics...

If it helps, we can give a pile of evidence for how having generalist Web
circles try to standardize graphics APIs has repeatedly produced
unnecessarily poor APIs...

Benoit






 Reflection proposals (these are very early-stage proposals, but they
 give an idea of the directions people are exploring):
   http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n3987.pdf
   http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n3996.pdf
   http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4027.pdf


 The Committee is meeting next week in Rapperswil, Switzerland. I will be
 attending.

 If anyone has any feedback on the above proposals, or any other proposals,
 or anything else you'd like me to communicate at the meeting, or anything
 I can find out for you at the meeting, please let me know!

 Shortly after the meeting I will blog about what happened there - stay
 tuned!

 Cheers,
 Botond


Re: C++ standards proposals of potential interest, and upcoming committee meeting

2014-06-09 Thread Benoit Jacob
2014-06-09 15:56 GMT-04:00 Botond Ballo bba...@mozilla.com:

 - Original Message -
  From: Benoit Jacob jacob.benoi...@gmail.com
  To: Botond Ballo bba...@mozilla.com
  Cc: dev-platform dev-platform@lists.mozilla.org
  Sent: Monday, June 9, 2014 3:45:20 PM
  Subject: Re: C++ standards proposals of potential interest, and upcoming
 committee meeting
 
  2014-06-09 15:31 GMT-04:00 Botond Ballo bba...@mozilla.com:
 
   Cairo-based 2D drawing API (latest revision):
 http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4021.pdf
  
 
  I would like the C++ committee's attention to be drawn to the dangers,
 for
  committee, to try to make decisions outside of its domain of expertise. I
  see more potential for harm than for good in having the C++ committee
 join
  the ranks of non graphics specialists thinking they know how to do
  graphics...

 Does this caution apply even if the explicit goal of this API is to allow
 people learning C++ and/or creating simple graphical applications to be
 able to do so with minimal overhead (setting up third-party libraries and
 such), rather than necessarily provide a tool for expert-level/heavy-duty
 graphics work?


That would ease my concerns a lot, if that were the case, but skimming
through the proposal, it explicitly seems not to be the case.

The Motivation and Scope section shows that this aims to target drawing
GUIs and cover other needs of graphical applications, so it's not just
about learning or tiny use cases.

Even more worryingly, the proposal talks about GPUs and Direct3D and OpenGL
and even Mantle, and that scares me, given what we know about how sad it is
to have to take an API like Cairo (or Skia, or Moz2D, or Canvas 2D, it
doesn't matter) and try to make it efficiently utilize GPUs. The case for a
Cairo-like or Skia-like API could totally be made, but the only mention of
GPUs should be to say that they are mostly outside of its scope; anything
more enthusiastic than that confirms fears that the proposal's authors are
not speaking from experience.

Benoit





 Cheers,
 Botond



Re: C++ standards proposals of potential interest, and upcoming committee meeting

2014-06-09 Thread Benoit Jacob
2014-06-09 16:12 GMT-04:00 Benoit Jacob jacob.benoi...@gmail.com:




 2014-06-09 15:56 GMT-04:00 Botond Ballo bba...@mozilla.com:

 - Original Message -
  From: Benoit Jacob jacob.benoi...@gmail.com
  To: Botond Ballo bba...@mozilla.com
  Cc: dev-platform dev-platform@lists.mozilla.org
  Sent: Monday, June 9, 2014 3:45:20 PM
  Subject: Re: C++ standards proposals of potential interest, and
 upcoming committee meeting
 
  2014-06-09 15:31 GMT-04:00 Botond Ballo bba...@mozilla.com:
 
   Cairo-based 2D drawing API (latest revision):
 http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4021.pdf
  
 
  I would like the C++ committee's attention to be drawn to the dangers,
 for
  committee, to try to make decisions outside of its domain of expertise.
 I
  see more potential for harm than for good in having the C++ committee
 join
  the ranks of non graphics specialists thinking they know how to do
  graphics...

 Does this caution apply even if the explicit goal of this API is to allow
 people learning C++ and/or creating simple graphical applications to be
 able to do so with minimal overhead (setting up third-party libraries and
 such), rather than necessarily provide a tool for expert-level/heavy-duty
 graphics work?


 That would ease my concerns a lot, if that were the case, but skimming
 through the proposal, it explicitly seems not to be the case.

 The Motivation and Scope section shows that this aims to target drawing
 GUIs and cover other needs of graphical applications, so it's not just
 about learning or tiny use cases.

 Even more worryingly, the proposal talks about GPUs and Direct3D and
 OpenGL and even Mantle, and that scares me, given what we know about how
 sad it is to have to take an API like Cairo (or Skia, or Moz2D, or Canvas
 2D, it doesn't matter) and try to make it efficiently utilize GPUs. The
 case of a Cairo-like or Skia-like API could totally be made, but the only
 mention of GPUs should be to say that they are mostly outside of its scope;
 anything more enthusiastic than that confirms fears that the proposal's
 authors are not talking out of experience.


It's actually even worse than I realized: the proposal is peppered with
performance-related comments about GPUs. Just search for "GPU" in it: there
are 42 matches, most of them scarily talking about GPU performance
characteristics (a typical one is "GPU resources are expensive to copy").

This proposal should either not care at all about GPU details, which would
be totally fine for a basic software 2D renderer, which could already cover
the needs of many applications; or, if it were to seriously care about
running fast on GPUs, it would not use Cairo as its starting point and it
would look totally different (it would try to lend itself to seamlessly
batching and reordering drawing primitives; typically, a declarative /
scene-graph API would be a better starting point).
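
To make that last point concrete, here is a toy sketch (a purely
hypothetical API, not anything from the proposal) of what "declarative"
means here: primitives get recorded rather than executed immediately, which
leaves the implementation free to batch and reorder them for the GPU.

  #include <vector>

  struct Rect { float x, y, w, h; };
  struct DrawRectNode { Rect rect; unsigned color; };

  class DisplayList {
   public:
    // Record the primitive instead of drawing it right away.
    void AddRect(const Rect& r, unsigned color) { mNodes.push_back({r, color}); }
    // The backend can later sort nodes by state, merge them into large GPU
    // batches, cull them, etc., before anything touches the GPU.
    const std::vector<DrawRectNode>& Nodes() const { return mNodes; }
   private:
    std::vector<DrawRectNode> mNodes;
  };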

Benoit





 Benoit





 Cheers,
 Botond





Re: C++ standards proposals of potential interest, and upcoming committee meeting

2014-06-09 Thread Benoit Jacob
2014-06-09 16:27 GMT-04:00 Jet Villegas j...@mozilla.com:

 It seems healthy for the core C++ language to explore new territory here.
 Modern primitives for things like pixels and colors would be a good thing,
 I think. Let the compiler vendors compete to boil it down to the CPU/GPU.


In the Web world, we have such an API, Canvas 2D, and the "compiler
vendors" are the browser vendors. After years of intense competition
between browser vendors, at very high cost to all of them, nobody has
figured out yet how to make Canvas 2D efficiently utilize GPUs. There are
basically two kinds of Canvas 2D applications: those for which GPUs have
been useless so far, and those which have benefited much more from getting
ported to WebGL than they did from accelerated Canvas 2D.

Benoit





 There will always be the argument for keeping such things out of Systems
 languages, but that school of thought won't use those features anyway. I
 was taught to not divide by 2 because bit-shifting is how you do fast
 graphics in C/C++. I sure hope the compilers have caught up and such
 trickery is no longer required--Graphics shouldn't be such a black art.

 --Jet



Re: Intent to implement: DOMMatrix

2014-06-08 Thread Benoit Jacob
2014-06-08 8:56 GMT-04:00 fb.01...@gmail.com:

 On Monday, June 2, 2014 12:11:29 AM UTC+2, Benoit Jacob wrote:
  My ROI for arguing about matrix math topics on standards mailing lists has
  been very low, presumably because these are specialist topics outside of
  the area of expertise of these groups.
 
  Here are a couple more objections by the way:
 
  [...]
 
  Benoit

 Benoit, would you mind producing a strawman for ES7, or advising someone
 who can? Brendan Eich is doing some type stuff which is probably relevant
 to this (also for SIMD etc.). I firmly believe proper Matrix handling 
 APIs for JS are wanted by quite a few people. DOMMatrix-using APIs may then
 be altered to accept JS matrices (or provide a way to translate from
 JSMatrix to DOMMatrix and back again). This may help in the long term while
 the platform can have the proposed APIs. Thanks!


Don't put matrix arithmetic concepts directly in a generalist language like
JS, or in its standard library. That's too much of a specialist topic and
with too many compromises to decide on.

Instead, at the language level, simply make sure that the language offers
the right features to allow third parties to build good matrix classes on
top of it.

For example, C++'s templates, OO concepts, alignment/SIMD extensions, etc.,
make it a decent language to implement matrix libraries on top of, and as a
result, C++ programmers are much better served by the offering of
independent matrix libraries than they would be by a standard-library
attempt at matrix library design. Another example is Fortran, which IIRC
has specific features enabling fast array arithmetic, but leaves the actual
matrix arithmetic to 3rd-party libraries (BLAS, LAPACK). I think that
history shows that leaving matrix arithmetic up to 3rd parties is best, but
there are definitely language-level issues to discuss to enable 3rd parties
to do that well.
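
As a toy illustration of that point, C++'s language features alone already
let a third party write a generic fixed-size matrix type, with no matrix
support in the language or its standard library (this is only a sketch, not
how a real matrix library is written):

  #include <array>
  #include <cstddef>

  template <typename Scalar, std::size_t Rows, std::size_t Cols>
  class Matrix {
   public:
    Scalar& operator()(std::size_t r, std::size_t c) { return mData[r * Cols + c]; }
    const Scalar& operator()(std::size_t r, std::size_t c) const { return mData[r * Cols + c]; }
   private:
    std::array<Scalar, Rows * Cols> mData{};  // zero-initialized storage
  };

  // Dimensions are checked at compile time by the template parameters.
  template <typename S, std::size_t R, std::size_t K, std::size_t C>
  Matrix<S, R, C> operator*(const Matrix<S, R, K>& a, const Matrix<S, K, C>& b) {
    Matrix<S, R, C> out;
    for (std::size_t i = 0; i < R; ++i)
      for (std::size_t j = 0; j < C; ++j)
        for (std::size_t k = 0; k < K; ++k)
          out(i, j) += a(i, k) * b(k, j);
    return out;
  }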

Benoit







Re: Intent to implement: DOMMatrix

2014-06-07 Thread Benoit Jacob
2014-06-07 12:49 GMT-04:00 L. David Baron dba...@dbaron.org:

 On Monday 2014-06-02 20:45 -0700, Rik Cabanier wrote:
  - change isIdentity() so it's a flag.

 I'm a little worried about this one at first glance.

 I suspect isIdentity is going to be used primarily for optimization.
 But we want optimizations on the Web to be good -- we should care
 about making it easy for authors to care about performance.  And I'm
 worried that a flag-based isIdentity will not be useful for
 optimization because it won't hit many of the cases that authors
 care about, e.g., translating and un-translating, or scaling and
 un-scaling.


Note that the current way that isIdentity() works also fails to offer that
characteristic, outside of accidental cases, due to how floating point
works.

The point of this optimization is not so much to detect when a generic
transformation happens to be of a special form; it is rather to represent
transformations as a kind of variant type, where a matrix transformation is
one possible variant, existing alongside the default, more optimized
variant: the identity transformation.

Earlier in this thread I pleaded for the removal of isIdentity(). What I
mean is that, since it is only defensible as a variant optimization as
described above, it doesn't make sense in a _matrix_ class. If we want to
have such a variant type, we should give it a name that does not contain
the word matrix, and we should have it one level above where we actually
do matrix arithmetic.

Strawman class diagram:

                  Transformation
                 /      |      \
                /       |       \
         Identity     Matrix    Other transform types
                                  (e.g. Translation)

In such a world, the class containing the word Matrix in its name would
not have an isIdentity() method; and for use cases where having a variant
type that can avoid being a full-blown matrix is meaningful, we would have
such a variant type, like Transformation in the above diagram, and its
isIdentity() method would merely be asking the variant type for its
type field.
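
A rough sketch of what such a variant type could look like (hypothetical
names, not a spec proposal):

  #include <variant>

  struct Identity {};
  struct Translation { double x, y; };
  struct Matrix4x4 { double m[16]; };

  class Transformation {
   public:
    // Just reads the type tag; no fuzzy comparison of matrix entries.
    bool isIdentity() const { return std::holds_alternative<Identity>(mData); }
   private:
    std::variant<Identity, Translation, Matrix4x4> mData{Identity{}};
  };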

Benoit




 -David




Re: Intent to implement: DOMMatrix

2014-06-05 Thread Benoit Jacob
2014-06-05 2:48 GMT-04:00 Rik Cabanier caban...@gmail.com:




 On Wed, Jun 4, 2014 at 2:20 PM, Milan Sreckovic msrecko...@mozilla.com
 wrote:

 In general, is “this is how it worked with SVGMatrix” one of the design
 principles?

 I was hoping this would be the time matrix rotate() method goes to
 radians, like the canvas rotate, and unlike SVGMatrix version that takes
 degrees...


 degrees is easier to understand for authors.
 With the new DOMMatrix constructor, you can specify radians:

 var m = new DOMMatrix('rotate(1.75rad)');

 Not specifying the unit will make it default to degrees (like angles in
 SVG)



The situation isn't symmetric: radians are inherently simpler to implement
(thus slightly faster), basically because only in radians is it true that
sin(x) ~= x for small x.
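
A trivial sketch of that asymmetry (hypothetical helpers, nothing to do with
the DOMMatrix API itself): the standard trigonometric functions take
radians, so a degrees-based rotate() pays for a conversion on every call,
while a radians-based one passes the angle straight through.

  #include <cmath>

  const double kPi = 3.14159265358979323846;

  // Fill the 2x2 rotation block [cos -sin; sin cos].
  void RotateRadians(double a, double m[4]) {
    m[0] = std::cos(a); m[1] = -std::sin(a);
    m[2] = std::sin(a); m[3] =  std::cos(a);
  }

  void RotateDegrees(double deg, double m[4]) {
    RotateRadians(deg * (kPi / 180.0), m);  // extra conversion on every call
  }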

I also doubt that degrees are simpler to understand, and if anything you
might just want to provide a simple name for the constant 2*pi:

var turn = Math.PI * 2;

Now, what is easier to understand:

rotate(turn / 5)

or

rotate(72)

?

Benoit


Re: Intent to implement: DOMMatrix

2014-06-05 Thread Benoit Jacob
2014-06-05 9:08 GMT-04:00 Rik Cabanier caban...@gmail.com:




 On Thu, Jun 5, 2014 at 5:05 AM, Benoit Jacob jacob.benoi...@gmail.com
 wrote:




 2014-06-05 2:48 GMT-04:00 Rik Cabanier caban...@gmail.com:




 On Wed, Jun 4, 2014 at 2:20 PM, Milan Sreckovic msrecko...@mozilla.com
 wrote:

 In general, is “this is how it worked with SVGMatrix” one of the design
 principles?

 I was hoping this would be the time matrix rotate() method goes to
 radians, like the canvas rotate, and unlike SVGMatrix version that takes
 degrees...


 degrees is easier to understand for authors.
 With the new DOMMatrix constructor, you can specify radians:

  var m = new DOMMatrix('rotate(1.75rad)');

 Not specifying the unit will make it default to degrees (like angles in
 SVG)



 The situation isn't symmetric: radians are inherently simpler to
 implement (thus slightly faster), basically because only in radians is it
 true that sin(x) ~= x for small x.

 I also doubt that degrees are simpler to understand, and if anything you
 might just want to provide a simple name for the constant 2*pi:

 var turn = Math.PI * 2;

 Now, what is easier to understand:

 rotate(turn / 5)

 or

 rotate(72)


 The numbers don't lie :-)
 Just do a google search for CSS transform rotate. I went over 20 pages
 of results and they all used deg.


The other problem is that outside of SVG, other parts of the platform that
are being proposed to use SVGMatrix were using radians. For example, the
Canvas 2D context uses radians

http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#dom-context-2d-rotate

Not to mention that JavaScript also uses radians, e.g. in Math.cos().

Benoit


Re: Intent to implement: DOMMatrix

2014-06-05 Thread Benoit Jacob
2014-06-05 18:59 GMT-04:00 Matt Woodrow mwood...@mozilla.com:

 On 6/06/14 12:05 am, Benoit Jacob wrote:


 The situation isn't symmetric: radians are inherently simpler to implement
 (thus slightly faster), basically because only in radians is it true that
 sin(x) ~= x for small x.

 I also doubt that degrees are simpler to understand, and if anything you
 might just want to provide a simple name for the constant 2*pi:

 var turn = Math.PI * 2;

 Now, what is easier to understand:

 rotate(turn / 5)

 or

 rotate(72)

 ?



 I don't think this is a fair comparison, you used a fraction of a constant
 for one and a raw number for the other.

 Which is easier to understand:

 var turn = 360;

 rotate(turn / 5)

 or

 rotate(1.25663706143592)

 ?


I just meant that neither radians nor degrees are significantly easier than
the other, since in practice this is just changing the value of the turn
constant, which people shouldn't be writing manually; i.e. even in degrees,
people should IMHO write turn/4 instead of 90.

Benoit


Re: Intent to implement: DOMMatrix

2014-06-04 Thread Benoit Jacob
2014-06-04 20:28 GMT-04:00 Cameron McCormack c...@mcc.id.au:

 On 05/06/14 07:20, Milan Sreckovic wrote:

 In general, is “this is how it worked with SVGMatrix” one of the
 design principles?

 I was hoping this would be the time matrix rotate() method goes to
 radians, like the canvas rotate, and unlike SVGMatrix version that
 takes degrees...


 By the way, in the SVG Working Group we have been discussing (but haven't
 decided yet) whether to perform a wholesale overhaul of the SVG DOM.

 http://dev.w3.org/SVG/proposals/improving-svg-dom/

 If we go through with that, then we could drop SVGMatrix and use DOMMatrix
 (which wouldn't then need to be compatible with SVGMatrix) for all the SVG
 DOM methods we wanted to retain that deal with matrices. I'm hoping we'll
 resolve whether to go ahead with this at our next meeting, in August.


Thanks, that's very interesting input for this thread, as the entire
conversation here has been based on the axiom that we have to keep
compatibility with SVGMatrix.

Benoit


Re: Intent to implement: DOMMatrix

2014-06-03 Thread Benoit Jacob
2014-06-02 23:45 GMT-04:00 Rik Cabanier caban...@gmail.com:

 To recap I think the following points have been resolved:
 - remove determinant (unless someone comes up with a strong use case)
 - change is2D() so it's a flag instead of calculated on the fly
 - change isIdentity() so it's a flag.
 - update constructors so they set/copy the flags appropriately

 Still up for discussion:
 - rename isIdentity
 - come up with better way for the in-place transformations as opposed to
 by
 - is premultiply needed?



This list misses some of the points that I care more about:
 - Should DOMMatrix really try to cover both 3D projective transformations
and 2D affine transformations, or should that be split into separate
classes?
 - Should we really take SVG's matrix and other existing bad matrix APIs
and bless them and engrave them in the marble of The New HTML5 That Is Good
By Definition?

Benoit


Re: Intent to implement: DOMMatrix

2014-06-03 Thread Benoit Jacob
2014-06-03 16:20 GMT-04:00 Rik Cabanier caban...@gmail.com:




 On Tue, Jun 3, 2014 at 6:06 AM, Benoit Jacob jacob.benoi...@gmail.com
 wrote:




 2014-06-03 3:34 GMT-04:00 Dirk Schulze dschu...@adobe.com:


 On Jun 2, 2014, at 12:11 AM, Benoit Jacob jacob.benoi...@gmail.com
 wrote:

  Objection #6:
 
  The determinant() method, being in this API the only easy way to get
  something that looks roughly like a measure of invertibility, will
 probably
  be (mis-)used as a measure of invertibility. So I'm quite confident
 that it
  has a strong mis-use case. Does it have a strong good use case? Does it
  outweigh that? Note that if the intent is precisely to offer some kind
 of
  measure of invertibility, then that is yet another thing that would be
 best
  done with a singular values decomposition (along with solving, and with
  computing a polar decomposition, useful for interpolating matrices), by
  returning the ratio between the lowest and the highest singular value.

 Looking at use cases, then determinant() is indeed often used for:

 * Checking if a matrix is invertible.
 * Part of actually inverting the matrix.
 * Part of some decomposing algorithms as the one in CSS Transforms.

 I should note that the determinant is the most common way to check for
 invertibility of a matrix and part of actually inverting the matrix. Even
 Cairo Graphics, Skia and Gecko’s representation of matrix3x3 do use the
 determinant for these operations.


 I didn't say that the determinant had no good use case. I said that it had
 more bad use cases than it had good ones. If its only use case is checking
 whether the cofactors formula will succeed in computing the inverse, then
 make that part of the inversion API so you don't compute the determinant
 twice.

 Here is a good use case of determinant, except it's bad because it
 computes the determinant twice:

   if (matrix.determinant() != 0) {// once
 result = matrix.inverse(); // twice
   }

 If that's the only thing we use the determinant for, then we're better
 served by an API like this, allowing to query success status:

   var matrixInversionResult = matrix.inverse();   // once
   if (matrixInversionResult.invertible()) {
 result = matrixInversionResult.inverse();
   }


 This seems to be the main use case for Determinant(). Any objections if we
 add isInvertible to DOMMatrixReadOnly?


Can you give an example of how this API would be used and how it would
*not* force the implementation to compute the determinant twice if people
call isInvertible() and then inverse() ?

Benoit




 Typical bad uses of the determinant as measures of invertibility
 typically occur in conjunction with people thinking they do the right thing
 with fuzzy compares, like this typical bad pattern:

   if (matrix.determinant() < 1e-6) {
 return error;
   }
   result = matrix.inverse();

 Multiple things are wrong here:

  1. First, as mentioned above, the determinant is being computed twice
 here.

  2. Second, floating-point scale invariance is broken: floating point
 computations should generally work for all values across the whole exponent
 range, which for doubles goes from 1e-300 to 1e+300 roughly. Take the
 matrix that's 0.01*identity, and suppose we're dealing with 4x4 matrices.
 The determinant of that matrix is 1e-8, so that matrix is incorrectly
 treated as non-invertible here.

  3. Third, if the primary use for the determinant is invertibility and
 inversion is implemented by cofactors (as it would be for 4x4 matrices)
 then in that case only an exact comparison of the determinant to 0 is
 relevant. That's a case where no fuzzy comparison is meaningful. If one
 wanted to guard against cancellation-induced imprecision, one would have to
 look at cofactors themselves, not just at the determinant.

 In full generality, the determinant is just the volume of the unit cube
 under the matrix transformation. It is exactly zero if and only if the
 matrix is singular. That doesn't by itself give any interpretation of other
 nonzero values of the determinant, not even very small ones.

 For special classes of matrices, things are different. Some classes of
 matrices have a specific determinant, for example rotations have
 determinant one, which can be used to do useful things. So in a
 sufficiently advanced or specialized matrix API, the determinant is useful
 to expose. DOMMatrix is special in that it is not advanced and not
 specialized.

 Benoit


 Greetings,
 Dirk






Re: Intent to implement: DOMMatrix

2014-06-03 Thread Benoit Jacob
2014-06-03 17:34 GMT-04:00 Benoit Jacob jacob.benoi...@gmail.com:




 2014-06-03 16:20 GMT-04:00 Rik Cabanier caban...@gmail.com:




 On Tue, Jun 3, 2014 at 6:06 AM, Benoit Jacob jacob.benoi...@gmail.com
 wrote:




 2014-06-03 3:34 GMT-04:00 Dirk Schulze dschu...@adobe.com:


 On Jun 2, 2014, at 12:11 AM, Benoit Jacob jacob.benoi...@gmail.com
 wrote:

  Objection #6:
 
  The determinant() method, being in this API the only easy way to get
  something that looks roughly like a measure of invertibility, will
 probably
  be (mis-)used as a measure of invertibility. So I'm quite confident
 that it
  has a strong mis-use case. Does it have a strong good use case? Does
 it
  outweigh that? Note that if the intent is precisely to offer some
 kind of
  measure of invertibility, then that is yet another thing that would
 be best
  done with a singular values decomposition (along with solving, and
 with
  computing a polar decomposition, useful for interpolating matrices),
 by
  returning the ratio between the lowest and the highest singular value.

 Looking at use cases, then determinant() is indeed often used for:

 * Checking if a matrix is invertible.
 * Part of actually inverting the matrix.
 * Part of some decomposing algorithms as the one in CSS Transforms.

 I should note that the determinant is the most common way to check for
 invertibility of a matrix and part of actually inverting the matrix. Even
 Cairo Graphics, Skia and Gecko’s representation of matrix3x3 do use the
 determinant for these operations.


 I didn't say that determinant had no good use case. I said that it had
 more bad use cases than it had good ones. If its only use case is checking
 whether the cofactors formula will succeed in computing the inverse, then
 make that part of the inversion API so you don't compute the determinant
 twice.

 Here is a good use case of determinant, except it's bad because it
 computes the determinant twice:

   if (matrix.determinant() != 0) {// once
 result = matrix.inverse(); // twice
   }

 If that's the only thing we use the determinant for, then we're better
 served by an API like this, allowing to query success status:

   var matrixInversionResult = matrix.inverse();   // once
   if (matrixInversionResult.invertible()) {
 result = matrixInversionResult.inverse();
   }


 This seems to be the main use case for Determinant(). Any objections if
 we add isInvertible to DOMMatrixReadOnly?


 Can you give an example of how this API would be used and how it would
 *not* force the implementation to compute the determinant twice if people
 call isInvertible() and then inverse() ?


Actually, inverse() is already spec'd to throw if the inversion fails. In
that case (assuming we keep it that way) there is no need at all for any
isInvertible kind of method. Note that in floating-point arithmetic there
is no absolute notion of invertibility; there are just different matrix
inversion algorithms, each failing on different matrices, so invertibility
only makes sense with respect to one inversion algorithm. For that reason,
it is actually better to keep the current exception-throwing API than to
introduce a separate isInvertible getter.

Benoit



 Benoit




 Typical bad uses of the determinant as measures of invertibility
 typically occur in conjunction with people thinking they do the right thing
 with fuzzy compares, like this typical bad pattern:

   if (matrix.determinant() < 1e-6) {
 return error;
   }
   result = matrix.inverse();

 Multiple things are wrong here:

  1. First, as mentioned above, the determinant is being computed twice
 here.

  2. Second, floating-point scale invariance is broken: floating point
 computations should generally work for all values across the whole exponent
 range, which for doubles goes from 1e-300 to 1e+300 roughly. Take the
 matrix that's 0.01*identity, and suppose we're dealing with 4x4 matrices.
 The determinant of that matrix is 1e-8, so that matrix is incorrectly
 treated as non-invertible here.

  3. Third, if the primary use for the determinant is invertibility and
 inversion is implemented by cofactors (as it would be for 4x4 matrices)
 then in that case only an exact comparison of the determinant to 0 is
 relevant. That's a case where no fuzzy comparison is meaningful. If one
 wanted to guard against cancellation-induced imprecision, one would have to
 look at cofactors themselves, not just at the determinant.

 In full generality, the determinant is just the volume of the unit cube
 under the matrix transformation. It is exactly zero if and only if the
 matrix is singular. That doesn't by itself give any interpretation of other
 nonzero values of the determinant, not even very small ones.

 For special classes of matrices, things are different. Some classes of
 matrices have a specific determinant, for example rotations have
 determinant one, which can be used to do useful things. So in a
 sufficiently advanced or specialized matrix API, the determinant

Re: Intent to implement: DOMMatrix

2014-06-03 Thread Benoit Jacob
2014-06-03 18:26 GMT-04:00 Rik Cabanier caban...@gmail.com:




 On Tue, Jun 3, 2014 at 2:40 PM, Benoit Jacob jacob.benoi...@gmail.com
 wrote:




 2014-06-03 17:34 GMT-04:00 Benoit Jacob jacob.benoi...@gmail.com:




 2014-06-03 16:20 GMT-04:00 Rik Cabanier caban...@gmail.com:




 On Tue, Jun 3, 2014 at 6:06 AM, Benoit Jacob jacob.benoi...@gmail.com
 wrote:




 2014-06-03 3:34 GMT-04:00 Dirk Schulze dschu...@adobe.com:


 On Jun 2, 2014, at 12:11 AM, Benoit Jacob jacob.benoi...@gmail.com
 wrote:

  Objection #6:
 
  The determinant() method, being in this API the only easy way to get
  something that looks roughly like a measure of invertibility, will
 probably
  be (mis-)used as a measure of invertibility. So I'm quite confident
 that it
  has a strong mis-use case. Does it have a strong good use case?
 Does it
  outweigh that? Note that if the intent is precisely to offer some
 kind of
  measure of invertibility, then that is yet another thing that would
 be best
  done with a singular values decomposition (along with solving, and
 with
  computing a polar decomposition, useful for interpolating
 matrices), by
  returning the ratio between the lowest and the highest singular
 value.

 Looking at use cases, then determinant() is indeed often used for:

 * Checking if a matrix is invertible.
 * Part of actually inverting the matrix.
 * Part of some decomposing algorithms as the one in CSS Transforms.

 I should note that the determinant is the most common way to check
 for invertibility of a matrix and part of actually inverting the matrix.
 Even Cairo Graphics, Skia and Gecko’s representation of matrix3x3 do use
 the determinant for these operations.


 I didn't say that determinant had no good use case. I said that it had
 more bad use cases than it had good ones. If its only use case is checking
 whether the cofactors formula will succeed in computing the inverse, then
 make that part of the inversion API so you don't compute the determinant
 twice.

 Here is a good use case of determinant, except it's bad because it
 computes the determinant twice:

   if (matrix.determinant() != 0) {// once
 result = matrix.inverse(); // twice
   }

 If that's the only thing we use the determinant for, then we're better
 served by an API like this, allowing to query success status:

   var matrixInversionResult = matrix.inverse();   // once
   if (matrixInversionResult.invertible()) {
 result = matrixInversionResult.inverse();
   }


 This seems to be the main use case for Determinant(). Any objections if
 we add isInvertible to DOMMatrixReadOnly?


 Can you give an example of how this API would be used and how it would
 *not* force the implementation to compute the determinant twice if people
 call isInvertible() and then inverse() ?


 Actually, inverse() is already spec'd to throw if the inversion fails. In
 that case (assuming we keep it that way) there is no need at all for any
 isInvertible kind of method. Note that in floating-point arithmetic there
 is no absolute notion of invertibility; there just are different matrix
 inversion algorithms each failing on different matrices, so invertibility
 only makes sense with respect to one inversion algorithm, so it is actually
 better to keep the current exception-throwing API than to introduce a
 separate isInvertible getter.


 That would require try/catch around all the invert() calls. This is ugly
 but more importantly, it will significantly slow down javascript execution.
 I'd prefer that we don't throw at all but we have to because SVGMatrix did.


So, if we have to have inverse() throw, do you agree that this removes the
need for any isInvertible() kind of method? For the reason I explained
above (invertibility is relative to a particular inversion algorithm) I
would rather have inversion and invertibility-checking be provided by a
single function. If we do have the option of not throwing, then that could
be a single function returning both the inverse and a boolean.
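
In C++ terms, the single-call shape I mean is something like this sketch
(2x2 case with hypothetical names; the actual Web API shape would of course
be different):

  #include <optional>

  struct Mat2 { double a, b, c, d; };

  // Inversion and invertibility-checking in one pass: the determinant is
  // computed once, and "not invertible" is an empty result rather than an
  // exception.
  std::optional<Mat2> Inverse(const Mat2& m) {
    const double det = m.a * m.d - m.b * m.c;
    if (det == 0.0) {
      return std::nullopt;
    }
    const double inv = 1.0 / det;
    return Mat2{m.d * inv, -m.b * inv, -m.c * inv, m.a * inv};
  }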

Benoit





  Typical bad uses of the determinant as measures of invertibility
 typically occur in conjunction with people thinking they do the right 
 thing
 with fuzzy compares, like this typical bad pattern:

   if (matrix.determinant() < 1e-6) {
 return error;
   }
   result = matrix.inverse();

 Multiple things are wrong here:

  1. First, as mentioned above, the determinant is being computed twice
 here.

  2. Second, floating-point scale invariance is broken: floating point
 computations should generally work for all values across the whole 
 exponent
 range, which for doubles goes from 1e-300 to 1e+300 roughly. Take the
 matrix that's 0.01*identity, and suppose we're dealing with 4x4 matrices.
 The determinant of that matrix is 1e-8, so that matrix is incorrectly
 treated as non-invertible here.

  3. Third, if the primary use for the determinant is invertibility and
 inversion is implemented by cofactors (as it would be for 4x4 matrices)
 then in that case only

Re: Intent to implement: DOMMatrix

2014-06-03 Thread Benoit Jacob
2014-06-03 18:29 GMT-04:00 Robert O'Callahan rob...@ocallahan.org:

 On Wed, Jun 4, 2014 at 10:26 AM, Rik Cabanier caban...@gmail.com wrote:

 That would require try/catch around all the invert() calls. This is ugly
 but more importantly, it will significantly slow down javascript
 execution.
 I'd prefer that we don't throw at all but we have to because SVGMatrix
 did.


 Are you sure that returning a special value (e.g. all NaNs) would not fix
 more code than it would break?

 I think returning all NaNs instead of throwing would be much better
 behavior.


FWIW, I totally agree! That is exactly what NaN is there for, and floating
point would be a nightmare if division-by-zero threw.

To summarize, my order of preference is:

  1. (my first choice) have no inverse() / invert() / isInvertible()
methods at all.

  2. (second choice) have inverse() returning NaN on non-invertible
matrices and possibly somehow returning a second boolean return value (e.g.
an out-parameter or a structured return value) to indicate whether the
matrix was invertible. Do not have a separate isInvertible().

  3. (worst case #1) keep inverse() throwing. Do not have a separate
isInvertible().

  4. (worst case #2) offer isInvertible() method separate from inverse().

Benoit



 Rob



Re: Intent to implement: DOMMatrix

2014-06-02 Thread Benoit Jacob
2014-06-01 23:19 GMT-04:00 Rik Cabanier caban...@gmail.com:




 On Sun, Jun 1, 2014 at 3:11 PM, Benoit Jacob jacob.benoi...@gmail.com
 wrote:




 2014-05-31 0:40 GMT-04:00 Rik Cabanier caban...@gmail.com:

  Objection #3:

 I dislike the way that this API exposes multiplication order. It's not
 obvious enough which of A.multiply(B) and A.multiplyBy(B) is doing A=A*B
 and which is doing A=B*A.


 The by methods do the transformation in-place. In this case, both are
 A = A * B
 Maybe you're thinking of preMultiply?


 Ah, I was totally confused by the method names. Multiply is already a
 verb, and the method name multiply already implicitly means multiply
 *by*. So it's very confusing that there is another method named multiplyBy.


 Yeah, we had discussion on that. 'by' is not ideal, but it is much shorter
 than 'InPlace'. Do you have a suggestion to improve the name?


My suggestion was the one below that part (multiply -> product,
multiplyBy -> multiply) but it seems that that's moot because:




 Methods on DOMMatrixReadOnly are inconsistently named: some, like
 multiply, are named after the /verb/ describing what they /do/, while
 others, like inverse, are named after the /noun/ describing what they
 /return/.

 Choose one and stick to it; my preference goes to the latter, i.e. rename
 multiply to product in line with the existing inverse and then the
 DOMMatrix.multiplyBy method can drop the By and become multiply.

 If you do rename multiply to product that leads to the question of
 what preMultiply should become.

 In an ideal world (not commenting on whether that's a thing we can get on
 the Web), product would be a global function, not a class method, so you
 could let people write product(X, Y) or product(Y, X) and not have to worry
 about naming differently the two product orders.


 Unfortunately, we're stuck with the API names that SVG gave to its matrix.
 The only way to fix this is to duplicate the API and support both old and
 new names which is very confusing,


Sounds like the naming is not even up for discussion, then? In that case,
what is up for discussion?

That's basically the core disagreement here: I'm not convinced that just
because something is in SVG, it should be propagated as a blessed
abstraction for the rest of the Web. Naming and branding matter: something
named SVGMatrix clearly suggests it should be used for dealing with SVG,
while something named DOMMatrix sounds like it's recommended for use
everywhere on the Web.

I would rather have SVG keep its own matrix class while the rest of the Web
gets something nicer.





  Objection #4:

 By exposing a inverse() method but no solve() method, this API will
 encourage people who have to solve linear systems to do so by doing
 matrix.inverse().transformPoint(...), which is inefficient and can be
 numerically unstable.

 But then of course once we open the pandora box of exposing solvers,
 the API grows a lot more. My point is not to suggest to grow the API more.
 My point is to discourage you and the W3C from getting into the matrix API
 design business. Matrix APIs are bound to either grow big or be useless. I
 believe that the only appropriate Matrix interface at the Web API level is
 a plain storage class, with minimal getters (basically a thin wrapper
 around a typed array without any nontrivial arithmetic built in).


 We already went over this at length about a year ago.
 Dirk's been asking for feedback on this interface on www-style and
 public-fx so can you raise your concerns there? Just keep in mind that we
 have to support the SVGMatrix and CSSMatrix interfaces.


 My ROI for arguing about matrix math topics on standards mailing lists has
 been very low, presumably because these are specialist topics outside of
 the area of expertise of these groups.


 It is a constant struggle. We need to strike a balance between
 mathematicians and average authors. Stay with it and prepare to repeat
 yourself; it's frustrating for everyone involved.
 If you really don't want to participate anymore, we can get to an
 agreement here and I can try to convince the others.


I'm happy to continue to provide input on matrix API design or other math
topics. I can't go spontaneously participate in conversations on all the
mailing lists though; dev-platform is the only one that I monitor closely,
and where I'm very motivated to get involved, because what really makes my
life harder is if the wrong API gets implemented in Gecko.



 Here are a couple more objections by the way:

 Objection #5:

 The isIdentity() method has the same issue as was described about is2D()
 above: as matrices get computed, they are going to jump unpredictably
 between being exactly identity and not. People using isIdentity() to jump
 between code paths are going to get unexpected jumps between code paths
 i.e. typically performance cliffs, or worse if they start asserting that a
 matrix should or should not be exactly identity. For that reason, I would
 remove

Re: Intent to implement: DOMMatrix

2014-06-02 Thread Benoit Jacob
2014-06-02 14:06 GMT-04:00 Benoit Jacob jacob.benoi...@gmail.com:




 2014-06-02 13:56 GMT-04:00 Nick Alexander nalexan...@mozilla.com:

 On 2014-06-02, 9:59 AM, Rik Cabanier wrote:




 On Mon, Jun 2, 2014 at 9:05 AM, Nick Alexander nalexan...@mozilla.com
 mailto:nalexan...@mozilla.com wrote:

 On 2014-06-02, 4:59 AM, Robert O'Callahan wrote:

 On Mon, Jun 2, 2014 at 3:19 PM, Rik Cabanier caban...@gmail.com
 mailto:caban...@gmail.com wrote:

 isIdentity() indeed suffers from rounding errors but since
 it's useful, I'm
 hesitant to remove it.
 In our rendering libraries at Adobe, we check if a matrix is
 *almost*
 identity. Maybe we can do the same here?


 One option would be to make isIdentity and is2D state bits
 in the
 object rather than predicates on the matrix coefficients. Then
 for each
 matrix operation, we would define how it affects the isIdentity
 and is2D
 bits. For example we could say translate(tx, ty, tz)'s result
 isIdentity if
 and only if the source matrix isIdentity and tx, ty and tz are
 all exactly
 0.0, and the result is2D if and only if the source matrix is2D
 and tz is
 exactly 0.0.

 With that approach, isIdentity and is2D would be much less
 sensitive to
 precision issues. In particular they'd be independent of the
 precision used
 to compute and store store matrix elements, which would be
 helpful I think.


 I agree that most mathematical ways of determining a matrix (as a
 rotation, or a translation, etc) come with isIdentity for free; but
 are most matrices derived from some underlying transformation, or
 are they given as a list of coefficients?


 You can do it either way. Here are the constructors:
 http://dev.w3.org/fxtf/geometry/#dom-dommatrix-dommatrix

 So you can do:

 var m = new DOMMatrix(); // identity = true, 2d = true
 var m = new DOMMatrix("translate(20 20) scale(4 4) skewX"); //
 identity = depends, 2d = depends
 var m = new DOMMatrix(otherdommatrix);  // identity = inherited, 2d =
 inherited
 var m = new DOMMatrix([a b c d e f]); // identity = depends, 2d =
 true
 var m = new DOMMatrix([m11 m12... m44]); // identity = depends, 2d =
 depends

 If the latter, the isIdentity flag needs to be determined by the
 constructor, or fed as a parameter.  Exactly how does the
 constructor determine the parameter?  Exactly how does the user?


 The constructor would check the incoming parameters as defined:

 http://dev.w3.org/fxtf/geometry/#dom-dommatrixreadonly-is2d
 http://dev.w3.org/fxtf/geometry/#dom-dommatrixreadonly-isidentity


 Thanks for providing these references.  As an aside -- it worries me that
 these are defined rather differently:  is2d says are equal to 0, while
 isIdentity says are '0'.  Is this a syntactic or a semantic difference?

 But, to the point, the idea of carrying around the isIdentity flag is
 looking bad, because we either have that A*A.inverse() will never have
 isIdentity() == true; or we promote the idiom that to check for identity,
 one always creates a new DOMMatrix, so that the constructor determines
 isIdentity, and then we query it.  This is no better than just having
 isIdentity do the (badly-rounded) check.


 The way that propagating an is identity flag is better than determining
 that from the matrix coefficients, is that it's predictable. People are
 going to have matrices that are the result of various arithmetic
 operations, that are close to identity but most of the time not exactly
 identity. On these matrices, I would like isIdentity() to consistently
 return false, instead of returning false 99.99% of the time and then
 suddenly accidentally returning true when a little miracle happens and a
 matrix happens to be exactly identity.


...but, to not lose sight of what I really want:  I am still not convinced
that we should have an isIdentity() method at all, and by default I would
prefer no such method to exist. I was only saying the above _if_ we must
have a isIdentity method.

Benoit



 Benoit




 Nick

 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: DOMMatrix

2014-06-02 Thread Benoit Jacob
2014-06-02 13:56 GMT-04:00 Nick Alexander nalexan...@mozilla.com:

 On 2014-06-02, 9:59 AM, Rik Cabanier wrote:




 On Mon, Jun 2, 2014 at 9:05 AM, Nick Alexander nalexan...@mozilla.com
 mailto:nalexan...@mozilla.com wrote:

 On 2014-06-02, 4:59 AM, Robert O'Callahan wrote:

 On Mon, Jun 2, 2014 at 3:19 PM, Rik Cabanier caban...@gmail.com
 mailto:caban...@gmail.com wrote:

 isIdentity() indeed suffers from rounding errors but since
 it's useful, I'm
 hesitant to remove it.
 In our rendering libraries at Adobe, we check if a matrix is
 *almost*
 identity. Maybe we can do the same here?


 One option would be to make isIdentity and is2D state bits
 in the
 object rather than predicates on the matrix coefficients. Then
 for each
 matrix operation, we would define how it affects the isIdentity
 and is2D
 bits. For example we could say translate(tx, ty, tz)'s result
 isIdentity if
 and only if the source matrix isIdentity and tx, ty and tz are
 all exactly
 0.0, and the result is2D if and only if the source matrix is2D
 and tz is
 exactly 0.0.

 With that approach, isIdentity and is2D would be much less
 sensitive to
 precision issues. In particular they'd be independent of the
 precision used
 to compute and store store matrix elements, which would be
 helpful I think.


 I agree that most mathematical ways of determining a matrix (as a
 rotation, or a translation, etc) come with isIdentity for free; but
 are most matrices derived from some underlying transformation, or
 are they given as a list of coefficients?


 You can do it either way. Here are the constructors:
 http://dev.w3.org/fxtf/geometry/#dom-dommatrix-dommatrix

 So you can do:

 var m = new DOMMatrix(); // identity = true, 2d = true
 var m = new DOMMatrix("translate(20 20) scale(4 4) skewX"); //
 identity = depends, 2d = depends
 var m = new DOMMatrix(otherdommatrix);  // identity = inherited, 2d =
 inherited
 var m = new DOMMatrix([a b c d e f]); // identity = depends, 2d = true
 var m = new DOMMatrix([m11 m12... m44]); // identity = depends, 2d =
 depends

 If the latter, the isIdentity flag needs to be determined by the
 constructor, or fed as a parameter.  Exactly how does the
 constructor determine the parameter?  Exactly how does the user?


 The constructor would check the incoming parameters as defined:

 http://dev.w3.org/fxtf/geometry/#dom-dommatrixreadonly-is2d
 http://dev.w3.org/fxtf/geometry/#dom-dommatrixreadonly-isidentity


 Thanks for providing these references.  As an aside -- it worries me that
 these are defined rather differently:  is2d says are equal to 0, while
 isIdentity says are '0'.  Is this a syntactic or a semantic difference?

 But, to the point, the idea of carrying around the isIdentity flag is
 looking bad, because we either have that A*A.inverse() will never have
 isIdentity() == true; or we promote the idiom that to check for identity,
 one always creates a new DOMMatrix, so that the constructor determines
 isIdentity, and then we query it.  This is no better than just having
 isIdentity do the (badly-rounded) check.


The way that propagating an "is identity" flag is better than determining
it from the matrix coefficients is that it's predictable. People are
going to have matrices that are the result of various arithmetic
operations, that are close to identity but most of the time not exactly
identity. On these matrices, I would like isIdentity() to consistently
return false, instead of returning false 99.99% of the time and then
suddenly accidentally returning true when a little miracle happens and a
matrix happens to be exactly identity.

Benoit
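
Here is a minimal sketch of the flag-propagation idea (hypothetical names,
not an actual implementation): the isIdentity and is2D bits are updated from
the operation's inputs, never recomputed from the coefficients, so they stay
predictable after long chains of arithmetic.

// Sketch of Robert O'Callahan's proposal quoted above: carry isIdentity/is2D
// as state bits and define, per operation, how they propagate.
struct Mat4 {
  double m[16];     // column-major coefficients, m[col * 4 + row]
  bool isIdentity;  // state bit, not derived from m[] after construction
  bool is2D;        // state bit

  Mat4 Translate(double tx, double ty, double tz) const {
    Mat4 result = *this;
    // result = this * translation(tx, ty, tz): only column 3 changes.
    for (int row = 0; row < 4; ++row) {
      result.m[12 + row] = m[0 + row] * tx + m[4 + row] * ty +
                           m[8 + row] * tz + m[12 + row];
    }
    // The flags depend only on the inputs, exactly as in the rule quoted
    // above, so they never flip because of a lucky rounding outcome.
    result.isIdentity = isIdentity && tx == 0.0 && ty == 0.0 && tz == 0.0;
    result.is2D = is2D && tz == 0.0;
    return result;
  }
};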




 Nick

 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: DOMMatrix

2014-06-02 Thread Benoit Jacob
2014-06-02 17:13 GMT-04:00 Rik Cabanier caban...@gmail.com:




 On Mon, Jun 2, 2014 at 11:08 AM, Benoit Jacob jacob.benoi...@gmail.com
 wrote:




 2014-06-02 14:06 GMT-04:00 Benoit Jacob jacob.benoi...@gmail.com:




 2014-06-02 13:56 GMT-04:00 Nick Alexander nalexan...@mozilla.com:

 On 2014-06-02, 9:59 AM, Rik Cabanier wrote:




 On Mon, Jun 2, 2014 at 9:05 AM, Nick Alexander nalexan...@mozilla.com
 mailto:nalexan...@mozilla.com wrote:

 On 2014-06-02, 4:59 AM, Robert O'Callahan wrote:

 On Mon, Jun 2, 2014 at 3:19 PM, Rik Cabanier 
 caban...@gmail.com
 mailto:caban...@gmail.com wrote:

 isIdentity() indeed suffers from rounding errors but since
 it's useful, I'm
 hesitant to remove it.
 In our rendering libraries at Adobe, we check if a matrix
 is
 *almost*
 identity. Maybe we can do the same here?


 One option would be to make isIdentity and is2D state bits
 in the
 object rather than predicates on the matrix coefficients. Then
 for each
 matrix operation, we would define how it affects the isIdentity
 and is2D
 bits. For example we could say translate(tx, ty, tz)'s result
 isIdentity if
 and only if the source matrix isIdentity and tx, ty and tz are
 all exactly
 0.0, and the result is2D if and only if the source matrix is2D
 and tz is
 exactly 0.0.

 With that approach, isIdentity and is2D would be much less
 sensitive to
 precision issues. In particular they'd be independent of the
 precision used
 to compute and store store matrix elements, which would be
 helpful I think.


 I agree that most mathematical ways of determining a matrix (as a
 rotation, or a translation, etc) come with isIdentity for free; but
 are most matrices derived from some underlying transformation, or
 are they given as a list of coefficients?


 You can do it either way. Here are the constructors:
 http://dev.w3.org/fxtf/geometry/#dom-dommatrix-dommatrix

 So you can do:

 var m = new DOMMatrix(); // identity = true, 2d = true
 var m = new DOMMatrix("translate(20 20) scale(4 4) skewX"); //
 identity = depends, 2d = depends
 var m = new DOMMatrix(otherdommatrix);  // identity = inherited, 2d =
 inherited
 var m = new DOMMatrix([a b c d e f]); // identity = depends, 2d =
 true
 var m = new DOMMatrix([m11 m12... m44]); // identity = depends, 2d
 =
 depends

 If the latter, the isIdentity flag needs to be determined by the
 constructor, or fed as a parameter.  Exactly how does the
 constructor determine the parameter?  Exactly how does the user?


 The constructor would check the incoming parameters as defined:

 http://dev.w3.org/fxtf/geometry/#dom-dommatrixreadonly-is2d
 http://dev.w3.org/fxtf/geometry/#dom-dommatrixreadonly-isidentity


 Thanks for providing these references.  As an aside -- it worries me
 that these are defined rather differently:  is2d says are equal to 0,
 while isIdentity says are '0'.  Is this a syntactic or a semantic
 difference?

 But, to the point, the idea of carrying around the isIdentity flag is
 looking bad, because we either have that A*A.inverse() will never have
 isIdentity() == true; or we promote the idiom that to check for identity,
 one always creates a new DOMMatrix, so that the constructor determines
 isIdentity, and then we query it.  This is no better than just having
 isIdentity do the (badly-rounded) check.


 The way that propagating an is identity flag is better than
 determining that from the matrix coefficients, is that it's predictable.
 People are going to have matrices that are the result of various arithmetic
 operations, that are close to identity but most of the time not exactly
 identity. On these matrices, I would like isIdentity() to consistently
 return false, instead of returning false 99.99% of the time and then
 suddenly accidentally returning true when a little miracle happens and a
 matrix happens to be exactly identity.


 ...but, to not lose sight of what I really want:  I am still not
 convinced that we should have a isIdentity() method at all, and by default
 I would prefer no such method to exist. I was only saying the above _if_ we
 must have a isIdentity method.


 Scanning through the mozilla codebase, IsIdentity is used to make
 decisions if objects were transformed. This seems to match how we use
 Identity() internally.
 Since this seems useful for native applications, there's no reason why
 this wouldn't be the case for the web platform (aka blink's rational web
 platform principle). If for some reason the author *really* wants to know
 if the matrix is identity, he can calculate it manually.

 I would be fine with keeping this as an internal flag and defining this
 behavior normative.


Gecko's existing code really isn't authoritative on matrix

Re: Intent to implement: DOMMatrix

2014-05-30 Thread Benoit Jacob
I never seem to be able to discourage people from dragging the W3C into
specialist topics that are outside its area of expertise. Let me try again.

Objection #1:

The skew* methods are out of place there, because, contrary to the rest,
they are not geometric transformations, they are just arithmetic on matrix
coefficients whose geometric impact depends entirely on the choice of a
coordinate system. I'm afraid that leaving them there will propagate
widespread confusion about skews --- see e.g. the authors of
http://dev.w3.org/csswg/css-transforms/#matrix-interpolation who seemed to
think that decomposing a matrix into a product of things including a skew
would have geometric significance, leading to clearly unwanted behavior as
demonstrated in
http://people.mozilla.org/~bjacob/transform-animation-not-covariant.html

Objection #2:

This DOMMatrix interface tries to be simultaneously about 4x4 matrices
representing projective 3D transformations and about 2x3 matrices
representing affine 2D transformations; this mode switch corresponds to the
is2D() getter. I have a long list of objections to this mode switch:
 - I believe that, being based on exact floating point comparisons, it is
going to be fragile. For example, people will assert that !is2D() when they
expect a 3D transformation, and that will intermittently fail when for
whatever reason their 3D matrix is going to be exactly 2D.
 - I believe that these two classes of transformations (projective 3D and
affine 2D) should be separate classes entirely, that that will make the API
simpler and more efficiently implementable and that forcing authors to
think about that choice more explicitly is doing them a favor.
 - I believe that that feature set, with this choice of two classes of
transformations (projective 3D and affine 2D), is arbitrary and
inconsistent. Why not support affine 3D or projective 2D, for instance?

Objection #3:

I dislike the way that this API exposes multiplication order. It's not
obvious enough which of A.multiply(B) and A.multiplyBy(B) is doing A=A*B
and which is doing A=B*A.

Objection #4:

By exposing a inverse() method but no solve() method, this API will
encourage people who have to solve linear systems to do so by doing
matrix.inverse().transformPoint(...), which is inefficient and can be
numerically unstable.

But then of course once we open the pandora box of exposing solvers, the
API grows a lot more. My point is not to suggest to grow the API more. My
point is to discourage you and the W3C from getting into the matrix API
design business. Matrix APIs are bound to either grow big or be useless. I
believe that the only appropriate Matrix interface at the Web API level is
a plain storage class, with minimal getters (basically a thin wrapper
around a typed array without any nontrivial arithmetic built in).

Benoit
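
To illustrate objection #4 with a deliberately tiny example (a 2x2 sketch,
not a proposed API): solving A*x = b directly divides by the determinant
once, whereas inverse().transformPoint(...) first forms the inverse matrix
and then multiplies, which is both more work and less accurate when the
determinant is small.

#include <cstdio>

// Solve the 2x2 system A*x = b by Cramer's rule, without ever forming the
// inverse matrix. Returns false if A is singular.
bool Solve2x2(double a, double b, double c, double d,  // A = [a b; c d]
              double bx, double by,                    // right-hand side
              double* x, double* y) {
  double det = a * d - b * c;
  if (det == 0.0) {
    return false;
  }
  *x = (bx * d - b * by) / det;
  *y = (a * by - bx * c) / det;
  return true;
}

int main() {
  double x, y;
  if (Solve2x2(2.0, 1.0, 1.0, 3.0, 5.0, 10.0, &x, &y)) {
    std::printf("x = %g, y = %g\n", x, y);  // x = 1, y = 3
  }
  return 0;
}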





2014-05-30 20:02 GMT-04:00 Rik Cabanier caban...@gmail.com:

 Primary eng emails
 caban...@adobe.com, dschu...@adobe.com

 *Proposal*
 *http://dev.w3.org/fxtf/geometry/#DOMMatrix
 http://dev.w3.org/fxtf/geometry/#DOMMatrix*

 *Summary*
 Expose new global objects named 'DOMMatrix' and 'DOMMatrixReadOnly' that
 offer a matrix abstraction.

 *Motivation*
 The DOMMatrix and DOMMatrixReadOnly interfaces represent a mathematical
 matrix with the purpose of describing transformations in a graphical
 context. The following sections describe the details of the interface.
 The DOMMatrix and DOMMatrixReadOnly interfaces replace the SVGMatrix
 interface from SVG.

 In addition, DOMMatrix will be part of CSSOM where it will simplify getting
 and setting CSS transforms.

 *Mozilla bug*
 https://bugzilla.mozilla.org/show_bug.cgi?id=1018497
 I will implement this behind the flag: layout.css.DOMMatrix

 *Concerns*
 None.
 Mozilla already implemented DOMPoint and DOMQuad

 *Compatibility Risk*
 Blink: unknown
 WebKit: in development [1]
 Internet Explorer: No public signals
 Web developers: unknown

 1: https://bugs.webkit.org/show_bug.cgi?id=110001
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: Refcounted classes should have a non-public destructor should be MOZ_FINAL where possible

2014-05-28 Thread Benoit Jacob
Awesome work!

By the way, I just figured out a way that you could static_assert so that at
least on supporting C++11 compilers, we would automatically catch this.

The basic C++11 tool here is std::is_destructible from <type_traits>, but
it has a problem: it only returns false if the destructor is deleted, it
doesn't return false if the destructor is private. However, the example
below shows how we can still achieve what we want by wrapping the class
that we are interested in as a member of a helper templated struct:



#include <type_traits>
#include <iostream>

class ClassWithDeletedDtor {
  ~ClassWithDeletedDtor() = delete;
};

class ClassWithPrivateDtor {
  ~ClassWithPrivateDtor() {}
};

class ClassWithPublicDtor {
public:
  ~ClassWithPublicDtor() {}
};

template <typename T>
class IsDestructorPrivateOrDeletedHelper {
  T x;
};

template <typename T>
struct IsDestructorPrivateOrDeleted
{
  static const bool value =
    !std::is_destructible<IsDestructorPrivateOrDeletedHelper<T>>::value;
};

int main() {
#define PRINT(x) std::cerr << #x << " = " << (x) << std::endl;

  PRINT(std::is_destructible<ClassWithDeletedDtor>::value);
  PRINT(std::is_destructible<ClassWithPrivateDtor>::value);
  PRINT(std::is_destructible<ClassWithPublicDtor>::value);

  std::cerr << std::endl;

  PRINT(IsDestructorPrivateOrDeleted<ClassWithDeletedDtor>::value);
  PRINT(IsDestructorPrivateOrDeleted<ClassWithPrivateDtor>::value);
  PRINT(IsDestructorPrivateOrDeleted<ClassWithPublicDtor>::value);
}


Output:


std::is_destructible<ClassWithDeletedDtor>::value = 0
std::is_destructible<ClassWithPrivateDtor>::value = 0
std::is_destructible<ClassWithPublicDtor>::value = 1

IsDestructorPrivateOrDeleted<ClassWithDeletedDtor>::value = 1
IsDestructorPrivateOrDeleted<ClassWithPrivateDtor>::value = 1
IsDestructorPrivateOrDeleted<ClassWithPublicDtor>::value = 0


If you also want to require classes to be final, C++11 type_traits also
has std::is_final for that.

Cheers,
Benoit
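
And here is a usage sketch building on the helper above (the macro name is
made up for illustration; nothing like it exists in MFBT today): dropping
such a static_assert next to a refcounted class definition would enforce the
non-public-destructor rule at compile time on C++11 compilers.

// Hypothetical macro, built on IsDestructorPrivateOrDeleted from above.
#define ASSERT_NON_PUBLIC_DTOR(T)                                        \
  static_assert(IsDestructorPrivateOrDeleted<T>::value,                  \
                #T " is refcounted and must have a non-public destructor")

class MyRefCounted final {
public:
  void AddRef();   // stand-ins for what NS_INLINE_DECL_REFCOUNTING declares
  void Release();
private:
  ~MyRefCounted() {}  // non-public: only Release() may delete this
};

ASSERT_NON_PUBLIC_DTOR(MyRefCounted);             // compiles
// ASSERT_NON_PUBLIC_DTOR(ClassWithPublicDtor);   // would fail to compile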


2014-05-28 16:24 GMT-04:00 Daniel Holbert dholb...@mozilla.com:

 Hi dev-platform,

 PSA: if you are adding a concrete class with AddRef/Release
 implementations (e.g. via NS_INLINE_DECL_REFCOUNTING), please be aware
 of the following best-practices:

  (a) Your class should have an explicitly-declared non-public
 destructor. (should be 'private' or 'protected')

  (b) Your class should be labeled as MOZ_FINAL (or, see below).


 WHY THIS IS A GOOD IDEA
 ===
 We'd like to ensure that refcounted objects are *only* deleted via their
 ::Release() methods.  Otherwise, we're potentially susceptible to
 double-free bugs.

 We can go a long way towards enforcing this rule at compile-time by
 giving these classes non-public destructors.  This prevents a whole
 category of double-free bugs.

 In particular: if your class has a public destructor (the default), then
 it's easy for you or someone else to accidentally declare an instance on
 the stack or as a member-variable in another class, like so:
 MyClass foo;
 This is *extremely* dangerous. If any code wraps 'foo' in a nsRefPtr
 (say, if some function that we pass 'foo' or '&foo' into declares a
 nsRefPtr to it for some reason), then we'll get a double-free. The
 object will be freed when the nsRefPtr goes out of scope, and then again
 when the MyClass instance goes out of scope. But if we give MyClass a
 non-public destructor, then it'll make it a compile error (in most code)
 to declare a MyClass instance on the stack or as a member-variable.  So
 we'd catch this bug immediately, at compile-time.

 So, that explains why a non-public destructor is a good idea. But why
 MOZ_FINAL?  If your class isn't MOZ_FINAL, then that opens up another
 route to trigger the same sort of bug -- someone can come along and add
 a subclass, perhaps not realizing that they're subclassing a refcounted
 class, and the subclass will (by default) have a public destructor,
 which means then that anyone can declare
   MySubclass foo;
 and run into the exact same problem with the subclass.  A MOZ_FINAL
 annotation will prevent that by keeping people from naively adding
 subclasses.

 BUT WHAT IF I NEED SUBCLASSES
 =
 First, if your class is abstract, then it shouldn't have AddRef/Release
 implementations to begin with.  Those belong on the concrete subclasses
 -- not on your abstract base class.

 But if your class is concrete and refcounted and needs to have
 subclasses, then:
  - Your base class *and each of its subclasses* should have virtual,
 protected destructors, to prevent the MySubclass foo; problem
 mentioned above.
  - Your subclasses themselves should also probably be declared as
 MOZ_FINAL, to keep someone from naively adding another subclass
 without recognizing the above.
  - Your subclasses should definitely *not* declare their own
 AddRef/Release methods. (They should share the base class's methods 
 refcount.)

 For more information, see
 https://bugzilla.mozilla.org/show_bug.cgi?id=984786 , where I've fixed
 this sort of thing in 

Re: PSA: Refcounted classes should have a non-public destructor should be MOZ_FINAL where possible

2014-05-28 Thread Benoit Jacob
Actually that test program contradicts what I said --- my
IsDestructorPrivateOrDeleted produces exactly the same result as
!is_destructible,  and is_destructible does return 0 for the class with
private destructor. So you could just use that!

Benoit


2014-05-28 16:51 GMT-04:00 Benoit Jacob jacob.benoi...@gmail.com:

 Awesome work!

 By the way, I just figured a way that you could static_assert so that at
 least on supporting C++11 compilers, we would automatically catch this.

 The basic C++11 tool here is std::is_destructible from type_traits, but
 it has a problem: it only returns false if the destructor is deleted, it
 doesn't return false if the destructor is private. However, the example
 below shows how we can still achieve what we want by using wrapping the
 class that we are interested in as a member of a helper templated struct:



 #include <type_traits>
 #include <iostream>

 class ClassWithDeletedDtor {
   ~ClassWithDeletedDtor() = delete;
 };

 class ClassWithPrivateDtor {
   ~ClassWithPrivateDtor() {}
 };

 class ClassWithPublicDtor {
 public:
   ~ClassWithPublicDtor() {}
 };

 template <typename T>
 class IsDestructorPrivateOrDeletedHelper {
   T x;
 };

 template <typename T>
 struct IsDestructorPrivateOrDeleted
 {
   static const bool value =
     !std::is_destructible<IsDestructorPrivateOrDeletedHelper<T>>::value;
 };

 int main() {
 #define PRINT(x) std::cerr << #x << " = " << (x) << std::endl;

   PRINT(std::is_destructible<ClassWithDeletedDtor>::value);
   PRINT(std::is_destructible<ClassWithPrivateDtor>::value);
   PRINT(std::is_destructible<ClassWithPublicDtor>::value);

   std::cerr << std::endl;

   PRINT(IsDestructorPrivateOrDeleted<ClassWithDeletedDtor>::value);
   PRINT(IsDestructorPrivateOrDeleted<ClassWithPrivateDtor>::value);
   PRINT(IsDestructorPrivateOrDeleted<ClassWithPublicDtor>::value);
 }


 Output:


 std::is_destructible<ClassWithDeletedDtor>::value = 0
 std::is_destructible<ClassWithPrivateDtor>::value = 0
 std::is_destructible<ClassWithPublicDtor>::value = 1

 IsDestructorPrivateOrDeleted<ClassWithDeletedDtor>::value = 1
 IsDestructorPrivateOrDeleted<ClassWithPrivateDtor>::value = 1
 IsDestructorPrivateOrDeleted<ClassWithPublicDtor>::value = 0


 If you also want to require classes to be final, C++11 type_traits also
 has std::is_final for that.

 Cheers,
 Benoit


 2014-05-28 16:24 GMT-04:00 Daniel Holbert dholb...@mozilla.com:

 Hi dev-platform,

 PSA: if you are adding a concrete class with AddRef/Release
 implementations (e.g. via NS_INLINE_DECL_REFCOUNTING), please be aware
 of the following best-practices:

  (a) Your class should have an explicitly-declared non-public
 destructor. (should be 'private' or 'protected')

  (b) Your class should be labeled as MOZ_FINAL (or, see below).


 WHY THIS IS A GOOD IDEA
 ===
 We'd like to ensure that refcounted objects are *only* deleted via their
 ::Release() methods.  Otherwise, we're potentially susceptible to
 double-free bugs.

 We can go a long way towards enforcing this rule at compile-time by
 giving these classes non-public destructors.  This prevents a whole
 category of double-free bugs.

 In particular: if your class has a public destructor (the default), then
 it's easy for you or someone else to accidentally declare an instance on
 the stack or as a member-variable in another class, like so:
 MyClass foo;
 This is *extremely* dangerous. If any code wraps 'foo' in a nsRefPtr
 (say, if some function that we pass 'foo' or '&foo' into declares a
 nsRefPtr to it for some reason), then we'll get a double-free. The
 object will be freed when the nsRefPtr goes out of scope, and then again
 when the MyClass instance goes out of scope. But if we give MyClass a
 non-public destructor, then it'll make it a compile error (in most code)
 to declare a MyClass instance on the stack or as a member-variable.  So
 we'd catch this bug immediately, at compile-time.

 So, that explains why a non-public destructor is a good idea. But why
 MOZ_FINAL?  If your class isn't MOZ_FINAL, then that opens up another
 route to trigger the same sort of bug -- someone can come along and add
 a subclass, perhaps not realizing that they're subclassing a refcounted
 class, and the subclass will (by default) have a public destructor,
 which means then that anyone can declare
   MySubclass foo;
 and run into the exact same problem with the subclass.  A MOZ_FINAL
 annotation will prevent that by keeping people from naively adding
 subclasses.

 BUT WHAT IF I NEED SUBCLASSES
 =
 First, if your class is abstract, then it shouldn't have AddRef/Release
 implementations to begin with.  Those belong on the concrete subclasses
 -- not on your abstract base class.

 But if your class is concrete and refcounted and needs to have
 subclasses, then:
  - Your base class *and each of its subclasses* should have virtual,
 protected destructors, to prevent the MySubclass foo; problem
 mentioned above.
  - Your subclasses themselves should also probably

Re: Intent to implement and ship: navigator.hardwareConcurrency

2014-05-19 Thread Benoit Jacob
+1000! Thanks for articulating so clearly the difference between the
Web-as-an-application-platform and other application platforms.

Benoit




2014-05-19 21:35 GMT-04:00 Jonas Sicking jo...@sicking.cc:

 On Mon, May 19, 2014 at 4:10 PM, Rik Cabanier caban...@gmail.com wrote:
  I don't see why the web platform is special here and we should trust that
  authors can do the right thing.

 I'm fairly sure people have already pointed this out to you. But the
 reason the web platform is different is that because we allow
 arbitrary application logic to run on the user's device without any
 user opt-in.

 I.e. the web is designed such that it is safe for a user to go to any
 website without having to consider the risks of doing so.

 This is why we for example don't allow websites to have arbitrary
 read/write access to the user's filesystem. Something that all the
 other platforms that you have pointed out do.

 Those platforms instead rely on that users make a security decision
 before allowing any code to run. This has both advantages (easier to
 design APIs for those platforms) and disadvantages (malware is pretty
 prevalent on for example Windows).

 / Jonas
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


PSA: nsTArray lengths and indices are now size_t (were uint32_t)

2014-05-11 Thread Benoit Jacob
Hi,

Since Bug 1004098 landed, the type of nsTArray lengths and indices is now
size_t.

Code using nsTArrays is encouraged to use size_t for indexing them; in most
cases, this does not really matter; however there is one case where this
does matter, which is when user code stores the result of
nsTArray::IndexOf().

Indeed, nsTArray::NoIndex used to be uint32_t(-1), which has the value 2^32
- 1.  Now, nsTArray::NoIndex is size_t(-1) which, on x86-64, has the value
2^64 - 1.

This means that code like this is no longer correct:

  uint32_t index = array.IndexOf(thing);

Such code should be changed to:

  size_t index = array.IndexOf(thing);

Or, better still (slightly pedantic but would have been correct all along):

  ArrayType::index_type index = array.IndexOf(thing);

Where ArrayType is the type of that 'array' variable (one could use
decltype(array) too).
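
For example, here is the pattern that silently breaks (a sketch only,
reusing the 'array' and 'thing' names from the snippets above):

  // Broken on x86-64: NoIndex is now size_t(-1) == 2^64 - 1, but storing it
  // in a uint32_t truncates it to 2^32 - 1, so this test can never match.
  uint32_t badIndex = array.IndexOf(thing);
  if (badIndex == array.NoIndex) { /* never taken, even when not found */ }

  // Correct: keep the full-width index type.
  size_t index = array.IndexOf(thing);
  if (index == array.NoIndex) { /* element not found */ }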

Thanks,
Benoit
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Time to revive the require SSE2 discussion

2014-05-09 Thread Benoit Jacob
Again (see my previous email), I don't think that performance is the primary
factor here. I care more about not having to worry about two different
flavors of floating point semantics.

Just 2 days ago a colleague had a clever implementation of something he
needed to do in gecko gfx code, and had to back out from that because it
would give the wrong result on x87. I don't know how many other things we
already do, that silently fail on x87 without us realizing. That's what I
worry about.

Benoit
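
For anyone who has not been bitten by this yet, here is a tiny
self-contained illustration of the kind of thing I mean (not the actual
patch that was backed out): with x87 code generation, the product below may
be kept in an 80-bit register for one use and rounded to 64 bits for the
other, so the two sides can compare unequal; with SSE2 scalar math both
sides are rounded identically.

#include <cstdio>

// volatile keeps the compiler from constant-folding the expressions.
volatile double a = 1e16;
volatile double b = 1.0000000000000002;  // 1 + 2^-52

int main() {
  double product = a * b;  // stored to a 64-bit double
  if (product != a * b) {
    // Can happen with -mfpmath=387: the right-hand side may be evaluated at
    // 80-bit extended precision and differ from the stored 64-bit value.
    std::printf("x87 excess precision detected\n");
  } else {
    std::printf("consistent 64-bit rounding (what SSE2 guarantees)\n");
  }
  return 0;
}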


2014-05-09 13:19 GMT-04:00 Bobby Holley bobbyhol...@gmail.com:

 Can somebody get us less-circumstantial evidence than the stuff from
 http://www.palemoon.org/technical.shtml#speed , which AFAICT are the only
 perf numbers that have been cited in this thread?


 On Fri, May 9, 2014 at 10:14 AM, Benoit Jacob jacob.benoi...@gmail.comwrote:

 Totally agree that 1% is probably still too much to drop, but the 4x drop
 over the past two years makes me hopeful that we'll be able to drop
 non-SSE2, eventually.

 SSE2 is not just about SIMD. The most important thing it buys us IMHO is
 to
 be able to not use x87 instructions anymore and instead use SSE2 (scalar)
 instructions. That removes entire classes of bugs caused by x87 being
 non-IEEE754-compliant with its crazy 80-bit registers.

 Benoit


 2014-05-09 13:01 GMT-04:00 Chris Peterson cpeter...@mozilla.com:

  What does requiring SSE2 buy us? 1% of hundreds of millions of Firefox
  users is still millions of people.
 
  chris
 
 
 
  On 5/8/14, 5:42 PM, matthew.br...@gmail.com wrote:
 
  On Tuesday, January 3, 2012 4:37:53 PM UTC-8, Benoit Jacob wrote:
   2012/1/3 Jeff Muizelaar jmuizel...@mozilla.com:
    On 2012-01-03, at 2:01 PM, Benoit Jacob wrote:
    2012/1/2 Robert Kaiser ka...@kairo.at:
    Jean-Marc Desperrier schrieb:
    According to https://bugzilla.mozilla.org/show_bug.cgi?id=594160#c6, the
    Raw Dump tab on crash-stats.mozilla.com shows the needed information,
    you need to sort out from the info on the second line CPU maker, family,
    model, and stepping information whether SSE2 is there or not (With a
    little search, I can find that info again, bug 593117 gives a formula
    that's correct for most of the cases).

    https://crash-analysis.mozilla.com/crash_analysis/ holds
    *-pub-crashdata.csv.gz files that have that info from all Firefox
    desktop/mobile crashes on a given day, you should be able to analyze
    that for this info - with a bias, of course, as it's only people having
    crashes that you see there. No idea if the less biased telemetry samples
    have that info as well.

    On yesterday's crash data, assuming that AuthenticAMD\ family\
    [1-6][^0-9] is the proper way to identify these old AMD CPUs (I didn't
    check that very well), I get these results:

    The measurement I have used in the past was:

    CPUs have sse2 if:
    if vendor == AuthenticAMD and family >= 15
    if vendor == GenuineIntel and family >= 15 or (family == 6 and
    (model == 9 or model > 11))
    if vendor == CentaurHauls and family >= 6 and model >= 10

   Thanks.

   AMD and Intel CPUs amount to 296362 crashes:

   bjacob@cahouette:~$ egrep AuthenticAMD\|GenuineIntel
   20120102-pub-crashdata.csv | wc -l
   296362

   Counting SSE2-capable CPUs:

   bjacob@cahouette:~$ egrep GenuineIntel\ family\ 1[5-9]
   20120102-pub-crashdata.csv | wc -l
   58490
   bjacob@cahouette:~$ egrep GenuineIntel\ family\ [2-9][0-9]
   20120102-pub-crashdata.csv | wc -l
   0
   bjacob@cahouette:~$ egrep GenuineIntel\ family\ 6\ model\ 9
   20120102-pub-crashdata.csv | wc -l
   792
   bjacob@cahouette:~$ egrep GenuineIntel\ family\ 6\ model\ 1[2-9]
   20120102-pub-crashdata.csv | wc -l
   52473
   bjacob@cahouette:~$ egrep GenuineIntel\ family\ 6\ model\ [2-9][0-9]
   20120102-pub-crashdata.csv | wc -l
   103655
   bjacob@cahouette:~$ egrep AuthenticAMD\ family\ 1[5-9]
   20120102-pub-crashdata.csv | wc -l
   59463
   bjacob@cahouette:~$ egrep AuthenticAMD\ family\ [2-9][0-9]
   20120102-pub-crashdata.csv | wc -l
   8120

   Total SSE2 capable CPUs:

   58490 + 792 + 52473 + 103655 + 59463 + 8120 = 282993

   1 - 282993 / 296362 = 0.045

   So the proportion of non-SSE2-capable CPUs among crash reports is 4.5 %.
 
 
  Just for the record, I coded this analysis up here:
  https://gist.github.com/matthew-brett/9cb5274f7451a3eb8fc0
 
  SSE2 apparently now at about one percent:
 
   20120102-pub-crashdata.csv.gz: 4.53
   20120401-pub-crashdata.csv.gz: 4.24
   20120701-pub-crashdata.csv.gz: 2.77
   20121001-pub-crashdata.csv.gz: 2.83
   20130101-pub-crashdata.csv.gz: 2.66
   20130401-pub-crashdata.csv.gz: 2.59
   20130701-pub-crashdata.csv.gz: 2.20
   20131001-pub-crashdata.csv.gz: 1.92
   20140101-pub

Re: Time to revive the require SSE2 discussion

2014-05-09 Thread Benoit Jacob
2014-05-09 13:24 GMT-04:00 Rik Cabanier caban...@gmail.com:




 On Fri, May 9, 2014 at 10:14 AM, Benoit Jacob jacob.benoi...@gmail.comwrote:

 Totally agree that 1% is probably still too much to drop, but the 4x drop
 over the past two years makes me hopeful that we'll be able to drop
 non-SSE2, eventually.

 SSE2 is not just about SIMD. The most important thing it buys us IMHO is
 to
 be able to not use x87 instructions anymore and instead use SSE2 (scalar)
 instructions. That removes entire classes of bugs caused by x87 being
 non-IEEE754-compliant with its crazy 80-bit registers.


 Out of interest, do you have links to bugs for this issue?


No: there are the bugs we probably have but don't know about; and there are
the things that we caught in time, which made us give up on approaches
before they would become patches... I don't have an example of a bug we
found after the fact.



 Also, can't you ask the compiler to produce both sse and non-sse code and
 make a decision at runtime?


Not that I know of. At least GCC documentation does not list anything about
that here, http://gcc.gnu.org/onlinedocs/gcc/i386-and-x86-64-Options.html

-mfpmath=both or -mfpmath=sse+387 does not seem to be doing that; instead
it seems to be about using both in the same code path.

Benoit
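
What does exist, if I remember correctly since GCC 4.8, is manual runtime
dispatch: build the hot routine twice (one object file compiled with -msse2,
one without) and pick an implementation at startup with
__builtin_cpu_supports. A minimal sketch, with stand-in functions, of what
that would look like -- not something Gecko does today:

#include <cstdio>

// Stand-in implementation; in reality there would be two copies of this in
// separate translation units, compiled with and without -msse2.
static float Sum(const float* data, int n) {
  float sum = 0.0f;
  for (int i = 0; i < n; ++i) sum += data[i];
  return sum;
}

typedef float (*SumFn)(const float*, int);

static SumFn PickSumImpl() {
#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
  __builtin_cpu_init();
  if (__builtin_cpu_supports("sse2")) {
    return Sum;  // would be the -msse2 build of the routine
  }
#endif
  return Sum;    // non-SSE2 fallback build
}

int main() {
  const float data[4] = {1.0f, 2.0f, 3.0f, 4.0f};
  std::printf("%g\n", PickSumImpl()(data, 4));
  return 0;
}

Of course that only helps performance-wise; it does nothing for the
two-flavors-of-floating-point-semantics problem I was describing.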




 2014-05-09 13:01 GMT-04:00 Chris Peterson cpeter...@mozilla.com:

  What does requiring SSE2 buy us? 1% of hundreds of millions of Firefox
  users is still millions of people.
 
  chris
 
 
 
  On 5/8/14, 5:42 PM, matthew.br...@gmail.com wrote:
 
  On Tuesday, January 3, 2012 4:37:53 PM UTC-8, Benoit Jacob wrote:
   2012/1/3 Jeff Muizelaar jmuizel...@mozilla.com:
    On 2012-01-03, at 2:01 PM, Benoit Jacob wrote:
    2012/1/2 Robert Kaiser ka...@kairo.at:
    Jean-Marc Desperrier schrieb:
    According to https://bugzilla.mozilla.org/show_bug.cgi?id=594160#c6, the
    Raw Dump tab on crash-stats.mozilla.com shows the needed information,
    you need to sort out from the info on the second line CPU maker, family,
    model, and stepping information whether SSE2 is there or not (With a
    little search, I can find that info again, bug 593117 gives a formula
    that's correct for most of the cases).

    https://crash-analysis.mozilla.com/crash_analysis/ holds
    *-pub-crashdata.csv.gz files that have that info from all Firefox
    desktop/mobile crashes on a given day, you should be able to analyze
    that for this info - with a bias, of course, as it's only people having
    crashes that you see there. No idea if the less biased telemetry samples
    have that info as well.

    On yesterday's crash data, assuming that AuthenticAMD\ family\
    [1-6][^0-9] is the proper way to identify these old AMD CPUs (I didn't
    check that very well), I get these results:

    The measurement I have used in the past was:

    CPUs have sse2 if:
    if vendor == AuthenticAMD and family >= 15
    if vendor == GenuineIntel and family >= 15 or (family == 6 and
    (model == 9 or model > 11))
    if vendor == CentaurHauls and family >= 6 and model >= 10

   Thanks.

   AMD and Intel CPUs amount to 296362 crashes:

   bjacob@cahouette:~$ egrep AuthenticAMD\|GenuineIntel
   20120102-pub-crashdata.csv | wc -l
   296362

   Counting SSE2-capable CPUs:

   bjacob@cahouette:~$ egrep GenuineIntel\ family\ 1[5-9]
   20120102-pub-crashdata.csv | wc -l
   58490
   bjacob@cahouette:~$ egrep GenuineIntel\ family\ [2-9][0-9]
   20120102-pub-crashdata.csv | wc -l
   0
   bjacob@cahouette:~$ egrep GenuineIntel\ family\ 6\ model\ 9
   20120102-pub-crashdata.csv | wc -l
   792
   bjacob@cahouette:~$ egrep GenuineIntel\ family\ 6\ model\ 1[2-9]
   20120102-pub-crashdata.csv | wc -l
   52473
   bjacob@cahouette:~$ egrep GenuineIntel\ family\ 6\ model\ [2-9][0-9]
   20120102-pub-crashdata.csv | wc -l
   103655
   bjacob@cahouette:~$ egrep AuthenticAMD\ family\ 1[5-9]
   20120102-pub-crashdata.csv | wc -l
   59463
   bjacob@cahouette:~$ egrep AuthenticAMD\ family\ [2-9][0-9]
   20120102-pub-crashdata.csv | wc -l
   8120

   Total SSE2 capable CPUs:

   58490 + 792 + 52473 + 103655 + 59463 + 8120 = 282993

   1 - 282993 / 296362 = 0.045

   So the proportion of non-SSE2-capable CPUs among crash reports is 4.5 %.
 
 
  Just for the record, I coded this analysis up here:
  https://gist.github.com/matthew-brett/9cb5274f7451a3eb8fc0
 
  SSE2 apparently now at about one percent:
 
   20120102-pub-crashdata.csv.gz: 4.53
   20120401-pub-crashdata.csv.gz: 4.24
   20120701-pub-crashdata.csv.gz: 2.77
   20121001-pub-crashdata.csv.gz: 2.83
   20130101-pub-crashdata.csv.gz: 2.66
   20130401-pub-crashdata.csv.gz: 2.59
   20130701-pub-crashdata.csv.gz: 2.20
   20131001-pub-crashdata.csv.gz: 1.92
   20140101-pub

Re: Intent to implement: WebGL 2.0

2014-05-08 Thread Benoit Jacob
2014-05-08 5:53 GMT-04:00 Anne van Kesteren ann...@annevk.nl:

 It seems like you want to be able to do that going forward so you
 don't have to maintain a large matrix forever, but at some point say
 you drop the idea that people will want 1 and simply return N if they
 ask for 1.


Yes, that's what we agreed on in the last conversation mentioned by Ehsan
yesterday. In the near future (for the next decade), there will be
webgl-1-only devices around, so allowing getContext(webgl) to
automatically give webgl2 would create accidental compatibility problems.
But in the longer term, there will (probably) eventually be a time when
webgl-1-only devices won't exist anymore, and then, we could decide to
allow that.

2014-05-08 5:53 GMT-04:00 Anne van Kesteren ann...@annevk.nl:

 Are we forever going to mint new version strings or are we going to
 introduce a version parameter which is observed (until we decide to
 prune the matrix a bit), this time around?


Agreed: if we still think that a version parameter would have been
desirable if not for the issue noted above, then now would be a good time
to fix it.

If we're doing the latter,
 maybe we should call the context id 3d this time around...


WebGL is low-level and generalistic enough that it is not specifically a
3d graphics API. I prefer to call it a low-level or generalistic graphics
API.

(*plug*) this might be useful reading:
https://hacks.mozilla.org/2013/04/the-concepts-of-webgl/

Benoit






 --
 http://annevankesteren.nl/
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: WebGL 2.0

2014-05-07 Thread Benoit Jacob
2014-05-07 13:41 GMT-04:00 Boris Zbarsky bzbar...@mit.edu:

 On 5/7/14, 12:34 PM, Ehsan Akhgari wrote:

 Implementations are free to return a context that implements a higher
 version, should that be appropriate in the future, but never lower.


 As pointed out, this fails the explicit opt-in bit.

 There is also another problem here.  If we go with this setup but drop the
 may return higher version bit, how can a consumer handed a
 WebGLRenderingContext tell whether v2 APIs are OK to use with it short of
 trying one and having it throw?  Do we want to expose the context version
 on the context somehow?


The idea is that if getContext(webgl, {version : N}) returns non-null,
then the resulting context is guaranteed to be WebGL version N, so that no
other versioning mechanism is needed.

Benoit




 This is only an issue if the code being handed the context and the code
 that did the getContext() call are not tightly cooperating, of course.

 -Boris

 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: WebGL 2.0

2014-05-07 Thread Benoit Jacob
2014-05-07 14:14 GMT-04:00 Boris Zbarsky bzbar...@mit.edu:

 On 5/7/14, 2:00 PM, Benoit Jacob wrote:

 The idea is that if getContext(webgl, {version : N}) returns non-null,
 then the resulting context is guaranteed to be WebGL version N, so that
 no other versioning mechanism is needed.


 Sure, but say some code calls getContext(webgl, { version: 1 }) and then
 passes the context to other code (from a library, say).

 How is that other code supposed to know whether it can use the webgl2
 bits?  The methods are there no matter what, so it can't detect based on
 that.


Right, so there is a mechanism for that. The second parameter to getContext
is called context creation parameters. After creation, you can still
query context attributes, and that has to return the actual context
parameters, which are not necessarily the same as what you requested at
getContext time. For example, that's how you do today if you want to query
whether your WebGL context actually has antialiasing  (the antialias
attribute defaults to true, but actual antialiasing support is
non-mandatory).

See:
http://www.khronos.org/registry/webgl/specs/latest/1.0/#2.1

Benoit




 -Boris

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: WebGL 2.0

2014-05-06 Thread Benoit Jacob
2014-05-06 11:04 GMT-04:00 Anne van Kesteren ann...@annevk.nl:

 On Tue, May 6, 2014 at 3:57 PM, Thomas Zimmermann
 tzimmerm...@mozilla.com wrote:
  I think Khronos made a bad experience with backwards compatible APIs
  during OpenGL's history. They maintained a compatible API for OpenGL for
  ~15 years until it was huge and crufty. Mode switches are their solution
  to the problem.

 Yes, but Khronos can say that at some point v1 is no longer supported.
 Or particular GPU vendors can say that. But this API is for the web,
 where we can't and where we've learned repeatedly that mode switches
 are terrible in the long run.


For the record, the only time Khronos broke compatibility with a new GL
API version was with the release of OpenGL ES 2.0 (on which WebGL 1 is
based), which dropped support for the OpenGL ES 1.0 API. That's the only ever
instance: newer ES versions (like ES 3.0, on which WebGL 2 is based) are
strict supersets, and regular non-ES OpenGL versions are always supersets
--- all the way from 1992 OpenGL 1.0 to the latest OpenGL 4.4.

This is just to provide a data point that OpenGL has a long track record of
strictly preserving long-term API compatibility.

The other point I'm reading above is about mode switches. I think you're
making a valid point here. I also think that the particulars of WebGL2
still make it a decent trade-off. Indeed, the alternative to doing WebGL2
is to expose the same functionality as a collection of WebGL 1 extensions
(1) (In fact, some of that functionality is already exposed (2)). We could
take that route. However, that would require figuring out the interactions
for all possible subsets of that set of extensions. There would be
nontrivial details to sort out in writing the specs, and in writing
conformance tests. To get a feel of the complexity of interactions between
different OpenGL extensions (3). Just exposing this entire set of
extensions at once as WebGL2 reduces a lot of the complexity of
interactions.

Some more particulars of WebGL2 may be useful to spell out here to clarify
why this is a reasonable thing for us to implement.

WebGL2 follows ES 3.0 which loosely follows OpenGL 3.2 from 2009, and most
of it is OpenGL 3.0 from 2008. So this API has been well adopted and tested
in the real world for five years now.

ES 3.0 functionality is universally supported on current desktop hardware,
and is the standard for newer mobile hardware too, even in the low end (for
example, all Adreno 300 mobile GPUs support it).

We have received consistent feedback from game developers that WebGL 2
would make it much easier for them to port their newer rendering paths to
the Web.

The spec process is already well on its way with other browser vendors on
board (Google, Apple) as one can see from public_webgl mailing list
archives.

Benoit

(1) http://www.khronos.org/registry/webgl/extensions/
(2) E.g.
http://www.khronos.org/registry/webgl/extensions/WEBGL_draw_buffers/ and
http://www.khronos.org/registry/webgl/extensions/ANGLE_instanced_arrays/
(3) Search for the phrase affects the definition of this extension in the
language of OpenGL extension specs such as
http://www.khronos.org/registry/gles/extensions/EXT/EXT_draw_buffers.txt to
mention just one extension that's become a WebGL extension and part of
WebGL 2.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: WebGL 2.0

2014-05-06 Thread Benoit Jacob
2014-05-06 12:11 GMT-04:00 Boris Zbarsky bzbar...@mit.edu:

 On 5/6/14, 12:03 PM, Benoit Jacob wrote:

 Indeed, the alternative to doing WebGL2
 is to expose the same functionality as a collection of WebGL 1 extensions


 I think Anne's question, if I understood it right, is why this requires a
 new context ID.

 I assume the argument is that if you ask for the WebGL2 context id and get
 something back that guarantees that all the new methods are implemented.
  But one could do something similar via implementations simply guaranteeing
 that if you ask for the WebGL context ID and get back an object and it has
 any of the new methods on it, then they're all present and work.

 Are there other reasons there's a separate context id for WebGL2?


To what extent does what I wrote in my previous email, regarding
interactions between different extensions, answer your question?

With the example approach you suggested above, one would have to specify
extensions separately and for each of them, their possible interactions
with other extensions.

Moreover, most of the effort spent doing that would be of little use in
practice as current desktop hardware / newer mobile hardware supports all
of that functionality. And realistically, the primary target audience there
is games, and games already have their code paths written for ES2 and/or
for ES3 i.e. they already expect the mode switch.

Benoit
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: WebGL 2.0

2014-05-06 Thread Benoit Jacob
2014-05-06 12:53 GMT-04:00 Benoit Jacob jacob.benoi...@gmail.com:




 2014-05-06 12:32 GMT-04:00 Boris Zbarsky bzbar...@mit.edu:

 On 5/6/14, 12:25 PM, Benoit Jacob wrote:

 To what extent does what I wrote in my previous email, regarding
 interactions between different extensions, answer your question?


 I'm not sure it answers it at all.


  With the example approach you suggested above, one would have to specify
 extensions separately and for each of them, their possible interactions
 with other extensions.


 Why?  This is the part I don't get.

 The approach I suggest is that a UA is allowed to expose the new methods
 on the return value of getContext(webgl), but only if it exposes all of
 them.  In other words, it's functionally equivalent to the WebGL2 spec,
 except the way you get a context continues to be getContext(webgl) (which
 is the part that Anne was concerned about, afaict).


 Ah, I see the confusion now. So the first reason why what you're
 suggesting wouldn't work for WebGL is that WebGL extensions may add
 functionality without changing any IDL at all.

 A good example (that is a WebGL 1 extension and that is part of WebGL 2)
 is float textures.
 http://www.khronos.org/registry/webgl/extensions/OES_texture_float/

 WebGL has a method, texImage2D, that allows uploading texture data; and it
 has various enum values, like BYTE and INT and FLOAT, that allow specifying
 the type of data. By default, WebGL does not allow FLOAT to be passed for
 the type parameter of the texImage2D method. The OES_texture_float
 extension makes that allowed. So this adds real functionality (enables using
 textures with floating-point RGB components) without changing anything at
 the DOM interface level.

 There are more examples. Even when OES_texture_float is supported, FLOAT
 textures don't support linear filtering by default. That is, in turn,
 enabled by the OES_texture_float_linear extension,
 http://www.khronos.org/registry/webgl/extensions/OES_texture_float_linear/

 Both these WebGL extensions are part of core WebGL 2, so they are relevant
 examples.


Well, I guess the "but only if it exposes all of them" part of your
proposal would make this work, because other parts of WebGL2 do add new
methods and constants.

But, suppose that an application relies on WebGL2 features that don't
change IDL (there are many more, besides the two I mentioned). In your
proposal, they would have to either check for unrelated things on the WebGL
interface, which seems clunky, or try to go ahead and try to use the
feature and use WebGL.getError to check for errors if that's unsupported,
which would be slow and error-prone.

Benoit



 Benoit






 -Boris
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: WebGL 2.0

2014-05-06 Thread Benoit Jacob
2014-05-06 13:07 GMT-04:00 Boris Zbarsky bzbar...@mit.edu:

 On 5/6/14, 12:53 PM, Benoit Jacob wrote:

 Ah, I see the confusion now. So the first reason why what you're
 suggesting
 wouldn't work for WebGL is that WebGL extension my add functionality
 without changing any IDL at all.


 Sure, but we're not talking about arbitrary WebGL extensions.  We're
 talking about specifically the set of things we want to expose in WebGL2,
 which do include new methods.

 In particular, the contract would be that if any of the new methods are
 supported, then FLOAT texture upload is also supported.

 The fact that these may be extensions under the hood doesn't seem really
 relevant, as long as the contract is that the support is all-or-nothing.


Our last emails crossed, obviously :)

My point is it would be a clunky API if, in order to test for feature X
that does not affect the DOM interface, one had to test for another
unrelated feature Y.

Anyway I've shared what I think I know on this topic; I'll let other people
(who contrary to me are working on WebGL at the moment) give their own
answers.

Benoit
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement: WebGL 2.0

2014-05-06 Thread Benoit Jacob
2014-05-06 13:15 GMT-04:00 Ralph Giles gi...@mozilla.com:

 On 2014-05-06 9:53 AM, Benoit Jacob wrote:

  By default, WebGL does not allow FLOAT to be passed for
  the type parameter of the texImage2D method. The OES_texture_float
  extension make that allowed.

 I have trouble seeing how this could break current implementations. If a
 page somehow looks for the error as a feature or version check it should
 still get the correct answer.


I didn't say it would break anything. I just commented on why enabling this
feature wasn't just a switch at the DOM interface level.



  There are more examples. Even when OES_texture_float is supported, FLOAT
  textures don't support linear filtering by default. That is, in turn,
  enabled by the OES_texture_float_linear extension,
 
 http://www.khronos.org/registry/webgl/extensions/OES_texture_float_linear/

 This looks similar. Are there extensions which cause rendering
 differences merely by enabling them?


No. WebGL2 does not break any WebGL 1 API. WebGL extensions do not break
any API.



 E.g. everything in webgl2 is exposed on a 'webgl' context, and calling
 getExtension to enable extensions which are also webgl2 core features is
 a no-op? I guess the returned interface description would need new spec
 language in webgl2 if there are ever extensions with the same name
 written against different versions of the spec.


No, there just won't be different extensions for WebGL2 vs WebGL1 with the
same name.

 Is this what you mean
 about considering (and writing tests for) all the interactions?

No, I meant something different. Different extensions add different spec
language, and sometimes it's nontrivial to work out the details of how
these additions to the spec interplay. For example, if you start with an
API that allows doing only additions, and only supports integers; if you
then specify an extension for floating-point numbers, and another extension
for multiplication, then you need to work out the interaction between the
two: are you allowing multiplication of floating-point numbers? Do you
specify it in the multiplication spec or in the floating-point spec?




 It looks like doing so would violate to webgl1 spec. An attempt to use
 any features of an extension without first calling getExtension to
 enable it must generate an appropriate GL error and must not make use of
 the feature. https://www.khronos.org/registry/webgl/specs/1.0/#5.14.14
 It would be like getExtension was silently called on context creation.


That is indeed the only difference between a WebGL2 rendering context and a
WebGL1 rendering context. (In a theoretical world where all of WebGL2 would
be specced as WebGL1 extensions).

Benoit
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: NS_IMPL_ISUPPORTS and friends are now variadic

2014-04-28 Thread Benoit Jacob
2014-04-28 0:18 GMT-04:00 Birunthan Mohanathas birunt...@mohanathas.com:

 Bugs 900903 and 900908 introduced variadic variants of
 NS_IMPL_ISUPPORTS, NS_IMPL_QUERY_INTERFACE, NS_IMPL_CYCLE_COLLECTION,
 etc. and removed the old numbered macros. So, instead of e.g.
 NS_IMPL_ISUPPORTS2(nsFoo, nsIBar, nsIBaz), simply use
 NS_IMPL_ISUPPORTS(nsFoo, nsIBar, nsIBaz) instead. Right now, the new
 macros support up to 50 variadic arguments.


Awesome, congrats, and thanks!

Question: is there a plan to switch to an implementation based on variadic
templates when we will stop supporting compilers that don't support them?
Do you know when that would be (of the compilers that we currently support,
which ones don't support variadic templates?)

Benoit



 Note that due to technical details, the new macros will reject uses
 with zero variadic arguments. In such cases, you will want to continue
 to use the zero-numbered macro, e.g. NS_IMPL_ISUPPORTS0(nsFoo).

 Cheers,
 Biru
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: NS_IMPL_ISUPPORTS and friends are now variadic

2014-04-28 Thread Benoit Jacob
2014-04-28 14:18 GMT-04:00 Trevor Saunders trev.saund...@gmail.com:

 On Mon, Apr 28, 2014 at 02:07:07PM -0400, Benoit Jacob wrote:
  2014-04-28 12:17 GMT-04:00 Birunthan Mohanathas 
 birunt...@mohanathas.com:
 
   On 28 April 2014 14:18, Benoit Jacob jacob.benoi...@gmail.com wrote:
Question: is there a plan to switch to an implementation based on
   variadic
templates when we will stop supporting compilers that don't support
   them? Do
you know when that would be (of the compilers that we currently
 support,
which ones don't support variadic templates?)
  
   I don't think a purely variadic template based solution is possible
   (e.g. due to argument stringification employed by NS_IMPL_ADDREF and
   others).
  
 
  Would it be possible to have a variadic macro that takes N arguments,
  stringifies them, and passes all 2N resulting values (the original N
  arguments and their N stringifications) to a variadic template?

 Well, the bigger problem is that those macros are defining member
 functions, so I don't see how you could do that with a variadic
 template, except perhaps for cycle collection if we can have a struct
 that takes a variadic template, and then use the variadic template args
 in member functions.


Right, NS_IMPL_CYCLE_COLLECTION and its variants are what I have in mind
here.
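
For concreteness, here is a minimal sketch of the "N arguments plus their N
stringifications" idea (hypothetical names, assuming C++11 variadic
templates; this is not how the actual NS_IMPL_* macros are implemented):

  #include <cstdio>

  // Variadic template that consumes (member, name) pairs.
  inline void DescribeMembers() {}

  template <typename T, typename... Rest>
  void DescribeMembers(T& member, const char* name, Rest&&... rest)
  {
    std::printf("%s at %p\n", name, static_cast<void*>(&member));
    DescribeMembers(static_cast<Rest&&>(rest)...);
  }

  // A per-arity macro is still needed to stringify each argument, but the
  // shared body can live in the variadic template above.
  #define DESCRIBE_MEMBERS_2(obj, m1, m2) \
    DescribeMembers((obj).m1, #m1, (obj).m2, #m2)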

Benoit



 Trev


 
  Benoit
 
 
  
   As for compiler support, I believe our current MSVC version is the
   only one lacking variadic templates. I don't know if/when we are going
   to switch to VS2013.
  
   On 28 April 2014 12:07, Henri Sivonen hsivo...@hsivonen.fi wrote:
Cool. Is there a script that rewrites mq patches whose context has
numbered macros to not expect numbered macros?
  
   Something like this should work (please use with caution because it's
   Perl and because I only did a quick test):
  
   perl -i.bak -0777 -pe '
   $names = join(|, (
   NS_IMPL_CI_INTERFACE_GETTER#,
   NS_IMPL_CYCLE_COLLECTION_#,
   NS_IMPL_CYCLE_COLLECTION_INHERITED_#,
   NS_IMPL_ISUPPORTS#,
   NS_IMPL_ISUPPORTS#_CI,
   NS_IMPL_ISUPPORTS_INHERITED#,
   NS_IMPL_QUERY_INTERFACE#,
   NS_IMPL_QUERY_INTERFACE#_CI,
   NS_IMPL_QUERY_INTERFACE_INHERITED#,
   NS_INTERFACE_TABLE#,
   NS_INTERFACE_TABLE_INHERITED#,
   )) =~ s/#/[1-9]\\d?/gr;
  
   sub rep {
   my ($name, $args) = @_;
   my $unnumbered_name = $name =~ s/_?\d+//r;
   my $spaces_to_remove = length($name) -
 length($unnumbered_name);
   $args =~ s/^(. {16}) {$spaces_to_remove}/\1/gm;
   return $unnumbered_name . $args;
   }
  
   s/($names)(\(.*?\))/rep($1, $2)/ges;
   ' some-patch.diff
  
  ___
  dev-platform mailing list
  dev-platform@lists.mozilla.org
  https://lists.mozilla.org/listinfo/dev-platform

 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Policing dead/zombie code in m-c

2014-04-25 Thread Benoit Jacob
2014-04-25 3:31 GMT-04:00 Henri Sivonen hsivo...@hsivonen.fi:

 On Thu, Apr 24, 2014 at 4:20 PM, Benoit Jacob jacob.benoi...@gmail.com
 wrote:
  2014-04-24 8:31 GMT-04:00 Henri Sivonen hsivo...@hsivonen.fi:
 
  I have prepared a queue of patches that removes Netscape-era (circa
  1999) internationalization code that efforts to implement the Encoding
  Standard have shown unnecessary to have in Firefox. This makes libxul
  on ARMv7 smaller by 181 KB, so that's a win.
 
  Have we measured the impact of this change on actual memory usage (as
  opposed to virtual address space size) ?

 No, we haven't. I don't have a B2G phone, but I could give my whole
 patch queue in one diff to someone who wants to try.

  Have we explored how much this problem could be automatically helped by
 the
  linker being smart about locality?

 Not to my knowledge, but I'm very skeptical about getting these
 benefits by having the linker be smart so that the dead code ends up
 on memory pages that  aren't actually mapped to real RAM.

 The code that is no longer in use is sufficiently intermingled with
 code that's still in use. Useful and useless plain old C data is
 included side-by-side. Useful and useless classes are included next to
 each other in unified compilation units. Since the classes are
 instantiated via XPCOM, a linker that's unaware of XPCOM couldn't tell
 that some classes are in use and some aren't via static analysis. All
 of them would look equally dead or alive depending on what view you
 take on the root of the caller chain being function pointers in a
 contract ID table.

 Using PGO to determine what's dead code and what's not wouldn't work,
 either, if the profiling run was load mozilla.org, because the run
 would exercise too little code, or if the profiling run was all the
 unit tests, because the profiling run would exercise too much code.


Thanks for this answer, it does totally make sense (and shed light on the
specifics here that make this hard to solve automatically).
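
To illustrate the point about the contract ID table with a toy example
(hypothetical names, not the real XPCOM module macros): every constructor is
rooted in a table of function pointers, so from the linker's point of view
the unused entries look just as reachable as the used ones.

  #include <cstddef>

  void* NewUsedThing()   { return new int(1); }
  void* NewUnusedThing() { return new int(2); }  // dead in practice, alive to the linker

  struct ContractEntry {
    const char* contractID;
    void* (*construct)();
  };

  const ContractEntry kEntries[] = {
    { "@example.org/used-thing;1",   &NewUsedThing },
    { "@example.org/unused-thing;1", &NewUnusedThing },
  };

  const std::size_t kEntryCount = sizeof(kEntries) / sizeof(kEntries[0]);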

Benoit




 On Fri, Apr 25, 2014 at 2:03 AM, Ehsan Akhgari ehsan.akhg...@gmail.com
 wrote:
  * Are we building and shipping dead code in ICU on B2G?
 
  No.  That is at least partly covered by bug 864843.

 Using system ICU seems wrong in terms of correctness. That's the
 reason why we don't use system ICU on Mac and desktop Linux, right?

 For a given phone, the Android base system practically never updates,
 so for a given Firefox version, the Web-exposed APIs would have as
 many behaviors as there are differing ICU snapshots on different
 Android versions out there.

 As for B2G, considering that Gonk is supposed to update less often
 than Gecko, it seems like a bad idea to have ICU be part of Gonk
 rather than part of Gecko on B2G.

  In my experience, ICU is unfortunately a hot potato. :(  The real blocker
  there is finding someone who can tell us what bits of ICU _are_ used in
 the
  JS engine.

 Apart from ICU initialization/shutdown, the callers seem to be
 http://mxr.mozilla.org/mozilla-central/source/js/src/builtin/Intl.cpp
 and http://mxr.mozilla.org/mozilla-central/source/js/src/jsstr.cpp#852
 .

 So the JS engine uses:
  * Collation
  * Number formatting
  * Date and time formatting
  * Normalization

 It looks like the JS engine has its own copy of the Unicode database
 for other purposes. It seems like that should be unified with ICU so
 that there'd be only one copy of the Unicode database.

 Additionally, we should probably rewrite nsCollation users to use ICU
 collation and delete nsCollation.

 Therefore, it looks like we should turn off (if we haven't already):
  * The ICU LayoutEngine.
  * Ustdio
  * ICU encoding converters and their mapping tables.
  * ICU break iterators and their data.
  * ICU transliterators and their data.

 http://apps.icu-project.org/datacustom/ gives a good idea of what
 there is to turn off.

  The parts used in Gecko for input type=number are pretty
  small.  And of course someone needs to figure out the black magic of
  conveying the information to the ICU build system.

 So it looks like we already build with UCONFIG_NO_LEGACY_CONVERSION:

 http://mxr.mozilla.org/mozilla-central/source/intl/icu/source/common/unicode/uconfig.h#264

 However, that flag is misdesigned in the sense that it considers
 US-ASCII, ISO-8859-1, UTF-7, UTF-32, CESU-8, SCSU and BOCU-1 as
 non-legacy, even though, frankly, those are legacy, too. (UTF-16 is
 legacy also, but it's legacy we need, since both ICU and Gecko are
 UTF-16 legacy code bases!)

 http://mxr.mozilla.org/mozilla-central/source/intl/icu/source/common/unicode/uconfig.h#267

 So I guess the situation isn't quite as bad as I thought.

 We should probably set UCONFIG_NO_CONVERSION to 1 and
 U_CHARSET_IS_UTF8 to 1 per:

 http://mxr.mozilla.org/mozilla-central/source/intl/icu/source/common/unicode/uconfig.h#248
 After all, we should easily be able to make sure that we don't use
 non-UTF-8 encodings when passing char* to ICU.

 Also, If the ICU

Re: Policing dead/zombie code in m-c

2014-04-24 Thread Benoit Jacob
2014-04-24 8:31 GMT-04:00 Henri Sivonen hsivo...@hsivonen.fi:

 I have prepared a queue of patches that removes Netscape-era (circa
 1999) internationalization code that efforts to implement the Encoding
 Standard have shown unnecessary to have in Firefox. This makes libxul
 on ARMv7 smaller by 181 KB, so that's a win.


Have we measured the impact of this change on actual memory usage (as
opposed to virtual address space size) ?

Have we explored how much this problem could be automatically helped by the
linker being smart about locality?

I totally agree about the value of removing dead code (if only to make the
codebase easier to read and maintain), I just wonder if there might be a
shortcut to get the short-term memory usage benefits that you mention.

Benoit




 However, especially in the context of slimming down our own set of
 encoding converters, it's rather demotivating to see that at least on
 desktop, we are building ICU encoding converters that we don't use.
 See https://bugzilla.mozilla.org/show_bug.cgi?id=944348 . This isn't
 even a matter of building code that some might argue we maybe should
 use (https://bugzilla.mozilla.org/show_bug.cgi?id=724540). We are even
 building ICU encoding converters that we should never use even if we
 gave up on our own converters. We're building stuff like BOCU-1 that's
 explicitly banned by the HTML spec! (In general, it's uncool that
 abandoned researchy stuff like BOCU-1 is included by default in a
 widely-used production library like ICU.)

 Questions:
  * Are we building and shipping dead code in ICU on B2G?
  * The bug about building useless code in ICU has been open since
 November. Whose responsibility is it to make sure we stop building
 stuff that we don't use in ICU?
  * Do we have any mechanisms in place for preventing stuff like the
 ICU encoding converters becoming part of the build in the future? When
 people propose to import third-party code, do reviewers typically ask
 if we are importing more than we intend to use? Clearly, considering
 that it is hard to get people to remove unused code from the build
 after the code has landed, we shouldn't have allowed code like the ICU
 encoding converters to become part of the build in the first place?
  * How should we identify code that we build but that isn't used anywhere?
  * How should we identify code that we build as part of Firefox but is
 used only in other apps (Thunderbird, SeaMonkey, etc.)?
  * Are there obvious places that people should inspect for code that's
 being built but not used? Some libs that got imported for WebRTC
 maybe?

 --
 Henri Sivonen
 hsivo...@hsivonen.fi
 https://hsivonen.fi/
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Oculus VR support somehwat-non-free code in the tree

2014-04-15 Thread Benoit Jacob
2014-04-14 18:41 GMT-04:00 Vladimir Vukicevic vladim...@gmail.com:

 3. We do nothing.  This option won't happen: I'm tired of not having Gecko
 and Firefox at the forefront of web technology in all aspects.


Is VR already Web technology i.e. is another browser vendor already
exposing this, or would we be the first to?

If VR is not yet a thing on the Web, could you elaborate on why you think
it should be?

I'm asking because the Web has so far mostly been a common denominator,
conservative platform. For example, WebGL stays at a distance behind the
forefront of OpenGL innovation. I thought of that as being intentional. Is
VR a departure from this, or is it already much more mainstream than I
thought it was?

Benoit
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Oculus VR support somehwat-non-free code in the tree

2014-04-15 Thread Benoit Jacob
2014-04-15 18:28 GMT-04:00 Andreas Gal andreas@gmail.com:


 You can’t beat the competition by fast following the competition. Our
 competition are native, closed, proprietary ecosystems. To beat them, the
 Web has to be on the bleeding edge of technology. I would love to see VR
 support in the Web platform before its available as a builtin capability in
 any major native platform.


Can’t we? (referring to: "You can’t beat the competition by fast
following the competition.")

The Web has a huge advantage over the competition (native, closed,
proprietary ecosystems):

The web only needs to be good enough.

Look at all the wins that we're currently scoring with Web games. (I
mention games because that's relevant to this thread). My understanding of
this year's GDC announcements is that we're winning. To achieve that, we
didn't really give the web any technical superiority over other platforms;
in fact, we didn't even need to achieve parity. We merely made it good
enough. For example, the competition is innovating with a completely new
platform to run native code on the web, but with asm.js and emscripten
we're showing that javascript is in fact good enough, so we end up winning
anyway.

What we need to ensure to keep winning is 1) that the Web remains good
enough and 2) that it remains true, that the Web only needs to be good
enough.

In this respect, more innovation is not necessarily better, and in fact,
the cost of innovating in the wrong direction could be particularly high
for the Web compared to other platforms. We need to understand the above 2)
point and make sure that we don't regress it. 2) probably has something to
do with the fact that the Web is the one "write once, run anywhere"
platform and, on top of that, also offers "run forever". Indeed, compared
to other platforms, we care much more about portability and we are much
more serious about committing to long-term platform stability. Now my point
is that we can only do that by being picky with what we support. There's no
magic here; we don't get the above 2) point for free.

Benoit



 Andreas

 On Apr 15, 2014, at 2:57 PM, Robert O'Callahan rob...@ocallahan.org
 wrote:

  On Wed, Apr 16, 2014 at 3:14 AM, Benoit Jacob jacob.benoi...@gmail.com
 wrote:
 
  If VR is not yet a thing on the Web, could you elaborate on why you
 think
  it should be?
 
  I'm asking because the Web has so far mostly been a common denominator,
  conservative platform. For example, WebGL stays at a distance behind the
  forefront of OpenGL innovation. I thought of that as being intentional.
 
 
  That is not intentional. There are historical and pragmatic reasons why
 the
  Web operates well in fast follow mode, but there's no reason why we
 can't
  lead as well. If the Web is going to be a strong platform it can't always
  be the last to get shiny things. And if Firefox is going to be strong we
  need to lead on some shiny things.
 
  So we need to solve Vlad's problem.
 
  Rob
  --
  Jtehsauts  tshaei dS,o n Wohfy  Mdaon  yhoaus  eanuttehrotraiitny  eovni
  le atrhtohu gthot sf oirng iyvoeu rs ihnesa.rt sS?o  Whhei csha iids
  teoa
  stiheer :p atroa lsyazye,d  'mYaonu,r  sGients  uapr,e  tfaokreg
 iyvoeunr,
  'm aotr  atnod  sgaoy ,h o'mGee.t  uTph eann dt hwea lmka'n?  gBoutt
  uIp
  waanndt  wyeonut  thoo mken.o w
  ___
  dev-platform mailing list
  dev-platform@lists.mozilla.org
  https://lists.mozilla.org/listinfo/dev-platform


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: mozilla::Atomic considered harmful

2014-04-02 Thread Benoit Jacob
2014-04-02 11:03 GMT-04:00 Honza Bambas honzab@firemni.cz:

 On 4/2/2014 11:33 AM, Nicolas B. Pierron wrote:


 --lock(mRefCnt);
 if (lock(mRefCnt) == 0) {
delete this;
 }

 This way, this is more obvious that we might not be doing the right
 things, as long as we are careful to refuse AtomicHandler references in
 reviews.


 I personally don't think this will save us.  This can easily slip through
 review as well.

 Also, I'm using our mozilla::Atomic for not just refcounting but as an
 easy lock-less t-s counters.  If I had to change the code from mMyCounter
 += something; to mozilla::Unused  AtomicFetchAndAdd(mMyCounter,
 something); I would not be happy :)


I hope that here on dev-platform we all agree that what we're really
interested in is making it easier to *read* code, much more than making it
easier to *write* code!

Assuming that we do, then the above argument weighs very little against the
explicitness of AtomicFetchAdd saving the person reading this code from
missing the atomic part!

Benoit
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: MOZ_ASSUME_UNREACHABLE is being misused

2014-04-01 Thread Benoit Jacob
 == 3)  result = y*y*y;
else if (x == 4)  result = (y*y + 1) / (y + 10);
else if (x == 5)  result = y*y*y + y;
else if (x == 6)  result = 2*y;
else if (x == 7)  result = y*y + 3*y;
else if (x == 8)  result = 5*y*y*y + y*y;
else if (x == 9)  result = 7*y*y + 1;
else if (x == 10) result = 3*y*y*y - y + 1;
else {
  UNREACHABLE();
}

return result;
  }

Results:

Clang3.4 implements this using a jump table and does the range check
even with unreachable. Unreachable still has the effect of merging the
last case with the default case, which allows to generate slightly
smaller code.

GCC4.6 implements this as a dumb chain of conditional jumps;
unreachable has the effect of jumping to the end of the function
without a 'ret' instruction, i.e. continuing execution outside of the
function! (I actually verified that that's what happens in GDB).

Conclusion: if we hit the 'unreachable' path, with clang3.4 we're
safe, but with gcc4.6 we end up running code of unrelated functions,
typically crashing, very possibly doing exploitable things first!

*
*

Test program 3/4 - c.cpp

If-branch on a condition already known to be never met due to an
earlier unreachable statement.

Source code:

  #ifdef USE_UNREACHABLE
  #define UNREACHABLE() __builtin_unreachable()
  #else
  #define UNREACHABLE()
  #endif

  unsigned int foo(unsigned int x)
  {
bool b = x  100;

if (b) {
  UNREACHABLE();
}

if (b) {
  return (x*x*x*x + x*x + 1) / (x*x*x + x + 1234);
}

return x;
  }

Results:

Clang3.4: unreachable has no effect.

GCC4.6: the unreachable statement is fully understood to make this
condition never met, and GCC4.6 uses this to omit it entirely.

Without unreachable:

.cfi_startproc
cmpl$100, %edi
movl%edi, %eax
jbe .L2
movl%edi, %ecx
xorl%edx, %edx
imull   %edi, %ecx
addl$1, %ecx
imull   %edi, %ecx
imull   %ecx, %eax
addl$1234, %ecx
addl$1, %eax
divl%ecx
.L2:
rep
ret
.cfi_endproc

With unreachable:

.cfi_startproc
movl%edi, %eax
ret
.cfi_endproc


*
*

Test program 4/4 - d.cpp

If-branch on a condition already known to be always met due to an
earlier unreachable statement on the opposite condition.

Source code:

  #ifdef USE_UNREACHABLE
  #define UNREACHABLE() __builtin_unreachable()
  #else
  #define UNREACHABLE()
  #endif

  unsigned int foo(unsigned int x)
  {
bool b = x  100;

if (!b) {
  UNREACHABLE();
}

if (b) {
  return (x*x*x*x + x*x + 1) / (x*x*x + x + 1234);
}

return x;
  }

Clang3.4: unreachable has no effect.

GCC4.6: the unreachable statement is fully understood to make this
condition always met, and GCC4.6 uses this to remove the conditional
branch and unconditionally take this branch.

Without unreachable:

.cfi_startproc
cmpl$100, %edi
movl%edi, %eax
jbe .L2
movl%edi, %ecx
xorl%edx, %edx
imull   %edi, %ecx
addl$1, %ecx
imull   %edi, %ecx
imull   %ecx, %eax
addl$1234, %ecx
addl$1, %eax
divl%ecx
.L2:
rep
ret
.cfi_endproc

With unreachable:

.cfi_startproc
movl%edi, %ecx
xorl%edx, %edx
imull   %edi, %ecx
addl$1, %ecx
imull   %edi, %ecx
movl%ecx, %eax
addl$1234, %ecx
imull   %edi, %eax
addl$1, %eax
divl%ecx
ret
.cfi_endproc







2014-03-28 12:25 GMT-04:00 Benoit Jacob jacob.benoi...@gmail.com:

 Hi,

 Despite a helpful, scary comment above its definition in
 mfbt/Assertions.h, MOZ_ASSUME_UNREACHABLE is being misused. Not pointing
 fingers to anything specific here, but see
 http://dxr.mozilla.org/mozilla-central/search?q=MOZ_ASSUME_UNREACHABLEcase=true.

 The only reason why one might want an unreachability marker instead of
 simply doing nothing, is as an optimization --- a rather arcane, dangerous,
 I-know-what-I-am-doing kind of optimization.

 How can we help people not misuse?

 Should we rename it to something more explicit about what it is doing,
 such as perhaps MOZ_UNREACHABLE_UNDEFINED_BEHAVIOR ?

 Should we give typical code a macro that does what they want and sounds
 like what they want? Really, what typical code wants is a no-operation
 instead of undefined-behavior; now, that is exactly the same as
 MOZ_ASSERT(false, error). Maybe this syntax is unnecessarily annoying,
 and it would be worth adding a macro for that, i.e

Re: MOZ_ASSUME_UNREACHABLE is being misused

2014-04-01 Thread Benoit Jacob
2014-04-01 3:58 GMT-04:00 Benoit Jacob jacob.benoi...@gmail.com:

   * Remove jump table bounds checks (See a.cpp; allowing to abuse a jump
 table to jump to an arbitrary address);


Just got an idea: we could market this as WebJmp, exposing the jmp
instruction to the Web ?

We could build a pretty strong case for it; we already ship it.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: MOZ_ASSUME_UNREACHABLE is being misused

2014-04-01 Thread Benoit Jacob
2014-03-28 17:14 GMT-04:00 Benoit Jacob jacob.benoi...@gmail.com:


 2014-03-28 16:48 GMT-04:00 L. David Baron dba...@dbaron.org:

 On Friday 2014-03-28 13:41 -0700, Jeff Gilbert wrote:
  My vote is for MOZ_ASSERT_UNREACHABLE and MOZ_OPTIMIZE_FOR_UNREACHABLE.
 
  It's really handy to have something like MOZ_ASSERT_UNREACHABLE,
 instead of having a bunch of MOZ_ASSERT(false, Unreachable.) lines.
 
  Consider MOZ_ASSERT_UNREACHABLE being the same as
 MOZ_OPTIMIZE_FOR_UNREACHABLE in non-DEBUG builds.

 I agree on the first (adding a MOZ_ASSERT_UNREACHABLE), but I don't
 think MOZ_OPTIMIZE_FOR_UNREACHABLE sounds dangerous enough -- the
 name should make it clear that it's dangerous for the code to be
 reachable (i.e., the compiler can produce undefined behavior).
 MOZ_DANGEROUSLY_ASSUME_UNREACHABLE is one idea I've thought of for
 that, though it's a bit of a mouthful.


 I too agree on MOZ_ASSERT_UNREACHABLE, and on the need to make the new
 name of MOZ_ASSUME_UNREACHABLE sound really scary.

 I don't mind if the new name of MOZ_ASSUME_UNREACHABLE is really long, as
 it should rarely be used. If SpiderMonkey gurus find that they need it
 often, they can always alias it in some local header.

 I think that _ASSUME_ is too hard to understand, probably because this
 doesn't explicitly say what would happen if the assumption were violated.
 One has to understand that this is introducing a *compiler* assumption to
 understand that violating it would be Undefined Behavior.

 How about  MOZ_ALLOW_COMPILER_TO_GO_CRAZY  ;-) This is technically
 correct, and explicit!


Let's see if we can wrap up this conversation soon now. How about:

MOZ_MAKE_COMPILER_BELIEVE_IS_UNREACHABLE

The idea of _COMPILER_ here is to clarify that this macro is tweaking the
compiler's own view of the surrounding code; and the idea of _BELIEVE_ here
is that the compiler is just going to believe us, even if we say something
absurd, which I believe underlines our responsibility. I'm not a native
English speaker so don't hesitate to point out any awkwardness in this
construct...

And as agreed above, we will also introduce a MOZ_ASSERT_UNREACHABLE macro
doing MOZ_ASSERT(false, msg) and will recommend it for most users.
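
Roughly, the split would look like this (an illustrative sketch only; the
final names and the mfbt/Assertions.h plumbing may well differ):

  // Debug-only "should never happen" assertion; compiles to nothing in
  // release builds, like any other MOZ_ASSERT.
  #define MOZ_ASSERT_UNREACHABLE(msg) \
    MOZ_ASSERT(false, "MOZ_ASSERT_UNREACHABLE: " msg)

  // The dangerous compiler hint, kept separate under a scary name.
  #if defined(__clang__) || defined(__GNUC__)
  #  define MOZ_MAKE_COMPILER_BELIEVE_IS_UNREACHABLE() __builtin_unreachable()
  #elif defined(_MSC_VER)
  #  define MOZ_MAKE_COMPILER_BELIEVE_IS_UNREACHABLE() __assume(0)
  #endif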

If anyone has a better proposal or a tweak to this one, speak up! I'd like
to be able to proceed with this soon.

Benoit





 Benoit




 -David

 --
 𝄞   L. David Baron http://dbaron.org/   𝄂
 𝄢   Mozilla  https://www.mozilla.org/   𝄂
  Before I built a wall I'd ask to know
  What I was walling in or walling out,
  And to whom I was like to give offense.
- Robert Frost, Mending Wall (1914)



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: MOZ_ASSUME_UNREACHABLE is being misused

2014-04-01 Thread Benoit Jacob
2014-04-01 10:57 GMT-04:00 Benjamin Smedberg benja...@smedbergs.us:

 On 4/1/2014 10:54 AM, Benoit Jacob wrote:

 Let's see if we can wrap up this conversation soon now. How about:

  MOZ_MAKE_COMPILER_BELIEVE_IS_UNREACHABLE

 I counter-propose that we remove the macro entirely. I don't believe that
 the potential performance benefits we've identified are worth the risk.


I certainly don't object to that, but I didn't suppose that I could easily
get consensus around that.

This macro is especially heavily used in SpiderMonkey. Maybe SpiderMonkey
developers could weigh in on how large are the benefits brought by
UNREACHABLE there?

Benoit
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: mozilla::Atomic considered harmful

2014-04-01 Thread Benoit Jacob
2014-04-01 18:40 GMT-04:00 Jeff Walden jwalden+...@mit.edu:

 On 04/01/2014 02:32 PM, Ehsan Akhgari wrote:
  What do people feel about my proposal?  Do you think it improves writing
  and reviewing thread safe code to be less error prone?

 As I said in the bug, not particularly.  I don't think you can program
 with atomics in any sort of brain-off way, and I don't think the
 boilerplate difference of += versus fetch-and-add or whatever really
 affects that.  To the extent things should be done differently, it should
 be that *template* functions that deal with atomic/non-atomic versions of
 the same algorithm deserve extra review and special care, and perhaps even
 should be implemented twice, rather than sharing a single implementation.
  And I think the cases in question here are flavors of approximately a
 single issue, and we do not have a fundamental problem here to be solved by
 making the API more obtuse in practice.


How are we going to enforce (and ensure that future people enforce) that?
(The part about functions that deal with atomic/non-atomic versions of the
same algorithm deserve extra review and special care) ?

I like Ehsan's proposal because, as far as I am concerned, explicit
function names help me very well to remember to check atomic semantics;
especially if we follow the standard atomic naming where the functions
start by atomic_ , e.g. std::atomic_fetch_add.

On the other hand, if the function name that we use for that is just
operator + then it becomes very hard for me as a reviewer, because I have
to remember, every time I see a +, to check whether the variables at
hand are atomics.
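
A side-by-side illustration of the two styles, written against std::atomic so
it is self-contained (the Mozilla-specific spelling under discussion is not
shown; std::atomic_fetch_add stands in for the explicit style):

  #include <atomic>

  std::atomic<int> gCounter(0);

  void BumpImplicit()
  {
    gCounter += 1;                        // atomic, but reads like plain code
  }

  void BumpExplicit()
  {
    std::atomic_fetch_add(&gCounter, 1);  // atomicity is visible at the call site
  }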

Benoit
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: MOZ_ASSUME_UNREACHABLE is being misused

2014-03-31 Thread Benoit Jacob
2014-03-31 15:22 GMT-04:00 Chris Peterson cpeter...@mozilla.com:

 On 3/28/14, 7:03 PM, Joshua Cranmer  wrote:

 I included MOZ_ASSUME_UNREACHABLE_MARKER because that macro is the
 compiler-specific optimize me intrinsic, which I believe was the
 whole point of the original MOZ_ASSUME_UNREACHABLE.

 AFAIU, MOZ_ASSUME_UNREACHABLE_MARKER crashes on all Gecko platforms,
 but I included MOZ_CRASH to ensure the behavior was consistent for all
 platforms.


 No, MOZ_ASSUME_UNREACHABLE_MARKER tells the compiler that this code and
 everything after it can't be reached, so it needn't do anything. Clang will
 delete the code after this branch and decide to not emit any control
 flow. It may crash, but this is in the same vein that reading an
 uninitialized variable may crash: it can certainly do a lot of wrong and
 potentially exploitable things first.


 So what is an example of an appropriate use of MOZ_ASSUME_UNREACHABLE in
 Gecko today?


That's a very good question to ask at this point!

Good examples are examples where 1) it is totally guaranteed that the
location is unreachable, and 2) the surrounding code is
performance-critical for at least some caller.

Example 1:

Right *after* (not *before* !) a guaranteed crash in generic code, like
this one:

http://hg.mozilla.org/mozilla-central/file/df7b26e90378/build/annotationProcessors/CodeGenerator.java#l329

I'm not familiar with this code, but, being in a code generator, I can
trust that this might be performance critical, and is really unreachable.

Example 2:

In the default case of a performance-critical switch statement that we have
an excellent reason of thinking is completely unreachable. Example:

http://hg.mozilla.org/mozilla-central/file/df7b26e90378/js/src/gc/RootMarking.cpp#l42

Again I'm not familiar with this code, but I can trust that it's
performance-critical, and since that function is static to this cpp file, I
can trust that the callers of this function are only a few local functions
that are aware of the fact that it would be very dangerous to call this
function with a bad 'kind' (though I wish that were said in a big scary
warning). The UNREACHABLE here would typically allow the compiler to skip
checking that 'kind' is in range before implementing this switch statement
with a jump-table, so, if this code is performance-critical to the point
that the cost of checking that 'kind' is in range is significant, then the
UNREACHABLE here is useful.
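
In schematic form (an illustrative toy, not the RootMarking.cpp code):

  enum Kind { KIND_A, KIND_B, KIND_C, KIND_D };

  int Dispatch(Kind kind)
  {
    switch (kind) {
      case KIND_A: return 1;
      case KIND_B: return 2;
      case KIND_C: return 3;
      case KIND_D: return 4;
      default: __builtin_unreachable();  // caller guarantees 'kind' is in range
    }
  }

With the default case marked unreachable, the compiler may emit an unchecked
jump table; if a bad 'kind' ever does reach this function, the behavior is
undefined, which is exactly the danger this thread is about.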

Benoit
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


MOZ_ASSUME_UNREACHABLE is being misused

2014-03-28 Thread Benoit Jacob
Hi,

Despite a helpful, scary comment above its definition in mfbt/Assertions.h,
MOZ_ASSUME_UNREACHABLE is being misused. Not pointing fingers to anything
specific here, but see
http://dxr.mozilla.org/mozilla-central/search?q=MOZ_ASSUME_UNREACHABLEcase=true.

The only reason why one might want an unreachability marker instead of
simply doing nothing, is as an optimization --- a rather arcane, dangerous,
I-know-what-I-am-doing kind of optimization.

How can we help people not misuse?

Should we rename it to something more explicit about what it is doing, such
as perhaps MOZ_UNREACHABLE_UNDEFINED_BEHAVIOR ?

Should we give typical code a macro that does what they want and sounds
like what they want? Really, what typical code wants is a no-operation
instead of undefined-behavior; now, that is exactly the same as
MOZ_ASSERT(false, error). Maybe this syntax is unnecessarily annoying,
and it would be worth adding a macro for that, i.e. similar to MOZ_CRASH
but only affecting DEBUG builds? What would be a good name for it? Is it
worth keeping a close analogy with the unreachable-marker macro to steer
people away from it --- e.g. maybe MOZ_UNREACHABLE_NO_OPERATION or even
just MOZ_UNREACHABLE? So that people couldn't miss it when they look for
UNREACHABLE macros?

Benoit
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: MOZ_ASSUME_UNREACHABLE is being misused

2014-03-28 Thread Benoit Jacob
2014-03-28 13:23 GMT-04:00 Chris Peterson cpeter...@mozilla.com:

 On 3/28/14, 12:25 PM, Benoit Jacob wrote:

 Should we give typical code a macro that does what they want and sounds
 like what they want? Really, what typical code wants is a no-operation
 instead of undefined-behavior; now, that is exactly the same as
 MOZ_ASSERT(false, error). Maybe this syntax is unnecessarily annoying,
 and it would be worth adding a macro for that, i.e. similar to MOZ_CRASH
 but only affecting DEBUG builds? What would be a good name for it? Is it
 worth keeping a close analogy with the unreachable-marker macro to steer
 people away from it --- e.g. maybe MOZ_UNREACHABLE_NO_OPERATION or even
 just MOZ_UNREACHABLE? So that people couldn't miss it when they look for
 UNREACHABLE macros?


 How about replacing MOZ_ASSUME_UNREACHABLE with two new macros like:

 #define MOZ_ASSERT_UNREACHABLE() \
   MOZ_ASSERT(false, MOZ_ASSERT_UNREACHABLE)

 #define MOZ_CRASH_UNREACHABLE() \
   do {  \
 MOZ_ASSUME_UNREACHABLE_MARKER();\
 MOZ_CRASH(MOZ_CRASH_UNREACHABLE); \
   } while (0)


MOZ_ASSUME_UNREACHABLE_MARKER tells the compiler "feel free to arbitrarily
miscompile this, and anything from that point on in this branch, as you may
assume that this code is unreachable." So it doesn't really serve any
purpose to add a MOZ_CRASH after a MOZ_ASSUME_UNREACHABLE_MARKER.

Benoit




 chris
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: MOZ_ASSUME_UNREACHABLE is being misused

2014-03-28 Thread Benoit Jacob
2014-03-28 16:48 GMT-04:00 L. David Baron dba...@dbaron.org:

 On Friday 2014-03-28 13:41 -0700, Jeff Gilbert wrote:
  My vote is for MOZ_ASSERT_UNREACHABLE and MOZ_OPTIMIZE_FOR_UNREACHABLE.
 
  It's really handy to have something like MOZ_ASSERT_UNREACHABLE, instead
 of having a bunch of MOZ_ASSERT(false, Unreachable.) lines.
 
  Consider MOZ_ASSERT_UNREACHABLE being the same as
 MOZ_OPTIMIZE_FOR_UNREACHABLE in non-DEBUG builds.

 I agree on the first (adding a MOZ_ASSERT_UNREACHABLE), but I don't
 think MOZ_OPTIMIZE_FOR_UNREACHABLE sounds dangerous enough -- the
 name should make it clear that it's dangerous for the code to be
 reachable (i.e., the compiler can produce undefined behavior).
 MOZ_DANGEROUSLY_ASSUME_UNREACHABLE is one idea I've thought of for
 that, though it's a bit of a mouthful.


I too agree on MOZ_ASSERT_UNREACHABLE, and on the need to make the new name
of MOZ_ASSUME_UNREACHABLE sound really scary.

I don't mind if the new name of MOZ_ASSUME_UNREACHABLE is really long, as
it should rarely be used. If SpiderMonkey gurus find that they need it
often, they can always alias it in some local header.

I think that _ASSUME_ is too hard to understand, probably because this
doesn't explicitly say what would happen if the assumption were violated.
One has to understand that this is introducing a *compiler* assumption to
understand that violating it would be Undefined Behavior.

How about  MOZ_ALLOW_COMPILER_TO_GO_CRAZY  ;-) This is technically
correct, and explicit!

Benoit




 -David

 --
 𝄞   L. David Baron http://dbaron.org/   𝄂
 𝄢   Mozilla  https://www.mozilla.org/   𝄂
  Before I built a wall I'd ask to know
  What I was walling in or walling out,
  And to whom I was like to give offense.
- Robert Frost, Mending Wall (1914)

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: MOZ_ASSUME_UNREACHABLE is being misused

2014-03-28 Thread Benoit Jacob
2014-03-28 17:19 GMT-04:00 Mike Habicher mi...@mozilla.com:

   MOZ_UNDEFINED_BEHAVIOUR() ? Undefined behaviour is usually enough to
 get C/C++ programmers' attention.


I thought about that too; then I remembered that it was only _after_ at least
a year of working on Gecko at Mozilla that I started appreciating
how scary Undefined Behavior is. If I remember correctly, before that, I
was misunderstanding this concept as just Implementation-defined behavior
i.e. not affecting the behavior of other C++ statements, like Undefined
Behavior does.

Benoit




 --m.


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: MOZ_ASSUME_UNREACHABLE is being misused

2014-03-28 Thread Benoit Jacob
2014-03-28 17:14 GMT-04:00 Benoit Jacob jacob.benoi...@gmail.com:


 2014-03-28 16:48 GMT-04:00 L. David Baron dba...@dbaron.org:

 On Friday 2014-03-28 13:41 -0700, Jeff Gilbert wrote:
  My vote is for MOZ_ASSERT_UNREACHABLE and MOZ_OPTIMIZE_FOR_UNREACHABLE.
 
  It's really handy to have something like MOZ_ASSERT_UNREACHABLE,
 instead of having a bunch of MOZ_ASSERT(false, Unreachable.) lines.
 
  Consider MOZ_ASSERT_UNREACHABLE being the same as
 MOZ_OPTIMIZE_FOR_UNREACHABLE in non-DEBUG builds.

 I agree on the first (adding a MOZ_ASSERT_UNREACHABLE), but I don't
 think MOZ_OPTIMIZE_FOR_UNREACHABLE sounds dangerous enough -- the
 name should make it clear that it's dangerous for the code to be
 reachable (i.e., the compiler can produce undefined behavior).
 MOZ_DANGEROUSLY_ASSUME_UNREACHABLE is one idea I've thought of for
 that, though it's a bit of a mouthful.


 I too agree on MOZ_ASSERT_UNREACHABLE, and on the need to make the new
 name of MOZ_ASSUME_UNREACHABLE sound really scary.

 I don't mind if the new name of MOZ_ASSUME_UNREACHABLE is really long, as
 it should rarely be used. If SpiderMonkey gurus find that they need it
 often, they can always alias it in some local header.

 I think that _ASSUME_ is too hard to understand, probably because this
 doesn't explicitly say what would happen if the assumption were violated.
 One has to understand that this is introducing a *compiler* assumption to
 understand that violating it would be Undefined Behavior.

 How about  MOZ_ALLOW_COMPILER_TO_GO_CRAZY  ;-) This is technically
 correct, and explicit!


By the way, here is an anecdote. In some very old versions of GCC, when the
compiler identified Undefined Behavior, it emitted system commands to try
launching some video games that might be present on the system (see:
http://feross.org/gcc-ownage/ ). That actually helped more to raise
awareness of what Undefined Behavior means, than any serious explanation...

So... maybe MOZ_MAYBE_PLAY_STARCRAFT?

Benoit





 Benoit




 -David

 --
 𝄞   L. David Baron http://dbaron.org/   𝄂
 𝄢   Mozilla  https://www.mozilla.org/   𝄂
  Before I built a wall I'd ask to know
  What I was walling in or walling out,
  And to whom I was like to give offense.
- Robert Frost, Mending Wall (1914)



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: We live in a memory-constrained world

2014-02-28 Thread Benoit Jacob
http://en.wikipedia.org/wiki/Plain_Old_Data_Structures confirms that POD
can't have a vptr :-)
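
A quick way to check the claim with C++11 type traits:

  #include <type_traits>

  struct Plain   { int x; };
  struct Virtual { int x; virtual void f(); };

  static_assert(std::is_pod<Plain>::value,
                "no vptr, plain old data");
  static_assert(!std::is_pod<Virtual>::value,
                "a virtual method adds a vptr, so this is not POD");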

Benoit


2014-02-28 7:39 GMT-05:00 Henri Sivonen hsivo...@hsivonen.fi:

 On Fri, Feb 28, 2014 at 1:04 PM, Neil n...@parkwaycc.co.uk wrote:
  At least under MSVC, they have vtables, so they need to be constructed,
 so
  they're not static.

 So structs that inherit at least one virtual method can't be plain old
 C data? That surprises me. And we still don't want to give the dynamic
 linker initializer code to run, right?

 --
 Henri Sivonen
 hsivo...@hsivonen.fi
 https://hsivonen.fi/
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: List of deprecated constructs [was Re: A proposal to reduce the number of styles in Mozilla code]

2014-01-07 Thread Benoit Jacob
2014/1/7 L. David Baron dba...@dbaron.org

 On Tuesday 2014-01-07 09:13 +0100, Ms2ger wrote:
  On 01/07/2014 01:11 AM, Joshua Cranmer  wrote:
  Since Benjamin's message of November 22:
  news://
 news.mozilla.org/mailman.11861.1385151580.23840.dev-platf...@lists.mozilla.org
 
  (search for Please use NS_WARN_IF instead of NS_ENSURE_SUCCESS if you
  don't have an NNTP client). Although there wasn't any prior discussion
  of the wisdom of this change, whether or not to use
  NS_ENSURE_SUCCESS-like patterns has been the subject of very acrimonious
  debates in the past and given the voluminous discussion on style guides
  in recent times, I'm not particularly inclined to start yet another one.
 
  FWIW, I've never seen much support for this change from anyone else
  than Benjamin, and only in his modules the NS_ENSURE_* macros have
  been effectively deprecated.

 I'm happy about getting rid of NS_ENSURE_*.


I would like a random voice in favor of deprecating NS_ENSURE_* for the
reason that hiding control flow behind macros is IMO one of the most evil
usage patterns of macros.

Benoit



 -David

 --
 𝄞   L. David Baron http://dbaron.org/   𝄂
 𝄢   Mozilla  https://www.mozilla.org/   𝄂
  Before I built a wall I'd ask to know
  What I was walling in or walling out,
  And to whom I was like to give offense.
- Robert Frost, Mending Wall (1914)

 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: List of deprecated constructs [was Re: A proposal to reduce the number of styles in Mozilla code]

2014-01-07 Thread Benoit Jacob
2014/1/7 Neil n...@parkwaycc.co.uk

 Benoit Jacob wrote:

  I would like a random voice in favor of deprecating NS_ENSURE_* for the
 reason that hiding control flow behind macros is IMO one of the most evil
 usage patterns of macros.

  So you're saying that

 nsresult rv = Foo();
 NS_ENSURE_SUCCESS(rv, rv);

 is hiding the control flow of the equivalent JavaScript

 try {
Foo();
 } catch (e) {
throw e;
 }

 except of course that nobody writes JavaScript like that...


All I mean is that NS_ENSURE_SUCCESS hides a 'return' statement.

#define NS_ENSURE_SUCCESS(res, ret)                                       \
  do {                                                                    \
    nsresult __rv = res; /* Don't evaluate |res| more than once */        \
    if (NS_FAILED(__rv)) {                                                \
      NS_ENSURE_SUCCESS_BODY(res, ret)                                    \
      return ret;                                                         \
    }                                                                     \
  } while(0)


For example, if I'm scanning a function for possible early returns (say I'm
debugging a bug where we're forgetting to close or delete a thing before
returning), I now need to scan for NS_ENSURE_SUCCESS in addition to
scanning for return. That's why hiding control flow in macros is, in my
opinion, never acceptable.

Benoit



 --
 Warning: May contain traces of nuts.

 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: List of deprecated constructs [was Re: A proposal to reduce the number of styles in Mozilla code]

2014-01-07 Thread Benoit Jacob
2014/1/7 Kyle Huey m...@kylehuey.com

 On Tue, Jan 7, 2014 at 11:29 AM, Benoit Jacob jacob.benoi...@gmail.comwrote:

 For example, if I'm scanning a function for possible early returns (say
 I'm
 debugging a bug where we're forgetting to close or delete a thing before
 returning), I now need to scan for NS_ENSURE_SUCCESS in addition to
 scanning for return. That's why hiding control flow in macros is, in my
 opinion, never acceptable.


 If you care about that 9 times out of 10 you are failing to use an RAII
 class when you should be.


I was talking about reading code, not writing code. I spend more time
reading code that I didn't write, than writing code. Of course I do use
RAII helpers when I write this kind of code myself, in fact just today I
landed two more such helpers in gfx/gl/ScopedGLHelpers.* ...
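
For readers unfamiliar with the pattern, a minimal sketch of such a scoped
helper (hypothetical names, not the actual ScopedGLHelpers code): the cleanup
runs on every return path, so an early return hidden behind a macro cannot
leak the resource.

  class ScopedThing
  {
  public:
    explicit ScopedThing(int aHandle) : mHandle(aHandle) {}
    ~ScopedThing() { Release(mHandle); }  // runs on *every* exit path

  private:
    static void Release(int /* aHandle */) { /* free the underlying resource */ }
    ScopedThing(const ScopedThing&);             // not copyable
    ScopedThing& operator=(const ScopedThing&);  // not assignable
    int mHandle;
  };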

Benoit



 Since we seem to be voting now, I am moderately opposed to making XPCOM
 method calls more boilerplate-y, and very opposed to removing
 NS_ENSURE_SUCCESS without some sort of easy shorthand to test an nsresult
 and print to the console if it is a failure.  I know for sure that some of
 the other DOM peers (smaug and bz come to mind) feel similarly about the
 latter.

 - Kyle

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Can we start using C++ STL containers in Mozilla code?

2013-12-10 Thread Benoit Jacob
Note that we already do use at least those STL containers for which we
don't have an equivalent in the tree. I've seen usage of at least:
std::map, std::set, and std::bitset.

I think that Nick has a good point about reporting memory usage, but I
think that the right solution to this problem is to add Mozilla equivalents
for the STL data structures that we need to fork, not to skew all your
design to use the data structures that we have instead of the ones you need.

Forking STL data structures into Mozilla code seems reasonable to me.
Besides memory reporting, it also gives us another benefit: guarantee of
consistent implementation across platforms and compilers.

Benoit



2013/12/10 Nicholas Nethercote n.netherc...@gmail.com

 On Tue, Dec 10, 2013 at 8:28 PM, Chris Pearce cpea...@mozilla.com wrote:
  Hi All,
 
  Can we start using C++ STL containers like std::set, std::map,
 std::queue in
  Mozilla code please?


 https://developer.mozilla.org/en-US/docs/Using_CXX_in_Mozilla_code#C.2B.2B_and_Mozilla_standard_libraries
 has the details.

 As a general rule of thumb, prefer the use of MFBT or XPCOM APIs to
 standard C++ APIs. Some of our APIs include extra methods not found in
 the standard API (such as those reporting the size of data
 structures). 

 I'm particularly attuned to that last point.  Not all structures grow
 large enough to be worth reporting, but many are.  In the past I've
 converted STL containers to Mozilla containers just to get memory
 reporting.

 Nick
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Can we start using C++ STL containers in Mozilla code?

2013-12-10 Thread Benoit Jacob
Also note that IIUC, the only thing that prevents us from solving the
memory-reporting problem using an STL allocator is that the spec doesn't
allow us to rely on storing per-object member data on an STL allocator.

Even without that, we could at least have an STL allocator doing
per-STL-container-class memory reporting, so that we can at least know how
much memory is taken by all std::set instances together. Just so we know if that
ever becomes a significant portion of dark matter.
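
A rough sketch of what such a counting allocator could look like (untested
and deliberately minimal; real reporting in Gecko would go through memory
reporters rather than a bare counter):

  #include <atomic>
  #include <cstddef>

  template <typename T>
  struct CountingAllocator
  {
    typedef T value_type;
    static std::atomic<std::size_t> sLiveBytes;  // one counter per rebound type

    CountingAllocator() {}
    template <typename U> CountingAllocator(const CountingAllocator<U>&) {}

    T* allocate(std::size_t n)
    {
      sLiveBytes += n * sizeof(T);
      return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t n)
    {
      sLiveBytes -= n * sizeof(T);
      ::operator delete(p);
    }
  };
  template <typename T>
  std::atomic<std::size_t> CountingAllocator<T>::sLiveBytes(0);

  template <typename T, typename U>
  bool operator==(const CountingAllocator<T>&, const CountingAllocator<U>&) { return true; }
  template <typename T, typename U>
  bool operator!=(const CountingAllocator<T>&, const CountingAllocator<U>&) { return false; }

  // e.g.: std::set<int, std::less<int>, CountingAllocator<int> > s;

Because the allocator gets rebound to the container's internal node type,
the counter naturally distinguishes container classes, which is roughly the
per-STL-container-class granularity mentioned above.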

Benoit


2013/12/10 Benoit Jacob jacob.benoi...@gmail.com

 Note that we already do use at least those STL containers for which we
 don't have an equivalent in the tree. I've seen usage of at least:
 std::map, std::set, and std::bitset.

 I think that Nick has a good point about reporting memory usage, but I
 think that the right solution to this problem is to add Mozilla equivalents
 for the STL data structures that we need to fork, not to skew all your
 design to use the data structures that we have instead of the ones you need.

 Forking STL data structures into Mozilla code seems reasonable to me.
 Besides memory reporting, it also gives us another benefit: guarantee of
 consistent implementation across platforms and compilers.

 Benoit



 2013/12/10 Nicholas Nethercote n.netherc...@gmail.com

 On Tue, Dec 10, 2013 at 8:28 PM, Chris Pearce cpea...@mozilla.com
 wrote:
  Hi All,
 
  Can we start using C++ STL containers like std::set, std::map,
 std::queue in
  Mozilla code please?


 https://developer.mozilla.org/en-US/docs/Using_CXX_in_Mozilla_code#C.2B.2B_and_Mozilla_standard_libraries
 has the details.

 As a general rule of thumb, prefer the use of MFBT or XPCOM APIs to
 standard C++ APIs. Some of our APIs include extra methods not found in
 the standard API (such as those reporting the size of data
 structures). 

 I'm particularly attuned to that last point.  Not all structures grow
 large enough to be worth reporting, but many are.  In the past I've
 converted STL containers to Mozilla containers just to get memory
 reporting.

 Nick
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Can we start using C++ STL containers in Mozilla code?

2013-12-10 Thread Benoit Jacob
2013/12/10 Chris Pearce cpea...@mozilla.com

 It seems to me that we should be optimizing for developer productivity
 first, and use profiling tools to find code that needs to be optimized.

 i.e. we should be able to use STL containers where we need basic ADTs in
 day-to-day coding, and if instances of these containers show up in profiles
 then we should look at moving individual instances over to more optimized
 data structures.


 On 12/11/2013 4:42 AM, Benjamin Smedberg wrote:

 njn already mentioned the memory-reporting issue.


 We already have this problem with third party libraries that we use. We
 should work towards having a port of the STL that uses our memory
 reporters, so that we can solve this everywhere, and influence the size of
 generated code for these templates.


I agree with the above.

I would also like to underline an advantage of the STL's design: the API is
very consistent across containers, which makes it easy to switch
containers (e.g. switch between map and unordered_map) and recompile.

This has sometimes been derided as a footgun as one can unintentionally use
a container with an algorithm that isn't efficient with it.

But this also has a really nice, important effect: that one can avoid
worrying too early about optimization details, such as whether a binary
tree is efficient enough for a given use case or whether a hash table is
needed instead.
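
For instance (a trivial sketch), iterator-based code does not need to change
when the underlying container does; going from a tree-based map to a hash map
is a one-line edit followed by a recompile:

  #include <map>
  #include <string>
  #include <unordered_map>

  typedef std::map<std::string, int> Table;
  // typedef std::unordered_map<std::string, int> Table;  // the one-line swap

  int Sum(const Table& aTable)
  {
    int total = 0;
    for (Table::const_iterator it = aTable.begin(); it != aTable.end(); ++it) {
      total += it->second;
    }
    return total;
  }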

By contrast, our current Mozilla containers each have their own API and no
equivalent of the STL's iterators, so code using one container becomes
married to it. I believe that this circumstance is why optimization
details have been brought up IMHO prematurely in this thread, needlessly
complicating this conversation.

2013/12/10 Robert O'Callahan rob...@ocallahan.org

 Keep in mind that proliferation of different types for the same
 functionality hurts developer productivity in various ways, especially when
 they have quite different APIs. That's the main reason I'm not excited
 about widespread usage of a lot of new (to us) container types.


For the same reason as described above, I believe that adopting STL
containers is the solution, not the problem! The STL shows how to design
containers that have a sufficiently similar API that, in most cases where
that makes sense (e.g. between a map and an unordered_map), you can switch
containers without having to adapt to a different API.

Benoit
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Deciding whether to change the number of unified sources

2013-12-03 Thread Benoit Jacob
I would like to know the *effective* average number of original source
files per unified source file, and see how it compares to the *requested*
one (which you are adjusting here).

Because many unified directories have a low number of source files, the
effective number of sources per unified source will be lower than the
requested one.

Benoit


2013/12/2 Mike Hommey m...@glandium.org

 Hi,

 It was already mentioned that unified builds might be causing memory
 issues. Since the number of unified sources (16) was decided more or
 less arbitrarily (in fact, it's just using whatever was used for
 ipdl/webidl builds, which, in turn just used whatever seemed a good
 tradeoff between clobber build and incremental build with a single .cpp
 changing), it would be good to make an informed decision about the
 number of unified sources.

 So, now that mozilla-inbound (finally) builds with different numbers of
 unified sources (after fixing bugs 944844 and 945563, but how long
 before another problem slips in?[1]), I got some build time numbers on my
 machine (linux, old i7, 16GB RAM) to give some perspective:

 Current setup (16):
   real11m7.986s
   user63m48.075s
   sys 3m24.677s
   Size of the objdir: 3.4GiB
   Size of libxul.so: 455MB

 12 unified sources (requires additional patches for yet-to-be-filed bugs
 (yes, plural)):
   real  11m18.572s
   user  65m24.145s
   sys   3m28.113s
   Size of the objdir: 3.5GiB
   Size of libxul.so: 464MB

 8 unified sources:
   real11m47.825s
   user68m21.888s
   sys 3m39.406s
   Size of the objdir: 3.6GiB
   Size of libxul.so: 476MB

 4 unified sources:
   real  12m52.630s
   user  76m41.208s
   sys   4m2.783s
   Size of the objdir: 3.9GiB
   Size of libxul.so: 509MB

 2 unified sources:
   real  14m59.050s
   user  90m44.928s
   sys   4m45.418s
   Size of the objdir: 4.3GiB
   Size of libxul.so: 561MB

 disabled unified sources:
   real  18m1.001s
   user  113m0.524s
   sys   5m57.970s
   Size of the objdir: 4.9GiB
   Size of libxul.so: 628MB

 Mike

 1. By the way, those type of bugs that show up at different number of
 unified sources are existing type of problems waiting to arise when we
 add source files, and running non-unified builds doesn't catch them.
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Deciding whether to change the number of unified sources

2013-12-03 Thread Benoit Jacob
2013/12/3 Chris Peterson cpeter...@mozilla.com

 On 12/3/13, 8:53 AM, Ted Mielczarek wrote:

 On 12/2/2013 11:39 PM, Mike Hommey wrote:

 Current setup (16):
real11m7.986s
user63m48.075s
sys 3m24.677s
Size of the objdir: 3.4GiB
Size of libxul.so: 455MB

  Just out of curiosity, did you try with greater than 16?


 I tested unifying 99 files. On my not-super-fast MacBook Pro, I saw no
 significant difference (up or down) in real time compared to 16 files. This
 result is in line with Mike's results showing only small improvements
 between 8, 12, and 16 files.


See my email earlier in this thread.  Until we know the effective
unification ratio (as opposed to the one we request) we can't draw
conclusions from that.

Benoit




 chris


 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Deciding whether to change the number of unified sources

2013-12-03 Thread Benoit Jacob
Here, stripping a non-opt debug linux 64bit libxul brings it down from 534
MB down to 117 MB.

Benoit


2013/12/3 L. David Baron dba...@dbaron.org

 On Tuesday 2013-12-03 10:18 -0800, Brian Smith wrote:
  Also, I would be very interested in seeing size of libxul.so for
  fully-optimized (including PGO, where we normally do PGO) builds. Do
  unified builds help or hurt libxul size for release builds? Do unified
  builds help or hurt performance in release builds?

 I'd certainly hope that nearly all of the difference in size of
 libxul.so is debugging info that wouldn't be present in a non-debug
 build.  But it's worth testing, because if that's not the case,
 there are some serious improvements that could be made in the C/C++
 toolchain...

 -David

 --
 𝄞   L. David Baron http://dbaron.org/   𝄂
 𝄢   Mozilla  https://www.mozilla.org/   𝄂
  Before I built a wall I'd ask to know
  What I was walling in or walling out,
  And to whom I was like to give offense.
- Robert Frost, Mending Wall (1914)

 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Mitigating unified build side effects Was: Thinking about the merge with unified build

2013-11-30 Thread Benoit Jacob
I'm all for reducing usage of 'using' and in .cpp files I've been switching
to doing

namespace foo {
// my code
}

instead of

using namespace foo;
// my code

where possible, as the latter leaks to other .cpp files in unified builds
and the former doesn't.

Regarding the proposal to ban 'using' only at root scope only, keep in mind
that we have conflicting *nested* namespaces too:

mozilla::ipc
mozilla::dom::ipc

so at least that class of problems won't be solved by this proposal. But I
still agree that it's a step in the right direction.
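To make that concrete, here is a toy illustration (the class names are
invented; only the namespace names are real). Once both directives are in
scope --- e.g. pulled in by two different .cpp files that end up in the same
unified file --- the name 'ipc' itself becomes ambiguous:

  namespace mozilla { namespace ipc { class SomeIpcClass; } }
  namespace mozilla { namespace dom { namespace ipc { class SomeDomIpcClass; } } }

  using namespace mozilla;
  using namespace mozilla::dom;

  // error: reference to 'ipc' is ambiguous
  // ipc::SomeIpcClass* p = nullptr;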

Benoit


2013/11/29 Mike Hommey m...@glandium.org

 On Sat, Nov 30, 2013 at 12:39:59PM +0900, Mike Hommey wrote:
  Incidentally, in those two weeks, I did two attempts at building
  without unified sources, resulting in me filing 4 bugs in different
  modules for problems caused by 6 different landings[1]. I think it is
 time
  to seriously think about having regular non-unified builds (bug 942167).
  If that helps, I can do that on birch until that bug is fixed.

 Speaking of which, there are essentially two classes of such errors:
 - missing headers.
 - namespace spilling.

 The latter is due to one source doing using namespace foo, and some
 other source forgetting the same because, in the unified case, they
 benefit from the other source doing it. I think in the light of unified
 sources, we should forbid non-scoped use of using.

 That is:

 using namespace foo;

 would be forbidden, but

 namespace bar {
 using namespace foo;
 }

 wouldn't. In most cases, bar could be mozilla anyways.

 Thoughts?

 Mike
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: HWA and OMTC on Linux

2013-11-26 Thread Benoit Jacob
Congrats Nick, after all is said and done, this is a very nice milestone to
cross!


2013/11/26 Nicholas Cameron nick.r.came...@gmail.com

 This has finally happened. If it sticks, then after this commit (
 https://tbpl.mozilla.org/?tree=Mozilla-Inboundrev=aa0066b3865c) there
 will be no more main thread OpenGL compositing on any platform. See my blog
 post (
 http://featherweightmusings.blogspot.co.nz/2013/11/no-more-main-thread-opengl-in-firefox.html)
 for details (basically what I proposed at the beginning of this thread).
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Recent build time improvements due to unified sources

2013-11-20 Thread Benoit Jacob
2013/11/20 Ehsan Akhgari ehsan.akhg...@gmail.com

 On 2013-11-20 5:27 PM, Robert O'Callahan wrote:

 On Thu, Nov 21, 2013 at 11:06 AM, Zack Weinberg za...@panix.com wrote:

  On 2013-11-20 12:37 PM, Benoit Jacob wrote:

  Talking about ideas for further extending the impact of UNIFIED_SOURCES,
 it
 seems that the biggest limitation at the moment is that sources can't be
 unified between different moz.build's. Because of that, source
 directories
 that consist of many small sub-directories do not benefit much from
 UNIFIED_SOURCES at the moment. I would love to have the ability to
 declare
 in a moz.build that UNIFIED_SOURCES from here downwards, including
 subdirectories, are to be unified with each other. Does that sound
 reasonable?


 ... Maybe this should be treated as an excuse to reduce directory
 nesting?


 We don't need an excuse!

 layout/xul/base/src, and pretty much everything under content/, I'm
 looking
 at you.


 How do you propose that we know which directory contains the source then?


And I always thought that all public: methods had to go in the public/
directory!

Benoit



 /sarcasm

 Ehsan


 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Graphics leaks(?)

2013-11-19 Thread Benoit Jacob
2013/11/19 Nicholas Nethercote n.netherc...@gmail.com

 And the comment at https://bugzilla.mozilla.org/show_bug.cgi?id=915940#c13is
 worrying:
 ... once allocated the memory is only referenced via a SurfaceDescriptor,
 which is a generated class (from IPDL). These are then passed around from
 thread to thread and not really kept tracked of - the lifetime management
 for
 them and their resources is an ongoing nightmare and is why we were leaking
 this memory image memory until Friday.

 Is my perception wrong -- is graphics code especially leak-prone?  If not,
 could we be doing more and/or different things to make such leaks less
 likely?
 https://bugzilla.mozilla.org/show_bug.cgi?id=935778 (hook
 RefCounted/RefPtr
 into the leak checking) is one idea.  Any others?


The problem is that SurfaceDescriptor is a non-refcounted IPDL wrapper, and
as such it should only ever have been used to reference surfaces short-term
in IPDL-related code, where short-term means over a period of time not
extending across the handling of more than one IPDL message. Otherwise, as
the thing it wraps is often a non-refcounted IPDL actor, it can have been
deleted at any time by the IPDL code (e.g. on channel error). So the
problem here is not just that SurfaceDescriptor makes it easy to write
leaky code, it also makes it super easy to write crashy code, if one
doesn't stick to the precise usage pattern that it is safe for (as said
above, only use it to process one IPDL message).
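To illustrate the usage pattern (this is a made-up sketch, not real code: the
class and method names are invented, and this SurfaceDescriptor is just a
stand-in for the real IPDL-generated union):

  #include <cstdint>

  struct SurfaceDescriptor {
    uintptr_t handle;  // refers to a resource whose lifetime IPDL controls
  };

  class CompositorSide {
  public:
    // Safe pattern: only use the descriptor while handling the one message
    // that delivered it, while the underlying resource is known to be alive.
    bool RecvPaint(const SurfaceDescriptor& aDesc) {
      UploadToTexture(aDesc);
      return true;
    }

    // Unsafe pattern: keeping a copy past the message handler. The resource
    // it refers to can be destroyed at any time afterwards (e.g. on channel
    // error), so a later use may crash --- and if nothing ever deallocates
    // it, it leaks.
    bool RecvAttach(const SurfaceDescriptor& aDesc) {
      mStashed = aDesc;
      return true;
    }

  private:
    void UploadToTexture(const SurfaceDescriptor&);
    SurfaceDescriptor mStashed;
  };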

But it was very much used outside of that safe use case, mainly because
there was no other platform-generic surface type that people could use,
that would cover all the surface types that could be covered by a
SurfaceDescriptor. In a nutshell, new platform-specific surface types were
added, but platform-generic code needs a platform-generic surface type that
can specialize to any of these platform-specific types, and the only such
platform-generic surface type that we had for a while, that covered all the
newly added surface types, was SurfaceDescriptor.

Because of the way it ended being used in many places, SurfaceDescriptor
was involved in maybe half of the b2g 1.2 blocking (koi+) graphics crashes
that we went over over the past few months.

During the Paris work week we had extended sessions (I think they totalled
about 10 hours) about what a right platform-generic surface type would
be, and how they would be passed around. Obviously, it would be
reference-counted, but we worked out the details, and you can see the
results of these sessions here:

https://wiki.mozilla.org/Platform/GFX/Surfaces

In a nutshell, there is a near-term-but-not-immediately-trivial plan to get
such a right surface type, and it would come from unifying the existing
TextureClient and TextureHost classes.

Meanwhile, I had initially also been working on an even more near-term plan
to provide a drop-in safe replacement for SurfaceDescriptor, and wrote
patches on this bug, https://bugzilla.mozilla.org/show_bug.cgi?id=932537 ,
but have since been told that this is considered not worth it anymore since
we should get the right surface type described above soon enough.

Hope that answers some of your questions / eases some of your concerns
Benoit
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Unified builds

2013-11-18 Thread Benoit Jacob
2013/11/18 Boris Zbarsky bzbar...@mit.edu

 On 11/17/13 5:26 PM, Ehsan Akhgari wrote:

 I don't think that we need to try to fix this problem any more than the
 general problem of denoting our dependencies explicitly.  It's common for
 you to remove an #include from a header and find dozens of .cpp files in
 the tree that implicitly depended on it.  And that is much more likely to
 happen than people adding/removing cpp files.


 While true, in the new setup we have a different problem: adding or
 removing a .cpp file makes other random .cpp files not compile.

 This is especially a problem where windows.h is involved.  For bindings we
 simply disallowed including it in binding .cpp files, but for other .cpp
 files that's not necessarily workable.  Maybe we need a better solution for
 windows.h bustage.  :(


While working on porting directories to UNIFIED_SOURCES, I too have found
that the main problem was system headers (not just windows.h but also Mac
and X11 headers) tend to define very polluting symbols in the root
namespace, which we collide with thanks to using namespace statements.

The solution I've employed so far has been to:
 1) minimize the number of cpp files that need to #include such system
headers, by typically moving code out of header files and only #including
system headers in a few implementation files;
 2) Keep these cpp files, that #include system headers, in plain old
SOURCES, not in UNIFIED_SOURCES.

In other words, I've been doing partial ports to UNIFIED_SOURCES only, not
full ports, but in this way we can still get 90% of the benefits and
sidestep problems caused by system headers. And 1) is generally considered
a good thing regardless.
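A sketch of what 1) looks like in practice (the file and type names are
invented). Before, the header did something like:

  // Widget.h (before): drags <windows.h> into every includer
  #include <windows.h>
  class Widget {
  public:
    HWND GetHWND() const { return mHwnd; }
  private:
    HWND mHwnd;
  };

Afterwards the system header only appears in one .cpp file, which stays in
plain SOURCES rather than UNIFIED_SOURCES:

  // Widget.h (after): no system header needed anymore
  class Widget {
  public:
    void* GetNativeHandle() const;
  private:
    void* mHwnd;
  };

  // Widget.cpp (after): the only file that sees <windows.h>; kept out of
  // UNIFIED_SOURCES
  #include <windows.h>
  #include "Widget.h"
  void* Widget::GetNativeHandle() const { return mHwnd; }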

Benoit




 -Boris

 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Unified builds

2013-11-17 Thread Benoit Jacob
Here is a wiki page to track our progress on this front and our means of
synchronizing on this work:

https://wiki.mozilla.org/Platform/Porting_to_unified_sources

Benoit


2013/11/14 Ehsan Akhgari ehsan.akhg...@gmail.com

 I've started to work on a project in my spare time to switch us to use
 unified builds for C/C++ compilation.  The way that unified builds work is
 by using the UNIFIED_SOURCES instead of the SOURCES variable in moz.build
 files.  With that, the build system creates files such as:

 // Unified_cpp_path_0.cpp
 #include "Source1.cpp"
 #include "Source2.cpp"
 // ...

 And compiles them instead of the individual source files.

 The advantage of this is that it speeds up the compilation (I've measured
 between 6-15x speed improvement depending on the code in question
 locally.)  But there are also trade-offs with this approach.  One trade-off
 is that the source code might require changes before it can be compiled in
 this way, due to things like name clashes, etc.  The other one is that if
 you change one .cpp file which is built in unified mode, we would spend
 more time compiling it because we'll be compiling the unified .cpp file
 which includes more than what you have changed.  It's hard to come up with
 numbers for this trade-off, but assuming that the linking step takes way
 longer than the C++ compilation in the touch one .cpp file scenario, and
 also that changes to headers will potentially trigger multiple .cpp
 rebuilds in the same directory, I think doing unified builds in more parts
 of the tree is a reasonable choice.

 I'm going to continue working on this as time permits, if you're interested
 in helping out, or if you have questions, please get in touch.

 Cheers,
 --
 Ehsan
 http://ehsanakhgari.org/
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: HWA and OMTC on Linux

2013-11-07 Thread Benoit Jacob
(Not replying to anyone in particular, just replying to the lastest email
in this thread at this point: ) Let's not transform the original
conversation here, which was purely a technical conversation about
improving the way we do compositing on Linux, into a prioritization
conversation, which would not belong on dev-platform.

One data point that Karl provided here is purely technical and needs to be
taken into account in this technical conversation well before one would
start talking priorities: newer types of Web content, e.g. WebGL, may mean
that not having OpenGL compositing may become less and less viable going
forward.

Benoit


2013/11/7 Karl Tomlinson mozn...@karlt.net

 Andreas Gal writes:

  On Nov 7, 2013, at 1:48 PM, Karl Tomlinson mozn...@karlt.net wrote:
 
  Andreas Gal writes:
 
  Its not a priority to fix Linux/X11. We will happily take
  contributed patches, and people are welcome to fix issues they
  see, as long its not at the expense of the things that matter.
 
  Do bugs in B2G Desktop on Linux/X11 matter?
 
  I assume glitches and perf issues that are not on the device
  don't really matter.  How about crashes and security bugs?
 
  Nobody said Linux/X11 doesn't matter. The proposal was to focus
  on OMTC on all platforms, including Linux/X11.

 I assume games and maps and WebGL matter.
 The current direction for WebGL on Linux/X11 seems to be assuming
 that OGL will be the solution.

 The comments thus far in this thread imply that OGL-OMTC-Linux-X11
 doesn't matter.  If that is the case, then we need to find another
 solution for WebGL.  However, OGL-OMTC layers is likely the best
 solution for WebGL, and not necessarily more work.

 And if OGL-OMTC-Linux-X11 doesn't matter for Firefox, then how
 much is B2G Desktop used on Linux?  Even if only crashes on Linux
 B2G Desktop matter, then OGL-OMTC-Linux-X11 Firefox will benefit
 from sharing the same code, and benefits flow the other way too.
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Killing the Moz Audio Data API

2013-10-17 Thread Benoit Jacob
The other day, while testing some B2G v1.2 stuff, I noticed the Moz Audio
Data deprecation warning flying in adb logcat. So you probably need to
check with B2G/Gaia people about the timing to kill this API.

Benoit


2013/10/16 Ehsan Akhgari ehsan.akhg...@gmail.com

 I'd like to write a patch to kill Moz Audio Data in Firefox 28 in favor of
 Web Audio.  We added a deprecation warning for this API in Firefox 23 (bug
 855570).  I'm not sure what our usual process for this kind of thing is,
 should we just take the patch, and evangelize on the broken websites enough
 times so that we're able to remove the feature in a stable build?

 Thanks!
 --
 Ehsan
 http://ehsanakhgari.org/
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: unified shader for layer rendering

2013-10-16 Thread Benoit Jacob
2013/10/10 Benoit Jacob jacob.benoi...@gmail.com

 this is the kind of work that would require very careful performance
 measurements


Here is a benchmark:
http://people.mozilla.org/~bjacob/webglbranchingbenchmark/webglbranchingbenchmark.html

Some results:
http://people.mozilla.org/~bjacob/webglbranchingbenchmark/webglbranchingbenchmarkresults.txt

Benoit
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: unified shader for layer rendering

2013-10-11 Thread Benoit Jacob
2013/10/11 Nicholas Cameron nick.r.came...@gmail.com

 The advantage to me is that we have a single shader and avoid the
 combinatorial explosion when we add more shaders for things like SVG
 filters/CSS compositing.



[...snip...]

 I have not recently been discussing new shaders, perhaps you are thinking
 of mstange who is looking at HW implementations of SVG filters?


Incidentally, I just looked into the feasibility of implementing
constant-time-regardless-of-operands (necessary for filter security)
filters in OpenGL shaders, as a similar topic is being discussed at the
moment on the WebGL mailing list, and there is a serious problem:

Newer GPUs (since roughly 2008 for high-end desktop GPUs, since 2013 for
high-end mobile GPUs) have IEEE754-conformant floating point with
denormals, and denormals may be slow there too.

https://developer.nvidia.com/content/cuda-pro-tip-flush-denormals-confidence
http://malideveloper.arm.com/engage-with-mali/benchmarking-floating-point-precision-part-iii/

I suggest on the Khronos public_webgl list that one way that this could be
solved in the future would be to write an OpenGL extension spec to force
flush-to-zero behavior to avoid denormals. For all I know, flush-to-zero is
currently a CUDA compiler flag but isn't exposed to OpenGL.

The NVIDIA whitepaper above also hints at this only being a problem with
multi-instruction functions such as square-root and inverse-square-root
(which is already a problem for e.g. lighting filters, which need to
normalize a vector), but that would at best be very NVIDIA-specific; in
general, denormals are a minority case that requires special handling, so
their slowness is rather universal; all x86 and ARM CPUs that I tested have
slow denormals.
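For the CPU side, a minimal test along those lines could look like this (a
rough sketch, not the exact code I used; the iteration count and constants
are arbitrary):

  #include <chrono>
  #include <cstdio>

  // Times a long dependent chain of float operations whose values stay close
  // to 'start', so passing a denormal 'start' keeps every operation working
  // on denormal operands.
  static double TimeChain(float start) {
    float x = start;
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < 10 * 1000 * 1000; ++i) {
      x = x * 0.999f + start * 0.001f;   // converges to, and stays at, 'start'
    }
    auto t1 = std::chrono::steady_clock::now();
    volatile float sink = x;             // keep the loop from being optimized away
    (void)sink;
    return std::chrono::duration<double>(t1 - t0).count();
  }

  int main() {
    std::printf("normal operands:   %f s\n", TimeChain(1.0f));   // 1.0 is a normal float
    std::printf("denormal operands: %f s\n", TimeChain(1e-40f)); // 1e-40 is denormal
    return 0;
  }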

Benoit
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: unified shader for layer rendering

2013-10-10 Thread Benoit Jacob
I'll pile on what Benoit G said --- this is the kind of work that would
require very careful performance measurements before we commit to it.

Also, like Benoit said, we have seen no indication that glUseProgram is
hurting us. General GPU wisdom is that switching programs is not per se
expensive as long as one is not relinking them, and besides the general
performance caveat with any state change, forcing to split drawing into
multiple draw-calls, which also applies to updating uniforms, so we're not
escaping it here.
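To spell out that last point with a sketch (all the helper names here are
invented; only the GL calls are real):

  #include <GLES2/gl2.h>
  #include <vector>

  struct Layer;
  GLuint ProgramFor(Layer*);        // hypothetical helpers
  GLint  KindOf(Layer*);
  void   SetCommonUniforms(Layer*);
  extern GLuint gUnifiedProgram;
  extern GLint  gLayerKindLocation;

  // Per-layer programs: one glUseProgram and one draw call per layer.
  void DrawWithPerLayerPrograms(const std::vector<Layer*>& aLayers) {
    for (Layer* layer : aLayers) {
      glUseProgram(ProgramFor(layer));   // cheap as long as the program is already linked
      SetCommonUniforms(layer);
      glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    }
  }

  // Unified shader: the program stays bound, but the per-layer uniform
  // update is itself a state change, so we still issue one draw call per
  // layer --- we have not escaped the cost being discussed.
  void DrawWithUnifiedShader(const std::vector<Layer*>& aLayers) {
    glUseProgram(gUnifiedProgram);
    for (Layer* layer : aLayers) {
      glUniform1i(gLayerKindLocation, KindOf(layer));
      SetCommonUniforms(layer);
      glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    }
  }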

In addition to that, not all GPUs have real branching. My Sandy Bridge
Intel chipset has real branching, but older Intel integrated GPUs don't,
and I'd be very surprised if all of the mobile GPUs we're currently
supporting did. To put this in perspective, in the world of discrete
desktop NVIDIA GPUs, this was only introduced in the Geforce 6 series. Old,
but a lot more advanced than some integrated/mobile devices we still
support. On GPUs that are not capable of actual branching, if...else blocks
are implemented by executing all branches and masking the result. On such
GPUs, a unified shader would run considerably slower, basically N times
slower for N branches. Even on GPUs with branching, each branching has a
cost and we have N of them, so in all cases the unified shader approach
introduces new (at least potential) scalability issues.

So if we wanted to invest in this, we would need to conduct careful
benchmarking on a wide range of hardware.

Benoit


2013/10/10 Benoit Girard bgir...@mozilla.com

 On Thu, Oct 10, 2013 at 7:59 AM, Andreas Gal andreas@gmail.com
 wrote:

  Rationale:
  switching shaders tends to be expensive.
 

 In my opinion this is the only argument for working on this at moment.
 Particularly at the moment where we're overwhelmed with high priority
 desktop and mobile graphics work, I'd like to see numbers before we
 consider a change. I have seen no indications that we get hurt by switching
 shaders. I suspected it might matter when we start to have 100s of layers
 in a single page but we always fall down from another reason before this
 can become a problem. I'd like to be able to answer 'In which use cases
 would patching this lead to a user measurable improvement?' before working
 on this. Right now we have a long list of bugs where we have a clear answer
 to that question. Patching this is good to check off that we're using the
 GPU optimally on the GPU best practice dev guides and will later help us
 batch draw calls more aggressively but I'd like to have data to support
 this first.

 Also old Android drivers are a bit touchy with shaders so I recommend
 counting some dev times for resolving these issues.

 I know that roc and nrc have some plans for introducing more shaders which
 will make a unified shader approach more difficult. I'll let them weight in
 here.

 On the flip side I suspect having a single unified shader will be faster to
 compile then the several shaders we have on the start-up path.
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: unified shader for layer rendering

2013-10-10 Thread Benoit Jacob
2013/10/10 Benoit Jacob jacob.benoi...@gmail.com

 I'll pile on what Benoit G said --- this is the kind of work that would
 require very careful performance measurements before we commit to it.

 Also, like Benoit said, we have seen no indication that glUseProgram is
 hurting us. General GPU wisdom is that switching programs is not per se
 expensive as long as one is not relinking them, and besides the general
 performance caveat with any state change, forcing to split drawing into
 multiple draw-calls, which also applies to updating uniforms, so we're not
 escaping it here.

 In addition to that, not all GPUs have real branching. My Sandy Bridge
 Intel chipset has real branching, but older Intel integrated GPUs don't,
 and I'd be very surprised if all of the mobile GPUs we're currently
 supporting did. To put this in perspective, in the world of discrete
 desktop NVIDIA GPUs, this was only introduced in the Geforce 6 series.


In fact, even on a Geforce 6, we only get full real CPU-like (MIMD)
branching in vertex shaders, not in fragment shaders.

http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter34.html

Benoit
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Transparent Black, or Are all transparent colors create equal?

2013-10-08 Thread Benoit Jacob
One kind of layer that would not have premultiplied alpha would be a
CanvasLayer with a WebGL context with the {premultipliedAlpha:false}
context attribute (what was passed as the 2nd argument to
canvas.getContext).

I did some testing and found a bug,
https://bugzilla.mozilla.org/show_bug.cgi?id=924375 . But I don't know
whether it's related to what's discussed here.

Benoit




2013/10/8 Robert O'Callahan rob...@ocallahan.org

 On Mon, Oct 7, 2013 at 3:11 PM, Chris Peterson cpeter...@mozilla.com
 wrote:

  I stumbled upon some layout code that checks for transparent colors using != or
  == NS_RGBA(0,0,0,0):
 
  http://dxr.mozilla.org/mozilla-central/search?q=regexp%3A%23[!%3D]%3D%20%3FNS_RGBA%23
 
 
  Are those checks unnecessarily restrictive?
 
  One of the checks has a comment saying Use the strictest match for
  'transparent' so we do correct round-tripping of all other rgba()
 values,
  but the strictness of the other checks is unclear. ;)
 

 All those checks look correct to me. Most of the colors we deal with are
 using premultiplied alpha (e.g. the checks in nsDisplayList certainly do),
 in which case the only valid transparent value is RGBA(0,0,0,0).

 Rob
 --
 Jtehsauts  tshaei dS,o n Wohfy  Mdaon  yhoaus  eanuttehrotraiitny  eovni
 le atrhtohu gthot sf oirng iyvoeu rs ihnesa.rt sS?o  Whhei csha iids  teoa
 stiheer :p atroa lsyazye,d  'mYaonu,r  sGients  uapr,e  tfaokreg iyvoeunr,
 'm aotr  atnod  sgaoy ,h o'mGee.t  uTph eann dt hwea lmka'n?  gBoutt  uIp
 waanndt  wyeonut  thoo mken.o w  *
 *
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: C++ Standards Committee meeting next week

2013-09-20 Thread Benoit Jacob
OK, here is something that I would really like:

http://llvm.org/devmtg/2012-11/Weber_TypeAwareMemoryProfiling.pdf

Basically, this is a language extension that asks the compiler to store
type information for each object in memory, so that one can query at
runtime the type of what's stored at a given address.

That would be useful e.g. for refgraph (
https://github.com/bjacob/mozilla-central/wiki/Refgraph ) and, I suppose,
that also wouldn't hurt to know more about heap-unclassified in
about:memory. Basically, everything would be 'classified' at least by c++
type name. And that would also be a nice tool to have around during long
debugging sessions. So, I would appreciate if you could figure if there is
any intention to add something like this in the standard.

(Thanks to Rafael for bringing this to my attention)

Benoit




2013/9/20 Botond Ballo bot...@mozilla.com

 Hi everyone,

 The C++ Standards Committee is meeting in Chicago next week. Their focus
 will be on C++14, the upcoming version of the C++ standard, as well as some
 Technical Specifications (specifications for features intended to be
 standardized but not fully-baked enough to be standardized now) that are
 also planned for publication in 2014. Presumably there will also be some
 discussion of the following version of the standard, C++17.

 I will attend this meeting as an observer. I intend to follow the progress
 of the Concepts Lite proposal [1] which I'm particularly interested in, but
 I will try to keep up with other goings-on as well (the committee splits up
 into several sub-groups that meet in parallel over the course of the week).

 I wanted to ask if there's anything anyone would like to know about the
 upcoming standards that I could find out at the meeting - if so, please let
 me know and I will do my best to find it out.

 If anyone's interested in the C++ standardization process, you can find
 more information here [2].

 Thanks,
 Botond

 [1]
 http://isocpp.org/blog/2013/02/concepts-lite-constraining-templates-with-predicates-andrew-sutton-bjarne-s
 [2] http://isocpp.org/std
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Including algorithm just to get std::min and std::max

2013-09-12 Thread Benoit Jacob
2013/9/12 Avi Hal avi...@gmail.com

 On Sunday, September 8, 2013 6:22:01 AM UTC+3, Benoit Jacob wrote:
  Hi,
 
 
 
  It seems that we have some much-included header files including
 algorithm
 
  just to get std::min and std::max.
 

 Is it because min/max are used in the .h file?


Yes.

can it be delegated to cpp files?


When it can, that's the easy case. So the case that we're really discussing
here is when it can't because the existing code intentionally implements a
function in a header to allow it to get inlined (some of that existing code
might be wrong in believing that it needs to get inlined, but that's costly
to prove wrong, needs custom benchmarking for each case).

how many other files which include said h files count on them to include
 algorithm?


I don't know that.

Benoit
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Tegra build backlog is too big!

2013-09-11 Thread Benoit Jacob
2013/9/11 Mike Hommey m...@glandium.org

 On Wed, Sep 11, 2013 at 04:39:37PM -0700, jmaher wrote:
  quite possibly we don't need all those jobs running on tegras.  I
  don't know of a bug in the product that has broken on either the tegra
  or panda platform but not the other.

 Off the top of my head:

 - I have broken one but not the other on several occasions, involving
 differences in the handling of instruction and data caches, but unless
 you're touching the linker or the jit, it shouldn't matter.

 - Tegras don't have neon instructions, so wrong build flags, or wrong run
 time detection could trigger failures on one end and not the other.

 - GPUs on tegras and pandas, as well as their supporting libraries,
 differ, too. But unless you are touching graphics code, that shouldn't
 matter, unless your changes trigger some pre-existing bug..


And Panda boards have 1G of RAM, which is more than the Tegra boards have,
right? Surely that can help avoiding OOM problems on Pandas.

At some point earlier this year, WebGL conformance tests were perma-orange
on Tegras but only intermittently orange on Pandas. RAM differences were
likely the cause, as WebGL tests were OOM'ing a lot.

Benoit




 So, while chances of breaking one and not the other are slim, they do
 exist.

 Mike
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Including algorithm just to get std::min and std::max

2013-09-08 Thread Benoit Jacob
We have many other headers including algorithm; it would be interesting
to compare the percentage of our cpp files that recursively include
algorithm before and after that patch; I suppose that just a single patch
like that is not enough to move that needle much, because there are other
ways that algorithm gets included in the same cpp files.

I do expect, though, that the 23 ms overhead from including algorithm is
real (at least as an order of magnitude), so I still expect that we can
save 23 ms times the number of cpp files that currently include algorithm
and could avoid to.

Those 9,000 lines of code are indeed a moderate amount of extra code
compared to the million lines of code that we have in many compilation
units, and yes I think we all expect that there are bigger wins to make.
This one is a relatively easy one though and 9,000 lines, while moderate,
is not negligible. How many other times are we neglecting 9,000 lines and
to how much does that add up? In the end, I believe that the ratio

(number of useful lines of code) / (total lines of code included)

is a very meaningful metric, and including algorithm for min/max scores
less than 1e-3 on that metric.

Benoit



2013/9/8 Nicholas Cameron nick.r.came...@gmail.com

 I timed builds to see if this makes a significant difference and it did
 not.

 I timed a clobber debug build using clang with no ccache on Linux on a
 fast laptop. I timed using a pull from m-c about a week old (I am using
 this pull because I have a lot of other stats on it). I then applied
 bjacob's nscoord patch from the bug and a patch of my own which does a
 similar thing for some Moz2D headers which get pulled into a lot of files
 (~900 other headers, presumably more cpp files). For both runs I did a full
 build, then clobbered, then timed a build. I avoided doing any other work
 on the laptop. n=1, so there might be variation, but my experience with
 build times is that there usually isn't much.

 Before changes:

 real  38m54.373s
 user  234m48.508s
 sys 7m18.708s

 after changes:

 real  39m11.123s
 user  234m26.864s
 sys 7m10.336s

 The removed headers are also the ideal case for ccache, so incremental
 builds or real life clobber builds should be affected even less by these
 changes.

 I don't think these kind of time improvements make it worth duplicating
 std library code into mfbt, we may as well just pull in the headers and
 forget about it. A caveat would be if it makes a significant difference on
 slower systems.

 Given that improving what gets included via headers can make significant
 difference to build time, this makes me wonder exactly what aspect of
 header inclusion (if not size, which we should catch here) makes the
 difference.

 Nick.

 On Sunday, September 8, 2013 3:22:01 PM UTC+12, Benoit Jacob wrote:
  Hi,
 
 
 
  It seems that we have some much-included header files including
 algorithm
 
  just to get std::min and std::max.
 
 
 
  That seems like an extreme case of low ratio between lines of code
 included
 
  (9,290 on my system, see Appendix below) and lines of code actually used
 
  (say 6 with whitespace).
 
 
 
  I ran into this issue while trying to minimize nsCoord.h (
 
  https://bugzilla.mozilla.org/show_bug.cgi?id=913868 ) and in my patch, I
 
  resorted to defining my own min/max functions in a nsCoords_details
 
  namespace.
 
 
 
  This prompted comments on that bug suggesting that it might be better to
 
  have that in MFBT. But that, in turn, sounds like overturning our recent
 
  decision to switch to std::min / std::max, which I feel is material for
 
  this mailing list.
 
 
 
  It is also conceivable to keep saying that we should use std::min /
 
  std::max *except* in headers that don't otherwise include algorithm,
 
  where it may be more reasonable to use the cheap-to-#include variant
 
  instead.
 
 
 
  What do you think?
 
 
 
  Benoit
 
 
 
  === Appendix: how big and long to compile is algorithm ? ===
 
 
 
  On my Ubuntu 12.04 64bit system, with GCC 4.6.3, including algorithm
 
  means recursively including 9,290 lines of code:
 
 
 
  $ echo '#includealgorithm'  a.cpp  g++ -save-temps -c a.cpp  wc -l
 
  a.ii
 
  9290 a.ii
 
 
 
  On may wonder what this implies in terms of compilation times; here is a
 
  naive answer. I'm timing 10 successive compilations of a file that just
 
  includes iostream, and then I do the same with a file that also
 includes
 
  algorithm.
 
 
 
  $ echo '#includeiostream'  a.cpp  time (g++ -c a.cpp  g++ -c a.cpp
 
   g++ -c a.cpp  g++ -c a.cpp  g++ -c a.cpp  g++ -c a.cpp  g++ -c
 
  a.cpp  g++ -c a.cpp  g++ -c a.cpp  g++ -c a.cpp)
 
 
 
  real0m1.391s
 
  user0m1.108s
 
  sys 0m0.212s
 
 
 
  echo '#includealgorithm'  a.cpp  echo '#includeiostream'  a.cpp
 
 
  time (g++ -c a.cpp  g++ -c a.cpp  g++ -c a.cpp  g++ -c a.cpp  g++
 
  -c a.cpp  g++ -c a.cpp  g++ -c a.cpp  g++ -c a.cpp  g++ -c a.cpp
 
 
  g++ -c a.cpp)
 
 
 
  real0m1.617s
 
  user0m1.324s
 
  sys

Re: Including algorithm just to get std::min and std::max

2013-09-08 Thread Benoit Jacob
Again, how many other similar wins are we leaving on the table because
they're only 10s on a clobber build? It's of course hard to know, which is
why I've suggested the (number of useful lines of code) / (total lines of
code included) ratio as a meaningful metric.

But I'm completely OK with focusing on the bigger wins in the short terms
and only reopening this conversation once we'll be done with the big items.

Benoit


2013/9/8 Mike Hommey m...@glandium.org

 On Sun, Sep 08, 2013 at 08:52:23PM -0400, Benoit Jacob wrote:
  We have many other headers including algorithm; it would be interesting
  to compare the percentage of our cpp files that recursively include
  algorithm before and after that patch; I suppose that just a single
 patch
  like that is not enough to move that needle much, because there are other
  ways that algorithm gets included in the same cpp files.
 
  I do expect, though, that the 23 ms overhead from including algorithm
 is
  real (at least as an order of magnitude), so I still expect that we can
  save 23 ms times the number of cpp files that currently include
 algorithm
  and could avoid to.

 23ms times 6000 sources is about 2 minutes and 20 seconds, if you don't
 account for parallelism. If you count 6 processes compiling at the same
 time on average, that's about 23s on a clobber build.
 And according to the .o.pp files in my recently built fennec, we include
 algorithm in less than 3000 files. So we'd be looking at about 10s of
 overhead including algorithm on a clobber build. On a 20-something
 minutes build.
 I'd say there's not much to worry about here.

 Mike

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Stop #including jsapi.h everywhere!

2013-09-07 Thread Benoit Jacob
I just was starting to look at BindingUtils.h as it is one of the most
important hub headers that we have (see
https://bugzilla.mozilla.org/show_bug.cgi?id=912735). But it seems that you
guys are already well ahead into BindingUtils.h discussion. Is there a bug
filed for it?

Benoit


2013/8/21 Nicholas Nethercote n.netherc...@gmail.com

 On Wed, Aug 21, 2013 at 4:46 PM, Boris Zbarsky bzbar...@mit.edu wrote:
  On 8/21/13 2:23 AM, Nicholas Nethercote wrote:
 
  And jswrapper.h includes jsapi.h.

 I will try to remedy that... it looks doable.

 Nick
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Stop #including jsapi.h everywhere!

2013-09-07 Thread Benoit Jacob
2013/9/7 Benoit Jacob jacob.benoi...@gmail.com

 I just was starting to look at BindingUtils.h as it is one of the most
 important hub headers that we have (see
 https://bugzilla.mozilla.org/show_bug.cgi?id=912735). But it seems that
 you guys are already well ahead into BindingUtils.h discussion. Is there a
 bug filed for it?

 Benoit


Here are some patches towards making BindingUtils.h a cheaper header to
include:

https://bugzilla.mozilla.org/show_bug.cgi?id=913847  moves NS_IsMainThread
to a new MainThreadUtils.h header that's cheaper to include, and in
particular is all what BindingUtils.h needs (there was a helpful comment
about that in BindingUtils.h).

https://bugzilla.mozilla.org/show_bug.cgi?id=913852  makes BindingUtils.h
not include <algorithm> just for one use of std::min.

If there is a BindingUtils.h tracking bug, they could block it.

Benoit
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Stop #including jsapi.h everywhere!

2013-09-07 Thread Benoit Jacob
2013/9/7 Boris Zbarsky bzbar...@mit.edu

 On 9/7/13 12:56 PM, Benoit Jacob wrote:

 https://bugzilla.mozilla.org/show_bug.cgi?id=913847
  moves NS_IsMainThread
 to a new MainThreadUtils.h header that's cheaper to include, and in
 particular is all what BindingUtils.h needs (there was a helpful comment
 about that in BindingUtils.h).


 Excellent.  Note https://bugzilla.mozilla.org/show_bug.cgi?id=909971 also:
  we can stop including MainThreadUtils.h in this header too, I think.


Thanks for the link. MainThreadUtils.h is tiny, though, so this won't be a
big deal anymore.




  
 https://bugzilla.mozilla.org/show_bug.cgi?id=913852
  makes BindingUtils.h
 not include <algorithm> just for one use of std::min.


 This is good, but unfortunately <algorithm> leaks in all over the place
 anyway in DOM code.

 The way it does that is that dom/Element.h has inline methods that need
 nsPresContext.h and Units.h.  Either one will get you things like nsRect.h
 or nsCoord.h, both of which include <algorithm>.  Oh, nsContentUtils.h
 includes Units.h too...


Incidentally, nsRect.h just got fixed by
https://bugzilla.mozilla.org/show_bug.cgi?id=913603

Thanks for pointing out nsCoord.h, let's fix it... (will file a bug and
block the tracking bug 912735)

Benoit



 We should strongly consider moving the Element methods that need those
 includes out of line, I think; not sure what we can do about nsContentUtils.


  If there is a BindingUtils.h tracking bug, they could block it.


 There isn't one yet.

 -Boris


 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Including algorithm just to get std::min and std::max

2013-09-07 Thread Benoit Jacob
Hi,

It seems that we have some much-included header files including algorithm
just to get std::min and std::max.

That seems like an extreme case of low ratio between lines of code included
(9,290 on my system, see Appendix below) and lines of code actually used
(say 6 with whitespace).

I ran into this issue while trying to minimize nsCoord.h (
https://bugzilla.mozilla.org/show_bug.cgi?id=913868 ) and in my patch, I
resorted to defining my own min/max functions in a nsCoords_details
namespace.
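Concretely, the kind of helper I mean is just this (a sketch; the actual
patch on the bug may differ in details):

  // nsCoord.h --- avoid pulling in all of <algorithm> just for min/max
  namespace nsCoords_details {

  template <typename T>
  inline const T& Min(const T& aA, const T& aB) { return aB < aA ? aB : aA; }

  template <typename T>
  inline const T& Max(const T& aA, const T& aB) { return aA < aB ? aB : aA; }

  } // namespace nsCoords_details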

This prompted comments on that bug suggesting that it might be better to
have that in MFBT. But that, in turn, sounds like overturning our recent
decision to switch to std::min / std::max, which I feel is material for
this mailing list.

It is also conceivable to keep saying that we should use std::min /
std::max *except* in headers that don't otherwise include algorithm,
where it may be more reasonable to use the cheap-to-#include variant
instead.

What do you think?

Benoit

=== Appendix: how big and long to compile is <algorithm> ? ===

On my Ubuntu 12.04 64bit system, with GCC 4.6.3, including <algorithm>
means recursively including 9,290 lines of code:

$ echo '#include <algorithm>' > a.cpp && g++ -save-temps -c a.cpp && wc -l a.ii
9290 a.ii

One may wonder what this implies in terms of compilation times; here is a
naive answer. I'm timing 10 successive compilations of a file that just
includes <iostream>, and then I do the same with a file that also includes
<algorithm>.

$ echo '#include <iostream>' > a.cpp && time (g++ -c a.cpp && g++ -c a.cpp && g++ -c a.cpp && g++ -c a.cpp && g++ -c a.cpp && g++ -c a.cpp && g++ -c a.cpp && g++ -c a.cpp && g++ -c a.cpp && g++ -c a.cpp)

real0m1.391s
user0m1.108s
sys 0m0.212s

echo '#include <algorithm>' > a.cpp && echo '#include <iostream>' >> a.cpp && time (g++ -c a.cpp && g++ -c a.cpp && g++ -c a.cpp && g++ -c a.cpp && g++ -c a.cpp && g++ -c a.cpp && g++ -c a.cpp && g++ -c a.cpp && g++ -c a.cpp && g++ -c a.cpp)

real0m1.617s
user0m1.324s
sys 0m0.244s

(I actually repeated this many times and kept the best result for each; my
hardware is a Thinkpad W520 with a 2.5GHz, 8M cache Core i7).

So we see that adding the #include <algorithm> made each compilation 23 ms
longer on average (226 ms for 10 compilations).
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: partial GL buffer swap

2013-08-31 Thread Benoit Jacob
2013/8/31 Andreas Gal andreas@gmail.com


 Soon we will be using GL (and its Windows equivalent) on most platforms to
 implement a hardware accelerated compositor. We draw into a back buffer and
 with up to 60hz we perform a buffer swap to display the back buffer and
 make the front buffer the new back buffer (double buffering). As a result,
 we have to recomposite the entire window with up to 60hz, even if we are
 only animating a single pixel.


Do you have a particular device in mind?

Knowing whether we are fill-rate bound on any device that we care about is
an important prerequisite before we can decide whether this kind of
optimization is worth the added complexity.

As an example of why it is not obvious out of hand that we'd be fill-rate
bound anywhere: the ZTE Open phone has an MSM7225A chipset with the enhanced
variant of the Adreno 200 GPU, which has a fill-rate of 432M pixels per
second (Source: http://en.wikipedia.org/wiki/Adreno). While it is hard to
give that metric a precise meaning, it should be enough for an
order-of-magnitude computation.
resolution, so we compute:

(320*480*60)/432e+6 = 0.02

So unless that computation is wrong, on the ZTE Open, refreshing the entire
screen 60 times per second consumes about 2% of the possible fill-rate.

On the original (not enhanced) version of the Adreno 200, that figure
would be 7%.

By all means, it would be interesting to have numbers from an actual
experiment as opposed to the above naive, abstract computation. For that
experiment, a simple WebGL page with scissor/clearColor/clear calls would
suffice (scissor and clearColor calls preventing any short-circuiting).
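The per-frame work of such a test is just a handful of calls; here is a
native GLES sketch of the same idea (the WebGL version would be a direct
transliteration; the EGL/window setup is omitted and the helper's parameters
are invented):

  #include <EGL/egl.h>
  #include <GLES2/gl2.h>

  void FillRateTestFrame(EGLDisplay aDpy, EGLSurface aSurf,
                         int aWidth, int aHeight, int aFrameIndex) {
    // An enabled scissor plus a clear color that changes every frame keeps
    // the driver from short-circuiting the clear into a cheap fast path.
    glEnable(GL_SCISSOR_TEST);
    glScissor(0, 0, aWidth, aHeight);
    float shade = (aFrameIndex % 256) / 255.0f;
    glClearColor(shade, shade, shade, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    eglSwapBuffers(aDpy, aSurf);
    // The sustained frames-per-second of this loop, times the number of
    // pixels, gives an empirical fill-rate to compare with the datasheet.
  }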

Benoit




 On desktop, this is merely bad for battery life. On mobile, this can
 genuinely hit hardware limits and we won't hit 60 fps because we waste a
 lot of time recompositing pixels that don't change, sucking up memory
 bandwidth.

 Most platforms support some way to only update a partial rect of the frame
 buffer (AGL_SWAP_RECT on Mac, eglPostSubBufferNV for Linux, setUpdateRect
 for Gonk/JB).

 I would like to add a protocol to layers to indicate that the layer has
 changed since the last composition (or not). I propose the following API:

 void ClearDamage(); // called by the compositor after the buffer swap
 void NotifyDamage(Rect); // called for every update to the layer, in
 window coordinate space (is that a good choice?)

 I am using Damage here to avoid overloading Invalidate. Bike shedding
 welcome. I would put these directly on Layer. When a color layer changes,
 we damage the whole layer. Thebes layers receive damage as the underlying
 buffer is updated.

 The compositor accumulates damage rects during composition and then does a
 buffer swap of that rect only, if supported by the driver.

 Damage rects could also be used to shrink the scissor rect when drawing
 the layer. I am not sure yet whether its easily doable to take advantage of
 this, but we can try as a follow-up patch.

 Feedback very welcome.

 Thanks,

 Andreas

 PS: Does anyone know how this works on Windows?
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Exposing the CSS/SVG Filters as Canvas API's

2013-08-08 Thread Benoit Jacob
Given the security flaws that have been recently disclosed,

http://contextis.co.uk/files/Browser_Timing_Attacks.pdf

I think that it would make more sense to first try to see to what extent we
manage to fix these issues, see what is left of SVG filters after these
issues are fixed, and only then consider propagating these concepts to more
Web APIs. (I know that an effort is under way on a tentative secure
implementation of filters. That's great. I'm just saying, let's just finish
and evaluate that first).

If we really care about these features, since the current security model
make them an inherently difficult trade-off between features, performance
and security, I would rather have us think of a better security model first.

If the intended use case is Shumway, so that --- I guess --- these filters
are intended to be applied mostly to plain text and same-origin
images/video and origin-clean canvases, then for this particular case, we
could have a "safe to read back pixels from" concept for DOM elements and
restrict filters to that. That's a considerably more limited security model
than what SVG/CSS filters currently claim to apply to, but it may be enough
for the most important Shumway use cases.

Benoit


2013/8/8 Jet Villegas j...@mozilla.com

 Shumway team still needs to implement filter effects available in the
 Flash Player. Ideally, fast filters can be made available to all Canvas
 programs. Now that we've got a shared filter pipeline with SVG and CSS, can
 we surface the same filters as a Canvas API?

 I'm attaching our last e-mail thread on the subject for context.

 --Jet

 - Original Message -
 From: Jeff Muizelaar jmuizel...@mozilla.com
 To: Tobias Schneider schnei...@jancona.com
 Cc: Jet Villegas j...@mozilla.com, Benoit Jacob bja...@mozilla.com,
 Joe Drew j...@mozilla.com, Boris Zbarsky bzbar...@mozilla.com, L.
 David Baron dba...@dbaron.org, Robert O'Callahan 
 rocalla...@mozilla.com, Jonas Sicking sick...@mozilla.com, Bas
 Schouten bschou...@mozilla.com
 Sent: Friday, July 20, 2012 8:17:44 AM
 Subject: Re: Native implementation of Flashs ColorMatrix filter

 This is not that easy for us to do. The only Azure backend that makes this
 easy is Skia. None of CoreGraphics, Cairo or Direct2D support this
 functionality.

 We could do hardware/software implementations on top of those APIs as we
 do with SVG but I'm not in a huge rush to do this work.

 -Jeff

 On 2012-07-19, at 9:20 AM, Jet Villegas wrote:

  Here's a request from the Shumway team re: Canvas2D graphics. Can we
 surface this API?
 
  -- Jet
 
  - Forwarded Message -
  From: Tobias Schneider schnei...@jancona.com
  To: Jet Villegas j...@mozilla.com
  Sent: Thursday, July 19, 2012 8:40:52 AM
  Subject: Native implementation of Flashs ColorMatrix filter
 
  Hi Jet,
 
  as already discussed in some meetings, it would be a big performance
 benefit for Shumway if we could implement Flash's ColorMatrix filter
 natively as an extension to the Canvas API. ColorMatrix filters (or rather
 the ColorTransformation, which can be easily converted to a ColorMatrix)
 are used really often in SWF files, e.g. there is no way to change a
 display object's opacity except by using a color transformation (or via
 script of course), so it's really a highly needed feature for Shumway. And
 doing bitmap filter operations pixel-wise with plain JavaScript is just too
 slow to achieve a decent frame rate (especially since Canvas is hardware
 accelerated, which makes using getImageData a pain in the ass).
 
  You can find out more about the ColorMatrix filter in the SWF spec or
 here:
 http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/filters/ColorMatrixFilter.html
 
  Firefox already implements a ColorMatrix filter in its SVG
 implementation, but since SVG is currently using the Skia graphics
 backend, I'm not sure if it's possible to use them internally within the
 Canvas API, which is based on Azure/Cairo.
 
  So I dug a little bit deeper into Azure to see where it could be
 implemented, and I think the way we blur pixels to draw shadows is kinda
 similar. So maybe we can reuse a lot of that for additional bitmap filters.
 
  The Canvas API for it can look pretty simple, we just need a way to
 assign the ColorMatrix values as an array to a context property like so:
 
  ctx.mozColorMatrix = [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0,
 0, 1, 0];
 
  I'm pretty sure that once we've created the infrastructure to support
 bitmap filters on a Canvas, it's easy to implement more Flash filters if
 needed. I would start with the ColorMatrix filters since it's the most used
 and also a lot of other filter effects can be achieved using a ColorMatrix.
 
  What do you think? Is it worth it to talk to the gfx guys during the
 work week?
 
 
  Tobias
 

 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

Heads up: difference in reference counting between Mozilla and WebKit worlds

2013-06-18 Thread Benoit Jacob
Hi,

(The TL;DR parts are in bold).

This is to draw attention to an important difference in reference counting
between Mozilla (also COM) objects [1] and WebKit (also Blink and Skia)
objects [2]:
- *Mozilla-style objects are created with a refcount of 0* (see e.g. [3],
[4])
- *WebKit-style objects are created with a refcount of 1* (see e.g. [5])

This is important to know for any Mozilla developer writing or reviewing
code that deals with WebKit-style refcounted objects [2]. As long as you're
only dealing with Mozilla-style objects [1] you can safely ignore all of
this.

*Not being aware of this can easily give memory leaks.* For example, look
at typical code like this:

  {
    RefPtr<T> p = new T;
  }

If T is a Mozilla-style reference-counted object, this code is fine: the T
object gets destroyed as p goes out of scope.

But if T is a WebKit-style reference-counted object, this **leaks** !!!
Indeed, the new T starts with a refcount of 1, the RefPtr ups it to 2, and
we're only back to 1 when the RefPtr goes out of scope. In other words, in
WebKit, new T implicitly means addref'd even though the type, T*, doesn't
indicate it. Be aware of that, and act accordingly! WebKit's WTF has a
PassRefPtr / adoptRef mechanism [6] that can, IIUC, be used to wrap new T
to make it safe in this respect.
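For concreteness, the WebKit-side idiom looks roughly like this (based on my
reading of WTF's RefPtr.h / PassRefPtr.h / RefCounted.h; treat the exact
spellings as approximate):

  #include <wtf/PassRefPtr.h>
  #include <wtf/RefCounted.h>
  #include <wtf/RefPtr.h>

  class T : public WTF::RefCounted<T> {};

  void Leaks() {
    WTF::RefPtr<T> p = new T;  // born with refcount 1, RefPtr bumps it to 2;
  }                            // p's destructor only brings it back to 1 -> leak

  void DoesNotLeak() {
    WTF::RefPtr<T> p = WTF::adoptRef(new T);  // adoptRef takes over the initial
  }                                           // reference, so the refcount stays
                                              // at 1 and reaches 0 here

  // Factory functions conventionally return PassRefPtr so that the
  // already-addrefed nature is visible in the type:
  WTF::PassRefPtr<T> CreateT() { return WTF::adoptRef(new T); }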

Attaching a simple test program demoing this:

bjacob:~$ g++ demoleak.cpp -o demoleak -D USE_MOZILLA_MFBT -I /hack/mozilla-graphics/obj-firefox-debug/dist/include && ./demoleak
OK, nothing leaked.
bjacob:~$ g++ demoleak.cpp -o demoleak -D USE_WEBKIT_WTF -I /hack/blink/Source && ./demoleak
leaked 1 object(s)!

So let's be thankful that we have the saner convention (that makes the
above innocuous-looking code actually innocuous), and at the same time
let's be very careful when dealing with imported external code that follows
the other convention!

Cheers,
Benoit

Notes:

[1] By Mozilla/COM style I mean, in particular, anything inheriting
nsISupports or mozilla::RefCountedT from MFBT, or using the
NS_*_REFCOUNTING macros from nsISupportsImpl.h.

[2] By WebKit-style I mean anything inheriting WTF's RefCounted<T> or other
similar refcounting mechanisms found throughout WebKit/Blink/Chromium or
related projects e.g. Skia. Of course, I haven't checked everything so I'm
sure that someone will be able to point out an exception ;-)

[3]
http://hg.mozilla.org/mozilla-central/file/d2a7cfa34154/mfbt/RefPtr.h#l63

[4]
http://hg.mozilla.org/mozilla-central/file/d2a7cfa34154/xpcom/glue/nsISupportsImpl.h#l255

[5]
https://github.com/WebKit/webkit/blob/master/Source/WTF/wtf/RefCounted.h#L115

[6] https://github.com/WebKit/webkit/blob/master/Source/WTF/wtf/PassRefPtr.h
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: We should drop MathML

2013-05-28 Thread Benoit Jacob
2013/5/28 Henri Sivonen hsivo...@hsivonen.fi

 On Fri, May 24, 2013 at 4:33 PM, Benoit Jacob jacob.benoi...@gmail.com
 wrote:
  I also thought that it was obvious that a suitably chosen subset of TeX
  could be free of such unwanted characteristics.

 So basically that would involve inventing something new that currently
 does not exist and are currently isn't supported by Gecko, Trident,
 Blink or WebKit. Furthermore, to integrate properly with the platform,
 math should have some kind of DOM representation, so a TeX-like syntax
 would still need to parse into something DOMish.


Note that I only brought up the TeX-like-syntax point to show that _if_ we
really wanted to do something like MathML, we could at least have
gotten a language with less awkward syntax. (People have argued that MathML
was no worse than HTML, but it is very much worse: it is as verbose as if,
to write text in HTML, one had to enclose each syllabe and each punctuation
or whitespace character in a separate HTML element).

Parsing into a syntax tree is not an exclusive property of XML; heck,
even C++ parses into a syntax tree. Parsing into something DOMish is a
stronger requirement to place on a language; it is true that a TeX-like
syntax would be at a disadvantage there, as one would need to come up with
an entirely new syntax for specifying attributes and applying CSS style.
When I started this thread, I didn't even conceive that one would want to
apply style to individual pieces of an equation. Someone gave the example
of applying a color to e.g. a square root sign, to highlight it; I don't
believe much in the pedagogic value of this kind of tricks --- that sounds
like a toy to me --- but at this point I didn't want to argue further, as
that is a matter of taste.

So at this point I conceded the TeX point (that was a few dozen emails ago
in this thread) but noted that regardless, one may still have a very hard
time arguing that browsers should have native support for something as
specialized as MathML. More discussion ensued.

There really are two basic reasons to support MathML in the browser that
have been given in this thread:
 1. It's needed to allow specifying CSS style for each individual piece of
an equation. (It's also been claimed to be needed for WYSIWYG editing, but
I don't believe that part, as again, having a syntax tree is not a special
property of XML).
 2. It's needed to support epub3 natively in browsers. I don't have much to
answer to that, as the whole epub thing was news to me: I thought that we
were only concerned with building a Web rendering engine, but it turns out
that Gecko is rather a Web *and epub* rendering engine. If I understand
correctly, the only reason to give epub this special treatment, whereas we
happily implement our PDF viewer in JavaScript only, is that epub happens
to be XHTML. That makes XHTML sound like a sort of Trojan horse to introduce
native support for all sorts of XML languages (like, here, MathML) into
Gecko, but whatever --- I've had enough fighting.

Benoit




 On the other hand, presentation MathML is already mostly supported by
 Gecko and WebKit, parses into a DOM (from text/html, too) and has had
 years of specification development behind it to figure out what the
 sufficiently expressive feature set is.

 So instead of being in the point where there's a mature spec and two
 of the four engines still to go, we'd go back to zero engines and no
 spec.

 Presentation MathML may not be pleasant to write by hand, but we don't
 put a Markdown parser in the browser, either, for those who don't like
 writing HTML. (And we don't put a JIT for $LANGUAGE for those who
 don't want JS.) Those who rather write Markdown can run the conversion
 on their server. Likewise, those who rather write a subset of TeX can
 run itex2mml on their server.

 --
 Henri Sivonen
 hsivo...@hsivonen.fi
 http://hsivonen.iki.fi/
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: We should drop MathML

2013-05-28 Thread Benoit Jacob
Hi Isaac,

What I meant by matter of taste is that while some people find advanced
presentation styles, such as the one you mention, useful, other people find
them to be just toys. I belong to the latter category, but have no hope of
convincing everyone else of my views, so I'd rather just call it a matter
of taste.

If you asked me about my views on these matters (I have taught mathematics
for a few years), I would tell you that math symbolic formalism is
overrated in the first place and is given an excessive role in math
teaching; I would say that math teaching should focus on conveying
underlying concepts, which in most areas of math are geometric in nature,
and that once the student has a grasp of how things work on a geometric
level, the formalism is no longer a major impediment to understanding; and
that therefore, focusing on cool presentation features to ease algebra
teaching is like treating only the symptoms of a poor approach to math
teaching. You mention Fourier transforms: that is a prime example of
something that has formulas that may seem intimidating to students until
they understand the basic idea, at which point the formulas become almost
trivial. However, by expanding on all these things, I would be completely
off-topic on this list ;-)

Benoit


2013/5/28 Isaac Aggrey isaac.agg...@gmail.com

 Hi Benoit,

  When I started this thread, I didn't even conceive that one would want to
  apply style to individual pieces of an equation. Someone gave the example
  of applying a color to e.g. a square root sign, to highlight it; I don't
  believe much in the pedagogic value of this kind of tricks --- that
 sounds
  like a toy to me --- but at this point I didn't want to argue further, as
  that is a matter of taste.

 I think there is tremendous value in styling individual pieces of an
 equation, especially in educational settings, but its application is
 largely unexplored.

 For example, this image [1] breaks down a Fourier Transform in such a
 way that makes the equation more approachable rather than a sea of
 symbols (see [2] for entire blog post). I can't help but get excited
 thinking about applications in future online courses (MOOCs [3]) that
 use interactive equations along with frameworks like Popcorn.js [4] to
 create a more dynamic learning experience.

 [1]: http://altdevblogaday.com/wp-content/uploads/2011/05/DerivedDFT.png
 [2]:
 http://www.altdevblogaday.com/2011/05/17/understanding-the-fourier-transform/
 [3]: https://en.wikipedia.org/wiki/Massive_open_online_course
 [4]: http://popcornjs.org/


 - Isaac

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


No Rendering meeting this Monday

2013-05-24 Thread Benoit Jacob
There will be no rendering meeting this coming Monday (May 27), as many
people will be recovering from jet lag from the Taipei work week.

The next rendering meeting will be announced by Milan, probably for the
week after.

Benoit
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: We should drop MathML

2013-05-06 Thread Benoit Jacob
Thanks Peter: that point-for-point format makes it easier for me to
understand your perspective on the issues that I raised.

2013/5/6 p.krautzber...@gmail.com

 Benoit, you said you need proof that MathML is better than TeX. I think
 it's the reverse at this point (from a web perspective -- you'll never get
 me to use Word instead of TeX privately ;) ).

 Anyway, let me try to repeat how I had addressed your original points in
 my first post.

 1.1. you make a point against adding unnecessary typography. Mathematics
 is text, but adding new requirements. It's comparable to the introduction
 of RTL or tables much more than musical notation. It's also something that
 all school children will encounter for 9-12 years. IMHO, this makes it
 necessary to implement mathematical typesetting functionality.


School children are only on the reading end of math typesetting, so for
them, AFAICS, it doesn't matter whether math is rendered with MathML or with
MathJax's HTML+CSS renderer.


 1.2 you claimed MathML is inferior to TeX. I've tried to point out that
 that's not the case as most scientific and educational publishers use it
 extensively.

 1.2.1 you claimed TeX is the universal standard. I've tried to point out
 only research mathematicians use it as a standard. Almost most mathematics
 happens outside that group.


I suppose that I can only accept your data as better documented than mine;
most of the TeX users I know are or have been math researchers.


 1.2.2 You pointed out that MathML isn't friendly to manual input. That's
 true but HTML isn't very friendly either, nor is SVG.


It's not comparable at all.

If you're writing plain text, HTML's overhead is limited to some <br> or
<p> tags, with maybe the usual <b>, <i>, heading... so the overhead is
small compared to the size of your text.

If you add many anchors and links, and some style, the overhead can grow
significantly, but is hardly going to be more than 2 input lines per output
line.

With MathML, we're talking about easily over 10 input lines per output line
--- in Wikipedia's example, MathML has 30 where TeX has 1.

So contrary to HTML, nobody's going to actually write MathML code by hand
for anything more than a few isolated equations.
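
To make the verbosity point concrete without relying on the Wikipedia
example: the quadratic formula is one short line of TeX,

  x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}

while a straightforward hand-written presentation-MathML encoding of the
same formula looks roughly like this (the exact markup varies by tool; this
is just an illustration):

  <math>
    <mi>x</mi><mo>=</mo>
    <mfrac>
      <mrow>
        <mo>-</mo><mi>b</mi><mo>&#xB1;</mo>
        <msqrt>
          <msup><mi>b</mi><mn>2</mn></msup>
          <mo>-</mo><mn>4</mn><mi>a</mi><mi>c</mi>
        </msqrt>
      </mrow>
      <mrow><mn>2</mn><mi>a</mi></mrow>
    </mfrac>
  </math>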

Thanks also for your other points below, to which I'm not individually
replying; we have a perspective mismatch here, so it's interesting for me
to understand your perspective, but I'm not going to win a fight against
the entire publishing industry which you say is already behind MathML.

Benoit



 1.2.3 You argued TeX is superior for accessibility. I've pointed out that
 that's not the case given the current technology landscape.


 2 You wrote now is the time to drop MathML. I've tried to point out that
 now -- as web and ebook standard -- is the time to support it, especially
 when your implementation is almost complete and you're looking to carve a
 niche out of the mobile and mobile OS market, ebooks etc.

 2.1 you claim MathML never saw traction outside of Firefox. I tried to
 point out that MathML has huge traction in publishing and the educational
 sector, even if it wasn't visible on the web until MathJax came along.
 Google wants MathML support (they just don't trust the current code) while
 Apple has happily advertised with the MathML they got for free. Microsoft
 indeed remains a mystery.

 2.2 you claim MathJax does a great job -- ok, I'm not going to argue ;) --
 while browsers don't. But we've used native output on Firefox before
 MathJax 2.0 and plan to do it again soon -- it is well implemented and can
 provide the same quality of typesetting.

 3. Well, I'm not sure what to say to those.  If math is a basic
 typographical need, then the syntax doesn't matter -- we need to see it
 implemented and its bottom up layout process clashes with CSS's top down
 process. No change in syntax will resolve that.

 Since MathML development involved a large number of TeX and computer
 algebra experts, I doubt a TeX-like syntax will end up being extremely
 different from MathML the second time around.

 Instead of fighting over syntax, I would prefer to focus on improving the
 situation of mathematics on the web -- so thank you for your offer to
 support us in fixing bugs and improving HTML layout.

 Peter.


 On Sunday, 5 May 2013 20:23:56 UTC-7, Joshua Cranmer wrote:
  On 5/5/2013 9:46 PM, Benoit Jacob wrote:
   I am still waiting for the rebuttal of my arguments, in the original
   email in this thread, about how TeX is strictly better than MathML for
   the particular task of representing equations. As far as I can see,
   MathML's only inherent claim to existence is it's XML, and being XML
   stopped being a relevant selling point for a Web spec many years ago
   (or else we'd be stuck with XHTML)

  Don't be quick to dismiss the utility of XML. The problem of XHTML, as I
  understand it, was that the XHTML2 spec ignored the needs of its
  would-be users and designed stuff

Re: We should drop MathML

2013-05-06 Thread Benoit Jacob
2013/5/6 Robert O'Callahan rob...@ocallahan.org

 Let me go on a bit of a rampage about TeX for a bit.

 TeX is not a markup format. It is an executable code format. It is a
 programming language by design!


Yes, but a small subset of TeX could be purely a markup format, not a
programming language. Just support a finite list of common TeX math
operations, and no custom macros (or very restricted ones).
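
For instance (purely illustrative; the exact command list below is made up
for the example, not a concrete proposal), such a subset could whitelist a
fixed set of structural commands and reject anything that defines or
expands user macros:

  % allowed: fixed structural commands such as
  %   \frac{...}{...}   \sqrt{...}   ^{...}   _{...}
  %   \sum   \int   \pi   \infty   \left(   \right)
  % rejected: anything programmable, e.g.
  %   \def   \newcommand   \edef   \csname ... \endcsname
  \sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}

Such input still parses into a plain syntax tree, with no macro expansion
step involved.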

Benoit
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform

