Re: Visual Studio Code integration with `clangd` for C/C++ development

2020-09-16 Thread Jean-Yves Avenard
Hi.

Started to play with this. I've used VSCode for several years, though its
limited multi-process debugging capabilities make it a tad useless as a debugger.

Now what we need is something like
https://marketplace.visualstudio.com/items?itemName=vsdbgplat.MicrosoftChildProcessDebuggingPowerTool#overview

Something that will automatically attach any new process to the debugger.

When will it be ready? :D

Jean-Yves

On Wed, Sep 16, 2020 at 5:00 PM Andi-Bogdan Postelnicu 
wrote:

>
>
> On 16 Sep 2020, at 04:14, Botond Ballo  wrote:
>
> On Tue, Sep 15, 2020 at 6:55 PM Jean-Yves Avenard 
> wrote:
>
>> This broke several features for me (and I use VSCode all the time). One in
>> particular was the ability to switch between code and header (Ctrl-K
>> Ctrl-O).
>>
>
> clangd supports this, but it's under a custom command name (as it's not
> part of the Language Server Protocol).
>
> Thank you for adding this; maybe we should also add this to our documentation?
>
> If you go to Keyboard Shortcuts, and search for the command
> "clangd.switchheadersource", you can bind your preferred shortcut to it.
>
>
>> Finding symbol definitions broke in many cases too.
>>
>
> This is one we'd have to examine on a case-by-case basis. Some of them may
> be upstream issues in clangd, but some may be issues in our setup (e.g.
> related to unified builds).
>
> We should file a bug here and investigate on a per-module basis, since from
> our testing this feature works. I reckon that a problem may be triggered
> with our unified build system.
>
>
> Andi, do you have a suggestion for how to track these? Should we encourage
> people to file Bugzilla tickets for them, which we can then triage (and if
> appropriate, we can file upstream clangd issues)?
>
> Yes, we have Bug 1662709
> <https://bugzilla.mozilla.org/show_bug.cgi?id=1662709>, which is a META bug
> acting as an aggregator for all bugs related to the VSCode
> deployment. All issues should block that bug and we should triage them.
> My take on this is that if we find bugs in the clangd extension or in clangd
> itself we should upstream the fixes; at least for the clangd extension,
> since we don’t ship it in our environment. clangd, on the other hand, we ship
> in our artifacts, so if we can’t upstream fixes for various reasons (most
> probably because they would only land in major or dot releases of clang), we
> apply them locally when we build the artifacts.
>
> Cheers,
> Botond
>
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Visual Studio Code integration with `clangd` for C/C++ development

2020-09-15 Thread Jean-Yves Avenard
Hi.

I don't know if that's related, but recently VSCode started to show
clangd as a recommended extension for the project, which I installed.

This broke several features for me (and I use VSCode all the time). One in
particular was the ability to switch between code and header (Ctrl-K
Ctrl-O).
Finding symbol definitions broke in many cases too.

So I ended up uninstalling the clangd extension.

Is this a known issue?

Jean-Yves


On Fri, Sep 11, 2020 at 3:47 PM Andi-Bogdan Postelnicu 
wrote:

> First of all you will need to get VSCode and the mozilla repo. Besides that
> you have the `./mach bootstrap` environment that downloads everything that
> you need, like Python, Node, LLVM and the rest of the packages that we need to
> build Firefox.
> Once the bootstrap environment is set up, it is very easy to have VSCode
> configured; just use:
>
> `./mach ide vscode`
>
> The solution will be generated under the `.vscode` directory and a temporary
> compilation database will be created in the obj directory. The IDE will be
> opened automatically and you will be prompted with a list of extensions
> to install; this list is kept in
> `.vscode/extensions.json`.
>
> This should be it; after the last step you should have a fully working
> IDE, no matter the platform: Win64, MacOS64 or Linux64.
>
> Hope this sheds some light,
> Andi
>
> > On 10 Sep 2020, at 19:48, mhoye  wrote:
> >
> >
> > This is amazing work.
> >
> > For the sake of new user documentation, I have a question: From scratch,
> for me to get from zero to VS Code Community edition to "I have everything
> I need to work on Firefox", what is the consensus around the components or
> options I need to pick to get myself close to an ideal VS setup? I haven't
> revisited this on a clean machine in a long time, and this seems like as
> good a time as any to update that information.
> >
> > I think the answer is, Python, Node, C/C++ desktop and mobile... have I
> missed any, and are there other workload options that would help on first
> setup?
> >
> > - mhoye
> >
> > -- Original Message --
> > From: "Andrew Halberstadt" 
> > To: "Andi-Bogdan Postelnicu" 
> > Cc: "dev-platform" 
> > Sent: 2020-09-10 12:29:48 PM
> > Subject: Re: Visual Studio Code integration with `clangd` for C/C++
> development
> >
> >> This is great, thanks Andi!
> >>
> >> Are there any plans to introduce a `mach lint` integration as well? Or
> is
> >> that what is already being used for "inline parsing errors with limited
> >> auto-fix hints"?
> >>
> >>
> >> On Thu, Sep 10, 2020 at 12:20 PM Andi-Bogdan Postelnicu <
> a...@mozilla.com>
> >> wrote:
> >>
> >>> TLDR: VSCode users can type `./mach ide vscode` in order to get code
> >>> completion, reference navigation, refactoring, reformatting, etc.
> >>>
> >>> Hello all,
> >>>
> >>> VSCode is a multi-platform
> >>> open-source programming editor developed by Microsoft and volunteers. It is
> >>> partly built using open-source components but also uses proprietary
> >>> Microsoft code. It has support for many programming languages using
> >>> extensions.
> >>> In the past we had a minimal configuration setup
> >>> in the tree that reflected the basic extensions that
> >>> should be used and also some tasks that can
> >>> be triggered from the editor.
> >>>
> >>> Now, we significantly improved that!
> >>>
> >>> Starting with Bug 1656740, we’ve added
> >>> comprehensive support for C/C++ with the help of the `clangd` extension for
> >>> Firefox development. Leveraging the `clang` toolchain compiler we now have
> >>> support in the IDE for:
> >>>
> >>>    1. Syntax highlighting;
> >>>    2. IntelliSense with comprehensive code completion and suggestion;
> >>>    3. Go-to definition and Go-to declaration;
> >>>    4. Find all references;
> >>>    5. Open type hierarchy;
> >>>    6. Rename symbol; all usages of the symbol will be renamed, including
> >>>       declaration, definition and references;
> >>>    7. Code formatting, based on `clang-format`, that respects our coding
> >>>       standard using the `.clang-format` and `.clang-format-ignore` files.
> >>>       Formatting can be performed on an entire file or on a code selection;
> >>>    8. Inline parsing errors with limited auto-fix hints;
> >>>    9. Basic static-code analysis using `clang-tidy` and our list of enabled
> >>>       checkers. (This is still in progress; not all checkers are supported
> >>>       by `clangd`.)
> >>>
> >>>
> >>> This new eco-system for code development is supported on all platforms.

Re: Change to preferences via StaticPrefs, removal of gfxPrefs

2019-06-11 Thread Jean-Yves Avenard
Hi.

Thanks for the kind words.

> On 11 Jun 2019, at 9:49 pm, Kartikaya Gupta  wrote:
> 
> IIRC another difference between prefs in all.js and gfxPrefs was that if a 
> pref was not listed in all.js, you couldn't use it in the 
> {test-,ref-,}pref(...) annotations in reftest.list files. Can you confirm 
> that listing the pref in StaticPrefs but not all.js is not subject to this 
> restriction?

A gfxPref didn't set or create the related Preference to any value. It only 
worked the other way round: the gfxPref would be initialised in that process 
to the value of the Preference at the time of the process creation, but only if 
the preference existed.
As such, the value set in all.js would override the gfxPrefs default.

If you had only defined the gfxPref then the pref wouldn't show up in 
about:config either (which is why people for convenience also added an entry to 
all.js).

However, if you were to call the gfxPref setter method, it would then set the 
related Preference, but only if called in the parent process and on the main 
thread. Otherwise, the change to the gfxPref was local to the current process 
only.

Setting a pref by listing it in all.js or in StaticPrefList.h initialises things in 
the same manner; they are fundamentally equivalent.

Now, while I'm not aware of the case you describe, I don't see how using 
StaticPrefs could break that.

I hope that none of the gfxPrefs flaws got carried into StaticPrefs; I believe 
they were all fixed following this transition.

Jean-Yves 

Get Firefox for iOS

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Change to preferences via StaticPrefs, removal of gfxPrefs

2019-06-11 Thread Jean-Yves Avenard

Hi there.

So the changes have been live for a couple of weeks now and some issues 
have been raised or encountered since. I would like to clarify some of those.


1- If you define a StaticPref in StaticPrefList.h there is absolutely 
no need to also define the value in all.js. In fact this is strongly 
discouraged and there should almost never be a need for it. There will 
be no end-user visible difference if your pref is only defined in 
StaticPrefList.h: the pref value will still appear in about:config.


Since Nick Nethercote cleaned up all.js in his first StaticPref 
iteration, 403 preferences have popped up with duplicated 
initialisation, often with different default values set between the two.


So if you add a StaticPref, remove the existing all.js entry if it exists.

2- Please read the StaticPrefList.h documentation present at the 
beginning of the file. Please keep its content organised within the 
right section and alphabetically ordered. Don't group preferences with 
different prefixes together just because they relate to the same topic; 
reconsider the prefix used instead.


3- Carefully consider when a StaticPref is defined with a Once policy: 
do you need to use such a policy, and are you testing it appropriately?


StaticPrefs with a Once policy will be initialised when the first one is read; all 
of them will then be frozen for the entire lifetime of the parent process 
(following bug 1554334).


If you create a test for that pref and set that pref via means such as 
SpecialPowers.pushPrefEnv(), via web-platform-tests, etc., those will *not* 
update the value of the StaticPref. In order to prevent such misuse, in 
bug 1556131, at the request of Boris, we created a check that is enabled 
during automation testing on debug builds. Should anything attempt to 
modify the underlying preference of a `Once` StaticPref, it will crash 
with something like:


Assertion failure: staticPrefValue == preferenceValue (Preference 
'webgl.force-layers-readback' got modified since 
StaticPrefs::WebGLForceLayersReadback got initialized. Consider using a 
Live StaticPrefs instead), at 
/builds/worker/workspace/build/src/obj-firefox/dist/include/mozilla/StaticPrefList.h:6529
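
As an illustration, a hedged sketch of the kind of test pattern that would trip 
that assertion (the pref name comes from the message above; the surrounding 
mochitest harness is assumed):

  // In a mochitest on a debug automation build: "webgl.force-layers-readback"
  // backs a `Once` StaticPref, so its C++ value was frozen at first read.
  await SpecialPowers.pushPrefEnv({
    set: [["webgl.force-layers-readback", true]],
  });
  // The underlying preference has changed, but the C++ side still sees the
  // frozen StaticPrefs value; the debug-build consistency check then fires
  // the assertion shown above. Use a Live StaticPref if tests must vary it.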


All the best

Jean-Yves

On 15/05/2019 11:02 pm, Jean-Yves Avenard wrote:

Dear all.

TLDR; Wherever you used to use gfxPrefs, soon you will have to use 
StaticPrefs.


In a couple of days, once Bug 1550422 
<https://bugzilla.mozilla.org/show_bug.cgi?id=1550422> lands, I will 
be retiring gfxPrefs. All features originally found in gfxPrefs are 
now available in StaticPrefs, with some extra bonuses.


For the background, StaticPrefs gives you the ability to access a 
preference via a thread-safe accessor.


StaticPrefs and Preferences will now be available in all processes 
(not just main and content; this includes the GPU, VR and RDD processes).



3 levels of update policies: Skip, Once and Live:

* Skip will ignore all user overrides.
* Once will read the preference once and it will never be updated again.
* Live is the original behaviour: the values will be updated across 
all processes whenever the preference changes.


Possibility to dynamically set a StaticPref on any thread (however, 
the changes aren't propagated to other processes; doing otherwise is 
certainly doable, but I'm not convinced of the use case).


There are a few more options; to learn more, I invite you to read the 
StaticPrefList.h file.

The desire to retire gfxPrefs came from yet another misuse of StaticPrefs in 
the GPU process. In addition, gfxPrefs turned out to not be thread-safe.


It became rather tiring to always juggle between gfxPrefs and 
StaticPrefs depending on which process the code could run in.


And as Mr MacLeod used to say: There can be only one.

Jean-Yves


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Change to preferences via StaticPrefs, removal of gfxPrefs

2019-05-15 Thread Jean-Yves Avenard



On 16/05/2019 9:02 am, Botond Ballo wrote:

Will SpecialPowers.pushPrefEnv(), which currently does propagate the
prefs at least between the content and parent processes, continue to
work? A lot of tests rely on this.


Yes of course.

The change was to make processes other than the content 
process synced as well. I didn't check the necko process, but reading Kris' 
message it seems that it does this already.
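
For reference, a minimal sketch of the pattern those tests rely on (the pref 
name here is illustrative):

  // Push a pref for the duration of the test; the value propagates to the
  // content process, and is popped automatically when the test finishes.
  await SpecialPowers.pushPrefEnv({
    set: [["gfx.webrender.all", true]],
  });
  // ... exercise the feature under test here ...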


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Change to preferences via StaticPrefs, removal of gfxPrefs

2019-05-15 Thread Jean-Yves Avenard

Hi

On 16/05/2019 1:54 am, Nicholas Alexander wrote:
Forgive my rank ignorance here, but as an outsider this surprises me: 
I expect "Gecko preferences" to be (eventually) consistent across 
processes.  Is this just not the case, and it's common for prefs to 
vary between the main and content/gfx processes?  Is there a `user.js` 
equivalent for main and child processes?



The whole handling of preferences across processes is a bit fishy.

Today, preferences are fully available only in the main and content 
processes, with the limitation that modifying a preference must be done 
on the main thread; and if you modify one outside the main process, the 
change will be local to that process and won't propagate to the others.


In the RDD process (used for media decoding), preference values are 
passed at process creation only.


There's no preference support at all in the VR and GPU processes.

If you modify a preference on the main thread, the change will propagate 
only to processes that specifically handle the update with custom code.


Following bug 1550422, Preferences will be available in the other 
processes, and will be synced from the main process. So the 
restriction that you must modify the preference in the main process for 
it to propagate will remain.


When a new process is created, preference values are passed via command-line 
arguments, except on Windows. On Windows a shared memory object is 
created and the handle to that object is passed via the command line.

It is then up to each process to handle preference modifications and 
propagate the change. There's no automatic or universal method for how this 
is done, unfortunately.


Jean-Yves


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Change to preferences via StaticPrefs, removal of gfxPrefs

2019-05-15 Thread Jean-Yves Avenard

Dear all.

TLDR; Wherever you used to use gfxPrefs, soon you will have to use 
StaticPrefs.


In a couple of days, once Bug 1550422 lands, I will be 
retiring gfxPrefs. All features originally found in gfxPrefs are now 
available in StaticPrefs, with some extra bonuses.


For the background, StaticPrefs gives you the ability to access a 
preference via a thread-safe accessor.


StaticPrefs and Preferences will now be available in all processes (not 
just main and content; this includes the GPU, VR and RDD processes).



3 levels of update policies: Skip, Once and Live:

* Skip will ignore all user overrides.
* Once will read the preference once and it will never be updated again.
* Live is the original behaviour: the values will be updated across all 
processes whenever the preference changes.


Possibility to dynamically set a StaticPref on any thread (however, 
the changes aren't propagated to other processes; doing otherwise is 
certainly doable, but I'm not convinced of the use case).


There are a few more options; to learn more, I invite you to read the 
StaticPrefList.h file.

The desire to retire gfxPrefs came from yet another misuse of StaticPrefs in 
the GPU process. In addition, gfxPrefs turned out to not be thread-safe.


It became rather tiring to always juggle between gfxPrefs and 
StaticPrefs depending on which process the code could run in.


And as Mr MacLeod used to say: There can be only one.

Jean-Yves

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Lack of browser mochitests in non-e10s configuration and support for turning off e10s on desktop going forward

2019-04-24 Thread Jean-Yves Avenard



> On 25 Apr 2019, at 8:49 am, Bobby Holley  wrote:
>> 
> 
> I think the tradeoff boils down to (a) how many developers are using
> non-e10s debugging, with what frequency, versus (b) how much ongoing
> maintenance work is required across various components to keep non-e10s
> working. We all have intuition about these things, but I doubt we have hard
> data.
> 
> I'm open to the argument that my proposal is too aggressive. We could
> certainly try the muddle approach for a while and see how it goes, and how
> often 1proc breaks in practice. I don't think we can justify continuing to
> run the full suite of 1proc tests as tier-1, but we could potentially run a
> few smoketests, which might keep the builds usable enough for debugging.
> 
> If anyone is chomping at the bit to remove 1proc support from their module,
> please speak up.

I am one of those developers that find non-e10s essential to debug core 
features.

Debugging in e10s adds a big overhead to the effort required to determine 
why something isn't working as it should.

When debugging HW-specialised features (for media, those run in the GPU 
process), in e10s you always hit timeouts that cause a cascade of failures 
elsewhere.

Of course we can do without, but at the expense of time and convenience. Before 
non-e10s support is removed, I'd love to see better development/debugging 
tools added to help our workflow, particularly on Windows.

I knew this day would come; tbh I'm surprised non-e10s has lived for so long. 
However, I don't believe my workflow is unique among those working on Gecko.

Jean-Yves 

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Automatic changes during the dev workflow [was: Re: Upcoming changes to our C++ Coding Style]

2018-12-16 Thread Jean-Yves Avenard
Hi

> On 14 Dec 2018, at 7:57 pm, Sylvestre Ledru  wrote:
> 
> I think we should aim at option b) (updated automatically by bots after 
> submission to Phabricator)
> 
> 

I don’t particularly fancy this idea. Finding yourself with different code on 
Phabricator and locally is a good way to shoot yourself in the foot.

Prevent pushing improperly formatted code instead; it’s so easy to properly format 
your code. It also gives you one extra check on what you’re about to review.

This is similar to the Google review process. IIRC, before anyone is asked to review 
anything, the submission must pass a set of tests, one of which is 
checking the coding style.

> We have more and more tools at review phase (clang-format, flake8, eslint, 
> clang-tidy, codespell, etc) which propose some auto-fixes.
> 
> Currently, the turn around time of the tools is 14m on average which is 
> usually faster than the reviewer looking at the patch.
> If Phabricator provides the capability, we could have the bot automatically 
> proposing a new patchset on top on the normal one.
> The reviewer would look at the updated version.
> 
> 

It does feel longer than that :)

> By doing that, we would save time for everyone. The main drawback would be 
> that developer would have to retrieve the updated patch 
> before updating it.
> 
> 

A big drawback

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Dogfooding WebRender

2018-11-28 Thread Jean-Yves Avenard


> On 27 Nov 2018, at 12:18 pm, Wellington Torrejais da Silva 
>  wrote:
> 
> Hi,
> 
> Nice! I don't have Windows 10, but when you will need to test in Linux 
> distributions,   here I'm. Thanks

You can already enable WebRender on Mac and Linux: set gfx.webrender.enabled to 
true.




___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Upcoming changes to our C++ Coding Style

2018-11-22 Thread Jean-Yves Avenard
Hi

> On 21 Nov 2018, at 3:54 am, Ehsan Akhgari  wrote:
> 
> 
> You will break the blame on VCS with this change
> 
> Yes and no. Of course, just like many tree-wide mass changes in the past
> (e.g. the MPL2 header update), this will remain in the log.
> 
> Mercurial and Git both support a -w argument to ignore whitespace with
> annotate/blame.
> 
> In addition, modern versions of Mercurial have `hg annotate --skip
> ` which allows you to specify a revset used to select revisions to
> skip over when annotating.
> 
> Last but not least, we will tag the changeset’s commit message with
> “skip-blame” so that Mercurial would automatically ignore the reformat
> changeset for blame operations.

I’ve found Google’s depot_tools hyper-blame particularly useful here.

It takes a .git-blame-ignore-revs file containing the list of commits to ignore.

$ cat .git-blame-ignore-revs 
abd6d77c618998827e5ffc3dab12f1a34d6ed03d

That’s Sylvestre’s single commit changing dom/media (hg SHA1: 
0ceae9db9ec0be18daa1a279511ad305723185d4).

$ git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git
$ export PATH=$PATH:$PWD/depot_tools

Now git hyper-blame will behave in the same fashion as git blame, but ignore 
that particular commit.

I’m guessing we could make this .git-blame-ignore-revs file part of the tree, 
though that assumes everyone uses git-cinnabar.

Jean-Yves



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement and ship: WebP image support

2018-10-12 Thread Jean-Yves Avenard




On 11/10/2018 6:03 PM, Tom Ritter wrote:

Are we bringing in a new third party library for this? (Seems like yes?)

Who else uses it/audits it? Does anyone else fuzz it? Is it in OSS-fuzz?
Are we fuzzing it?

How does upstream behave? Do they cut releases or do they just have
continual development and downstreams grab random versions of it? How do we
plan to track security issues upstream? How do we plan to update it
(mechanically and how often)?

-tom



We have been discussing implementation details such that WebP would 
use the media decoder framework to demux and decode the images. As 
such, WebP support would automatically gain sandbox control (going 
through the same out-of-process decoding codepath as we will use for AV1).


Doing it that way would also greatly help with adding support for images like 
AVIF, or even using videos (mp4, webm) inside an <img> object.


There seems to be an urgency in shipping it now, though, meaning that the 
implementation details I describe above likely won't be in the first 
release.


JY
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Windows launcher process enabled by default on Nightly

2018-10-02 Thread Jean-Yves Avenard

Hi


On 2/10/2018 5:05 AM, Aaron Klotz wrote:


For various reasons we don't want to put escape hatches into any 
builds that we ship.


For local builds, if it would ease developer concerns over this 
feature, we can look into it. I have filed bug 1495628 for that purpose.


It seems that we can build with --disable-launcher-process to get around 
the issue. That seems good enough to me (though a dynamic pref/option 
would be better, of course).

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Windows launcher process enabled by default on Nightly

2018-10-01 Thread Jean-Yves Avenard

Hi


On 27/09/2018 5:19 PM, Aaron Klotz wrote:

Hi everybody,

Yesterday evening bug 1488554 [1] merged to mozilla-central, thus 
enabling the launcher process by default on Windows Nightly builds. 
This change is at the build config level.


Can we have something to entirely disable that new feature (like 
./mach run --disable-e10s) so that Firefox won't spawn another process?


It affects the development/debugging process otherwise; not everyone uses Visual 
Studio or WinDbg for debugging purposes. Having to manually attach to a 
process each time is rather unpleasant.


Thanks
JY
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: WebRender on in Firefox Nightly for Windows 10 Nvidia

2018-09-12 Thread Jean-Yves Avenard
What an achievement…

Congrats to you and the team….

JY

> On 12 Sep 2018, at 10:07 pm, Jeff Muizelaar  wrote:
> 
> In bug 1490742 I have enabled WebRender in Nightly on non-laptop
> Windows 10 Nvidia (~17% of our Nightly audience). This is a rewrite of
> much the graphics backend in Firefox. We expect some edge-case
> regressions, but generally nothing serious. We have quite a few staff
> and volunteers who have been using WebRender for months without major
> issues.
> 
> If you're on this hardware and you see a problem please file a bug.
> You can check if you're using WebRender by looking at the Compositing
> section of about:support. Further, WebRender should be generally
> usable on all platforms other than Android right now so if you want to
> be keen you can try it out now with the gfx.webrender.all pref.
> 
> -Jeff
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: mercurial-setup becomes vcs-setup and adds support for git

2018-08-21 Thread Jean-Yves Avenard
Hi.

This is awesome; upgrading and configuring cinnabar had always been a sore point.

But I just noticed that it appears to pull from the git-cinnabar master branch…

I had to run bootstrap on Android yesterday, which upgraded git-cinnabar 
(requiring a git upgrade),
and had to run it again today, which also required a git upgrade.

A git upgrade takes a significant amount of time.

Wouldn’t it be better to follow the release branch instead of master?

JY

> On 17 Aug 2018, at 9:50 am, Panos Astithas  wrote:
> 
> Hi all,
> 
> since bug 1257478 landed in m-c earlier today, you should now be using 'mach 
> vcs-setup' instead of 'mach mercurial-setup'. Nothing else changes in your 
> workflow (e.g. 'mach mercurial-setup -u' becomes 'mach vcs-setup -u') and the 
> spell checker will suggest vcs-setup if you try to use mercurial-setup.
> 
> If you are a git-cinnabar user, your workflow is now supported. 'mach 
> vcs-setup --git' will fetch the latest recommended version of git-cinnabar 
> and configure it for you. The --update-only flag is also available for git, 
> so it would be a good idea to run 'mach vcs-setup --git --update-only' (or 
> 'mach vcs-setup -gu') every now and then to make sure everything is up to 
> date.
> 
> 'mach bootstrap' will also offer to update and configure your git-cinnabar 
> environment if a git checkout is detected. 
> 
> Cheers,
> Panos
> 
> ___
> firefox-dev mailing list
> firefox-...@mozilla.org
> https://mail.mozilla.org/listinfo/firefox-dev



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship: media-capabilities

2018-08-11 Thread Jean-Yves Avenard
It appears that I hadn’t provided all the information earlier…

So here it is again:

Summary:
Media Capabilities allows web sites to better determine what content to 
serve to the end user.
Currently a media element offers the canPlayType method 
(https://html.spec.whatwg.org/multipage/media.html#dom-navigator-canplaytype-dev)
 to determine if a container/codec can be used. But the answer is limited to a 
maybe/probably type answer.

It gives no ability to determine if a particular resolution can be played 
well/smoothly enough, or be played in a power-efficient manner (e.g. will it be 
hardware accelerated).
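
As an illustration, a minimal sketch of the new API (the codec string and 
numbers are illustrative; decodingInfo() returns a promise):

  // Ask whether a 4K60 VP9 stream served through MSE would play well.
  const info = await navigator.mediaCapabilities.decodingInfo({
    type: "media-source",
    video: {
      contentType: 'video/webm; codecs="vp9"',
      width: 3840,
      height: 2160,
      bitrate: 20000000,  // bits per second
      framerate: 60,
    },
  });
  // Unlike canPlayType()'s "maybe"/"probably", we get concrete booleans:
  if (info.supported && info.smooth && info.powerEfficient) {
    // serve the VP9 2160p variant
  }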

This has been a particular problem with sites such as YouTube, which serves VP9 
under all circumstances even if the user agent won't play it well (VP9 is 
mostly done via software decoding and is CPU intensive). This has forced us to 
indiscriminately disable VP9 altogether.
For YouTube to know that VP9 could be used for low resolution but not high-def 
ones would allow them to select the right codec from the start.

Chrome shipped it a while ago now, and talking to several partners 
(including YouTube, Netflix, Facebook, etc.), Media Capabilities support has 
been the number one request.

Bug: This issue is tracked in bugzilla 1409664  
(https://bugzilla.mozilla.org/show_bug.cgi?id=1409664)

Link to standard: The proposed spec is available at 
https://wicg.github.io/media-capabilities/
Platform coverage: It will be available on all platforms, and exposed to all 
sites including insecure (http) ones.
Estimated or target release: 63
Preference behind which this will be implemented: the feature is controllable 
via media.media-capabilities.enabled
Is this feature enabled by default in sandboxed iframes? If not, is there a 
proposed sandbox flag to enable it? If allowed, does it preserve the current 
invariants in terms of what sandboxed iframes can do?
DevTools bug: No particular requirements for additional devtools
Do other browser engines implement this? Chrome has shipped this since late 2017
web-platform-tests: http://w3c-test.org/media-capabilities/

We do not enable the Screen Media-Capabilities extension due to spec issues (in 
particular https://github.com/WICG/media-capabilities/issues/89); additionally, 
we have no way at present to implement those.



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship: media-capabilities

2018-08-09 Thread Jean-Yves Avenard
Hi

There have been some concerns about some parts of the spec, in particular the 
part extending the Screen interface.

The plan now is to keep the Screen extensions disabled by default and to turn on 
the remaining parts, related purely to the playback and encoding capabilities.

This is tracked in bug 1480190

JY

> On 4 Jul 2018, at 2:16 am, Jean-Yves Avenard  wrote:
> 
> Hi
> 
> The code is now in central and in the last nightly.
> 
> It's currently disabled by default behind the pref 
> media.media-capabilities.enabled
> 
> The bug tracking fingerprinting concerns is done in 
> https://bugzilla.mozilla.org/show_bug.cgi?id=1461454
> 
> Feel free to enable it and watch videos on YouTube. On Mac in particular it 
> would allow us to re-enable the free VP9 codec (which has been disabled due to 
> performance reasons).
> 
> Kind regards
> Jean-Yves



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship MediaSource SourceBuffer.changeType

2018-08-07 Thread Jean-Yves Avenard


> On 7 Aug 2018, at 5:24 pm, Boris Zbarsky  wrote:
> 
> OK.  Do you have any signals at all from Safari and Edge?  Even just knowing 
> "not opposed in current form but no concrete plans to implement" would be 
> useful, compared to them suddenly coming back with requests for changes in 
> the spec.

I’ll enquire about that…

https://wpt.fyi/results/media-source/mediasource-changetype-play.html 

https://wpt.fyi/results/media-source/mediasource-changetype.html 


> 
>> Google’s main interest for this is for ad insertions unfortunately, a bit 
>> sad when there’s so much potential.
> 
> Does this make the ad insertion case better for our users in some way, at 
> least?

It makes it easier for the site.
They typically get ads from ad suppliers as H.264/AAC content; for YouTube 
that allows them to easily insert the ads into their VP9/Opus content without 
having to convert them…

So previously, you would have to pause the current video element, create 
a new one, and set it as an overlay, and once done tear everything down.

Now everything can be done inline, with much smoother transitions.

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship MediaSource SourceBuffer.changeType

2018-08-07 Thread Jean-Yves Avenard


> On 6 Aug 2018, at 10:30 pm, Boris Zbarsky  wrote:
> 
> On 8/6/18 5:37 AM, Jean-Yves Avenard wrote:
>> enable by default changeType method on MediaSource’s Source Buffer
> 
> To be clear, this is enabling by default on all channels, right?

yes

> 
>> The method has been available since 61 behind the preference 
>> media.mediasource.experimental.enabled
> 
> But not default-enabled anywhere, so not much tested?

We have web-platform-tests for this feature, which have landed… We know that 
YouTube intends to use this new functionality as soon as it is available.
http://w3c-test.org/media-source/mediasource-changetype.html
http://w3c-test.org/media-source/mediasource-changetype-play.html

As far as testing is concerned, limiting the types of codecs used was an artificial 
limitation to start with, so the core code involved has always been exercised 
since the MSE rearchitecture (which came with 42).

> 
> How stable is the spec?

Stable, I believe, and unlikely to change again.
There are a few “v2” MSE features in the pipeline, but they will typically 
require more work.

> 
> What is the status of implementation or interest in other browsers?

Chrome has it implemented behind a pref too; we haven’t discussed it with others 
to determine their intentions on that…
IMHO, it’s the most useful addition made to MSE since it first came out.
It will greatly improve adaptive-quality streaming.
I’m fairly keen to see how AV1 can be used more easily that way (which 
otherwise would be limited to only the most powerful machines out there).

Google’s main interest in this is for ad insertion, unfortunately; a bit sad 
when there’s so much potential.


> 
> What is the status of interest from authors?

Media Capabilities and changeType were the 2 most requested features by all 
content providers we’ve met this year.

> 
> Put another way, if we thought this was a safe change why did we have it off 
> by default to start with?  Were there substantive changes to the code since 
> 61 that prevented us from enabling by default before now?

The implementation behind changeType went through several iterations to reach this 
point, at which time the Chrome team and we agreed on the current definition.

I had put it behind a pref to start with, as there were no specs at all defined 
for it. It was developed as a proof of concept, to show what could be 
achieved and how easily it could be done.

Our MediaSource implementation and architecture had been conceived from the 
ground up with this capability in mind… So we were the first to implement it, 
and it was done very quickly.

We then had a few back-and-forth meetings with the Chromium media team until we 
agreed (with compromises) on the final draft.

I hope I answered all your questions.

JY

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship MediaSource SourceBuffer.changeType

2018-08-06 Thread Jean-Yves Avenard
Hi

> On 6 Aug 2018, at 9:12 pm, Nils Ohlmeier  wrote:
> 
> Which version of Firefox are you planning to ship this in?
> 
> Thanks
>  Nils


Sorry, my bad.

63. The feature was introduced in 61.

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Intent to ship MediaSource SourceBuffer.changeType

2018-08-06 Thread Jean-Yves Avenard
Summary:

Enable by default the changeType method on MediaSource’s SourceBuffer, allowing 
the content type (codecs and/or container) to be changed on the fly…
The method has been available since 61 behind the preference 
media.mediasource.experimental.enabled.

Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1481166 


Detail:

Up to now, when using Media Source Extensions, you had to create a source buffer 
of a specific type using the MediaSource.addSourceBuffer method, providing the 
MIME type describing the container and optionally the codec. You could then no 
longer change the container nor the codec.

Enter changeType: it allows mixing different codecs within the same video 
element.
One particular use case would be to use different codecs according to the 
selected resolution.

For example, using AV1 at the very low bitrates, due to AV1's exceptional 
performance there, and switching later to VP9 or H.264, as they are a bit more 
resource-friendly.
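
A minimal sketch of the call pattern (the segment variables and codec strings 
are illustrative; `mediaSource` is assumed to be an open MediaSource):

  // Start with H.264 in MP4...
  const sb = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.64001f"');
  sb.appendBuffer(h264InitSegment);
  sb.addEventListener("updateend", function onUpdate() {
    sb.removeEventListener("updateend", onUpdate);
    // ...then switch container and codec on the same SourceBuffer.
    sb.changeType('video/webm; codecs="vp9"');
    sb.appendBuffer(vp9InitSegment);
  });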

JY

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: Major preference service architecture changes inbound

2018-07-20 Thread Jean-Yves Avenard
Hi

I believe that this change may be the cause of 
https://bugzilla.mozilla.org/show_bug.cgi?id=1477254


That is, the pref value set in all.js no longer overrides the default value set 
in StaticPrefs. The problem occurs mostly with e10s on; when e10s is disabled, 
I see the problem only about 5% of the time.

JY

> On 13 Jul 2018, at 10:37 pm, Kris Maglione  wrote:
> 
> tl;dr: A major change to the architecture preference service has just landed, 
> so please be on the lookout for regressions.
> 
> We've been working for the last few weeks on rearchitecting the preference 
> service to work better in our current and future multi-process 
> configurations, and those changes have just landed in bug 1471025.
> 
> Our preference database tends to be very large, even without any user values. 
> It also needs to be available in every process. Until now, that's meant 
> complete separate copies of the hash table, name strings, and value strings 
> in each process, along with separate initialization in each content process, 
> and a lot of IPC overhead to keep the databases in sync.
> 
> After bug 1471025, the database is split into two sections: a snapshot of the 
> initial state of the database, which is stored in a read-only shared memory 
> region and shared by all processes, and a dynamic hash table of changes on 
> top of that snapshot, of which each process has its own. This approach 
> significantly decreases memory, IPC, and content process initialization 
> overhead. It also decreases the complexity of certain cross-process 
> synchronization logic.
> 
> But it adds complexity in other areas, and represents one of the largest 
> changes to the workings of the preference service since its creation. So 
> please be on the lookout for regressions that look related to preference 
> handling. If you spot any, please file bugs blocking 
> https://bugzil.la/1471025.
> 
> Thanks,
> Kris
> ___
> firefox-dev mailing list
> firefox-...@mozilla.org
> https://mail.mozilla.org/listinfo/firefox-dev



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: Automated code analysis now also in Phabricator

2018-07-17 Thread Jean-Yves Avenard
Hi

> On 17 Jul 2018, at 3:22 pm, Jan Keromnes  wrote:
> 
> TL;DR -- “reviewbot” is now enabled in Phabricator. It reports potential 
> defects in pending patches for Firefox.
> 
> Last year, we announced Code Review Bot (“reviewbot”, née “clangbot”), a 
> Taskcluster bot that analyzes every patch submitted to MozReview, in order to 
> automatically detect and report code defects *before* they land in Nightly:
> 
> https://groups.google.com/d/msg/mozilla.dev.platform/TFfjCRdGz_E/8leqTqvBCAAJ 
> 
> 
> Developer feedback has been very positive, and the bot has caught many 
> defects, thus improving the quality of Firefox.

This is great … Thank you

When did this become active?

Can existing diffs be forced to be scanned if they weren’t before?

JY

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Fission MemShrink Newsletter #1: What (it is) and Why (it matters to you)

2018-07-11 Thread Jean-Yves Avenard
Hi

> On 11 Jul 2018, at 10:10 pm, Kris Maglione  wrote:
> Thanks. Boris added this as a blocker.
> 
> It looks like it will be helpful, but unfortunately won't give us the 2MB 
> simple arithmetic would suggest. On Windows, at least, (and probably 
> elsewhere, but need to confirm) thread stacks are lazily committed, so as 
> long as the decoders aren't used in a process, the overhead is probably 
> closer to 25KB per thread.
> 
> Shrinking the size of the thread pool and lazily spinning up threads when 
> they're first needed would probably save us 200KB per process, though...

I haven’t looked much into the details, not being an expert on this and having just 
finished watching the World Cup…

A quick glance at the code gives me:

On Mac/Linux, using pthreads:
when a thread is created, the stack size is set using pthread_attr_setstacksize:
https://searchfox.org/mozilla-central/source/nsprpub/pr/src/pthreads/ptthread.c#355

On Linux, the man page is clear:
"The stack size attribute determines the minimum size (in bytes) that will be 
allocated for threads created using the thread attributes object attr.”

On Mac, less so; I’m not sure what the behaviour there is, whether it’s allocated 
or not…

On Windows:
https://searchfox.org/mozilla-central/source/nsprpub/pr/src/md/windows/w95thred.c#151

the thread is created with the STACK_SIZE_PARAM_IS_A_RESERVATION flag set. This 
will allocate the memory immediately.

The saving I was mentioning earlier isn’t just due to the media decoder threadpool's 
thread stacks no longer needing to be that big, but that all other threadpools 
can be reduced too. Threadpools aren’t used only when playing a video/audio 
file.

Anyway, this needs further inspection… we’ll know soon :)

I do hope that the 100-process figure that was given is a worst-case 
scenario though...
JY



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Fission MemShrink Newsletter #1: What (it is) and Why (it matters to you)

2018-07-11 Thread Jean-Yves Avenard
Hi

That’s great info, thank you.

One place where we could gain heaps is in the media stack.
Currently, each content process allocates a thread pool with at least 8 threads 
for use with the media decoders, each thread with a default stack size of 256kB.
(https://searchfox.org/mozilla-central/source/xpcom/threads/nsIThreadManager.idl#53)

That stack size has been increased over the years due to the growing use of 
system frameworks (in particular the Mac CoreVideo framework, which uses 
over 200kB alone), and right now 256kB itself isn’t enough for the new AV1 
decoder from libaom.

One piece of work the media team has started is to have all those decoders run 
in a dedicated process: this work was done mostly for security 
reasons, but there will be side gains memory-wise.

This work is tracked in bug 1471535 
(https://bugzilla.mozilla.org/show_bug.cgi?id=1471535)

Once this is done, and we no longer call decoders in the content process, the 
decoder process could use an increased stack size, while reducing the content 
process default stack size to 128kB (and maybe even 64kB).

That alone may be sufficient to achieve your mentioned goals.

An immediate intermediary step could be to use two different stack sizes, as we 
pretty much know which threads need more than others.

JY


> On 10 Jul 2018, at 8:19 pm, Kris Maglione  wrote:
> 
> Welcome to the first edition of the Fission MemShrink newsletter.[1]
> 
> In this edition, I'll sum up what the project is, and why it matters to you. 
> In subsequent editions, I'll give updates on progress that we've made, and 
> areas that we'll need to focus on next.[2]
> 
> 
> The Fission MemShrink project is one of the most easily overlooked aspects of 
> Project Fission (also known as Site Isolation), but is absolutely critical to 
> its success. And it will require a company- and community-wide effort to 
> meet its goals.
> 
> The problem is thus: In order for site isolation to work, we need to be able 
> to run *at least* 100 content processes in an average Firefox session. Each 
> of those processes has its own base memory overhead—memory we use just for 
> creating the process, regardless of what's running in it. In the post-Fission 
> world, that overhead needs to be less than 10MB per process in order to keep 
> the extra overhead from Fission below 1GB. Right now, on our best-case 
> platform, Windows 10, it is somewhere between 17 and 21MB. Linux and OS X hover 
> between 25 and 35MB. In other words, between 2 and 3.5GB for an ordinary 
> session.
> 
> That means that, in the best case, we need to reduce the memory we use in 
> content processes by *at least* 7MB. The problem, of course, is that there 
> are only so many places we can cut memory without losing functionality, and 
> even fewer places where we can make big wins. But, there are lots of places 
> we can make small and medium-sized wins.
> 
> So, to put the task into perspective, of all of the places we can cut a 
> certain amount of overhead, here are the number of each that we need to fix 
> in order to reach 1MB:
> 
> 250KB:   4
> 100KB:  10
> 75KB:   13
> 50KB:   20
> 20KB:   50
> 10KB:  100
> 5KB:   200
> 
> Now remember: we need to do *all* of these in order to reach our goal. It's 
> not a matter of one 250KB improvement or 50 5KB improvements. It's 4 250KB 
> *and* 200 5KB improvements. There just aren't enough places we can cut 250KB. 
> If we fall short in any of those areas, Project Fission will fail, and 
> Firefox will be the only major browser without site isolation.
> 
> But it won't fail, because all of you are awesome, and this is a totally 
> achievable goal if we all throw our effort behind it.
> 
> Essentially what this means, though, is that if we identify an area of 
> overhead that's 50KB[3] or larger that can be eliminated, it *has* to be 
> eliminated. There just aren't that many large chunks to remove. They all need 
> to go. And if an area of code has a dozen 5KB chunks that can be eliminated, 
> maybe they don't all have to go, but at least half of them do. The more the 
> better.
> 
> 
> To help us triage these issues, we have a tracking bug 
> (https://bugzil.la/memshrink-content), and a per-bug whiteboard tag 
> ([overhead:...]) which gives an estimate of how much per-process overhead we 
> believe fixing that bug would eliminate. Please feel free to add blockers to 
> the tracking bug if you think they're relevant, and to add or update 
> [overhead] tags if you have reasonable estimates.
> 
> 
> With all of that said, here's a brief update of the progress we've made so 
> far:
> 
> In the past month, unique memory per process[4] has dropped 3-4MB[5], and JS 
> memory usage in particular has dropped 1.1-1.9MB.
> 
> Particular credit goes to:
> 
> * Eric Rahm added an AWSY test suite to track base content process memory
>  (https://bugzil.la/1442361). Results:
> 
>   Resident unique: 
> 

Re: Compile webrtc from source in firefox

2018-07-05 Thread Jean-Yves Avenard




On 05/07/2018 10:17, amantell...@gmail.com wrote:

Hi,
I don't understand which webrtc code is used by firefox builder.
I need to modify the webrtc part and recompile it to be used by firefox.
any help?




Firefox uses its own copy; the code is found in media/webrtc.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship: block audible autoplay media intervention

2018-07-05 Thread Jean-Yves Avenard

Hi


On 05/07/2018 17:10, Mounir Lamouri wrote:
FWIW, WebKit uses the audio track availability and Blink intends to do 
this at some point.




There are fundamental technical issues to resolve before this can be 
achieved. And at this stage, I'm not convinced it could ever be 
achieved, nor that it's worth it.


It would likely require spec changes.
The media element's play() has required synchronous steps, while the 
entire demuxing/decoding pipeline is fully asynchronous. We can't 
reliably determine whether a particular track is available in a synchronous 
fashion before the play() steps complete. If MSE is in use, that becomes 
even more tricky. See the sketch below.
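
A minimal sketch of the timing problem (the element and event wiring are 
illustrative):

  const video = document.createElement("video");
  video.src = "clip.mp4";
  // play() must run its synchronous steps now, i.e. decide whether to allow
  // or block autoplay, but whether the resource actually contains an audible
  // audio track is only known later, once asynchronous demuxing completes:
  video.play();
  video.addEventListener("loadedmetadata", () => {
    // Track information only becomes available here, after the decision point.
  });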



JY
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: pay attention when setting multiple reviewers in Phabricator

2018-07-05 Thread Jean-Yves Avenard

Hi


On 05/07/2018 19:19, Mark Côté wrote:


This is, however, something we can address with our new custom 
commit-series-friendly command-line tool. We are also working towards 
the superior solution of automatically selecting reviewers based on 
module owners and peers and enforcing this in Lando.




This sounds like a great idea. Or, at least, component peers could 
receive notifications that something is about to change when no 
reviewers from the peers are included...


Several times we've encountered unexplained regressions, which could 
have been avoided if the right peer had been notified.


DOM components explicitly require a DOM peer for review. I'm not 
suggesting it becomes the same for all components, however... But a very 
visible suggestion to add component peers to the reviewer list would be 
awesome.


JY
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: open socket and read file inside Webrtc

2018-07-05 Thread Jean-Yves Avenard

Hi


On 05/07/2018 10:16, amantell...@gmail.com wrote:

I want to open a file inside webrtc core part.




If you could expand on the reasons why, it would allow us to help you.

What is your ultimate objective? Why do you think you need to open a 
file inside the webrtc core?


Is it because you want to record the streams being sent?
You want to record some diagnostics?

Do you want to patch gecko/webrtc directly? Or do you want to write an extension?

With the information you've provided, it's not possible to detail an 
alternative…


But as Ekr mentioned, you can't do what you describe, and in the future 
it will be even less possible (should webrtc run in its own 
sandbox/process).


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship: media-capabilities

2018-07-03 Thread Jean-Yves Avenard
Hi

The code is now in central and in the last nightly.

It's currently disabled by default behind the pref
media.media-capabilities.enabled

The bug tracking fingerprinting concerns is done in
https://bugzilla.mozilla.org/show_bug.cgi?id=1461454

Feel free to enable it and watch videos on YouTube. On Mac in particular it
would allow us to re-enable the free VP9 codec (which has been disabled due to
performance reasons).

Kind regards
Jean-Yves

On Mon, May 14, 2018 at 5:19 PM, Jean-Yves Avenard 
wrote:

> Media Capabilities allow for web sites to better determine what content to
> serve to the end user.
> Currently a media element offers the canPlayType method (
> https://html.spec.whatwg.org/multipage/media.html#dom-
> navigator-canplaytype-dev) to determine if a container/codec can be used.
> But the answer is limited to a maybe/probably type answer.
>
> It gives no ability to determine if a particular resolution can be played
> well/smoothly enough or be done in a power efficient manner (e.g. will it
> be hardware accelerated).
>
> This has been a particular problem with sites such as YouTube that serves
> VP9 under all circumstances even if the user agent won't play it well (VP9
> is mostly done via software decoding and is CPU intensive). This has forced
> us to indiscriminately disable VP9 altogether.
> For YouTube to know that VP9 could be used for low resolution but not
> high-def ones would allow them to select the right codec from the start.
>
> This issue is tracked in bugzilla 1409664  (https://bugzilla.mozilla.org/
> show_bug.cgi?id=1409664)
>
> The proposed spec is available at https://wicg.github.io/
> media-capabilities/
>
> Chrome has shipped it a while ago now and talking to several partners
> (including YouTube, Netflix, Facebook etc) , Media Capabilities support has
> been the number one request.
>
> We intend to implement and ship this API very soon.
>
> Early comment and feedback will be welcome.
>
> Kinds regards
> Jean-Yves
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: pay attention when setting multiple reviewers in Phabricator

2018-07-02 Thread Jean-Yves Avenard
On Mon, Jul 2, 2018 at 5:01 PM, Andreas Tolfsen  wrote:

> Also sprach Marco Bonardo:
>
> > When asking for review to multiple reviewers, and all of them must accept
> > your revision, you must mark them as blocking reviews, either in the
> > Phabricator ui or appending "!" at the end of the reviewer name.
> Otherwise
> > it's first-come-first-serve.
>
> Note that this is, and also has been, the case for mozreview.
>
>
I don't ever recall MozReview having different kinds of reviewers (blocking or
non-blocking); if two people were added as reviewers, by default both had to
review.

That explains now why the 2nd person I had been waiting on in Phabricator
didn't get to do the review: I already had r+ from the first reviewer...
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Coding style: Making the `e` prefix for enum variants not mandatory?

2018-06-29 Thread Jean-Yves Avenard



On 29/06/2018 16:58, Boris Zbarsky wrote:

On 6/29/18 10:30 AM, Nathan Froyd wrote:

Given the language-required qualification for
`enum class` and a more Rust-alike syntax, I would feel comfortable
with saying CamelCase `enum class` is the way to go.


For what it's worth, I agree.  The point of the "e" prefix is to 
highlight that you have an enumeration and add some poor-man's 
namespacing for a potentially large number of common-looking names, 
and the language-required qualification already handles both of those.


+1..

I would certainly be disappointed with ALL_CAPS (being myself a user of 
kCamelCase).

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Coding style: brace initialization syntax

2018-06-05 Thread Jean-Yves Avenard


> On 5 Jun 2018, at 12:54 pm, bposteln...@mozilla.com wrote:
> 
> I would like to resurrect this thread since it would help us a lot for bug 
> 1453795 to come to a consensus on when to use braces and when to use 
> parentheses. Also I must point out a thing here, that was also mentioned here 
> earlier, that there are situations where we cannot use parentheses. This is 
> when we want to initialize a structure that doesn't have a ctor, like:
> [1]
> struct str {
>  int a;
>  int b;
> };
> 
> class Str {
>  str s;
>  int a;
> public:
>  Str() : s{0}, a(0) {}
> };
> 
> Also it would help a lot if we would establish how many spaces should be 
> between the parentheses or the braces, like whether we prefer [1] or [2]:
> 
> [2]
> class Str {
>  str s;
>  int a;
> public:
>  Str() : s{ 0 }, a( 0 ) {}
> };
> 
> I don't have a personal preference here, but right now there are several 
> places in our code that mix spaces inside parentheses/braces with no 
> spaces.

The current coding style:
https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Coding_Style
states not to use spaces.

There's no case where a parenthesis should be followed by a space.

Many things are wrong here:
First, the brace { should be on a new line:

class/struct str
{
…
}

Initializations are to be on multiple lines.

clang-format would have made it:
  class Str
  {
str s;
int a;

  public:
Str()
  : s{ 0 }
  , a(0)
{
}
  };

IMHO, we should be going for C++11 initializers; it's much clearer, and avoids 
duplicated code when you need multiple constructors.
What is str? I assume it's not a plain object, so it should have its own initializer.

so it all becomes:
  class Str
  {
str s;
int a = 0;

  public:
Str() {}
  };

or:
  class Str
  {
str s;
int a = 0;

  public:
Str() = default;
  };

(and I prefer constructors to be defined at the start of the class definition)
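
To make the duplication point concrete, a sketch with a hypothetical
second constructor; both constructors share the default member
initializer without repeating it in each init list:

  class Str
  {
    str s;
    int a = 0; // written once, applies to every constructor

  public:
    Str() = default;
    explicit Str(const str& aS)
      : s(aS) // only lists what differs from the defaults
    {
    }
  };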

My $0.02



Re: PSA: No more mozilla::Move

2018-06-02 Thread Jean-Yves Avenard


> On 2 Jun 2018, at 3:45 pm, Jean-Yves Avenard  wrote:
>> 
>> Beware of some local mac builds maybe being broken. That should be fixed
>> by bug 1270217 (thanks jwatt!).
> 
> 
> FWIW, this breaks the build with clang 6.0.0 on mac…
> 
> such as:
>  0:04.70 
> /Users/jyavenard/Work/Mozilla/obj-ff-dbg/dist/include/mozilla/Move.h:222:14: 
> error: no type named 'move' in namespace 'std'
> 
> which is ultra weird, as Move.h properly includes <utility>

Sorry for the noise; yes, applying bug 1270217 did fix it…





Re: PSA: No more mozilla::Move

2018-06-02 Thread Jean-Yves Avenard


> On 2 Jun 2018, at 9:56 am, Emilio Cobos Álvarez  wrote:
> 
> Hi, just a quick PSA:
> 
> In bug 1465585 I switched all uses of mozilla::Move to std::move, and
> removed the former.
> 
> The reasoning for that is that it allows compilers to detect misuses of
> std::move and warn about them (-Wpessimizing-move / -Wself-move /
> -Wreturn-std-move).
> 
> Beware of some local mac builds maybe being broken. That should be fixed
> by bug 1270217 (thanks jwatt!).


FWIW, this breaks the build with clang 6.0.0 on mac…

such as:
 0:04.70 
/Users/jyavenard/Work/Mozilla/obj-ff-dbg/dist/include/mozilla/Move.h:222:14: 
error: no type named 'move' in namespace 'std'

which is ultra weird, as Move.h properly includes <utility>
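
For reference, a minimal sketch of the kind of misuse those warnings
catch (hypothetical type and function):

  #include <utility>

  struct Foo
  {
    int mValue = 0;
  };

  Foo MakeFoo()
  {
    Foo f;
    f.mValue = 42;
    // -Wpessimizing-move fires here: the std::move defeats copy elision
    // (NRVO); a plain 'return f;' is at least as efficient.
    return std::move(f);
  }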



Re: Intent to ship: media-capabilities

2018-05-14 Thread Jean-Yves Avenard
Hi

> On 14 May 2018, at 6:53 pm, Boris Zbarsky <bzbar...@mit.edu> wrote:
> 
> On 5/14/18 11:19 AM, Jean-Yves Avenard wrote:
>> The proposed spec is available at https://wicg.github.io/media-capabilities/
> 
> I have some questions about this spec and our implementation:
> 
We’re at an early stage in the implementation, and assuming the concerns of 
some, I wanted to present it early on.

> 1)  What are the fingerprinting implications?  What effect, if any, do our 
> "resist fingerprinting" preferences have on our API implementation here?  The 
> spec tries to address this but as usual mostly handwaves around it.

The most obvious choice considered was to provide identical information to what 
the existing canPlayType provides: that is, not providing extra details.
So if canPlayType reports that "video/webm; codecs=vp9" is supported, then so 
will MediaCapabilities, but making no distinction according to the resolution 
or the bitrate specified.
It is currently possible with canPlayType to query a much deeper level of 
information, in particular bitrate, colorspace, HDR support, codec level, etc. 
We haven't fully implemented those because, as canPlayType is a synchronous API, 
doing so properly with our asynchronous backend is hard.

> 
> 2) It looks to me that given a MediaCapabilitiesInfo there is no way to 
> figure out what configuration it represents.  Should there be such a way?  It 
> seems like it would make it simpler to deal with asking for the capabilities 
> for several configurations at once and then examining the results if you 
> don't have to keep track of which returned promise corresponds to which 
> passed-in configuration.  Doubly so if you Promise.race things (though why 
> one would do that in this case is not so clear to me).
> 
> Note that even the example in section 5.1 of the spec gets this wrong: it 
> uses result.contentType, but "result" is a MediaCapabilitiesInfo and doesn't 
> have a .contentType property.

I would invite you to submit such bug and concern you have on the wicg site: 
https://github.com/wicg/media-capabilities/issues

Or I can do so if you prefer.


> 3) The booleans in MediaCapabilitiesInfo (apart from "supported") seem rather 
> vaguely defined.  As a concrete example, if I am on 4-core (+ hyperthreading) 
> "desktop"-level system with nothing running except the video, "smooth" should 
> clearly be set to true.  Should it still be set to true on the same hardware 
> but in a situation where I am heavily swapping and my load average is 150?  
> This is a bit of a caricature, but it seems to me that if people are going to 
> treat this as a _reliable_ signal then it needs to be more clearly spelled 
> out what things it does or does not take into account.

This is an issue I've been raising frequently: there's no way to determine 
if the capabilities change over time. Receiving a notification when such a 
temporary workload occurs would be of benefit.
The spec isn’t set in stone, and I’m hoping that a new event could be 
dispatched on the media element to indicate that the capabilities have changed.

Having said that, with hardware decoders, typically whatever you may be doing 
has no impact on performance: it’s a dedicated circuit (even if for some 
there’s a limit on how many decoders can be used at the same time).

> 
> 4) For the "change" event on Screen, does that apply to any property defined 
> in any specification, not just the properties defined in this specification?  
> That would be a pretty significant monkeypatch in its own right.  It would be 
> better if whatever specifications define properties also define what events 
> fire if those properties change.
> 

I’m not sure I understand your question. onChange and the change event is only 
defined for the Screen interface 
(https://drafts.csswg.org/cssom-view/#the-screen-interface).
Or you’re suggesting that as the MediaCapabilities Screen extension is only 
about gamut and luminance, each should get its own event so that future 
extension to the Screen interface do no conflict?

JY



Re: Intent to ship: media-capabilities

2018-05-14 Thread Jean-Yves Avenard
Hi

> On 14 May 2018, at 6:47 pm, Tom Ritter  wrote:
> 
> It seems like this will reveal a lot of information about the user's
> hardware. Does the Resist Fingerprinting preference disable the API or
> report standardized results? If not, can we get that bug on file (and
> if it's easy, point out exactly where we would want to add the 'if()
> return false'?)
> 
> -tom

This is a concern that has been raised previously, and the same information can 
ultimately be gathered with existing APIs; but those are typically after the fact, 
and by then it's already too late to allow the user to have a decent media playback 
experience.

Existing canPlayType can tell you if we support a particular codec or not.
During playback, we already expose various metrics (starting from bug 580531, 
https://bugzilla.mozilla.org/show_bug.cgi?id=580531; this later became an official 
spec) to determine if the content plays well: number of frames dropped, number 
of frames decoded, how many were painted, etc.

As such, MediaCapabilities doesn't expose much more than what someone can 
already gather over time with what already exists.

There are various ways we can build the Media Capabilities answer: collecting 
past metrics and building up a dictionary, or making assumptions based on the 
decoders (e.g. we know a hardware H.264 decoder will always be smooth and power 
efficient).

To get around fingerprinting, at the user's choice, the obvious workaround 
would be to report that everything is always supported, always smoothly, and 
with great battery savings. This is something we already do for the existing 
APIs. The user will end up with a poor video experience however, as they will 
typically be served content not adapted to their machine's capabilities.

Providing a way to ensure the user will get a good video experience is 
paramount IMHO. Watching video in their web browser is what people do the most…

JY



Intent to ship: media-capabilities

2018-05-14 Thread Jean-Yves Avenard
Media Capabilities allows web sites to better determine what content to 
serve to the end user.
Currently a media element offers the canPlayType method
(https://html.spec.whatwg.org/multipage/media.html#dom-navigator-canplaytype-dev)
to determine if a container/codec can be used. But the answer is limited to a
maybe/probably type answer.

It gives no ability to determine if a particular resolution can be played 
well/smoothly enough or be done in a power efficient manner (e.g. will it be 
hardware accelerated).

This has been a particular problem with sites such as YouTube that serve VP9 
under all circumstances even if the user agent won't play it well (VP9 is 
mostly done via software decoding and is CPU intensive). This has forced us to 
indiscriminately disable VP9 altogether.
For YouTube to know that VP9 could be used for low resolutions but not high-def 
ones would allow them to select the right codec from the start.

This issue is tracked in Bugzilla bug 1409664
(https://bugzilla.mozilla.org/show_bug.cgi?id=1409664)

The proposed spec is available at https://wicg.github.io/media-capabilities/

Chrome shipped it a while ago now, and in talks with several partners 
(including YouTube, Netflix, Facebook, etc.), Media Capabilities support has 
been the number one request.

We intend to implement and ship this API very soon.

Early comment and feedback will be welcome.

Kind regards
Jean-Yves



Re: Removing tinderbox-builds from archive.mozilla.org

2018-05-13 Thread Jean-Yves Avenard

Hi


On 12/05/2018 04:47, Boris Zbarsky wrote:
Just to be clear, when doing a bisect, one _can_ just deal with local 
builds.  But the point is that then it takes tens of minutes per build 
as you point out.  So a bisect task that might otherwise take 10-15 
minutes total (1 minute per downloaded build) ends up taking hours...


I've found it pretty difficult to build old versions once past a couple 
of months: different versions of rustc, newer dev tools not yet supported 
(particularly on Windows, with requirements to always use the latest 
version of Visual Studio).

So downloading the builds is in practice the only way to bisect things.


Re: PSA: new helper class for MozPromise in DOM code

2018-04-27 Thread Jean-Yves Avenard
Hi

> On 26 Apr 2018, at 10:12 pm, Ben Kelly  wrote:
> 
> Hi all,
> 
> I pushed a new helper class to inbound today to make it easier to work with
> MozPromise in DOM code.  Specifically, it allows you to auto-disconnect a
> Thenable request when the global dies.
> 
> The class is called DOMMozPromiseRequestHolder.  Here is an example using
> it:

is that just a refcounted version of MozPromiseRequestHolder?






Re: Please try out clang-cl and lld-link on Windows

2018-03-16 Thread Jean-Yves Avenard
Thank you David for this awesome work...


8:26.77 Your build was successful!

on Windows!!

Who would have thought we would ever get there...

(I had to uninstall my local install of LLVM nightly and run ./mach bootstrap
again… The wiki should be amended with this; it gives confusing results
otherwise.)

On Tue, Mar 13, 2018 at 3:31 PM, David Major  wrote:

> Link xul.dll in 20 seconds with this one weird trick!
>
>
> Hi everyone,
>
> clang-cl builds of Firefox have come a long way, from being a hobby
> project of a few developers to running static analysis in CI for more than
> a year now. The tools are in really good shape and should be ready for
> broader use within Mozilla at this point.
>
> Bug 1443590 is looking into what it would take to ship official builds
> with clang-cl and lld-link, but in the meantime it's possible to do local
> builds already. I'd like to invite people who develop on Windows to give it
> a try.
>
> *** Reasons to use clang-cl and lld-link locally ***
>
> - Speed! lld is known for being very fast. I'm serious about 20-second
> libxuls. That's a non-incremental link, with identical code folding
> enabled. For comparison, MSVC takes me over two minutes.
>
> - Speed again! clang-cl will integrate with upcoming sccache/icecream work.
>
> - Much clearer and more actionable error messages than MSVC
>
> - Make your own ASan and static analysis builds (the latter need an LLVM
> before r318304, see bug 1427808)
>
> - Help ship Firefox with clang-cl by getting more eyes and machines on
> these tools
>
> *** Reasons not to use clang-cl and lld-link locally (yet) ***
>
> - You are testing codegen-level fixes or optimizations and need to see the
> exact bits that will be going out to users
>
> - lld-link currently doesn’t support incremental linking -- but with full
> linking being so fast, this might not matter
>
> - You do artifact builds that don't use a local compiler
>
> *** How do I get started? ***
>
> https://developer.mozilla.org/en-US/docs/Mozilla/Developer_
> guide/Build_Instructions/Building_Firefox_on_Windows_with_clang-cl
>
> A number of build system changes have landed that make these builds much
> easier than before. For example you no longer need to use old versions of
> MozillaBuild.
>
> Note that clang-cl builds still depend on an MSVC installation for
> headers, libraries, and auxiliary build tools, so don't go uninstalling
> your Visual Studio just yet.
>
> If you run into any problems, please stop by #build or visit the shiny new
> Firefox Build System product in Bugzilla (formerly Core :: Build Config).
>
> Thanks!
>
>


Re: Faster gecko builds with IceCC on Mac and Linux

2018-02-18 Thread Jean-Yves Avenard
Hi

So I got this to work under all platforms (macOS, Ubuntu 17.10 and Windows 10).
Stock speed, no OC of any kind.

macOS: 7m32s
Windows 10: 12m20s
Linux Ubuntu 17.10 (had to install kernel 4.15): 6m04s

So not much better than the iMac Pro 10 cores… 

> On 2 Feb 2018, at 7:54 pm, Jean-Yves Avenard <jyaven...@mozilla.com> wrote:
> 
> Intel i9-7980XE
> Asus Prime X299-Deluxe
> Samsung 960 Pro SSD
> G.Skill F4-3200OC16Q-32GTZR x 2 (allowing 64GB in quad channels)
> Corsair AX1200i PSU
> Corsair H100i water cooler
> Cooler Master Silencio 652S
> 
> Aim is for the fastest and most silent PC (if such a thing exists)
> The price on Amazon is 4400 euros which is well below the iMac Pro cost (less 
> than half for similar core count) or the Lenovo P710.
> 
> The choice of the motherboard is that there’s successful report on the 
> hackintosh forum to run macOS High Sierra (though no wifi support)






Re: Faster gecko builds with IceCC on Mac and Linux

2018-02-02 Thread Jean-Yves Avenard
Hi

> On 17 Jan 2018, at 12:38 am, Gregory Szorc  wrote:
> 
> On an EC2 c5.17xlarge (36+36 CPUs) running Ubuntu 17.10 and using Clang 5.0, 
> 9be7249e74fd does a clobber but configured `mach build` in 7:34. Rust is very 
> obviously the long pole in this build, with C++ compilation (not linking) 
> completing in ~2 minutes.
> 
> If I enable sccache for just Rust by setting "mk_add_options "export 
> RUSTC_WRAPPER=sccache" in my mozconfig, a clobber build with populated cache 
> for Rust completes in 3:18. And Rust is still the long pole - although only 
> by a few seconds. It's worth noting that CPU time for this build remains in 
> the same ballpark. But overall CPU utilization increases from ~28% to ~64%. 
> There's still work to do improving the efficiency of the overall build 
> system. But these are mostly in parts only touched by clobber builds. If you 
> do `mach build binaries` after touching compiled code, our CPU utilization is 
> terrific.
> 
> From a build system perspective, C/C++ scales up to dozens of cores just fine 
> (it's been this way for a few years). Rust is becoming a longer and longer 
> long tail (assuming you have enough CPU cores that the vast amount of C/C++ 
> completes before Rust does).

After playing with the iMac Pro and loving its performance (though I’ve 
returned it now)

I was thinking of testing this configuration

Intel i9-7980XE
Asus Prime X299-Deluxe
Samsung 960 Pro SSD
G.Skill F4-3200OC16Q-32GTZR x 2 (allowing 64GB in quad channels)
Corsair AX1200i PSU
Corsair H100i water cooler
Cooler Master Silencio 652S

Aim is for the fastest and most silent PC (if such a thing exists).
The price on Amazon is 4400 euros which is well below the iMac Pro cost (less 
than half for similar core count) or the Lenovo P710.

The choice of the motherboard is that there’s successful report on the 
hackintosh forum to run macOS High Sierra (though no wifi support)

Any ideas when the updated Lenovo P710 will come out?

Anandtech had a nice article about the i9-7980XE with regard to clock speed 
according to the number of cores in use… It clearly shows that base frequency 
matters very little, as the turbo frequencies almost make them all equal.

JY



Re: Faster gecko builds with IceCC on Mac and Linux

2018-01-17 Thread Jean-Yves Avenard


> On 17 Jan 2018, at 8:14 pm, Ralph Giles  wrote:
> 
> Something simple with the jobserver logic might work here, but I think we
> want to complete the long-term project of getting a complete dependency
> graph available before looking at that kind of optimization.

Just get every person needing to work on mac an iMac Pro, and those on 
Windows/Linux a P710 or better and off we go.



Re: Faster gecko builds with IceCC on Mac and Linux

2018-01-16 Thread Jean-Yves Avenard


> On 16 Jan 2018, at 8:19 pm, Jean-Yves Avenard <jyaven...@mozilla.com> wrote:
> 
> 
> 
>> On 16 Jan 2018, at 7:02 pm, Ralph Giles <gi...@mozilla.com> wrote:
>> 
>> On my Lenovo P710 (2x2x6 core Xeon E5-2643 v4), Fedora 27 Linux
>> 
>> debug -Og build with gcc: 12:34
>> debug -Og build with clang: 12:55
>> opt build with clang: 11:51
> 
> I didn’t succeed in booting linux unfortunately. so I can’t compare…
> 12 minutes sounds rather long, it’s about what the macpro is currently doing. 
> I typically get compilation times similar to mac…

so I didn’t manage to get linux to boot (tried all known main distributions)

But I ran a compilation inside VMWare on Mac, allocating “only” 16 cores as 
that’s the maximum and 32GB of RAM, it took 13m51s

No doubt it would go much lower once I manage to boot linux.

Damn fast machine !

JY



Re: Faster gecko builds with IceCC on Mac and Linux

2018-01-16 Thread Jean-Yves Avenard


> On 16 Jan 2018, at 7:02 pm, Ralph Giles  wrote:
> 
> On my Lenovo P710 (2x2x6 core Xeon E5-2643 v4), Fedora 27 Linux
> 
> debug -Og build with gcc: 12:34
> debug -Og build with clang: 12:55
> opt build with clang: 11:51

I didn’t succeed in booting linux unfortunately. so I can’t compare…
12 minutes sounds rather long, it’s about what the macpro is currently doing. I 
typically get compilation times similar to mac...

> 
> Interestingly, I can almost no longer get any benefits when using icecream, 
> with 36 cores it saves 11s, with 52 cores it saves 50s only…
> 
> Are you saturating all 52 cores during the builds? Most of the increase in 
> build time is new Rust code, and icecream doesn't distribute Rust. So in 
> addition to some long compile times for final crates limiting the minimum 
> build time, icecream doesn't help much in the run-up either. This is why I'm 
> excited about the distributed build feature we're adding to sccache.

icemon certainly shows all machines to be running (I ran it with -j36 and -j52)


> 
> I'd still expect some improvement from the C++ compilation though.
>  
> It’s a very sweet machine indeed
> 
> Glad you finally got one! :)
> 

I'll probably return it though; I prefer to wait for the next Mac Pro.






Re: Faster gecko builds with IceCC on Mac and Linux

2018-01-16 Thread Jean-Yves Avenard
Sorry for resuming an old thread.

But I would be interested in knowing how long that same Lenovo P710 takes to 
compile *today*….
In the past 6 months, compilation times have certainly increased massively.

Anyhow, I’ve received yesterday the iMac Pro I ordered early December. It’s a 
10 cores Xeon-W (W-2150B) with 64GB RAM

Here are the timings I measured, in comparison with the Mac Pro 2013 I have 
(which until today was the fastest machine I had ever used)

macOS 10.13.2:
Mac Pro late 2013 : 13m25s
iMac Pro : 7m20s

Windows 10 Fall Creators Update
Mac Pro late 2013 : 24m32s (was 16 minutes less than a year ago!)
iMac Pro : 14m07s (16m10s with windows defender going)

Interestingly, I can almost no longer get any benefit when using icecream: 
with 36 cores it saves 11s; with 52 cores it saves only 50s…

It’s a very sweet machine indeed

Jean-Yves

> On 24 Mar 2017, at 11:32 am, Ted Mielczarek  wrote:
> 
> Just as a data point, I have one of those Lenovo P710 machines and I get
> 14-15 minute clobber builds on Windows.
> 
> -Ted





Using icecream distributed compiler with a mac...

2017-11-29 Thread Jean-Yves Avenard
Hi there.

Last week I spent a bit of time getting icecream working and usable with
mac builds (on mac).

The generated binary can be used with a debugger, all stack traces are good.

The repository with the various required files and the steps to follow are
available there.
https://github.com/jyavenard/mozilla-icecream

The source tree required some changes for compilation to consistently
succeed.

For compiling beta and release, you will have to apply the full
central-icecream.patch provided in the repository.
As of today, the bug 1412240 change is still needed for central and is pending
review; central-icecream.patch will only partially apply, but that's okay.

I have attempted to make the process as simple as possible, so that there's
no need to compile your own version of clang, nor checkout a massively big
repository.

Note for whoever is maintaining the icecream boxes in the various offices:
Current version of Debian/Ubuntu are running an old version (1.0.1 last
changed in 2013). It is buggy and has the tendency to crash often, in
particular during the configure script.
You need to update the iceccd on those boxes, I have made debian/ubuntu
packages that will upgrade the existing package.
The binary deb file is for ubuntu, not sure if it will work on debian, but
the source package will.


Being a remotee, I wasn't allowed to move all Paris office machines home to
test, so I ran my own network made of 2 recent quad-core i7 laptops (a
Dell XPS 9560 and a Gigabyte Aero 14), as well as a Mac Pro 2013 (8-core
Xeon E5).

Interestingly, the mac pro makes a poor buildbot, even though it rocks on
its own.

Some numbers:

Machines:
- Mac Pro 2013, 8 cores, 32GB
- Dell XPS 15, i7 2.8GHz, 16GB
- Gigabyte Aero 14, i7 2.6GHz, 16GB

Without icecream, time to compile central:
- Mac Pro (macOS 10.13.1): 12:57.14
- XPS 15 (Linux Ubuntu 17.10): 16:42.34
- Aero 14 (Linux Ubuntu 17.10): 17:13.52

With icecream:
- Using all 3 machines (-j32): 06:12.24
- Only using the 2 Linux bots (-j16): 08:44.88

Around 6 minutes build time, back to how long it took in September 2014
when I first got the mac pro...


Re: Building mozilla-central with clang + icecream

2017-11-13 Thread Jean-Yves Avenard


> On 7 Nov 2017, at 2:48 am, zbranie...@mozilla.com wrote:
> 
> I tried to build m-c today with clang 5.0 and icecream using the following 
> mozconfig:
> 
> 
> ```
> mk_add_options MOZ_MAKE_FLAGS="-j$(icecc-jobs)"
> 
> mk_add_options 'export CCACHE_PREFIX=icecc'
> mk_add_options "export RUSTC_WRAPPER=sccache" 
> 
> export CC=clang
> export CXX=clang++
> 
> ac_add_options --with-ccache
> 
> ```


Ensure you’re running the latest stable version of icecream.

The version shipping with Ubuntu in particular is rather old, and there are a few 
bugs related to races causing intermittent failures.

Those problems have almost all disappeared using icecream 1.1 for me.

I still have intermittent build failures however: I often have a ./mach build 
following a ./mach clobber fail to complete successfully, and restarting the 
build will then complete with no trouble.

so there’s still races out there.

I run icecream with clang as compiler…

JY


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-08 Thread Jean-Yves Avenard
With all this talk…

I’m eagerly waiting for the iMac Pro.

Best of all worlds really:
- High core count
- ECC RAM
- 5K 27” display
- Great graphic card
- Super silent…

I’ve been using a Mac Pro 2013 (the trash can one), Xeon E5 8 cores, 32 GB ECC 
RAM, connected to two 27” screens (one 5K with DPI set at 200%, the other a 
2560x1440 Apple thunderbolt)

It runs Windows, Mac and Linux flawlessly (though under Linux I never managed 
to get more than one screen working at a time).

It compiles on mac, even with stylo, in under 12 minutes, and on Windows in 19 
minutes (used to be 6 minutes and 12 minutes respectively before all this rust 
thing came in)… And that's using mach with only 14 jobs, so that I can continue 
to work on the machine without noticing it's doing a CPU-intensive task. The UI 
stays ultra responsive.

And best of all, it's sitting 60cm from my ear and I can't hear anything at 
all…

This has been my primary machine since 2014; I've had no desire to upgrade, as 
no other machine will allow me such a comfortable development environment under 
all the platforms we support.

It had been difficult to choose at the beginning between the higher-frequency 
6-core or the 8-core. But that turned out to be a moot issue, as the 8-core, 
when only 6 cores are running, will clock as high as the 6-core version…

The mac pro was an expensive machine, but seeing that it will last me longer 
than your usual machine, I do believe that in the long term it will be the best 
value for money.

My $0.02

> On 8 Nov 2017, at 8:43 am, Henri Sivonen  wrote:
> 
> I agree that workstation GPUs should be avoided. Even if they were as
> well supported by Linux distro-provided Open Source drivers as
> consumer GPUs, it's at the very least more difficult to find
> information about what's true about them.
> 
> We don't need the GPU to be at max spec like we need the CPU to be.
> The GPU doesn't affect build times, and for running Firefox it seems
> more useful to see how it runs with a consumer GPU.
> 
> I think we also shouldn't overdo multi-monitor *connectors* at the
> expense of Linux-compatibility, especially considering that
> DisplayPort is supposed to support monitor chaining behind one port on
> the graphics card. The Quadro M2000 that caused trouble for me had
> *four* DisplayPort connectors. Considering the number of ports vs.
> Linux distros Just Working, I'd expect the prioritizing Linux distros
> Just Working to be more useful (as in letting developers write code
> instead of troubleshoot GPU issues) than having a "professional"
> number of connectors as the configuration offered to people who don't
> ask for a lot of connectors. (The specs for the older generation
> consumer-grade Radeon RX 460 claim 5 DisplayPort screens behind the
> one DisplayPort connector on the card, but I haven't verified it
> empirically, since I don't have that many screens to test with.)
> 
> On Tue, Nov 7, 2017 at 10:27 PM, Jeff Gilbert wrote:
>> Avoid workstation GPUs if you can. At best, they're just a more
>> expensive consumer GPU. At worst, they may sacrifice performance we
>> care about in their optimization for CAD and modelling workloads, in
>> addition to moving us further away from testing what our users use. We
>> have no need for workstation GPUs, so we should avoid them if we can.
>> 
>> On Mon, Nov 6, 2017 at 10:32 AM, Sophana "Soap" Aik  wrote:
>>> Hi All,
>>> 
>>> I'm in the middle of getting another evaluation machine with a 10-core
>>> W-Series Xeon Processor (that is similar to the 7900X in terms of clock
>>> speed and performance) but with ECC memory support.
>>> 
>>> I'm trying to make sure this is a "one size fits all" machine as much as
>>> possible.
>>> 
>>> Also there are some AMD Radeon workstation GPU's that look interesting to
>>> me. The one I was thinking to include was a Radeon Pro WX2100, 2GB, FH
>>> (5820T) so we can start testing that as well.
>>> 
>>> Stay tuned...
>>> 
>>> On Mon, Nov 6, 2017 at 12:46 AM, Henri Sivonen  wrote:
>>> 
 Thank you for including an AMD card among the ones to be tested.
 
 - -
 
 The Radeon RX 460 mentioned earlier in this thread arrived. There was
 again enough weirdness that I think it's worth sharing in case it
 saves time for someone else:
 
 Initially, for multiple rounds of booting with different cable
 configurations, the Lenovo UEFI consistenly displayed nothing if a
 cable with a powered-on screen was plugged into the DisplayPort
 connector on the RX 460. To see the boot password prompt or anything
 else displayed by the Lenovo UEFI, I needed to connect a screen to the
 DVI port and *not* have a powered-on screen connected to DisplayPort.
 However, Lenovo UEFI started displaying on a DisplayPort-connected
 screen (with or without DVI also connected) after one time I had had a
 powered-on screen 

Re: Intent to Enable: Automated Static Analysis feedback in MozReview

2017-10-17 Thread Jean-Yves Avenard
Hi

Do you have a way to prevent running this analysis on external frameworks?

Whenever we modify a gtest, this causes hundreds of errors about not using a 
default constructor… (which shouldn't be an error anyway)

> On 9 Oct 2017, at 4:33 pm, Jan Keromnes  wrote:
> 
> 
> C/C++ was our top priority because code defects are very costly, but we'd
> love to make our static analysis bot support additional languages, so we're
> looking into integrating with mozlint [0] (the  `./mach lint` wrapper
> around eslint, flake8 and wptlint), and I think we should also use clippy
> to lint Rust code.



Re: Intent to require `mach try` for submitting to Try

2017-09-20 Thread Jean-Yves Avenard
On Tue, Sep 19, 2017 at 7:21 PM, Eric Rescorla  wrote:

> I've also had cinnabar fail badly at various times, and then it's been
> pretty unclear what
> the service level guarantees for that were.
>
>
time to try again maybe?

I use git with cinnabar on windows, linux and mac, and I haven't had any
issues with cinnabar in a long while.

I do copy my .git/config file whenever I'm switching machine because I can
never be bothered to learn the various URLs.

Whenever I ran into a (very rare) issue, :glandium was there to help

Thanks Mike!


Re: Canonical cinnabar repository

2017-09-19 Thread Jean-Yves Avenard
On Mon, Sep 18, 2017 at 4:05 PM, Kartikaya Gupta  wrote:

>
> I've tried using cinnabar a couple of times now and the last time I
> tried, this was the dealbreaker for me. My worfklow often involves
> moving a branch from one machine to another and the extra hassle that
> results from mismatched SHAs makes it much more complicated than it
> needs to be.
>

I just clone or pull the local git repo. There's no issue with SHA then.

The only time I use cinnabar is pulling/pushing from the central mercurial
repository


Re: Coding style question: Meaningless argument names in declarations

2017-09-07 Thread Jean-Yves Avenard
not answering your question yet, but...

On Thu, Sep 7, 2017 at 2:07 PM, Emilio Cobos Álvarez  wrote:
>
>   enum class Operation {
>  // ..
>   };


that should be:
enum class Operation
{
  // ...
};

https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Coding_Style#Control_Structures

simply because only control structures follow the K&R style of having
the brace at the end.
Just like class and function definitions, enums aren't control
structures; as such, the "left brace goes by itself on the second line and
without extra indentation, in general for C++."

>
>   void DoBar(Operation aOperation);
>
> I personally think those argument names are mostly noise (the type gives
> you the same information), and C++ allows omitting them.
>
> That would make the signature more concise, like:
>
>   void DoBar(Operation);
>
> Which is helpful specially in long signatures.
>
> I don't see anything mentioned in the style guide about this, should it be?

When I don't see anything specific in the style guide, I look at the
long example provided there:
https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Coding_Style#Classes

argument names are always provided in all those examples. As such, I
assume they are to always be there.

It would also not be very consistent with all the functions that take
PODs, where argument names are definitely required to keep the code
readable.
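
A sketch of the POD case (hypothetical declaration), where dropping the
names loses real information:

  // Which int is which? The types alone say nothing.
  void SetSize(int32_t, int32_t);

  // Self-documenting for whoever reads the header.
  void SetSize(int32_t aWidth, int32_t aHeight);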

My $0.02.

JY


Re: Coding style: Placement of binary operators when breaking a long line

2017-09-07 Thread Jean-Yves Avenard
On Thu, Sep 7, 2017 at 3:03 AM, Eric Rahm  wrote:
> As I said, I was hoping to avoid rehashing this point, but the general
> consensus from the last rather contentious post [1] was that changing from
> the prevalent style of the codebase for primarily aesthetic reasons was
> hard to justify (points about readability were made on both sides). Nick
> pointed out that our code base very clearly tilts to the operators on the
> end of the line style [2].


Seeing that the plan is, after 57, to run clang-format on the entire
codebase (with the exclusion of 3rd-party code), do we really need to care
about what the current code mainly uses?

I don't think it matters, seeing that no matter what, the great
majority of the code will change, as nothing is following the current
coding style to the letter (with the exception of the majority of
dom/media).


Re: Response.body streams landing on trunk, default off

2017-08-17 Thread Jean-Yves Avenard

> On 14 Aug 2017, at 4:14 pm, Ben Kelly  wrote:
> 
> FYI, the Fetch API side of streams has landed and is now in nightly.
> Please test and file bugs.  Thanks!
> 
> Ben

Awesome thank you.

JY


Re: 64-bit Firefox progress report: 2017-07-18

2017-07-20 Thread Jean-Yves Avenard
Hi

On Wed, Jul 19, 2017 at 9:01 AM, Mike Hommey  wrote:
> What's the plan for eligible people that still want to keep 32-bit
> Firefox? Are they going to have to stop auto upgrades, which would get
> them automatically on 64-bits and upgrade manually? This is especially
> going to be a problem for users with less than 2GB RAM that do still
> want 32-bit Firefox if we decide against the minimum memory requirement.

Just curious.

What would be the rationale behind this choice?

JY


Re: Rationalising Linux audio backend support

2017-04-03 Thread Jean-Yves Avenard
On Fri, Mar 31, 2017 at 3:49 PM, Chris Coulson
 wrote:
> The Firefox package in Ubuntu is maintained by 1 contributor in his
> spare time and myself who is only able to do the minimum in order to
> provide updates, so Ubuntu flavors that don't ship Pulseaudio need to
> step up to maintain this code if they want it to continue working and
> don't want it to be disabled again in a future update.

Sorry for hijacking.

But ubuntu's firefox build doesn't enable rust.

A side effect is that FLAC support (and other codecs in MP4)  isn't
enabled as a consequence.

Any chances to get that enabled?


Re: windows build anti-virus exclusion list?

2017-03-27 Thread Jean-Yves Avenard
Hi.

I have received the new Dell XPS 15 9560 and got very puzzled as to why 
compiling central was so slow on this machine.

This is comparing against a Gigabyte Aero 14 with a gen-6 Intel CPU (2.6GHz 
i7-6600HQ) vs Dell's gen-7 (2.8GHz i7-7700HQ).
On the Aero 14, compiling central takes 24 minutes. On the XPS 15, first go: 
38 minutes.

The XPS 15 came with McAfee anti-virus, and Windows Defender is disabled. An 
exclusion list made almost no difference. Disabling McAfee entirely: the time 
dropped to 28 minutes. Uninstalling McAfee completely and enabling Windows 
Defender with an exclusion list as mentioned in the first post: 26.5 minutes.
Now disabling Windows Defender, not just an exclusion list: the time dropped 
to 25 minutes. Interestingly, on the Aero, disabling Windows Defender or having 
just an exclusion list made almost no difference in compilation time. I can't 
explain the reason. Maybe because big brother is watching all the time!

After following the instructions listed at 
http://www.ultrabookreview.com/14875-fix-throttling-xps-15/
compilation time dropped to 23.8 minutes. The main factor was adding thermal 
pads to the MOSFETs.

Undervolting the CPU by 125mV added 50s of compilation time, but dropped the 
processor temperature by 10C (max 78C vs 68C), and my guess is it will also add 
quite a lot of battery life.

So if you're in a hurry, you may want to try disabling Windows Defender 
completely.

FWIW: on those same machines running Linux Ubuntu 16.10:
Aero 14: 14 minutes
XPS 15: 13 minutes
That's out of the box, no tweaks of any kind.

JY
---
If it ain't broken, please fix it


On Fri, Mar 17, 2017 at 4:26 AM +0100, "Ben Kelly" wrote:

Hi all,

I'm trying to configure my new windows build machine and noticed that
builds were still somewhat slow.  I did:

1) Put it in high performance power profile
2) Made sure my mozilla-central dir was not being indexed for search
3) Excluded my mozilla-central directory from windows defender

Watching the task monitor during a build, though, I still saw MsMpEng.exe
(antivirus) running during the build.

I ended up added some very broad exclusions to get this down close to
zero.  I am now excluding:

  - mozilla-central checkout
  - mozilla-build install dir
  - visual studio install dir
  - /users/bkelly/appdada/local/temp
  - /users/bkelly (because temp dir was not enough)

I'd like to narrow this down a bit.  Does anyone have a better list of
things to exclude from virus scanning for our build process?

Thanks.

Ben


Re: The future of commit access policy for core Firefox

2017-03-17 Thread Jean-Yves Avenard
And yet, despite many people's concerns, it appears that the policy of removing 
r+ whenever a new push is made has come into effect.

And so, here I am with an r+ requesting that a comment be fixed, and I have to 
ask for r+ again from someone not in my timezone and already on their weekend.

Turnaround time: from 30 minutes to 3.5 days… How is that making our tree 
safer?

JY

> On 15 Mar 2017, at 4:15 pm, Mike Hoye <mh...@mozilla.com> wrote:
> 
> 
> 
> On 2017-03-14 7:10 PM, Jean-Yves Avenard wrote:
>> /me just loves when a new set of “rules” are put in place to prevent a 
>> problem that has never existed so far and will be a hindrance to everyone in 
>> the future.
> 
> Two dozen or so of our most veteran engineers are deeply involved in this 
> discussion. Their time and attention are extraordinarily valuable, and there 
> is no question about their commitment to Mozilla's success. And yet: here 
> they are, working through the details.
> 
> On top of everything else that's been said here, maybe take a moment to 
> reflect on that.
> 
> - mhoye





Re: The future of commit access policy for core Firefox

2017-03-14 Thread Jean-Yves Avenard

> On 12 Mar 2017, at 9:40 pm, Ehsan Akhgari  wrote:
> And I still don't understand what the proposal means with rebases in
> practice.  What if, after automation tries to land your change after you
> got your final r+ the final rebase fails and you need to do a manual
> rebase?  Do you need to get another r+?  What happens if you're touching
> one of our giant popular headers and someone else manages to land a
> conflict while your reviewer gets back to you?


+1, this is a very common scenario for me…

/me just loves when a new set of “rules” are put in place to prevent a problem 
that has never existed so far and will be a hindrance to everyone in the future.



Re: Should &&/|| really be at the end of lines?

2017-02-18 Thread Jean-Yves Avenard



On 17/02/17 23:18, gsquel...@mozilla.com wrote:

Hi again Nick,

Someone made me realize that I didn't fully read your message, sorry for that.

I now see that as well as &&/||, you have grepped for other operators, and 
shown that the overwhelming usage is to put all of them at the end of lines!

In light of this, and from what others here have discussed, I'm now thinking 
that we probably should indeed just update our coding style with whatever is used the most 
out there, and model our "Official Formatter" on it.


Another thought, if technically possible:
If our main repository is going to always be under the control of some official 
clang-format style, it should be possible for anybody to pull the repository, 
and use a different formatter locally with their favorite style. And when 
pushing, their modified code could be automatically reformatted with the 
official formatter -- Everybody feels good when programming! :-)



What worries me here is that I've yet to read a compelling argument on 
*why* people were doing things a particular way (with the exception 
of David Major, who provided a logical explanation for it).


That you've done something out of habit for the last XX years is no 
argument. Neither is a "liking" for one form over another.


How things have been is also not an enticing argument: if we were to base 
our future use on what currently exists (like the grep command provided 
did), then I'd vote for a completely free coding style left to the 
developer's wishes. After all, it certainly feels that way. There's not a 
single directory in our code tree that has a consistent coding style; 
braces and the position of the return type vary wildly across the code.


Regardless, we need to make a decision. Little of the most commonly seen 
style in our tree is reflected in the coding style wiki page.
So either we decide that we amend the official coding style to reflect 
what's already there, tweak the coding style so there are no more 
ambiguities (like the distinction between the || and && operators and 
all the others), or modify the code to comply with those rules.


I'll just reiterate that placing operators at the end of a logical 
expression can't be reasonably and logically argued for. But maybe I've 
been using RPN calculators for too long.


I do hope that we end up with a single rule for all, because the current 
state of things is terrible.


My $0.02
JY





Re: Should &&/|| really be at the end of lines?

2017-02-16 Thread Jean-Yves Avenard

Hi


On 16/02/17 23:56, Jeff Gilbert wrote:

I don't visually like ||/&& at start of line, but I can't fault the
reasoning, so I'm weakly for it.
I don't think it's important enough to change existing code though.


Disclaimer: I'm now biased about this.

I've been writing C and C++ code for most of my life now. And during 
that time you pick up strong habits. Not always good habits, but they are 
there.


Now, putting operators at the end of the line, including logical ones like 
&& and ||, is something I've always done, because it's just the way 
things were done.
I never really thought much about it: you place the operator at the end of 
the line when you're splitting a long logical operation.


But is it the proper thing to do?
Just like Gerald above, I had misread (or more accurately, understood 
differently) what the rules were in that document, because there are 
various discrepancies in it.

When you read code, especially during review, the review process is made 
much easier when you can understand one particular logical expression at 
a glance.


Reading this (turn on Courier):

  return ((aCodecMask & VPXDecoder::VP8)
  && aMimeType.EqualsLiteral("video/webm; codecs=vp8"))
 || ((aCodecMask & VPXDecoder::VP9)
 && aMimeType.EqualsLiteral("video/webm; codecs=vp9"))
 || ((aCodecMask & VPXDecoder::VP9)
 && aMimeType.EqualsLiteral("video/vp9"));

than:
  return ((aCodecMask & VPXDecoder::VP8) &&
  aMimeType.EqualsLiteral("video/webm; codecs=vp8")) ||
 ((aCodecMask & VPXDecoder::VP9) &&
  aMimeType.EqualsLiteral("video/webm; codecs=vp9")) ||
 ((aCodecMask & VPXDecoder::VP9) &&
  aMimeType.EqualsLiteral("video/vp9"));

Where does the || apply, where does the &&?
I must recompose the entire expression in my mind to understand what's 
going on.

The previous one requires no such mental process.

If we are talking about new code, the question becomes: can this become 
the rule *and* can we add such a rule to clang-format?
Seeing that we're in the process of applying clang-format to the entire 
source tree (with exceptions), modifying existing code would become a 
moot issue.
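
For what it's worth, clang-format already exposes a knob for this; a
sketch of the relevant .clang-format line (behaviour worth verifying
against whatever clang-format version we end up shipping):

  # Break before non-assignment binary operators, so && and || start the
  # continuation line instead of ending the previous one.
  BreakBeforeBinaryOperators: NonAssignment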


Cheers
JY


On Thu, Feb 16, 2017 at 1:47 PM,   wrote:

Question of the day:
When breaking overlong expressions, should &&/|| go at the end or the beginning 
of the line?

TL;DR: Coding style says 'end', I think we should change it to 
'beginning' for better clarity, and consistency with other operators.


Our coding style reads:
"Break long conditions after && and || logical connectives. See below for the rule 
for other operators." [1]
"""
Overlong expressions not joined by && and || should break so the operator 
starts on the second line and starts in the same column as the start of the expression 
in the first line. This applies to ?:, binary arithmetic operators including +, and 
member-of operators (in particular the . operator in JavaScript, see the Rationale).

Rationale: operator at the front of the continuation line makes for faster 
visual scanning, because there is no need to read to end of line. Also there 
exists a context-sensitive keyword hazard in JavaScript; see bug 442099, 
comment 19, which can be avoided by putting . at the start of a continuation 
line in long member expression.
""" [2]


I initially focused on the rationale, so I thought *all* operators should go at 
the front of the line.

But it seems I've been living a lie!
&&/|| should apparently be at the end, while other operators (in some 
situations) should be at the beginning.


Now I personally think this just doesn't make sense:
- Why the distinction between &&/|| and other operators?
- Why would the excellent rationale not apply to &&/||?
- Pedantically, the style talks about 'expressions *not* joined by &&/||', but what about 
expressions that *are* joined by &&/||? (Undefined Behavior!)

Based on that, I believe &&/|| should be made consistent with *all* operators, 
and go at the beginning of lines, aligned with the first operand above.

And therefore I would propose the following changes to the coding style:
- Remove the lonely &&/|| sentence at [1].
- Rephrase the first sentence at [2] to something like: "Overlong expressions should 
break so that the operator starts on the following line, in the same column as the first 
operand for that operator. This applies to all binary operators, including member-of 
operators (in particular the . operator in JavaScript, see the Rationale), and extends to 
?: where the 2nd and third operands should be on separate lines and start in the same 
column as the first operand."
- Keep the rationale at [2].

Also, I think we should add something about where to break expressions with operators of 
differing precedences, something like: "Overlong expressions containing operators of 
differing precedences should first be broken at the operator of lowest precedence. E.g.: 
'a+b*c' should be split at '+' before '*'"
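
A small illustration of that last rule (made-up identifiers):

  // '+' has lower precedence than '*', so break at '+' first:
  int result = someLongValue
               + anotherLongValue * aScalingFactor;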

Intent to ship: MediaError::message attribute

2016-12-12 Thread Jean-Yves Avenard

Hi

As of December 13th 2016, I intend to turn the MediaError::message attribute 
on by default on all platforms. It has been developed behind the 
dom.MediaError.message.enabled preference.


Other UAs shipping this or intending to ship it are Chrome and Edge 
(that I know of).


This feature was first introduced in bug 1299072 (Firefox 51) as a way 
to help developers identify why they got decoding failures.


Since then, Chrome has caught on and created 
https://github.com/whatwg/html/issues/2085.


The changes have been accepted and are pending a WPT test.

Regards

Jean-Yves






Re: Generating Visual Studio project files by default

2016-06-02 Thread Jean-Yves Avenard
On Wednesday, May 25, 2016 at 9:00:48 AM UTC+10, Gregory Szorc wrote:
> This change was tracked in bug 1275297. Bug 1275419 tracks a follow-up to
> allow disabling their generation.

So when is Xcode support coming too? :)


Re: Proposal: use nsresult& outparams in constructors to represent failure

2016-04-25 Thread Jean-Yves Avenard
On Tuesday, April 26, 2016 at 3:39:53 AM UTC+10, Botond Ballo wrote:

> That's why we have explicit operator bool() in C++11. That only allows

Indeed; an explicit operator was implied.

Here is an example of definition:
https://dxr.mozilla.org/mozilla-central/source/dom/media/MediaData.h#201


Re: Proposal: use nsresult& outparams in constructors to represent failure

2016-04-25 Thread Jean-Yves Avenard
On Thursday, April 21, 2016 at 11:15:10 AM UTC+10, Nicholas Nethercote wrote:
> The only disadvantage I can see is that it looks a bit strange at first. But 
> if
> we started using it that objection would quickly go away.
> 
> I have some example patches that show what this code pattern looks like in
> practice. See bug 1265626 parts 1 and 4, and bug 1265965 part 1.
> 
> Thoughts?
> 
> Nick

I too believe that passing a pointer makes it a tad more obvious that it's an 
out parameter. But maybe I'm too old school.

In the media code, we have a few particular uses where we construct and then 
initialise. The initialisation itself is typically asynchronous and is 
required to be performed on a specific thread (while the construction can 
happen anywhere).
For this we have a Init() member that returns a RefPtr

I don't see how we could do that in a combined fallible constructor. As such, 
so long as we're still left with the flexibility to continue using a separate 
init methods..

why not...

Personally, in a few recent classes I actually made an operator bool() method.
This was for objects where fallibility was important.

So you could simply do something like:

T myobject(arg);
if (!myobject) {
  HandleOOM();
  return;
}

In this particular case, the aim was to replace a UniquePtr<T> which would 
work like:
UniquePtr<T> blah = MakeUniqueFallible<T>(arg);
if (!blah) {
  HandleOOM();
  return;
}

I could then simply find-replace all UniquePtr<T> blah = 
MakeUniqueFallible<T>(arg) with T blah(arg);

everything else kept working the same.
everything else kept working the same.

I don't know how popular this method would be, nor if people would be shocked 
by providing an operator bool(), but here it is :)
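
A minimal sketch of the pattern, assuming a hypothetical Buffer class
where fallibility means a failed allocation:

  #include <cstdlib>

  class Buffer
  {
  public:
    explicit Buffer(size_t aSize)
      : mData(static_cast<char*>(malloc(aSize)))
    {
    }
    ~Buffer() { free(mData); }
    Buffer(const Buffer&) = delete; // sketch: keep ownership simple

    // 'explicit' restricts the conversion to boolean contexts such as
    // if (...), preventing accidental use as an integer.
    explicit operator bool() const { return !!mData; }

  private:
    char* mData;
  };

  // Usage:
  //   Buffer buffer(4096);
  //   if (!buffer) {
  //     HandleOOM();
  //     return;
  //   }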


Re: About static analyzers on some various projects

2015-09-24 Thread Jean-Yves Avenard
On Friday, September 25, 2015 at 7:29:19 AM UTC+10, Ehsan Akhgari wrote:
> On 2015-09-24 1:41 PM, Sylvestre Ledru wrote:
> > = Static analyzers =
> > For now, we are running:
> > * Coverity, a proprietary tool with a great (but slow) web interface. As
> > Firefox is Free software, the service is provided for free
> > but with a restriction in term of number of build. Now, the analysis is
> > launched once a week on Monday. Supports C, C++ & Java.
> > A few improvements will be made to silent some of the defects.
> 
> Does anybody look at these regularly?  I would be interested to know if 
> they produce high quality results these days.  My past experience with 
> Coverity has been that it's full of false positivies.
> 

I regularly look at Coverity-logged bugs in our Bugzilla, and fix the ones 
related to media (not many).


Re: PSA: Constructors callable with one argument should be marked as explicit/implicit

2014-12-18 Thread Jean-Yves Avenard
Hi

On Friday, December 19, 2014, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:

 That should be it!  The plugin is transparent, unless it finds an error in
 which case you'll get a normal error diagnostic.


That is weird then, because I have a patch that fails the Linux64 static
analysis test.

But I can't reproduce the issue locally (on a Mac, but the code path is
common across all platforms).
I use clang/LLVM 3.6 installed via MacPorts.

Hence why I thought I was missing something.

FWIW, LLVMCONFIG, if set in the .mozconfig, doesn't appear to be used,
and I get an error about llvm-config not being found. Instead I must
manually set the environment variable so compilation can proceed.
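
For context, the class of issue the analysis flags; a hypothetical
single-argument constructor:

  class Length
  {
  public:
    // Callable with one argument: without an annotation, 'Length' would
    // be implicitly constructible from int, so TakeLength(5) would
    // silently create a temporary Length. Either marker satisfies the
    // plugin:
    explicit Length(int aMillimeters) : mMillimeters(aMillimeters) {}
    // MOZ_IMPLICIT Length(int aMillimeters); // if conversion is intended

  private:
    int mMillimeters;
  };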