Re: [IMPORTANT] Submit your PI Requests for Firefox 72 QA feature testing by Nov 1

2019-10-29 Thread Tom Grabowski
The PI team has made the criteria for feature inclusion in release scope
more crisp, starting with Fx72.


At the start of every Nightly cycle, all features in scope will need to
provide the following information (as per the Fx72 release schedule):


   - PI Request submitted in JIRA (Done by milestone: PI Request due --
   November 1st for Fx72)
   - Trello card in the Firefox board (Done by milestone: *Feature
   documentation due* -- November 8th for Fx72)
   - Feature documentation shared (Done by milestone: *Feature documentation
   due* -- November 8th for Fx72)

If a feature does *not* meet all the criteria listed above by *November 8th*,
it will need to go through a *VP (product, engineering) release exception
process* to remain in release scope.



On Wed, Oct 23, 2019 at 6:28 PM Tom Grabowski wrote:

>
> Similar to what QA did for previous Firefox feature testing prioritization,
> we would like to do the same for Fx72. In order to help with the process,
> please *submit your pi-request* by *November 1*.
> This is needed to ensure QA will be involved in your feature development
> work for the 72 cycle. Please also update the *Priority of the PI request*
> (Highest, High, Medium, Low, Lowest); it will be factored into the feature
> prioritization process to ensure critical features have sufficient
> resources assigned.
>
> Please note that the *Feature technical documentation* for *features that
> require beta testing* needs to be ready before *November 8*. Please follow
> the Feature Technical Documentation Guidelines Template and share the
> information with the QA owners or add the link in the PI request in JIRA.
> For *features that require Nightly testing*, please provide documentation
> *as soon as possible*. QA cannot start working on your feature without
> documentation.
>
> *Q: What happens after the deadline?*
> A: After the deadline QA will come back with a prioritized list of work
> that represents what we are committing to for the next cycle. We want to
> ensure this list matches eng and product expectations.
>
> *Q: What if I miss the deadline?*
> A: We reserve the right to say that we can't pick up work for late
> requests in the current cycle. You can still develop and execute your own
> test plan or defer the work to the following cycle.
>
> *Q: What about unknown or unexpected requests? What if there is a
> business reason for a late request? What do we do with experiments and
> System*
> A: In order to remain flexible, we will keep some percentage of time open
> for requests like these.
>
> *Q: There's no way I'm going to remember to do this.*
> A: Do it now! I'll also send out a reminder next week.
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: To what extent is sccache's distributed compilation usable?

2019-10-29 Thread Marcos Caceres
On Wednesday, October 30, 2019 at 4:53:55 AM UTC+11, Steve Fink wrote: 
> I really ought to put my decade-old desktop into action again. My last 
> attempt was with icecc, and though it worked pretty well when it worked, 
> the pain in keeping it alive wasn't worth the benefit.

Thanks Steve... so it does really sound like perhaps just investing in more 
individual computing power might be the way to go. I think that's ok... even if 
it means, in my own case, personally paying the extra "Apple Tax" for a Mac Pro 
once they are released (I guess I'd be looking at ~US$10,000). The increased 
productivity might make it worth it tho.



Re: [blink-dev] Re: What to do about scroll anchoring?

2019-10-29 Thread Emilio Cobos Álvarez

Hi all,

On 10/18/19 7:19 PM, Chris Harrelson wrote:

Hi,

Another quick update: Emilio, Navid, Nick, Stefan and I met today and
discussed which issues are important to fix and why. We now have a list of
spec issues, and WPT tests to fix that are Chromium bugs, that should
substantially improve interop. Nick and Stefan will take on the work to fix
them, with the review and feedback support of Emilio.


So, today another scroll-anchoring bug crossed my radar, and this one
I'm not at all sure how to fix, because there's no obvious answer here
as far as I can tell.


My diagnosis (for one of the pages, the one I could repro and reduce) is
in here[1], but basically my current explanation is that the page should
be broken per spec, and that when it works it's because it hits a bug in
Chromium[2], which we have an equivalent of but just aren't hitting,
because in Firefox changing `overflow` does more/different layout work
than in Chrome.


The test-case might also happen to work if we change our scroll event or
timer scheduling (see there), but that is obviously pretty flaky.


I honestly don't have many better ideas for fancier heuristics about
how to unbreak that kind of site. From the point of view of the
anchoring code, the page is just toggling height somewhere above the
anchor, which is the case where scroll anchoring _should_ work, usually.


I can, of course (and may as a short-term band-aid, not sure yet) add 
`overflow` to the magic list of properties like `position` that suppress 
scroll anchoring everywhere in the scroller, but that'd be just kicking 
the can down the road and waiting for the next difference in layout 
performance optimizations between Blink and Gecko to hit us.


I think (about to go on PTO for the rest of the week) I'll add telemetry
for pages that have scroll event listeners, and see if disabling scroll 
anchoring on a node when there are scroll event listeners attached to it 
is something reasonable (plus adding an explicit opt-in of course).


I'm not terribly hopeful that the percentage of such documents is going
to be small enough, to be honest, but providing an opt-in and doing
outreach may be a reasonable alternative.


Another idea would be to cap the number of consecutive scroll
adjustments made by scroll anchoring. That would make the experience on
such broken websites somewhat less annoying, but it would also show
flickering until the cap kicks in, which would still make the browser
look broken :/.


Thoughts / ideas I may not have thought of/be aware of?

Thanks,

 -- Emilio

[1]: https://bugzilla.mozilla.org/show_bug.cgi?id=1592094#c15
[2]: https://bugs.chromium.org/p/chromium/issues/detail?id=920289


Thanks all,
Chris


On Thu, Oct 10, 2019 at 2:13 PM Rick Byers wrote:


Sorry for the delay.

We agree that scroll anchoring has unrealized potential to be valuable for
the web at large, and to make that happen we should be investing a lot more
in working with y'all (and if we can't succeed, probably removing it from
chromium). Concretely, +Chris Harrelson, who leads rendering for Chrome (and
likely someone else from his team), as well as +Nick Burris from the Chrome
input team, will start digging in ASAP. In addition to the normal
high-bandwidth engineer-to-engineer collaboration between chromium and
gecko, I propose the following high-level goals for our work:

- Ensure that there are no known deviations in behavior between
chromium and the spec (one way or the other).
- Ensure all the (non-ua-specific) site compat constraints folks are
hitting are captured in web-platform-tests. I.e. if Gecko passes the tests
and serves a chromium UA string it should work as well as in Chrome (modulo
other unrelated UA compat issues of course).
- Look for any reasonable opportunity to help deal with UA-specific
compat issues (i.e. those that show up on sites that are explicitly looking
for a Gecko UA string or other engine-specific feature). This may include
making changes in the spec / chromium implementation. This is probably the
toughest one, but I'm optimistic that if we nail the first two, we can find
some reasonable tradeoff for the hard parts that are left here. Philip (our
overall interop lead) has volunteered to help out here as well.

Does that sound about right? Any suggestions on the best forum for tight
engineering collaboration? GitHub good enough, or maybe get on an IRC /
slack channel together somewhere?

Rick

On Mon, Oct 7, 2019 at 2:11 PM Mike Taylor wrote:


Hi Rick,

On 9/28/19 10:07 PM, Rick Byers wrote:

Can you give us a week or so to chat about this within the Chrome team
and get back to you?


Any updates here?

Thanks.

--
Mike Taylor
Web Compat, Mozilla



Re: To what extent is sccache's distributed compilation usable?

2019-10-29 Thread Steve Fink

On 10/28/19 9:17 PM, Marcos Caceres wrote:

On Tuesday, October 29, 2019 at 3:27:52 AM UTC+11, smaug wrote:

Quite often one has just a laptop. Not compiling tons of Rust stuff all the 
time would be really nice.
(I haven't figured out when stylo decides to recompile itself - it seems to be 
somewhat random.)

Probably a gross misunderstanding on my part, but the sccache project page states [1]: 
"It is used as a compiler wrapper and avoids compilation when possible, storing a 
cache in a remote storage using the Amazon Simple Cloud Storage Service (S3) API, the 
Google Cloud Storage (GCS) API, or Redis."

I'm still (possibly naively) imagining that we will leverage the "the cloud"™️ 
to speed up compiles? Or am I totally misreading what the above is saying?

[1] https://github.com/mozilla/sccache#sccache---shared-compilation-cache


My experience with other distributed compilation tools (distcc, icecc) 
indicates that cloud resources are going to be of very limited use here. 
Compiles are just way too sensitive to network bandwidth and latency, 
especially when compiling with debuginfo which tends to be extremely 
large. Even if the network transfer takes way less time than the 
compile, the sending/receiving scheduling never seems to work out very 
well and things collapse down to a trickle.


Also, I've had very limited luck with using slow local machines. A CPU 
is not a CPU  -- even on a local gigabit network, farming off compiles 
to slow machines is more likely to slow things down than speed them up. 
Despite the fancy graphical tools, I was never completely satisfied with 
my understanding of exactly why that is. It could be that a lack of 
parallelism meant that everything ended up repeatedly waiting on the 
slow machine to finish the last file in a directory (or whatever your 
boundary of parallelism is). Or it could be network contention, 
especially when your object files have massive debuginfo portions. (I 
always wanted to have a way to generate split debuginfo, and not block 
on the debuginfo transfers.) The tools tended to show things working 
great for a while, and then slowing down to a snail's pace.


I've long thought [1] that predictive prefetching would be cool: when 
you do something (eg pull from mozilla-central), a background task 
starts prefetching cached build results that were generated remotely. 
Your local compile would use them if they were available, or generate 
them locally if not. That would at least do no harm (if you don't count 
network bandwidth).


sccache's usage of S3 makes sense when running from within AWS. I'm 
skeptical of its utility when running remotely. But I haven't tried 
setting up sccache on my local network, and my internet connectivity 
isn't great anyway.


I really ought to put my decade-old desktop into action again. My last 
attempt was with icecc, and though it worked pretty well when it worked, 
the pain in keeping it alive wasn't worth the benefit.



[1] Ancient history - 
https://wiki.mozilla.org/Sfink/Thought_Experiment_-_One_Minute_Builds





Re: To what extent is sccache's distributed compilation usable?

2019-10-29 Thread Simon Sapin

On 28/10/2019 17:27, smaug wrote:

Quite often one has just a laptop. Not compiling tons of Rust stuff all the 
time would be really nice.
(I haven't figured out when stylo decides to recompile itself - it seems to be 
somewhat random.)


I’m pretty sure there is no call to a random number generator in Cargo’s 
code for dependency tracking. Spurious recompiles are bugs that are 
worth filing so that they can be fixed.


If you manage to find steps to reproduce, building with a
CARGO_LOG=cargo::core::compiler::fingerprint environment variable should
print some details about why Cargo thinks some crate (and therefore
those that transitively depend on it) needs to be rebuilt.


https://github.com/servo/mozjs/pull/203 is an example of a fix for such 
a bug: a combination of a `build.rs` script being over-eager in using 
`rerun-if-changed` and calling something that ends up writing to the 
source directory.


`rerun-if-changed` is documented at 
https://doc.rust-lang.org/cargo/reference/build-scripts.html
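
To make that failure mode concrete, here is a minimal, hypothetical
`build.rs` sketch (not the actual mozjs script; the file names are made
up) following the pattern such fixes move towards: declare
`rerun-if-changed` only for real inputs, and write generated output under
OUT_DIR rather than into the source tree, so the fingerprint stays stable
between builds:

    // Hypothetical sketch, not the real mozjs build script.
    use std::{env, fs, path::PathBuf};

    fn main() {
        // Re-run only when this specific input changes, not on every build.
        println!("cargo:rerun-if-changed=src/defs.in");

        // OUT_DIR is provided by Cargo; writing into the source directory
        // instead would dirty the crate and force a rebuild every time.
        let out_dir = PathBuf::from(env::var("OUT_DIR").expect("set by Cargo"));
        let generated = fs::read_to_string("src/defs.in").expect("read input");
        fs::write(out_dir.join("defs.rs"), generated).expect("write output");
    }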



On 29/10/2019 00:54, Gerald Squelart wrote:

It's a bit annoying to see things like "force-cargo-...-build" when
nothing has changed,


That much makes sense to me: running `cargo build` is part of figuring
out that nothing has changed. Cargo has a lot of dependency tracking
logic; duplicating it all to avoid calling it would be unproductive and
fragile. Running `cargo build` with a warm filesystem cache when there's
nothing to do should not take more than a couple of seconds. (Looking at
the mtime of each source file.)
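
As a toy illustration only (this is not Cargo's actual fingerprint code,
which tracks much more than mtimes), that no-op check essentially boils
down to comparing the newest source mtime against the previous build:

    // Toy sketch: find the newest mtime under a directory, recursively,
    // and compare it to the time of the previous build.
    use std::{fs, io, path::Path, time::SystemTime};

    fn newest_mtime(dir: &Path) -> io::Result<SystemTime> {
        let mut newest = SystemTime::UNIX_EPOCH;
        for entry in fs::read_dir(dir)? {
            let entry = entry?;
            let meta = entry.metadata()?;
            let t = if meta.is_dir() {
                newest_mtime(&entry.path())?
            } else {
                meta.modified()?
            };
            newest = newest.max(t);
        }
        Ok(newest)
    }

    fn needs_rebuild(src_dir: &Path, last_build: SystemTime) -> io::Result<bool> {
        // Rebuild only if some source file changed after the previous build.
        Ok(newest_mtime(src_dir)? > last_build)
    }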




accompanied by lots of "waiting for file lock on package cache".


This line is printed (once) when Cargo is waiting for a filesystem
lock. Seeing it N times is a sign that there are at least N+1 copies
of Cargo running concurrently.


It would be nice to run Cargo only once and tell it up-front everything
it needs to do (with `-p foo` multiple times, or `--all`, or
`default-members`,
https://doc.rust-lang.org/cargo/reference/manifest.html#package-selection)
rather than having multiple copies competing for a shared resource.


--
Simon Sapin