Re: Is Quantum DOM affecting DevTools?

2017-09-28 Thread Jason Duell
We did do some work to slow down network loads in background tabs (in order
to prioritize the active tab).  Is the issue that you need to switch to the
background tab in order to make it connect faster?  Or does the foreground
tab not load quickly unless you switch back and forth from it?  I'm hoping
the former.

If you've got reliable STR it's probably time to open a bug and cc me
(:jduell), :mcmanus and Honza (:mayhemer).

Jason

On Thu, Sep 28, 2017 at 10:19 AM, Bill McCloskey 
wrote:

> If that's caused by anything Quantum-related, it's more likely to be
> Quantum networking stuff.
> -Bill
>
> On Thu, Sep 28, 2017 at 2:30 AM, Salvador de la Puente <
> sdelapue...@mozilla.com> wrote:
>
> > Hello there!
> >
> > I was testing some WebRTC demos in two separate tabs in Nightly. I realized
> > I needed to switch from one to another for them to connect faster. I
> > thought it would be related to Quantum DOM throttling and I was
> > wondering...
> >
> >- Is it possible that throttling would be affecting DevTools?
> >- Would it be possible to disable throttling if DevTools are open?
> >
> > What do you think?
> >
> > --
> > 
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Race Cache With Network experiment on Nightly

2017-05-25 Thread Jason Duell
>I think you were looking at the docs for opt-in Shield studies

Ah, right you are :)

OK, I've filed a bug for the pref study:

   https://bugzilla.mozilla.org/show_bug.cgi?id=1367951

Thanks!

Jason

On Thu, May 25, 2017 at 2:27 PM, Matthew Grimes <mgri...@mozilla.com> wrote:

> I think you were looking at the docs for opt-in Shield studies
> (experiments deployed as add-ons), not for pref flipping experiments. Due
> to the nature of some of the opt-in studies we run they require a different
> approval process. Pref flipping is available for all users, it is not
> opt-in. The process currently requires one bug and an email to release
> drivers. Feedback on the doc/process is always welcome!
>
> On Thu, May 25, 2017 at 1:40 PM, Jason Duell <jdu...@mozilla.com> wrote:
>
>> I'm worried we're going from too little process here to too much (at
>> least for this bug).  Opening a meta-bug + 4 sub-bugs and doing a legal
>> review, etc., is a lot of overhead to test some network plumbing that is
>> not going to be especially noticeable to users.
>>
>> Also, we expect that this code will mostly benefit users with slow
>> hardware (disk drives especially).  We'll need to cast a very wide net to
>> get nightly users that match that profile.  The Shield docs say that
>> "participation for Shield Studies is currently around 1-2% of randomly
>> selected participants" (does that map to 1-2% of nightly users?), so I'm
>> not sure we'd get enough coverage if we used Shield.
>>
>> Jason
>>
>> On Thu, May 25, 2017 at 11:30 AM, <mgri...@mozilla.com> wrote:
>>
>>> Hey folks. I run the Shield team. Pref flipping experiments ARE
>>> available on Nightly and will be available in all channels (including
>>> Release) at some point in Firefox 54.
>>>
>>> Since the process is still relatively new, I've been hacking on some how
>>> to docs: https://docs.google.com/document/d/16bpDZGCPKrOIgkkIo5mWKHPTlYXOatyg_-CUi-3-e54/edit#heading=h.mzzhkdagng85
>>>
>>> Feel free to give those a spin. Feedback on the docs/process is welcome.
>>>
>>> On Wednesday, May 24, 2017 at 6:14:55 PM UTC-7, Patrick McManus wrote:
>>> > a howto for a pref experiment would be awesome..
>>> >
>>> > On Wed, May 24, 2017 at 9:03 PM, Eric Rescorla <e...@rtfm.com> wrote:
>>> >
>>> > > What's the state of pref experiments? I thought they were not yet ready.
>>> > >
>>> > > -Ekr
>>> > >
>>> > >
>>> > > On Thu, May 25, 2017 at 7:15 AM, Benjamin Smedberg <benja...@smedbergs.us> wrote:
>>> > >
>>> > > > Is there a particular reason this is landing directly to nightly rather
>>> > > > than using a pref experiment? A pref experiment is going to provide much
>>> > > > more reliable comparative data. In general we're pushing everyone to use
>>> > > > controlled experiments for nightly instead of landing experimental work
>>> > > > directly.
>>> > > >
>>> > > > --BDS
>>> > > >
>>> > > > On Wed, May 24, 2017 at 11:36 AM, Valentin Gosu <valentin.g...@gmail.com> wrote:
>>> > > >
>>> > > > > As part of the Quantum Network initiative we are working on a project
>>> > > > > called "Race Cache With Network" (rcwn) [1].
>>> > > > >
>>> > > > > This project changes the way the network cache works. When we detect that
>>> > > > > disk IO may be slow, we send a network request in parallel, and we use the
>>> > > > > first response that comes back. For users with slow spinning disks and a
>>> > > > > low latency network, the result would be faster loads.
>>> > > > >
>>> > > > > This feature is currently preffed off - network.http.rcwn.enabled
>>> > > > > In bug 1366224, which is about to land on m-c, we plan to enable it on
>>> > > > > nightly for one or two days, to get some useful telemetry for our future
>>> > > > > work.
>>> > > > >
>>> > > > > For any crashes or unexpected behaviour, please file bugs blocking 1307504.

Re: Race Cache With Network experiment on Nightly

2017-05-25 Thread Jason Duell
I'm worried we're going from too little process here to too much (at least
for this bug).  Opening a meta-bug + 4 sub-bugs and doing a legal review,
etc., is a lot of overhead to test some network plumbing that is not going
to be especially noticeable to users.

Also, we expect that this code will mostly benefit users with slow hardware
(disk drives especially).  We'll need to cast a very wide net to get
nightly users that match that profile.  The Shield docs say that
"participation for Shield Studies is currently around 1-2% of randomly
selected participants" (does that map to 1-2% of nightly users?), so I'm
not sure we'd get enough coverage if we used Shield.

Jason

On Thu, May 25, 2017 at 11:30 AM,  wrote:

> Hey folks. I run the Shield team. Pref flipping experiments ARE available
> on Nightly and will be available in all channels (including Release) at
> some point in Firefox 54.
>
> Since the process is still relatively new, I've been hacking on some how
> to docs: https://docs.google.com/document/d/16bpDZGCPKrOIgkkIo5mWKHPTlYXOatyg_-CUi-3-e54/edit#heading=h.mzzhkdagng85
>
> Feel free to give those a spin. Feedback on the docs/process is welcome.
>
> On Wednesday, May 24, 2017 at 6:14:55 PM UTC-7, Patrick McManus wrote:
> > a howto for a pref experiment would be awesome..
> >
> > On Wed, May 24, 2017 at 9:03 PM, Eric Rescorla  wrote:
> >
> > > What's the state of pref experiments? I thought they were not yet ready.
> > >
> > > -Ekr
> > >
> > >
> > > On Thu, May 25, 2017 at 7:15 AM, Benjamin Smedberg <benja...@smedbergs.us> wrote:
> > >
> > > > Is there a particular reason this is landing directly to nightly rather
> > > > than using a pref experiment? A pref experiment is going to provide much
> > > > more reliable comparative data. In general we're pushing everyone to use
> > > > controlled experiments for nightly instead of landing experimental work
> > > > directly.
> > > >
> > > > --BDS
> > > >
> > > > On Wed, May 24, 2017 at 11:36 AM, Valentin Gosu <valentin.g...@gmail.com> wrote:
> > > >
> > > > > As part of the Quantum Network initiative we are working on a project
> > > > > called "Race Cache With Network" (rcwn) [1].
> > > > >
> > > > > This project changes the way the network cache works. When we detect that
> > > > > disk IO may be slow, we send a network request in parallel, and we use the
> > > > > first response that comes back. For users with slow spinning disks and a
> > > > > low latency network, the result would be faster loads.
> > > > >
> > > > > This feature is currently preffed off - network.http.rcwn.enabled
> > > > > In bug 1366224, which is about to land on m-c, we plan to enable it on
> > > > > nightly for one or two days, to get some useful telemetry for our future
> > > > > work.
> > > > >
> > > > > For any crashes or unexpected behaviour, please file bugs blocking 1307504.
> > > > >
> > > > > Thanks!
> > > > >
> > > > > [1] https://bugzilla.mozilla.org/show_bug.cgi?id=rcwn
> > > > > [2] https://bugzilla.mozilla.org/show_bug.cgi?id=1366224
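
To make the racing idea concrete, here is a rough C++ sketch of the decision logic only.
It is an illustration, not the necko implementation (which is fully asynchronous and gated
on network.http.rcwn.enabled); the helper names below are hypothetical stand-ins.

  // Illustration only: race the cache against the network and take the first
  // response. Helper names are made up; the real code is asynchronous.
  #include <chrono>
  #include <future>
  #include <string>

  struct Response { std::string body; bool fromCache; };

  // Stand-ins so the sketch is self-contained.
  Response LoadFromCache(const std::string& url)   { return {"cached copy of " + url, true}; }
  Response LoadFromNetwork(const std::string& url) { return {"network copy of " + url, false}; }
  bool DiskLooksSlow() { return true; }  // pretend the slow-disk heuristic fired

  Response RaceCacheWithNetwork(const std::string& url) {
    if (!DiskLooksSlow()) {
      return LoadFromCache(url);  // normal path: serve from the cache
    }
    // Disk IO may be slow: start both loads and use whichever finishes first.
    auto cache = std::async(std::launch::async, LoadFromCache, url);
    auto net   = std::async(std::launch::async, LoadFromNetwork, url);
    for (;;) {
      if (cache.wait_for(std::chrono::milliseconds(1)) == std::future_status::ready)
        return cache.get();
      if (net.wait_for(std::chrono::milliseconds(1)) == std::future_status::ready)
        return net.get();
    }
  }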
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship: NetworkInformation

2016-12-16 Thread Jason Duell
On Fri, Dec 16, 2016 at 11:35 AM, Tantek Çelik 
wrote:

>
> Honestly this is starting to sound more and more like a need for a
> "Minimal Network" variant of the "Work Offline" option we have in
> Firefox (which AFAIK no other current browser has), since no amount of
> OS-level guess-work is going to give you a reliable answer (as this
> thread has documented).
>

So a switch that toggles the "network is expensive" bit, plus turns off
browser updates, phishing list fetches, etc?  I can see how this would be
nice for power users on a tethered cell phone network.  One issue would be
to make sure users don't forget to turn it off (and never update their
browser again, etc).  Maybe it could time out.

Jason
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Linux content sandbox tightened

2016-10-07 Thread Jason Duell
Never mind--file:// only does reads.

Haven't had my coffee yet this morning :)

Jason

On Fri, Oct 7, 2016 at 10:13 AM, Jason Duell <jdu...@mozilla.com> wrote:

> It sounds like this is going to break all file:// URI accesses until we
> finish implementing e10s support for them:
>
>   https://bugzilla.mozilla.org/show_bug.cgi?id=922481
>
> That may be more bustage on nightly than is acceptable?
>
> Jason
>
>
> On Fri, Oct 7, 2016 at 9:49 AM, Gian-Carlo Pascutto <g...@mozilla.com>
> wrote:
>
>> Hi all,
>>
>> the next Nightly build will have a significantly tightened Linux
>> sandbox. Writes are no longer allowed except to shared memory (for IPC),
>> and to the system TMPDIR (and we're eventually going to get rid of the
>> latter, perhaps with an intermediate step to a Firefox-content-specific
>> tmpdir).
>>
>> There might be some compatibility fallout from this. Extensions/add-ons
>> that try to write from the content process will no longer work, but the
>> impact there should be limited given that similar (and stricter)
>> restrictions have been tried out on macOS. (See bug 1187099 and bug
>> 1288874 for info/discussion). Because Firefox currently still loads a
>> number of external libraries into the content process (glib, gtk,
>> pulseaudio, etc) there is some risk of breakage there as well. You know
>> where to report (Component: Security - Process Sandboxing).
>>
>> This behavior can be controlled via a pref:
>> pref("security.sandbox.content.level", 2);
>>
>> Reverting this to 1 goes back to the previous behavior where the set of
>> allowable system calls is restricted, but no filtering happens on
>> filesystem IO.
>>
>> When Firefox is built with debugging enabled, it will log any policy
>> violations. Currently, a clean Nightly build will show some of those.
>> They are inconsequential, and we'll deal with them, eventually. (Patches
>> welcome though!)
>>
>> --
>> GCP
>
>
>
> --
>
> Jason
>



-- 

Jason
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Linux content sandbox tightened

2016-10-07 Thread Jason Duell
It sounds like this is going to break all file:// URI accesses until we
finish implementing e10s support for them:

  https://bugzilla.mozilla.org/show_bug.cgi?id=922481

That may be more bustage on nightly than is acceptable?

Jason


On Fri, Oct 7, 2016 at 9:49 AM, Gian-Carlo Pascutto  wrote:

> Hi all,
>
> the next Nightly build will have a significantly tightened Linux
> sandbox. Writes are no longer allowed except to shared memory (for IPC),
> and to the system TMPDIR (and we're eventually going to get rid of the
> latter, perhaps with an intermediate step to a Firefox-content-specific
> tmpdir).
>
> There might be some compatibility fallout from this. Extensions/add-ons
> that try to write from the content process will no longer work, but the
> impact there should be limited given that similar (and stricter)
> restrictions have been tried out on macOS. (See bug 1187099 and bug
> 1288874 for info/discussion). Because Firefox currently still loads a
> number of external libraries into the content process (glib, gtk,
> pulseaudio, etc) there is some risk of breakage there as well. You know
> where to report (Component: Security - Process Sandboxing).
>
> This behavior can be controlled via a pref:
> pref("security.sandbox.content.level", 2);
>
> Reverting this to 1 goes back to the previous behavior where the set of
> allowable system calls is restricted, but no filtering happens on
> filesystem IO.
>
> When Firefox is built with debugging enabled, it will log any policy
> violations. Currently, a clean Nightly build will show some of those.
> They are inconsequential, and we'll deal with them, eventually. (Patches
> welcome though!)
>
> --
> GCP



-- 

Jason
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Basic Auth Prevalence (was Re: Intent to ship: Treat cookies set over non-secure HTTP as session cookies)

2016-06-10 Thread Jason Duell
This data also smells weird to me.  8% of pages using basic auth seems very
very high, and only 0.7% of basic auth being done unencrypted seems low.

Perhaps we should chat in London (ideally with Honza Bambas) and make sure
we're getting the telemetry right here.

Jason

On Fri, Jun 10, 2016 at 2:15 PM, Adam Roach  wrote:

> On 4/18/16 09:59, Richard Barnes wrote:
>
>> Could we just disable HTTP auth for connections not protected with TLS?
>> At
>> least Basic auth is manifestly insecure over an insecure transport.  I
>> don't have any usage statistics, but I suspect it's pretty low compared to
>> form-based auth.
>>
>
> As a follow up from this: we added telemetry to answer the exact question
> about how prevalent Basic auth over non-TLS connections was. Now that 49 is
> off Nightly, I pulled the stats for our new little counter.
>
> It would appear telemetry was enabled for approximately 109M page
> loads[1], of which approximately 8.7M[2] used HTTP auth -- or approximately
> 8% of all pages. (This is much higher than I expected -- approximately 1
> out of 12 page loads uses HTTP auth? It seems far less dead than we
> anticipated).
>
> 749k of those were unencrypted basic auth[2]; this constitutes
> approximately 0.7% of all recorded traffic.
>
> I'll look at the 49 Aurora stats when it has enough data -- it'll be
> interesting to see how much if it is nontrivially different.
>
> /a
>
>
> [1]
> https://telemetry.mozilla.org/new-pipeline/dist.html#!cumulative=0_date=2016-06-06=__none__!__none__!__none___channel_version=nightly%252F49=HTTP_PAGELOAD_IS_SSL_channel_version=null=Firefox=1_keys=submissions_date=2016-05-04=0=1_submission_date=0
>
> [2]
> https://telemetry.mozilla.org/new-pipeline/dist.html#!cumulative=0_date=2016-06-06=__none__!__none__!__none___channel_version=nightly%252F49=HTTP_AUTH_TYPE_STATS_channel_version=null=Firefox=1_keys=submissions_date=2016-05-04=0=1_submission_date=0
>
>
> --
> Adam Roach
> Principal Platform Engineer
> Office of the CTO



-- 

Jason
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: The Whiteboard Tag Amnesty

2016-06-08 Thread Jason Duell
Emma,

> it's not an indexed field or a real tag system, making it hard to parse,
> search, and update.

Could we dig into details a little more here?  I assume we could add a
database index for the whiteboard field if performance is an issue.
Do we give keywords an enum value or something (so bugzilla can
index/search them faster)?  I'm not clear on what a "real tag system" means
concretely here.

thanks!

Jason

On Wed, Jun 8, 2016 at 3:29 PM, Emma Humphries  wrote:

> There is a ticket for a "proper" tag system,
> https://bugzilla.mozilla.org/show_bug.cgi?id=1266609 which given time, I'd
> like to get implemented, but with limited resources, this is what I can do
> now.
>
> On Wed, Jun 8, 2016 at 2:08 PM, Patrick McManus 
> wrote:
>
> > as you note the whiteboard tags are permissionless. That's their killer
> > property. Keywords as you note are not, that's their critical weakness.
> >
> > instead of fixing that situation in the "long term" can we please fix that
> > as a precondition of converting things? Mozilla doesn't need more
> > centralized systems. If they can't be 100% automated to be permissionless
> > (e.g. perhaps because they don't scale) then the new arrangement of things
> > is definitely worse.
> >
> > I'll note that even for triage, our eventual system evolved rapidly and
> > putting an administrator in the middle to add and drop keywords and
> > indices would have just slowed stuff down. Permissionless to me is a
> > requirement.
> >
> >
> > On Wed, Jun 8, 2016 at 2:43 PM, Kartikaya Gupta 
> > wrote:
> >
> >> What happens after June 24? Is the whiteboard field going to be removed?
> >>
> >> On Wed, Jun 8, 2016 at 4:32 PM, Emma Humphries 
> wrote:
> >> > tl;dr -- nominate whiteboard tags you want converted to keywords. Do it by
> >> > 24 June 2016.
> >> >
> >> > We have a love-hate relationship with the whiteboard field in bugzilla. On
> >> > one hand, we can add team-specific meta data to a bug. On the other hand,
> >> > it's not an indexed field or a real tag system, making it hard to parse,
> >> > search, and update.
> >> >
> >> > But creating keywords is a hassle since you have to request them.
> >> >
> >> > The long term solution is to turn whiteboard into proper tag system,
> but
> >> > the Bugzilla Team's offering to help with some bulk conversion of
> >> > whiteboard tags your teams use into keywords.
> >> >
> >> > To participate:
> >> >
> >> > 1. Create a Bug in the bugzilla.mozilla.org::Administration component
> >> for each
> >> > whiteboard tag you want to convert.
> >> >
> >> > 2. The bug's description should have the old keyword, the new keyword
> >> you
> >> > want to replace it with, and the description of this new keyword which
> >> will
> >> > appear in the online help.
> >> >
> >> > 3. Make sure your keyword doesn't conflict with existing keywords, so
> be
> >> > prepared to rename it. If your keyword is semantically similar to an
> >> > existing keyword or other existing bugzilla field we'll talk you
> about a
> >> > mass change to your bugs.
> >> >
> >> > 4. Make the parent bug,
> >> https://bugzilla.mozilla.org/show_bug.cgi?id=1279022,
> >> > depend on your new bug.
> >> >
> >> > 5. CC Emma Humphries on the bug
> >> >
> >> > We will turn your whiteboard tag into a keyword and remove your old
> tag
> >> > from the whiteboard tags, so make sure your dashboards and other tools
> >> that
> >> > consume Bugzilla's API are updated to account for this.
> >> >
> >> > Please submit your whiteboard fields to convert by Friday 24 June
> 2016.
> >> >
> >> > Cheers,
> >> >
> >> > Emma Humphries



-- 

Jason
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship: Treat cookies set over non-secure HTTP as session cookies

2016-04-15 Thread Jason Duell
On Fri, Apr 15, 2016 at 2:12 AM, Jason Duell <jdu...@mozilla.com> wrote:

> On Thu, Apr 14, 2016 at 10:54 PM, Chris Peterson <cpeter...@mozilla.com>
> wrote:
>
>>
>> Focusing on third-party session cookies is an interesting idea.
>> "Sessionizing" non-HTTPS third-party cookies would encourage ad networks
>> and CDNs to use HTTPS, allowing content sites to use HTTPS without mixed
>> content problems. Much later, we could consider sessionizing even HTTPS
>> third-party cookies.
>>
>
> How about we sessionize only 3rd party HTTP cookies from sites that are on
> our tracking protection list?  That seems the most targeted way to
> encourage ad networks to bump up to HTTPS with a minimal amount of
> collateral damage to other users of 3rd party HTTP cookies.
>

(We could presumably keep a list of CDNs too and sessionize those as well)

Jason



> > We seem to have this already: network.cookie.thirdparty.sessionOnly
>
> Correct, that's what it does.
>
> Jason
>
>
>
>>
>> On 4/14/16 1:54 AM, Chris Peterson wrote:
>>
>>> Summary: Treat cookies set over non-secure HTTP as session cookies
>>>
>>> Exactly one year ago today (!), Henri Sivonen proposed [1] treating
>>> cookies without the `secure` flag as session cookies.
>>>
>>> PROS:
>>>
>>> * Security: login cookies set over non-secure HTTP can be sniffed and
>>> replayed. Clearing those cookies at the end of the browser session would
>>> force the user to log in again next time, reducing the window of
>>> opportunity for an attacker to replay the login cookie. To avoid this,
>>> login-requiring sites should use HTTPS for at least their login page
>>> that set the login cookie.
>>>
>>> * Privacy: most ad networks still use non-secure HTTP. Content sites
>>> that use these ad networks are prevented from deploying HTTPS themselves
>>> because of HTTP/HTTPS mixed content breakage. Clearing user-tracking
>>> cookies set over non-secure HTTP at the end of every browser session
>>> would be a strong motivator for ad networks to upgrade to HTTPS, which
>>> would unblock content sites' HTTPS rollouts.
>>>
>>> However, my testing of Henri's original proposal shows that too few
>>> sites set the `secure` cookie flag for this to be practical. Even sites
>>> that primarily use HTTPS, like google.com, omit the `secure` flag for
>>> many cookies set over HTTPS.
>>>
>>> Instead, I propose treating all cookies set over non-secure HTTP as
>>> session cookies, regardless of whether they have the `secure` flag.
>>> Cookies set over HTTPS would be treated as "secure so far" and allowed
>>> to persist beyond the current browser session. This approach could be
>>> tightened so any "secure so far" cookies later sent over non-secure HTTP
>>> could be downgraded to session cookies. Note that Firefox's session
>>> restore will persist "session" cookies between browser restarts for the
>>> tabs that had been open. (This is "eternal session" feature/bug 530594.)
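
Restating the proposed persistence rule as a small C++ sketch (field and function names
here are hypothetical, not the actual cookie-service code):

  // Sketch of the proposed rule: only cookies set over HTTPS, and still
  // "secure so far", may outlive the browser session. Hypothetical names.
  struct CookieState {
    bool setOverHttps;   // was the Set-Cookie received on an HTTPS response?
    bool secureSoFar;    // has it only ever travelled over HTTPS since then?
  };

  bool MayPersistBeyondSession(const CookieState& c) {
    if (!c.setOverHttps) {
      return false;  // set over non-secure HTTP => session cookie, secure flag or not
    }
    // Optional tightening from the proposal: a cookie later sent over
    // non-secure HTTP is downgraded to a session cookie as well.
    return c.secureSoFar;
  }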
>>>
>>> To test my proposal, I loaded the home pages of the Alexa Top 25 News
>>> sites [2]. These 25 pages set over 1300 cookies! Fewer than 200 were set
>>> over HTTPS and only 7 had the `secure` flag. About 900 were third-party
>>> cookies. Treating non-secure cookies as session cookies means that over
>>> 1100 cookies would be cleared at the end of the browser session!
>>>
>>> CONS:
>>>
>>> * Sites that allow users to configure preferences without logging into
>>> an account would forget the users' preferences if they are not using
>>> HTTPS. For example, companies that have regional sites would forget the
>>> user's selected region at the end of the browser session.
>>>
>>> * Ad networks' opt-out cookies (for what they're worth) set over
>>> non-secure HTTP would be forgotten at the end of the browser session.
>>>
>>> Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1160368
>>>
>>> Link to standard: N/A
>>>
>>> Platform coverage: All platforms
>>>
>>> Estimated or target release: Firefox 49
>>>
>>> Preference behind which this will be implemented:
>>> network.cookie.lifetime.httpSessionOnly
>>>
>>> Do other browser engines implement this? No
>>>
>>> [1]
>>>
>>> https://groups.google.com/d/msg/mozilla.dev.platform/xaGffxAM-hs/aVgYuS3QA2MJ
>>>
>>> [2] http://www.alexa.com/topsites/category/Top/News
>>>
>>
>
>
>
> --
>
> Jason
>



-- 

Jason
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to ship: Treat cookies set over non-secure HTTP as session cookies

2016-04-15 Thread Jason Duell
On Thu, Apr 14, 2016 at 10:54 PM, Chris Peterson 
wrote:

>
> Focusing on third-party session cookies is an interesting idea.
> "Sessionizing" non-HTTPS third-party cookies would encourage ad networks
> and CDNs to use HTTPS, allowing content sites to use HTTPS without mixed
> content problems. Much later, we could consider sessionizing even HTTPS
> third-party cookies.
>

How about we sessionize only 3rd party HTTP cookies from sites that are on
our tracking protection list?  That seems the most targeted way to
encourage ad networks to bump up to HTTPS with a minimal amount of
collateral damage to other users of 3rd party HTTP cookies.

> We seem to have this already: network.cookie.thirdparty.sessionOnly

Correct, that's what it does.

Jason



>
> On 4/14/16 1:54 AM, Chris Peterson wrote:
>
>> Summary: Treat cookies set over non-secure HTTP as session cookies
>>
>> Exactly one year ago today (!), Henri Sivonen proposed [1] treating
>> cookies without the `secure` flag as session cookies.
>>
>> PROS:
>>
>> * Security: login cookies set over non-secure HTTP can be sniffed and
>> replayed. Clearing those cookies at the end of the browser session would
>> force the user to log in again next time, reducing the window of
>> opportunity for an attacker to replay the login cookie. To avoid this,
>> login-requiring sites should use HTTPS for at least their login page
>> that set the login cookie.
>>
>> * Privacy: most ad networks still use non-secure HTTP. Content sites
>> that use these ad networks are prevented from deploying HTTPS themselves
>> because of HTTP/HTTPS mixed content breakage. Clearing user-tracking
>> cookies set over non-secure HTTP at the end of every browser session
>> would be a strong motivator for ad networks to upgrade to HTTPS, which
>> would unblock content sites' HTTPS rollouts.
>>
>> However, my testing of Henri's original proposal shows that too few
>> sites set the `secure` cookie flag for this to be practical. Even sites
>> that primarily use HTTPS, like google.com, omit the `secure` flag for
>> many cookies set over HTTPS.
>>
>> Instead, I propose treating all cookies set over non-secure HTTP as
>> session cookies, regardless of whether they have the `secure` flag.
>> Cookies set over HTTPS would be treated as "secure so far" and allowed
>> to persist beyond the current browser session. This approach could be
>> tightened so any "secure so far" cookies later sent over non-secure HTTP
>> could be downgraded to session cookies. Note that Firefox's session
>> restore will persist "session" cookies between browser restarts for the
>> tabs that had been open. (This is "eternal session" feature/bug 530594.)
>>
>> To test my proposal, I loaded the home pages of the Alexa Top 25 News
>> sites [2]. These 25 pages set over 1300 cookies! Fewer than 200 were set
>> over HTTPS and only 7 had the `secure` flag. About 900 were third-party
>> cookies. Treating non-secure cookies as session cookies means that over
>> 1100 cookies would be cleared at the end of the browser session!
>>
>> CONS:
>>
>> * Sites that allow users to configure preferences without logging into
>> an account would forget the users' preferences if they are not using
>> HTTPS. For example, companies that have regional sites would forget the
>> user's selected region at the end of the browser session.
>>
>> * Ad networks' opt-out cookies (for what they're worth) set over
>> non-secure HTTP would be forgotten at the end of the browser session.
>>
>> Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1160368
>>
>> Link to standard: N/A
>>
>> Platform coverage: All platforms
>>
>> Estimated or target release: Firefox 49
>>
>> Preference behind which this will be implemented:
>> network.cookie.lifetime.httpSessionOnly
>>
>> Do other browser engines implement this? No
>>
>> [1]
>>
>> https://groups.google.com/d/msg/mozilla.dev.platform/xaGffxAM-hs/aVgYuS3QA2MJ
>>
>> [2] http://www.alexa.com/topsites/category/Top/News
>>
>



-- 

Jason
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: nsIProtocolHandler in Electrolysis?

2016-01-04 Thread Jason Duell
Cameron,

The way the builtin protocols (HTTP/FTP/Websockets/etc) handle this is that
the protocol handler code checks whether we're in a child process or not
when a channel is created, and we hand out different things depending on
that.  In the parent, we hand out a "good old HTTP channel" (nsHttpChannel)
just as we've always done in single-process Firefox.  In the child we hand
out a stub channel (HttpChannelChild) that looks and smells like an
nsIHttpChannel, but actually uses IPDL (our C++ cross-process messaging
language) to essentially shunt all the real work to the parent.  When
AsyncOpen is called on the child, the stub channel winds up telling the
parent to create a regular "real" http channel, which does the actual work
of creating/sending an HTTP request, and as the reply come back, sending
the data to the child, which dispatches OnStart/OnData/OnStopRequest
messages as they arrive from the parent.
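
A very condensed C++ sketch of that split, with made-up class names standing in for
nsHttpChannel / HttpChannelChild and the IPC plumbing:

  // Sketch only: the protocol handler hands out a real channel in the parent
  // process and a forwarding stub in the content process. Names are made up.
  #include <memory>

  class Channel {                        // what consumers see (think nsIChannel)
   public:
    virtual ~Channel() = default;
    virtual void AsyncOpen() = 0;
  };

  class ParentChannel : public Channel { // the "real" channel: talks to the network
   public:
    void AsyncOpen() override { /* build the request, send it, deliver the data */ }
  };

  class ChildChannel : public Channel {  // stub living in the content process
   public:
    void AsyncOpen() override {
      // No network work here: forward the open to the parent (IPDL in Gecko)
      // and wait for OnStart/OnData/OnStop messages to come back asynchronously.
    }
  };

  bool IsContentProcess() { return false; }  // stand-in for the real check

  std::unique_ptr<Channel> NewChannel() {
    // The handler picks an implementation based on which process it runs in.
    if (IsContentProcess()) {
      return std::make_unique<ChildChannel>();
    }
    return std::make_unique<ParentChannel>();
  }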

One key ingredient here is to make sure all cross-process communication is
asynchronous whenever possible (and that should be 95%+ of the time).  You
want to avoid blocking the child process waiting for synchronous
cross-process communication (and you're not allowed to block the parent
waiting to the child to respond).  You also generally want to have the
parent channel send all the relevant data to the child that it will need to
service all nsI[Foo]Channel requests, as opposed to doing a "remote object"
style approach (where you'd send a message off to the parent process to ask
the "real" channel for the answer).  This is both because 1) that would be
painfully slow, and 2) the parent and child objects may not be in the same
state. For instance, if client code calls channel.isPending() on a child
channel that hasn't dispatched OnStopRequest yet, the answer should be
'true'.  But if you ask the parent channel, it may have already hit
OnStopRequest and sent that data for that to the child (where it's waiting
to be dispatched).  So for instance, HTTP channels ship the entire set of
HTTP response headers to the child as part of receiving OnStartRequest from
the parent, so that they can service any GetResponseHeader() calls without
asking the parent.

From talking to folks who know JS better than I do, it sounds like the
mechanism you'll want to use for all your cross-process communication is
the Message Manager:


https://developer.mozilla.org/en-US/Firefox/Multiprocess_Firefox/Message_Manager

https://developer.mozilla.org/en-US/Firefox/Multiprocess_Firefox/Message_Manager/Message_manager_overview

http://mxr.mozilla.org/mozilla-central/source/dom/base/nsIMessageManager.idl?force=1#15

One difference between C++/IPDL and JS/MM  is that IPDL has the builtin
concept of an IPDL "channel": it's like a pipe you set up.  Each C++ necko
channel in e10s sets up its own IPDL 'channel' (which is really just a
unique ID under the covers).  So when, for instance, an OnDataAvailable
message gets sent from the parent to the child, we automatically know which
necko channel it belongs to (from the IPDL channel it arrives on).  The
Message Manager's messages are more like DOM events--there's no notion of a
channel that they belong to, so you'll need to include as part of the
message some kind of ID that you map back to the necko channel that it's
for (I'd say use the URI, but that wouldn't work if you've got multiple
channels open to the same URI.  So you'll probably assign each channel a
GUID and keep a hashtable on both the parent and child that lets you map
from GUID->channel.)
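
For that last point, the bookkeeping amounts to a small table on each side. Sketched in
C++ here for brevity, even though an add-on would keep the equivalent map in JS next to
its message listeners; all names below are hypothetical:

  // Sketch: give every in-flight channel an ID, send the ID with every
  // message, and map replies back to the right channel. Hypothetical names.
  #include <cstdint>
  #include <string>
  #include <unordered_map>

  struct PendingChannel {
    std::string uri;
    // ...listener, state, etc.
  };

  class ChannelRegistry {
   public:
    uint64_t Register(PendingChannel channel) {
      uint64_t id = mNextId++;            // unique per process; a GUID works too
      mChannels.emplace(id, std::move(channel));
      return id;                          // include this id in every message sent
    }
    PendingChannel* Lookup(uint64_t id) { // called when a reply message arrives
      auto it = mChannels.find(id);
      return it == mChannels.end() ? nullptr : &it->second;
    }
    void Remove(uint64_t id) { mChannels.erase(id); }  // after OnStopRequest
   private:
    uint64_t mNextId = 1;
    std::unordered_map<uint64_t, PendingChannel> mChannels;
  };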

I'm happy to help some more off-list with examples of how our current
protocols handle various things as you have questions.  Hopefully this will
get you started, along with this inspirational gopher:// video:

   https://www.youtube.com/watch?v=WaSUyYSQie8

:)

Jason

On Mon, Jan 4, 2016 at 4:03 PM, Cameron Kaiser  wrote:

> On 1/4/16 12:09 PM, Dave Townsend wrote:
>
>> On Mon, Jan 4, 2016 at 12:03 PM, Cameron Kaiser 
>> wrote:
>>
>>> What's different about nsIProtocolHandler in e10s? OverbiteFF works in 45
>>> aurora without e10s on, but fails to recognize the protocol it defines
>>> with
>>> e10s enabled. There's no explanation of this in the browser console and
>>> seemingly no error. Do I have to do extra work to register the protocol
>>> handler component, or is there some other problem? A cursory search of
>>> MDN
>>> was no help.
>>>
>>> Assuming you are registering the protocol handler in chrome.manifest
>> it will only be registered in the parent process but you will probably
>> need to register it in the child process too and make it do something
>> sensible in each case. You'll have to do that with JS in a frame or
>> process script.
>>
>
> That makes sense, except I'm not sure how to split it apart. Are there any
> examples of what such a parent-child protocol handler should look like in a
> basic sense? The p-c goop in netwerk/protocol/ is not really amenable to
> determining this, 

Re: Too many oranges!

2015-12-22 Thread Jason Duell
On Tue, Dec 22, 2015 at 11:38 AM, Ben Kelly  wrote:

>
> I'd rather see us do:
>
> 1) Raise the visibility of oranges.  Post the most frequent intermittents
> without an owner to dev-platform every N days.
> 2) Make its someone's job to find owners for top oranges.  I believe RyanVM
> used to do that, but not sure if its still happening now that he has
> changed roles.
>
> Ben
>
>
I'm with bkelly on this one.  Maybe with some additional initial messaging
("War on Orange raised in priority!") too.  I don't think we want to pivot
all work in platform for this.

Jason



>
> >
> > On Tue, Dec 22, 2015 at 7:41 AM Mike Conley  wrote:
> >
> > > I would support scheduled time[1] to do maintenance[2] and help improve
> > our
> > > developer tooling and documentation. I'm less sure how to integrate
> such
> > a
> > > thing in practice.
> > >
> > > [1]: A day, a week, heck maybe even a release cycle
> > > [2]: Where maintenance is fixing oranges, closing out papercuts,
> > > refactoring, etc.
> > >
> > > On 21 December 2015 at 17:35,  wrote:
> > >
> > > > On Monday, December 21, 2015 at 1:16:13 PM UTC-6, Kartikaya Gupta
> > wrote:
> > > > > So, I propose that we create an orangefactor threshold above which
> > the
> > > > > tree should just be closed until people start fixing intermittent
> > > > > oranges. Thoughts?
> > > > >
> > > > > kats
> > > >
> > > > How about regularly scheduled test fix days where everyone drops what
> > > they
> > > > are doing and spends a day fixing tests? mc could be closed to
> > everything
> > > > except critical work and test fixes. Managers would be able to opt
> > > > individuals out of this as needed but generally everyone would be
> > > expected
> > > > to take part.
> > > >
> > > > Jim



-- 

Jason
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: about:profiles and the new profile manager

2015-12-18 Thread Jason Duell
I think we need to compact the new UI.  The old profile manager shows me up
to 5 profiles that I can simply click on to launch the browser.  With the
new UI the info for the first profile fills up the whole popup window, so
launching other profiles now requires me to scroll down, then click.  That
sounds minor (and in some sense it is) but 95% of the time the profile
manager is launched to select which profile to run (versus trying to find
where the profile is stored on disk, etc), so making that a slower
experience seems like a step backward.

It's great to see it running as a normal tab, though!

Do we have any plans to make Profile manager more prominent in our user
experience?  I've used it for years now to keep one instance of firefox
running with work stuff, and one for personal use.  But it remains a buried
secret, when it's really handy for a lot of use cases (people sharing a
computer, etc).  I know a lot of people who wind up using different
browsers to achieve the same thing, which seems like a waste.

Jason


On Fri, Dec 18, 2015 at 12:54 PM, Andrea Marchesini  wrote:

> >
> >
> > The replacement for the ProfileManager probably needs some UX work,
> > though.  It was not clear to me which profile was actually going to be
> > launched if I clicked the "Start Nightly" button.
> >
> >
> Right. This is one of the bugs I'm working on (bug 1233032). The new UI has
> been written but, definitely, we still need some UX work.
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>



-- 

Jason
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to unship: jar: URIs from content

2015-10-15 Thread Jason Duell
OMG yes please.

Jason

On Thu, Oct 15, 2015 at 11:31 AM, Ehsan Akhgari 
wrote:

> On 2015-10-15 1:58 PM, Ehsan Akhgari wrote:
>
>> We currently support URLs such as
>> > http://mxr.mozilla.org/mozilla-central/source/modules/libjar/test/mochitest/bug403331.zip?raw=1=application/java-archive!/test.html
>> >.
>>   This is a Firefox specific feature that no other engine implements,
>> and it increases our attack surface unnecessarily.  As such, I would
>> like to put it behind a pref and disable it for Web content by default.
>>
>
> FWIW I filed bug 1215235 for this.  We'll wait for this discussion before
> landing code there.
>
>



-- 

Jason
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Voting in BMO

2015-06-09 Thread Jason Duell
I've never seen votes make a real difference in the 6 years I've been
around on Bugzilla.  The one use case I can think for keeping them is as an
escape valve for user frustration on old, long-standing bugs like

  https://bugzilla.mozilla.org/show_bug.cgi?id=41489

I.e. when people start griping about "I can't believe lame Mozilla hasn't
fixed this yet" we can tell people to vote instead of filling the comments
with complaints.  But that's a rare case and I'm not sure it's worth
keeping voting just for that.

On Tue, Jun 9, 2015 at 2:24 PM, Chris Peterson cpeter...@mozilla.com
wrote:

 On 6/9/15 2:09 PM, Mark Côté wrote:

 In a quest to simplify both the interface and the maintenance of
 bugzilla.mozilla.org, we're looking for features that are of
 questionable value to see if we can get rid of them.  As I'm sure
 everyone knows, Bugzilla grew organically, without much of a road map,
 over a long time, and it experienced a lot of scope bloat, which has
 made it complex both on the inside and out.  I'd like to cut that down
 at least a bit if I can.

 To that end, I'd like to consider the voting feature.  While it is
 enabled on a quite a few products, anecdotally I have heard
 many times that it isn't actually useful, that is, votes aren't really
 being used to prioritize features  fixes.  If your team uses voting,
 I'd like to talk about your use case and see if, in general, it makes
 sense to continue to support this feature.


 I vote for bugs as a polite (sneaky?) way to watch a bug's bugmail without
 spamming all the other CCs by adding myself to the bug's real CC list.


 chris






-- 

Jason
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Using rust in Gecko. rust-url compatibility

2015-04-30 Thread Jason Duell
+1 to asserting during tests. I'd feel better about doing it on nightly too
if there were a way to include the offending URI in the crash report.  But
I'm guessing there's not?

On Thu, Apr 30, 2015 at 3:42 PM, Jet Villegas jville...@mozilla.com wrote:

 I wonder why we'd allow *any* parsing differences here? Couldn't you just
 assert and fail hard while you're testing against our tests and in Nightly?
 I imagine the differences you don't catch this way will be so subtle that
 crowd-sourcing is unlikely to catch them either.

 --Jet

 On Thu, Apr 30, 2015 at 3:34 PM, Valentin Gosu valentin.g...@gmail.com
 wrote:

  As some of you may know, Rust is approaching its 1.0 release in a couple
 of
  weeks. One of the major goals for Rust is using a rust library in Gecko.
  The specific one I'm working at the moment is adding rust-url as a safer
  alternative to nsStandardURL.
 
  This project is still in its infancy, but we're making good progress. A
 WIP
  patch is posted in bug 1151899, while infrastructure support for the rust
  compiler is tracked in bug 1135640.
 
  One of the main problems in this endeavor is compatibility. It would be
  best if this change wouldn't introduce any changes in the way we parse
 and
  encode/decode URLs, however rust-url does differ a bit from Gecko's own
  parser. While we can account for the differences we know of, there may
 be a
  lot of other cases we are not aware of. I propose using our volunteer
 base
  in trying to find more of these differences by reporting them on Nightly.
 
  My patch currently uses printf to note when a parsing difference occurs,
 or
  when any of the getters (GetHost, GetPath, etc) returns a string that's
  different from our native implementation. Printf might not be the best
 way
  of logging these differences though. NSPR logging might work, or even
  writing to a log file in the current directory.
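
The comparison itself is simple; a stripped-down sketch of what gets logged when the two
parsers disagree (the ParsedUrl struct and the getters shown are hypothetical stand-ins
for nsStandardURL and the rust-url bindings):

  // Sketch: compare the output of the two parsers for one spec and log any
  // getter that differs. Types and fields are hypothetical stand-ins.
  #include <cstdio>
  #include <string>

  struct ParsedUrl {
    std::string host;
    std::string path;
  };

  void LogParserDifferences(const std::string& spec,
                            const ParsedUrl& gecko,   // native parser result
                            const ParsedUrl& rust) {  // rust-url result
    if (gecko.host != rust.host) {
      std::printf("GetHost mismatch for %s: '%s' vs '%s'\n",
                  spec.c_str(), gecko.host.c_str(), rust.host.c_str());
    }
    if (gecko.path != rust.path) {
      std::printf("GetPath mismatch for %s: '%s' vs '%s'\n",
                  spec.c_str(), gecko.path.c_str(), rust.path.c_str());
    }
  }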
 
  These differences are quite privacy sensitive, so an automatic reporting
  tool probably wouldn't work. Has anyone done something like this before?
  Would fuzzing be a good way of finding more cases?
 
  I'm waiting for any comments and suggestions you may have.
  Thanks!




-- 

Jason
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: HTTP/1.1 Multiplexing

2015-04-09 Thread Jason Duell
At this point the HTTP/2 ship has sailed.  It's exceedingly unlikely that
we or any other browser vendor or the IETF are going to pivot to a modified
HTTP/1.1 to get the feature set here.

Sorry!

Jason

On Thu, Apr 9, 2015 at 11:53 AM, Daniel Stenberg dan...@haxx.se wrote:

 On Wed, 8 Apr 2015, max.bruc...@gmail.com wrote:

  A request begins by adding a header: X-Req-ID, set to a connection-unique
 value. The server responded with an exact copy of this ID, and a
 X-Req-Target header which specifies the location of the response(for server
 pushing mostly). The server sets the X-Req-ID = push/based-id for pushed
> responses. In doing so, we do away with the standard in-order processing &
 one-response-per-request model.


 In doing away with that, you're also no longer HTTP/1.1 compliant (which
 is a pretty significant drawback) and you also miss a few of the definite
 benefits that HTTP/2 brings: like priorities and HPACK on the first request.

 My guess is that you will have a hard time to convince HTTP people that
 this is a good idea worthy to implement even when you call it something
 else than 1.1. But then that's just my opinion.

 --

  / daniel.haxx.se





-- 

Jason
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Enabling new HTTP cache on nightly (browser only, not automated tests) soon

2014-05-05 Thread Jason Duell

Our trial run of the HTTP cache v2 is done and we will be back to using
the old cache as of tonight's nightly.  We found one very important bug
that didn't show up in automated tests, which is great.

Jason

https://bugzilla.mozilla.org/show_bug.cgi?id=1006181
https://bugzilla.mozilla.org/show_bug.cgi?id=1006197




On 04/30/2014 11:23 PM, Jason Duell wrote:

In February we briefly turned on the new HTTP cache (cache2) for a few
days--it was quite useful at shaking out some bugs.  We're planning to
do this again starting in the next day or two--if this seems like a Bad
Idea to you please comment ASAP in

https://bugzilla.mozilla.org/show_bug.cgi?id=1004185

Note: like last time, cache2 will be turned on for nightly Firefox
desktop users only, and not for android/b2g.  It will also not be turned
on for the buildbots, as we still have a few orange/perf bugs that we're
tracking down.

We have a number of people who have been using cache2 for their daily
browsing for quite some time, so we don't expect catastrophic failure.
Please file any bugs you see in Bugzilla under Networking:Cache...

Jason


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Enabling new HTTP cache on nightly (browser only, not automated tests) soon

2014-05-01 Thread Jason Duell
In February we briefly turned on the new HTTP cache (cache2) for a few 
days--it was quite useful at shaking out some bugs.  We're planning to 
do this again starting in the next day or two--if this seems like a Bad 
Idea to you please comment ASAP in


   https://bugzilla.mozilla.org/show_bug.cgi?id=1004185

Note: like last time, cache2 will be turned on for nightly Firefox 
desktop users only, and not for android/b2g.  It will also not be turned 
on for the buildbots, as we still have a few orange/perf bugs that we're 
tracking down.


We have a number of people who have been using cache2 for their daily 
browsing for quite some time, so we don't expect catastrophic failure. 
Please file any bugs you see in Bugzilla under Networking:Cache...


Jason
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Enabling new HTTP cache on nightly (browser only, not automated tests) soon

2014-05-01 Thread Jason Duell

On 05/01/2014 08:01 AM, Gavin Sharp wrote:

I had the same concern in bug 967693. There was some back and forth in
a private email thread (we should have discussed it in the bug...)
that essentially boiled down to "the orange/perf investigations are
blocked, we want more nightly crash/bug reports to work on in parallel
while those are figured out," if I recall correctly.


Yes, that's exactly right.  The orange/perf bugs are not major bugs, but 
they're enough to keep us from doing a full landing.  But meanwhile a 
wider test audience is likely to shake out bugs that don't emerge from 
our automated test coverage.  We found a few bugs that way in our last 
trial run.  It'd be good to be able to work on those in parallel.


 Do we have the telemetry or feedback channels in place to
 measure what you need to measure?

We'll be keeping an eye on telemetry measures like cache hit rate and 
response time.  The other feedback channel is to alert engineers and 
other nightly users that this is happening so we're more likely to get 
bugs filed.


Jason




Gavin

On Thu, May 1, 2014 at 10:00 AM, Benjamin Smedberg
benja...@smedbergs.us wrote:

On 5/1/2014 2:23 AM, Jason Duell wrote:




Note: like last time, cache2 will be turned on for nightly Firefox desktop
users only, and not for android/b2g.  It will also not be turned on for the
buildbots, as we still have a few orange/perf bugs that we're tracking down.



This part doesn't make much sense to me. What is the goal of getting nightly
feedback if we know that there are unresolved test failures and performance
issues? Do we think that enabling it on nightly will help fix the known
issues?

What kind of feedback are you hoping to get from nightly? Do we have the
telemetry or feedback channels in place to measure what you need to measure?

--BDS




___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Intent to (finish) implementing: Resource Timing API

2014-04-30 Thread Jason Duell

We have landed (so far pref'd off) most of the Resource Timing API:

  http://www.w3.org/TR/resource-timing/
  https://bugzilla.mozilla.org/show_bug.cgi?id=822480

We've opened a meta-bug for the followups that are needed for the full API:

  https://bugzilla.mozilla.org/show_bug.cgi?id=1002855

It's not clear to those of us toiling in the necko trenches whether the
full API actually needs to be finished before we can pref it on (I hear rumors
the code we've got so far might help devtools people with timings that they need).


Have thoughts?  Discuss in the meta-bug.

Jason
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: We live in a memory-constrained world

2014-02-21 Thread Jason Duell

On 02/21/2014 01:38 PM, Nicholas Nethercote wrote:

Greetings,

We now live in a memory-constrained world. By we, I mean anyone
working on Mozilla platform code. When desktop Firefox was our only
product, this wasn't especially true -- bad leaks and the like were a
problem, sure, but ordinary usage wasn't much of an issue. But now
with Firefox on Android and particularly Firefox OS, it is most
definitely true.

In particular, work is currently underway to get Firefox OS working on
devices that only have 128 MiB of RAM. The codename for these devices
is Tarako (https://wiki.mozilla.org/FirefoxOS/Tarako). In case it's
not obvious, the memory situation on these devices is *tight*.

Optimizations that wouldn't have been worthwhile in the desktop-only
days are now worthwhile. For example, an optimization that saves 100
KiB of memory per process is pretty worthwhile for Firefox OS.


Thanks for the heads-up.  Time to throw

   https://bugzilla.mozilla.org/show_bug.cgi?id=807359

back on the barbie... (Feel free to give it a new memshrink priority as 
you see fit, Nicholas)


Jason

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: New necko cache?

2014-02-19 Thread Jason Duell


On Tue, Feb 18, 2014 at 7:56 PM, Neil n...@parkwaycc.co.uk wrote:


Where can I find documentation for the new necko cache? So far
I've only turned up some draft planning documents. In particular,
I understand that there is a preference to toggle the cache. What
does application code have to do in order to work with whichever
cache has been enabled?



Application code generally doesn't have to do anything to work with the 
new cache, unless it's a direct consumer of the cache API (we changed a 
few APIs from sync to async IIRC).


Mostly we just have the new code, but as you point out there are some 
design docs and blogs posts and meta-bugs tracking the work:


http://www.janbambas.cz/new-firefox-http-cache-backend-implementation/

   https://wiki.mozilla.org/Necko/Cache/Plans

   https://bugzilla.mozilla.org/show_bug.cgi?id=913806

Is there something specific you're wondering about?

Jason
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Mozilla style guide issues, from a JS point of view

2014-01-07 Thread Jason Duell

On 01/06/2014 06:35 PM, Joshua Cranmer  wrote:


Side-by-side diffs are one use case I can think of. Another is that some
people prefer to develop by keeping tiled copies of windows; wider lines
reduce the number of tiled windows that one can see.


Yes--if we jump to 80 chars per line, I won't be able to keep two 
columns open in my editor (vim, but emacs would be the same) on my 
laptop, which would suck.


(Yes, my vision is not what it used to be--I'm using 10 point font. But 
that's not so huge.)


Jason



People who use

terminal-based editors for their coding are probably going to be rather
likely to use the default window size for terminals: 80x24. Given that
our style guide also requires adding vim and emacs modelines to files
(aside: if we're talking about doing mass style-conversions, can we also
fix modelines?), it seems reasonable that enough of our developers use
vim and emacs to make force-resizing of terminal size defaults a
noticeable cost.

With 2-space indent, parsimonious indenting requirements (e.g., don't
indent case: statements in switch or public in class), and liberal use
of importing names into localized namespaces, the 80-column width isn't
a big deal for most code.

I don't think most JS hackers care for abuse of Hungarian notation for
scope-based (or const) naming.  Every member/argument having a capital
letter in it surely makes typing slower.  And extra noise in every
name but locals seems worse for new-contributor readability.
Personally this doesn't bother me much (although aCx will always be
painful compared to cx as two no-cap letters, I'm sure), but others
are much more bothered.


And a '_' at the end of member names requires less typing than 'm' +
capital letter?

My choice of when to use or not use Hungarian notation is often messy
and seemingly inconsistent, although there is some method to my madness.
I find prefixing member variables with 'm' to be useful, although I
dislike using it in POD-ish structs where all the members are public.
The use of 'a' for arguments is where I am least consistent, especially
as I extremely dislike it being used for an outparam return value
(disclaimer: I'm used to XPCOM-taminated code, so having to write
NS_IMETHODIMP nsFoo::GetBoolValue(bool *retval) is common for me, and
this colors my judgements a lot). I've never found much use for the 's',
'g', and 'k' prefixes, although that may just as well be because I've
never found much use for using those types of variables in the first
place (or when I do, it's because I'm being dictated by other concerns
instead, e.g., type traits-like coding or C++11 polyfilling).



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


nsITimers are officially now safe to use on threads besides the main thread

2013-04-05 Thread Jason Duell
Not that this had been stopping us from using them off-main thread anyway :) 

nsITimer.idl now clarifies thread usage.  For gory details:

  https://bugzilla.mozilla.org/show_bug.cgi?id=792920
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: LOAD_ANONYMOUS + LOAD_NOCOOKIES

2013-02-28 Thread Jason Duell

On 02/28/2013 07:29 PM, bernhardr...@gmail.com wrote:

just to keep this thread up to date. I asked jduell if it is possible to change 
long to int64_t


We're going to upgrade to 64 bits in

   https://bugzilla.mozilla.org/show_bug.cgi?id=846629

Jason

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Using anonymous namespace vs 'static'

2012-11-03 Thread Jason Duell

On 11/03/2012 02:20 PM, Justin Lebar wrote:

Are there compiler annotations we could put on a class so that it
behaves as though it's in an anonymous namespace but also plays nicely
with the debugger?  If so, maybe we should switch to those.


Sure: 'static' :)

Jason
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Using anonymous namespace vs 'static'

2012-11-02 Thread Jason Duell
I see an increasing number of patches using anonymous namespaces instead 
of 'static'.   This is debugger unfriendly:  setting a breakpoint in gdb 
for 'foo' in an anonymous namespace requires the following syntax:


  b (anonymous namespace)::foo

(If there's a less verbose way of doing this, please let me know.)
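
For concreteness, the two spellings and the breakpoint each one costs you (how the
verbose name needs quoting may vary by gdb version):

  // Both give foo internal linkage; the difference is what you type in gdb.
  static void foo_static() {}        // gdb:  b foo_static

  namespace {
  void foo_anon() {}                 // gdb:  b (anonymous namespace)::foo_anon
  }                                  // (often needs quoting around the name)

  int main() {
    foo_static();
    foo_anon();
    return 0;
  }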

Do we have some good reason for preferring anonymous namespaces? Yes, I 
understand that there are certain cases (making types anonymous; certain 
template arguments) where 'static' is less powerful:


   http://www.comeaucomputing.com/techtalk/#nostatic

But on balance the debugger inconvenience doesn't make them worth it for 
the common case IMO.


Thoughts?

Jason

--
The good news is that in 1995 we will have a good operating system and
 programming language; the bad news is that they will be Unix and C++.
 - Richard P. Gabriel, Worse is Better
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal for reorganizing test directories

2012-10-25 Thread Jason Duell
 Why have those tests be placed into this location and not beside the 
actual implementation code?


Necko has had a general policy of not allowing mochitests within the 
/netwerk tree, so any cookie, etc tests that need browser functionality 
(i.e. more than xpcshell) live elsewhere.  It's really mostly a nervous 
tic at this point, left over from when we had the priority of making our 
network libraries a browser-independent module that other projects could 
take up (few have, and we officially Don't Care® as of a few years ago).


Jason
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Increase in mozilla-inbound bustage due to people not using Try

2012-08-16 Thread Jason Duell

On 08/16/2012 12:03 AM, Nicholas Nethercote wrote:

On Wed, Aug 15, 2012 at 11:41 PM, Mike Hommey m...@glandium.org wrote:

A few months back, John Ford wrote a standalone win32 executable
that used the proper APIs to delete an entire directory. I think he
said that it deleted the object directory 5-10x faster or something.
No clue what happened with that.

I wish this were true, but I seriously doubt it. I can buy that it's
faster, but not 5-10 times so.

http://blog.johnford.org/writting-a-native-rm-program-for-windows/
says that it deleted a mozilla-central clone 3x faster.


And renaming the directory (then deleting it in parallel with the build, 
or later) ought to be some power of ten faster than that, at least from 
the build-time perspective. At least if you don't do anything expensive 
like our nsIFile NTFS renaming goopage (that traverses the directory 
tree making sure NTFS ACLs are preserved for all files). Which most
versions of 'rm' aren't going to do, I'd guess.
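
The idea is cheap to sketch with standard C++ (std::filesystem here just to show the
shape; the build machines would use whatever the harness provides, and the paths and
names below are made up):

  // Sketch: rename the objdir out of the way so the build can start right
  // away, and remove the old tree in the background. Illustration only.
  #include <filesystem>
  #include <system_error>
  #include <thread>

  namespace fs = std::filesystem;

  void RecycleObjdir(const fs::path& objdir) {
    std::error_code ec;
    if (!fs::exists(objdir, ec)) {
      return;
    }
    fs::path graveyard = objdir;
    graveyard += ".old";                 // rename within the volume is cheap
    fs::rename(objdir, graveyard, ec);
    if (ec) {
      return;                            // bail out; caller can delete in place instead
    }
    // The expensive recursive delete happens while the build runs.
    std::thread([graveyard] {
      std::error_code ignored;
      fs::remove_all(graveyard, ignored);
    }).detach();
  }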


Jason
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Increase in mozilla-inbound bustage due to people not using Try

2012-08-16 Thread Jason Duell

On 08/16/2012 06:23 AM, Aryeh Gregor wrote:

On Thu, Aug 16, 2012 at 4:18 PM, Ben Hearsum bhear...@mozilla.com wrote:

I don't think this would be any more than a one-time win until the disk
fills up. At the start of each job we ensure there's enough space to do
> the current job. By moving the objdir away we'd avoid doing any clean
up until we need more space than is available. After that, each job
would still end up cleaning up roughly one objdir to clean up enough
space to run.

Why can't you move it, then spawn a background thread to remove it at
minimum priority?  IIUC, Vista and later support I/O prioritization,


Brian Bondy just added I/O prioritization to our code that removes 
corrupt HTTP caches, in bug 773518, in case that code helps.


Jason


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform