Looking for examples of bad IPC latency

2020-04-27 Thread Adam Roach

Hey, folks --

I've been looking at the amount of time IPC messages spend in flight. So 
far, I've been using general web pages (e.g., modern news sites like 
CNN) to generate the profile information that I'm analyzing.


Given that several people I spoke to in Berlin believed that IPC latency 
was a major concern, I'm interested in finding out whether any of you 
have specific use cases that you know or believe are hampered by IPC 
performance, to make sure I look at them in particular. If you know of 
any such cases, please let me know (either via email, or by pinging me 
on the Matrix server -- I'm abr:mozilla.org).


Thanks!

/a



Re: Intent to ship: Autodiscovery of WebExtension search engines

2020-02-19 Thread Adam Roach

On 2/14/2020 5:05 PM, Daniel Veditz wrote:

On Fri, Feb 14, 2020 at 11:50 AM Dale Harvey  wrote:


We’re proposing a new mime-type [...]: “x-xpinstall” for WebExtension
search engines. Example: 

This is confusingly similar to "application/x-xpinstall" which we use to
trigger extension installs from link clicks. Since standard media-type
syntax is "/" some authors will tend to fill in the
"missing" bit and get it wrong, and others will complain that the syntax is
non-standard and broken.

Does this code enforce that the .xpi we download and attempt to install is
actually a search type and not an arbitrary WebExtension? If any extension
type will work then re-using the full application/x-xpinstall is
appropriate, but that sounds like it would go against user expectation and
might trick users into doing something dangerous. "This page would like to
install 'Steal all your data from every page search engine'. OK?" If the
code does enforce only search type add-ons will it be confusing to use the
generic media-type? Or maybe it's OK anyway, since rel="search" is required
and can be taken as requiring that subset.

If you _do_ invent a new one shared with other browser vendors, please
don't use an "x-" prefix in anything new.
https://tools.ietf.org/html/rfc6648 [2012] (hey -- our very own St. Peter)



I had a response composed, and then realized that Dan had covered most 
of what I wanted to say. The only additional point I would like to make 
is: unless you're re-using a media type already in use (e.g., 
application/x-xpinstall), or planning to run this through a standards 
process first, this should look something like 
"application/vnd.mozilla.webextension." See 
 for 
details.
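
For concreteness, a page advertising an engine under such a vendor-tree
name might carry markup along these lines (a sketch built via the DOM for
illustration; the type string follows the naming suggested above and is an
assumption, not a registered type, and the URL is made up):

    // Hypothetical autodiscovery link for a WebExtension search engine.
    const link = document.createElement("link");
    link.rel = "search";
    link.type = "application/vnd.mozilla.webextension"; // assumed name
    link.title = "Example Search";
    link.href = "https://search.example.com/engine.xpi"; // hypothetical URL
    document.head.appendChild(link);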


/a



Re: Proposed W3C Charter: Timed Text (TT) Working Group

2019-08-29 Thread Adam Roach
Most of the deltas range from editorial to general good hygiene. The 
only changes of any real consequence that I see are:


 * Updating their previous work to new versions
 * Charter item to work on a profile of TTML2 to support audio-only use
   cases
 * Catch-all clause at the bottom of §2.1 that grants the WG carte
   blanche to work on any random thing they want

Having little background in this technology, I'm pretty ambivalent about 
the first two changes. I think we should object to the third change: 
charters serve both to guide work and to limit scope, and this clause
removes all scope limitations.


/a

On 8/28/19 5:41 PM, L. David Baron wrote:

The W3C is proposing a revised charter for:

   Timed Text (TT) Working Group
   https://www.w3.org/2019/08/ttwg-proposed-charter.html
   https://lists.w3.org/Archives/Public/public-new-work/2019Aug/0004.html

The comparison to the group's previous charter is:
   
https://services.w3.org/htmldiff?doc1=https%3A%2F%2Fwww.w3.org%2F2018%2F05%2Ftimed-text-charter.html&doc2=https%3A%2F%2Fwww.w3.org%2F2019%2F08%2Fttwg-proposed-charter.html

Mozilla has the opportunity to send comments or objections through
Tuesday, September 10.

Please reply to this thread if you think there's something we should
say as part of this charter review, or if you think we should
support or oppose it.

-David





Re: Must we rebuild all our rust code constantly?

2019-08-19 Thread Adam Gashlin
Is this https://bugzilla.mozilla.org/show_bug.cgi?id=1427313 ?

On Mon, Aug 19, 2019 at 5:27 PM Kris Maglione  wrote:

> This is apparently a known bug that no-one seems to be able to
> track down the cause of. It suddenly started happening to me one
> night for every build, even if I changed nothing. Then, just as
> suddenly, stopped happening after a couple of hours.
>
> On Mon, Aug 19, 2019 at 05:11:19PM -0700, Dave Townsend wrote:
> >Thanks to a tip I've tracked this down. This seems to only be the case
> >when I have sccache enabled. Disabling it gives me nice quick incremental
> >builds again. Of course that isn't an ideal solution but it will do for
> >now.
> >
> >On Mon, Aug 19, 2019 at 1:55 PM Dave Townsend  wrote:
> >
> >> For a couple of weeks now I've seen that any attempt to build Firefox,
> >> even incremental builds, seems to rebuild an awful lot of Rust code. I
> >> found this in the source which seems to suggest why:
> >> https://searchfox.org/mozilla-central/source/config/makefiles/rust.mk#238.
> >> But, this means that now an incremental build with a couple of code
> >> changes that have no impact on Rust is taking upwards of 4 minutes to
> >> complete, in comparison to around 40 seconds, and the log file is full
> >> of cargo output. I've heard similar comments from other developers.
> >>
> >> This is a pretty big increase in the time to compile and test and is
> >> really slowing down my work. Is there any way we can avoid this?


Re: Intent to implement and ship: HTMLMediaElement.allowedToPlay

2018-07-31 Thread Adam Gashlin
I don't think that Netflix would accept allowedToPlay == false for media
which is the whole point of visiting a page. They probably wouldn't even
check it then, instead allowing for the possibility that the user will be
prompted or that it just won't autoplay if they expressed that preference.
This feature doesn't help in such cases, but I don't see how it hurts. I
guess there's a possibility for overzealous postering.
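
For what it's worth, the page-side pattern under discussion would look
roughly like this (a sketch assuming the proposed attribute; the cast is
needed because allowedToPlay was a proposal rather than a standard DOM
member, and the asset paths are made up):

    const video = document.querySelector("video") as
      | (HTMLVideoElement & { allowedToPlay?: boolean })
      | null;

    if (video) {
      if (video.allowedToPlay === false) {
        // Autoplay would be blocked: skip the media download entirely.
        video.poster = "/poster.jpg";
        video.preload = "none";
      } else {
        video.src = "/clip.webm";
        video.play().catch(() => {
          video.poster = "/poster.jpg"; // blocked after all; fall back
        });
      }
    }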

On Mon, Jul 30, 2018 at 2:04 PM, Jan-Ivar Bruaroey  wrote:

> On 7/29/18 10:39 PM, Chris Pearce wrote:
>
>> Summary: HTMLMediaElement.allowedToPlay allows web authors to determine
>> in advance of calling HTMLMediaElement.play() whether the HTMLMediaElement
>> in its current state would be allowed to play, or would be blocked by the
>> browser's autoplay blocking policies.
>>
>> This is useful to web authors as if they can't autoplay they may prefer
>> to download a poster image instead of paying the price of downloading media
>> data.
>>
>> This feature is particularly useful for Firefox, as web authors can poll
>> HTMLMediaElement.allowedToPlay to determine whether a permission prompt
>> would show if they were to call play().
>>
>
> Doesn't this amputate the user flow we just implemented?
>
> Without this attribute, Netflix queues up rich media, Firefox asks the
> user if they want autoplay, who answers "Duh, it's Netflix", and levels up
> to auto-playing Netflix forever.
>
> With this attribute, Netflix sees HTMLMediaElement.allowedToPlay == false,
> and downloads a poster image instead. User must click to get engaging
> media, which now takes longer to load, and they never level up to
> auto-playing Netflix?
>
> Doesn't seem like an improvement. Am I missing something?
>
> .: Jan-Ivar :.
>


Re: open socket and read file inside Webrtc

2018-07-04 Thread Adam Roach

On 7/4/18 7:24 AM, amantell...@gmail.com wrote:

Hi,
I'm very new with firefox (as developer, of course).
I need to open a file and tcp sockets inside webrtc.
I read the following link
https://wiki.mozilla.org/Security/Sandbox#File_System_Restrictions
there is the sandbox that does not permit to open sockets or file descriptors.
could you give me the way how I can solve these my problems?
Thank you very much


For files, you want to use the File API. See 
https://developer.mozilla.org/en-US/docs/Web/API/File/Using_files_from_web_applications 
for a good introduction.
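
A minimal sketch of that approach from content JavaScript (the element id
is made up):

    // Read a user-selected file entirely through the File API -- no file
    // descriptors or sandbox exceptions involved.
    const input = document.querySelector<HTMLInputElement>("#file-picker");
    if (input) {
      input.addEventListener("change", async () => {
        const file = input.files?.[0];
        if (!file) {
          return;
        }
        const text = await file.text(); // or file.arrayBuffer() for binary
        console.log(`${file.name}: ${text.length} characters`);
      });
    }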


/a


Re: Rust crate approval

2018-06-28 Thread Adam Gashlin
On Thu, Jun 28, 2018 at 4:42 PM, Nathan Froyd  wrote:

> Thanks for raising these points.
>

Thanks for the response!

On Tue, Jun 26, 2018 at 10:02 PM, Adam Gashlin  wrote:
> > * Already vendored crates
> > Can I assume any crates we have already in mozilla-central are ok to use?
> > Last year there was a thread that mentioned making a list of "sanctioned"
> > crates, did that ever come about?
>
> I don't recall the discussion on sanctioned crates, do you have a
> pointer to that thread?
>

It turns out what I was thinking of was just a brief suggestion here:
https://groups.google.com/d/msg/mozilla.dev.platform/a_vhnvoM3co/yxwzqOkUBgAJ



> Regardless, anything that's already vendored should be OK
>

That's generally been my assumption, but a number of things have been
vendored that may not have been reviewed for possible inclusion in shipping
code. I was wondering what winapi was doing in the tree (as it is essential
for the Windows-specific stuff I'm doing); it seems to have been pulled in
for geckodriver.

I'll use judgment, as recommended :)

-Adam


Rust crate approval

2018-06-26 Thread Adam Gashlin
I'm in the process of writing my first Rust for Firefox, a standalone
Windows service to be used for background updates. I've found a few good
documents on how to handle the build technically, but I'm unclear on what
process we use to review external crates. If there are general guidelines
for external libraries in any language I'd appreciate pointers to that as
well.

More specifically:

* Already vendored crates
Can I assume any crates we have already in mozilla-central are ok to use?
Last year there was a thread that mentioned making a list of "sanctioned"
crates, did that ever come about?

* Updates
I need winapi 0.3.5 for BITS support, currently third_party/rust/winapi is
0.3.4. There should be no problem updating it, but should I have this
reviewed by the folks who originally vendored it into mozilla-central?

* New crates
I'd like to use the windows-service crate, which seems well written and has
few dependencies, but the first 0.1.0 release was just a few weeks ago. I'd
like to have that reviewed at least as carefully as my own code,
particularly given how much unsafety there is, but where do I draw the
line? For instance, it depends on "widestring", which is small and has been
around for a while but isn't widely used, should I have that reviewed
internally as well? Is popularity a reasonable measure?

Thanks!
-Adam Gashlin


Re: Removing tinderbox-builds from archive.mozilla.org

2018-05-09 Thread Adam Roach

On 5/9/18 12:11 PM, L. David Baron wrote:

It's useful for tracking down regressions no matter how old the
regression is; I pretty regularly see mozregression finding useful
data on bugs that regressed multiple years ago.


I want to agree with David -- I recall one incident in particular where 
I used mozregression to track a problem down to a three-year-old change 
that was only exposed when we flipped the big "everyone gets e10s now" 
switch. I would have been pretty lost figuring out the root cause 
without older builds.


/a


Re: New Policy: Marking Bugzilla bugs for features riding behind a pref

2018-05-03 Thread Adam Roach

On 5/3/18 12:18 PM, Nicholas Alexander wrote:

Not all features are feasible to ship behind feature flags.


I'm pretty sure the proposed policy isn't intended to change anything 
regarding features that ship without associated feature flags, nor is it 
trying to get more features to ship behind flags than currently do. It's 
just trying to rationalize a single, more managable process for those 
that *do* ship behind flags.


/a


Re: Proposed W3C Charter: JSON-LD Working Group

2018-04-27 Thread Adam Roach

On 4/27/18 2:02 PM, L. David Baron wrote:

On Friday 2018-04-27 10:07 +0300, Henri Sivonen wrote:

For this reason, I think we should resist introducing dependencies on
JSON-LD in formats and APIs that are relevant to the Web Platform. I
think it follows that we should not support this charter. I expect
this charter to pass in any case, so I'm not sure us saying something
changes anything, but it might still be worth a try to register
displeasure about the prospect of JSON-LD coming into contact with
stuff that Web engines or people who make Web apps or sites need to
deal with and to register displeasure with designing formats whose
full processing model differs from how the format is evangelized to
developers (previously: serving XHTML as text/html while pretending to
get benefits of the XML processing model that way).

Yeah, I'm not quite sure how to register such displeasure.  In
particular, I think it's probably poor form to object to maintenance
work on a base specification, even if we're opposed to that
specification's use elsewhere.  At least, assuming we don't want to
make the argument that the energy being spent on that maintenance
shouldn't be.

I'm inclined to leave this one alone, unless somebody else comes up
with a better position we could take.


With the caveat that I have very limited knowledge about JSON-LD and am 
basing this mostly on the preceding exchange:


If there's a set of behaviors defined by the 1.0 spec, and a different 
set of behaviors implemented, deployed, and evangelized, I think it 
would be reasonable to object (on that basis) to a charter that does not 
explicitly include work items to bring the spec into line with reality.


/a


Re: Firefox build issues with Rust and the new VS2017 15.5 update

2017-12-06 Thread Adam Gashlin
I encountered an issue building with the latest VS update: warnings treated
as errors regarding TR1 deprecation, in at least some gtest files. This can
be avoided by running

CXXFLAGS=-D_SILENCE_TR1_NAMESPACE_DEPRECATION_WARNING ./mach build

though I imagine there are better ways of adding in that define.

On Tue, Dec 5, 2017 at 11:14 AM, Ryan VanderMeulen <
rvandermeu...@mozilla.com> wrote:

> As a follow-up, it looks like updating to a newer LLVM version fixes the
> problem. That update is being tracked in
> https://bugzilla.mozilla.org/show_bug.cgi?id=1423307.
>
> For anybody already hitting this bustage locally, you can try updating
> your clang toolchain under ~/.mozbuild/clang to the one below until the
> in-tree changes are landed:
> https://queue.taskcluster.net/v1/task/Q7sN0gfPSE-OAEV5vuGtEA/runs/0/artifacts/public/build/clang.tar.bz2
>
> -Ryan
>
> On Tue, Dec 5, 2017 at 11:16 AM, Ryan VanderMeulen <
> rvandermeu...@mozilla.com> wrote:
>
>> FYI, the VC++ 2017 v14.12 toolset included in the recently-released
>> VS2017 15.5 update appears to have broken building Firefox due to issues
>> with the Rust compiler (in particular, the version of libclang we ship with
>> it) and one of the system headers:
>>
>> C:\PROGRA~2\MIB055~1\2017\COMMUN~1\VC\Tools\MSVC\1412~1.258\include\type_traits:898:47:
>> error: '_Ty' does not refer to a value
>>
>> Which in turns leads to a Rust panic and build failure.
>>
>> The Visual Studio installer allows you to install the prior v14.11
>> toolset as well, but I haven't verified yet that our build system will
>> properly use it if it's there. In the mean time, I'd strongly advise
>> avoiding this update until it's sorted out.
>>
>> -Ryan
>>
>
>


Re: Intent to restrict to secure contexts: navigator.geolocation

2016-10-24 Thread Adam Roach
I'm hearing general agreement that we think turning this off is the 
right thing to do; that maintaining compatibility with Chrome's behavior 
is important (since that's what existing code will presumably be tested 
against); and -- as bz points out -- we don't want to throw an exception 
here for spec compliance purposes. I propose that we move forward with a 
plan to immediately deny permission in non-secure contexts. Kan-Ru's 
proposal that we put this behind a pref seems like a good one -- that 
way, if we discover that something unexpected happens in deployment, 
it's a very simple fix to go back to our current behavior.


I would be hesitant to over-analyze additional complications, such as 
https-everywhere or user education on this topic. We are, after all, 
simply coming into alignment with the rest of the web ecosystem here.
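
For reference, here is what a caller on a non-secure page would observe
under that plan (a sketch, mirroring the Chrome/Safari behavior described
below: the error callback fires as if the user had denied permission):

    navigator.geolocation.getCurrentPosition(
      (pos) => {
        console.log(`lat ${pos.coords.latitude}, lon ${pos.coords.longitude}`);
      },
      (err) => {
        // On http:// pages, permission is denied immediately, with the
        // same error code as an explicit user denial.
        if (err.code === err.PERMISSION_DENIED) {
          console.warn("geolocation unavailable:", err.message);
        }
      }
    );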


/a

On 10/22/16 12:05, Ehsan Akhgari wrote:

On 2016-10-22 10:16 AM, Boris Zbarsky wrote:

On 10/22/16 9:38 AM, Richard Barnes wrote:

I'm not picky about how exactly we turn this off, as long as the
functionality goes away.  Chrome and Safari both immediately call the
error handler with the same error as if the user had denied permission.
We could do that too, it would just be a little more code.

Uh...  What does the spec say to do?

It seems like the geolocation spec just says the failure callback needs
to be called when permission is defined, with the PERMISSION_DENIED
code, but doesn't mention anything about non-secure contexts.  The
permissions spec explicitly says that geolocation *is* allowed in
non-secure contexts <https://w3c.github.io/permissions/#geolocation>.
The most relevant thing I can find is
<https://w3c.github.io/webappsec-secure-contexts/#legacy-example>, which
is an implementation consideration.  But as far as I can tell, this is
not spec'ed.


Your intent, and the whole "sites that would break are already broken"
thing sounded like we were going to match Chrome and Safari behavior; if
that was not the plan you really needed to explicitly say so!

Yes, indeed.  It seems that making Navigator.geolocation [SecureContext]
is incompatible with their implementation.


We certainly should not be shipping anything that will change behavior
here to something _different_ from what Chrome and Safari are shipping,
assuming they are shipping compatible things.  Again, what does the spec
say to do?

-Boris



--
Adam Roach
Principal Engineer, Mozilla


Re: Action Script 4

2016-08-08 Thread Adam Roach

On 8/7/16 12:45, Jonathan Moore wrote:

I was wondering about how one would go about integrating ActionScript
into Gecko.



I'd start by looking at Shumway <http://mozilla.github.io/shumway/> -- I 
believe it only does ActionScript 1, 2, and 3, and that the support is 
only partial, but it's probably easier than trying to start something 
from scratch. Note that Shumway is no longer under active development.


I'd also encourage you to back up to first principles and ask why you're 
not just targeting HTML5 directly. My understanding is that Adobe Flash 
Professional can target JS just as easily as it can ActionScript: 
http://tv.adobe.com/watch/adobe-technology-sneaks-2012/export-to-html5-from-flash-professional/


--
Adam Roach
Principal Platform Engineer
Office of the CTO


Group Photo

2016-06-15 Thread Adam Roach

Here is the group photo from this morning's session:

https://www.flickr.com/photos/9361819@N04/27686864545/in/album-72157669699014665/

https://www.flickr.com/photos/9361819@N04/27686865615/in/album-72157669699014665/


--
Adam Roach
Principal Platform Engineer
Office of the CTO


Basic Auth Prevalence (was Re: Intent to ship: Treat cookies set over non-secure HTTP as session cookies)

2016-06-10 Thread Adam Roach

On 4/18/16 09:59, Richard Barnes wrote:

Could we just disable HTTP auth for connections not protected with TLS?  At
least Basic auth is manifestly insecure over an insecure transport.  I
don't have any usage statistics, but I suspect it's pretty low compared to
form-based auth.


As a follow up from this: we added telemetry to answer the exact 
question about how prevalent Basic auth over non-TLS connections was. 
Now that 49 is off Nightly, I pulled the stats for our new little counter.


It would appear telemetry was enabled for approximately 109M page 
loads[1], of which approximately 8.7M[2] used HTTP auth -- or 
approximately 8% of all pages. (This is much higher than I expected -- 
approximately 1 out of 12 page loads uses HTTP auth? It seems far less 
dead than we anticipated).


749k of those were unencrypted basic auth[2]; this constitutes 
approximately 0.7% of all recorded traffic.


I'll look at the 49 Aurora stats when it has enough data -- it'll be 
interesting to see whether it's nontrivially different.


/a


[1] 
https://telemetry.mozilla.org/new-pipeline/dist.html#!cumulative=0_date=2016-06-06=__none__!__none__!__none___channel_version=nightly%252F49=HTTP_PAGELOAD_IS_SSL_channel_version=null=Firefox=1_keys=submissions_date=2016-05-04=0=1_submission_date=0


[2] 
https://telemetry.mozilla.org/new-pipeline/dist.html#!cumulative=0_date=2016-06-06=__none__!__none__!__none___channel_version=nightly%252F49=HTTP_AUTH_TYPE_STATS_channel_version=null=Firefox=1_keys=submissions_date=2016-05-04=0=1_submission_date=0 




--
Adam Roach
Principal Platform Engineer
Office of the CTO


Re: FF49a1: Page load of jumping points doesn't work like it should in Wikipedia

2016-05-20 Thread Adam Roach
Ah, I think I spoke too quickly -- the jumping is caused by JavaScript, 
but not by JavaScript scrolling. It's certainly possible that JavaScript 
hiding of large elements would be treated as reflow events by this 
approach...
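
To illustrate the kind of script-driven layout change in question (a
contrived sketch; the element id and delay are made up):

    // Collapsing a large element above the reading position pulls the
    // following content up by the element's height -- which the user (and
    // possibly a scroll-anchoring heuristic) experiences as a jump.
    const banner = document.getElementById("huge-banner");
    setTimeout(() => {
      banner?.style.setProperty("display", "none");
    }, 1500); // roughly the 1-2 second delay described in the report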


/a

On 5/20/16 15:19, Adam Roach wrote:

There is one FAQ on that page, and I think it basically says the opposite.

/a

On 5/20/16 12:35, Kartikaya Gupta wrote:

Note that this might get fixed in chrome with their new "scroll
anchoring" feature -
https://developers.google.com/web/updates/2016/04/scroll-anchoring?hl=en

kats

On Fri, May 20, 2016 at 12:15 PM, Adam Roach<a...@mozilla.com>  wrote:

On 5/20/16 10:13, Gijs Kruitbosch wrote:

On 20/05/2016 16:11, Tobias B. Besemer wrote:

Plz open e.g. this URL:

https://en.wikipedia.org/wiki/Microsoft_Windows#Alternative_implementations

FF49a1 loads the page, jumps to "Alternative implementations", stays
there for 1-2 sec and then goes ~1 screen-high (page) down.

Can someone verify this bug?

The same thing happens in Chrome, so it seems like it's more likely to be
an issue with Wikipedia.

The fact that turning JavaScript off prevents this behavior would certainly
seem to support that supposition.

--
Adam Roach
Principal Platform Engineer
Office of the CTO




--
Adam Roach
Principal Platform Engineer
Office of the CTO



--
Adam Roach
Principal Platform Engineer
Office of the CTO


Re: FF49a1: Page load of jumping points doesn't work like it should in Wikipedia

2016-05-20 Thread Adam Roach

There is one FAQ on that page, and I think it basically says the opposite.

/a

On 5/20/16 12:35, Kartikaya Gupta wrote:

Note that this might get fixed in chrome with their new "scroll
anchoring" feature -
https://developers.google.com/web/updates/2016/04/scroll-anchoring?hl=en

kats

On Fri, May 20, 2016 at 12:15 PM, Adam Roach <a...@mozilla.com> wrote:

On 5/20/16 10:13, Gijs Kruitbosch wrote:

On 20/05/2016 16:11, Tobias B. Besemer wrote:

Plz open e.g. this URL:

https://en.wikipedia.org/wiki/Microsoft_Windows#Alternative_implementations

FF49a1 loads the page, jumps to "Alternative implementations", stays
there for 1-2 sec and then goes ~1 screen-high (page) down.

Can someone verify this bug?


The same thing happens in Chrome, so it seems like it's more likely to be
an issue with Wikipedia.


The fact that turning JavaScript off prevents this behavior would certainly
seem to support that supposition.

--
Adam Roach
Principal Platform Engineer
Office of the CTO




--
Adam Roach
Principal Platform Engineer
Office of the CTO


Re: FF49a1: Page load of jumping points doesn't work like it should in Wikipedia

2016-05-20 Thread Adam Roach

On 5/20/16 10:13, Gijs Kruitbosch wrote:

On 20/05/2016 16:11, Tobias B. Besemer wrote:

Plz open e.g. this URL:
https://en.wikipedia.org/wiki/Microsoft_Windows#Alternative_implementations 



FF49a1 loads the page, jumps to "Alternative implementations", stays 
there for 1-2 sec and then goes ~1 screen-high (page) down.


Can someone verify this bug?


The same thing happens in Chrome, so it seems like it's more likely to 
be an issue with Wikipedia. 


The fact that turning JavaScript off prevents this behavior would 
certainly seem to support that supposition.


--
Adam Roach
Principal Platform Engineer
Office of the CTO


Re: Intent to deprecate: MacOS 10.6-10.8 support

2016-05-13 Thread Adam Roach

On 5/13/16 14:26, Ben Hearsum wrote:
I intend to make sure that Beta/Release/ESR is configured in such a 
way that users get the most up to date release possible. Eg: serve 
10.6-10.8 users the latest 48.0 point release, then give them a 
deprecation notice. 


Presumably, the deprecation notice will mention ESR as a way to continue 
to get security updates for several more months?



--
Adam Roach
Principal Platform Engineer
Office of the CTO


Re: Intent to deprecate: MacOS 10.6-10.8 support

2016-05-03 Thread Adam Roach

On 5/3/16 4:59 PM, Justin Dolske wrote:

On 5/3/16 12:21 PM, Gregory Szorc wrote:

* The update server has been reconfigured to not serve Nightly updates to
10.6-10.8 (bug 1269811)


Are we going to be showing some kind of notice to affected users upon 
Release? That is, if I'm a 10.6 user and I update to Firefox 48, at 
some point should I see a message saying I'll no longer receive future 
updates?


Even better, is there any way to get the update system to automatically 
move such users over to 45ESR?


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863


Re: Intent to (sort of) unship SSLKEYLOGFILE logging

2016-04-26 Thread Adam Roach
I think we need to have reasonable answers to Patrick's questions before 
landing this patch. It's clear what we're losing, but unclear what we're 
gaining.


/a

On 4/26/16 08:30, Patrick McManus wrote:

I don't think the case for making this change (even to release builds) has
been successfully made yet and the ability to debug and iterate on the
quality of the application network stack is hurt by it.

The Key Log - in release builds - is part of the debugging strategy and is
used fairly commonly in the network stack diagnostics. The first line of
defense is dev tools, the second is NSPR logging, and the third is
wireshark with a key log because sometimes what is logged is not what is
really happening on the 'wire' (thus the need to troubleshoot).

Bug reporters are often not developers and sometimes do not have the option
of running (or willingness to run) other builds. Removing functionality that
helps with that is damaging to our strategic goal of building our Core and
emphasizing quality. Bug 1188657 suggests that this functionality is for
diagnosing tricky TLS bugs, but its just as helpful for diagnosing anything
using TLS which we of course hope to make be everything.

But of course if it represents a security hole then it is medicine that
needs to be swallowed - I wouldn't argue against that. That's why I say the
case hasn't been made yet.

The mechanism requires machine level control to enable - the same level of
control that can alter the firefox binary, or annotate the CA root key
store or any number of other well understood things. Daniel suggests that
Chrome will keep this functionality. The bug 1183318 handwaves around
social engineering attacks against this - but of course that's the same
vector for machine level control of those other attacks as well - I don't
see anything really improved by making this change, but our usability and
ability to iterate on quality are damaged. Maybe I'm mis understanding the
attack this change ameliorates?

Minimally we should be having this discussion about a change in
functionality for  Firefox 49 - not something that just moved up a
release-train channel.

Lastly, as a more strategic point I think reducing the tooling around HTTPS
serves to dis-incentivize HTTPS. Obviously, we don't want to do that.
Sometimes there are tradeoffs to be made, I'm skeptical of this one though.


On Tue, Apr 26, 2016 at 12:44 AM, Martin Thomson <m...@mozilla.com> wrote:


In NSS, we have landed bug 1183318 [1], which I expect will be part of
Firefox 48.

This disables the use of the SSLKEYLOGFILE environment variable in
optimized builds of NSS.  That means all released Firefox channels
won't have this feature as it rides the trains.

This feature is sometimes used to extract TLS keys for decrypting
Wireshark traces [2].  The landing of this bug means that it will no
longer be possible to log all your secret keys unless you have a debug
build.

This is a fairly specialized thing to want to do, and weighing
benefits against risks in this case is an exercise in comparing very
small numbers, which is hard.  I realize that this is very helpful for
a select few people, but we decided to take the safe option in the
absence of other information.

(I almost forgot to send this, but then [3] reminded me in a very
timely fashion.)

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1183318
[2]
https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/Key_Log_Format
[3]
https://lists.mozilla.org/pipermail/dev-platform/2016-April/014573.html



--
Adam Roach
Principal Platform Engineer
Office of the CTO


Proof-of-concept firefox branch

2016-04-25 Thread adam
I am a developer who has experimented with Firefox tweaking before, and I
would be interested in running a "proof-of-concept" branch or fork of
Firefox with the latest features added before any sort of review, to see
how they play out in a normal browser. I would like to have everybody's
thoughts on this before I release anything, and would like to know whether
Mozilla would ever consider endorsing it.

As a web developer I have some cutting-edge features in mind, especially
from the CSS4 drafts, that I would like to experiment with, and I would
also like to break the assumption that everybody should use Blink because
of cutting-edge features. None of these would be official features that
would definitely make it into mainstream Gecko and Firefox. I would love
to get opinions on this idea -- and possibly help, since I am nowhere near
the level of experience with Gecko and Firefox as a lot of other people on
this mailing list.


Re: Update on Webcomponents?

2016-04-25 Thread adam
I support this as a web designer.

25.04.2016, 15:10, "Philipp Kewisch" :
> Hi Anne,
>
> thanks for the update! I'm looking forward to seeing web components
> work, I think they will be very helpful for future web and html based
> application development.
>
> Philipp


Re: Firefox Hello new data collection

2016-04-05 Thread adam
I think this should be abandoned in favour of an optional survey for Hello users.

05.04.2016, 17:06, "Chris Hofmann" :
> On Mon, Apr 4, 2016 at 3:01 AM, Romain Testard  wrote:
>
>>  Firefox Hello has its own privacy notice (details here
>>  ).
>
> It's unclear to me, reading the follow-through link to the
> TokBox Privacy Policy -> https://tokbox.com/support/privacy-policy
>
> Does TokBox already have access to the contents of the messages and URLs
> that might have been shared?
>
> the tokbox policy says:
>
> The types of information collected include your name, e-mail address, and
> any other data you actively choose to provide.
>
> and leaves it vague about the definition of "other data you actively
> provide." Does that include shared URLs and message content?
>
> Thie passage in https://www.mozilla.org/en-US/privacy/firefox-hello/ also
> would lead me to believe that the contents of my communication with another
> user (including shared URLs) are encrypted (and would be private).
>
> We've just invested heavily in making this point and trying to make that
> association that encryption mean strong privacy and vice-versa.
> https://blog.mozilla.org/blog/2016/03/30/everyday-internet-users-can-stand-up-for-encryption-heres-how/
>
> How are we going to address the possible takeaway that some will have that
> we've just created a backdoor for parts (shared urls that are part of the
> message content) of the hello encrypted message channel if we turn this
> change on?


Re: Intent to deprecate: MacOS 10.6-10.8 support

2016-04-04 Thread adam
Doesn't Homebrew provide a version of g++ that supports C++11?

04.04.2016, 18:12, "Nathan Froyd" :
> Re-ping on this thread. It would be really useful to have a decision
> one way or the other for figuring out exactly how a C++11 STL on OS X
> is going to work.
>
> -Nathan
>
> On Thu, Mar 24, 2016 at 12:51 PM, Ralph Giles  wrote:
>>  Discussion seems to have wound down. Is there a decision on this?
>>
>>   -r


Re: Firefox Hello new data collection

2016-04-04 Thread adam
I don't know much about Mozilla's privacy practices, but in my opinion we
need to immediately remove it from Firefox and push a new beta build.

04.04.2016, 16:45, "Gijs Kruitbosch" :
> On 04/04/2016 11:01, Romain Testard wrote:
>>  The privacy review bug is
>>  https://bugzilla.mozilla.org/show_bug.cgi?id=1261467.
>>  More details added below.
>
> See response at the bottom.
>
>>  On Mon, Apr 4, 2016 at 11:23 AM, Gijs Kruitbosch 
>>  wrote:
>>>  On 04/04/2016 10:01, Romain Testard wrote:
>>>
   We would use a whitelist client-side to only collect domains that are
   part of the top 2000 domains (Alexa list of top domains). This prevents
   personal identification based on obscure domain usage.
>>>
>>>  Mathematically, the combination of a set of (popular) domains shared could
>>>  still be uniquely identifying, especially as, AIUI, you will get the counts
>>>  of each domain and in what sequence they were visited / which ones were
>>>  visited in which session. It all depends on the number of unique users and
>>>  the number of domains they visit / share (not clear: see above). Because
>>>  the total number of Hello users compared with the number of Firefox users
>>>  is quite low, this still seems somewhat concerning to me. Have you tried to
>>>  remedy this in any way?
>>
>>  We are aggregating domain names, and are not storing session histories.
>>  These are submitted at the end of the session, so exact timestamps of any
>>  visit are not included.
>
> But both Firefox and Hello sessions are commonly relatively short (<1d)
> and numerous. That means lots of data points, which will likely be
> enough to uniquely identify people even without exact timestamps of
> their visits. (FWIW, from a technical perspective, there is no reason
> why the submission time implies ("so") that exact timestamps of visits
> are not included.)
>
>>  We looked into this approach originally although we found that we'd lose a
>>  level of granularity that can have an importance. We may find that Hello
>>  gets used a lot with a specific Website for a specific reason and using
>>  client side categories would prevent us from learning this.
>
> This was explicitly not in your original motivation, so you're moving
> the goalposts here. If the goal is about separate categories or separate
> sites then those are pretty distinct goals that require different
> approaches. If the real point is "we have no idea, so we figured we'd
> just get the data and then go from there", why not be upfront about it?
> But in that case, yeah, why not consider a survey or something less
> intrusive, like asking people explicitly what type of site they were
> using, or asking if Mozilla can use the domain in question ?
>
>>  Also Alexa
>>  website categories are far from perfect which would add another level of
>>  complexity to understand the collected data.
>
> At no point did I say I expected you to use their categorization,
> whatever that is. Categorize as you see fit, rather than as Alexa does it.
>
> Conversely, if their categorization is questionable, then your scrubbing
> of the Adult category sounds like it might need auditing? Also, why not
> other categories like "Banking" or "Medical" (NB: no idea what
> categorization Alexa employs, but these seem like categories that ought
> to be scrubbed, too)?
>
>>>  6 months also seems incredibly long. You should be able to aggregate the
>>>  data and keep that ("60% of users share on sites of type X") and throw away
>>>  the raw data much sooner than that.
>>  Yes agreed, we'll look into what's the most optimal amount of time required
>>  to process the data and extract the useful information. I agree we should
>>  try to make this shorter - we'll learn from being on Beta and will adjust
>>  this accordingly.
>
> Well, why not make it 1 week to start with, and make it longer if you
> don't get enough information from beta (with a rationale as to why that
> is the case) ?
>
>>>  Finally, I am surprised that you're sharing this 2 weeks before we're
>>>  releasing Firefox 46. Hasn't this been tested and verified on Nightly
>>>  and/or other channels? Why was no privacy update made at/before that time?
>>
>>  We are shipping Hello through Go Faster. The Go Faster process allows us to
>>  uplift directly to Beta 46 directly since we're a system add-on
>>  (development was done about 2 weeks ago).
>>  Firefox Hello has its own privacy notice (details here
>>  ).
>
> But shipping through go faster does not absolve you from adequately
> testing changes and getting feedback on them. Is the add-on not getting
> tested on nightly at all? Or at the same time as it goes to beta? When
> will it be used on release - when 46 ships as release, or earlier, or later?
>
> It also seems like you filed the privacy review after the functionality
> was implemented and is now shipping, which 

Re: Firefox Hello new data collection

2016-04-04 Thread adam
I agree with chofmann in that a simple survey request when users open Hello 
would probably work, since Mozilla is trusted by a lot of people.

04.04.2016, 16:22, "Chris Hofmann" :
> It also seems like you haven't explored other alternatives to get the data
> you are after, have some theories around what results you might expect, and
> what possible out comes will be pursed once you get the data.
>
> Have you looked at other studies like this and many more that tell about
> general browsing habits?
> http://www.adweek.com/socialtimes/online-time/463670
>
> Have you looked at just doing a simple survey to ask people to tell you
> what kinds of activities they most use when sharing sites with hello?
>
> If the survey or data collection results tell you that some people play
> games against each other *and* some people shop together what will you do
> then?
>
> -chofmann
>
> On Mon, Apr 4, 2016 at 3:01 AM, Romain Testard  wrote:
>
>>  The privacy review bug is
>>  https://bugzilla.mozilla.org/show_bug.cgi?id=1261467.
>>  More details added below.
>>
>>  On Mon, Apr 4, 2016 at 11:23 AM, Gijs Kruitbosch wrote:
>>
>>  > Hi,
>>  >
>>  > It's very concerning to me that you have not answered the obvious
>>  > question: what domains are collected? All of the ones visited while the
>>  > browser is running? The ones visited while Hello is open? The ones
>>  visited
>>  > while shared through Hello? What about the ones that someone shared with
>>  > you through Hello, rather than that you shared with someone else?
>>  >
>>
>>  We only collect domains browsed whilst sharing your tabs on Firefox Hello
>>  (link generator side).
>>
>>  >
>>  > What about Private Browsing mode, have you disabled collection there?
>>
>>  Firefox Hello cannot be used with private browsing mode.
>>
>>  >
>>  >
>>  > On 04/04/2016 10:01, Romain Testard wrote:
>>  >
>>  >> We would use a whitelist client-side to only collect domains that are
>>  >> part of the top 2000 domains (Alexa list of top domains). This prevents
>>  >> personal identification based on obscure domain usage.
>>  >>
>>  >
>>  > Mathematically, the combination of a set of (popular) domains shared
>>  > could still be uniquely identifying, especially as, AIUI, you will get
>>  > the counts of each domain and in what sequence they were visited /
>>  > which ones were visited in which session. It all depends on the number
>>  > of unique users and the number of domains they visit / share (not
>>  > clear: see above). Because the total number of Hello users compared
>>  > with the number of Firefox users is quite low, this still seems
>>  > somewhat concerning to me. Have you tried to remedy this in any way?
>>  >
>>
>>  We are aggregating domain names, and are not storing session histories.
>>  These are submitted at the end of the session, so exact timestamps of any
>>  visit are not included.
>>
>>  The beginning of your message mentioned that you were interested in
>>  > different "types" of sites. I don't think it would be necessary to
>>  optimize
>>  > Hello for one shopping site over another, or for one search engine over
>>  > another, or for one news site over another. So, why don't you categorize
>>  > the domains in the whitelist according to broad categories ("news",
>>  > "search", "shopping", "games", or something like this) on the client
>>  side,
>>  > and then send that information instead? If the set of domains is limited
>>  > (which it is) then this should not take that long, and get you exactly
>>  the
>>  > information you want, and limit the privacy invasion that the current
>>  > collection scheme represents.
>>  >
>>  We looked into this approach originally although we found that we'd lose a
>>  level of granularity that can have an importance. We may find that Hello
>>  gets used a lot with a specific Website for a specific reason and using
>>  client side categories would prevent us from learning this. Also Alexa
>>  website categories are far from perfect which would add another level of
>>  complexity to understand the collected data.
>>
>>  > 6 months also seems incredibly long. You should be able to aggregate the
>>  > data and keep that ("60% of users share on sites of type X") and throw
>>  away
>>  > the raw data much sooner than that.
>>  >
>>  Yes agreed, we'll look into what's the most optimal amount of time required
>>  to process the data and extract the useful information. I agree we should
>>  try to make this shorter - we'll learn from being on Beta and will adjust
>>  this accordingly.
>>
>>  >
>>  > Finally, I am surprised that you're sharing this 2 weeks before we're
>>  > releasing Firefox 46. Hasn't this been tested and verified on Nightly
>>  > and/or other channels? Why was no privacy update made at/before that
>>  time?
>>  >
>>
>>  We are shipping Hello through Go Faster. The Go Faster process allows us to
>>  uplift 

Re: Firefox Hello new data collection

2016-04-04 Thread adam
This isn't technically about the data collection, but it would be better if
there were some sort of API that web developers could implement on sites
like games, so that instead of regular chat, things like co-op and game
events could be streamlined into Hello itself.

04.04.2016, 10:02, "Romain Testard" :
> Hi all,
>
> We wanted to let you know about new data collection that we will be doing
> for Firefox Hello starting with FF46 launch on April 19th, and the steps we
> took to prevent it from collecting personal identification. We want to
> collect more data about the websites that people share with Hello, to help
> optimize the product UX, understand what people use our new tab sharing
> feature for, and prioritize features accordingly. The product features and
> UX can be very different if we decide to optimize against “Shopping
> together” use cases as opposed to “Playing online games together”, just as
> examples.
>
> We did a lot of diligence for this and explored several options for getting
> the data. The approach described below is the one we settled on. It
> prevents personal identification and gets us the data we need to build the
> best tool we can while being sensitive to our users. This involves
> collecting the domain names for tabs shared on Firefox Hello on our own
> servers.
>
> How we collect the data
>
> We plan to put in place a data collection solution that prevents personal
> identification. The technical approach to doing this through the use of
> client-side whitelisting is outlined here:
>
>    - Data will go to our servers and will be stored with our other server
>      metrics. We are aggregating domain names, and are not storing session
>      histories. These are submitted at the end of the session, so exact
>      timestamps of any visit are not included.
>    - Users who have disabled Health Reports will also not submit this data.
>    - We would use a whitelist client-side to only collect domains that are
>      part of the top 2000 domains (Alexa list of top domains). This prevents
>      personal identification based on obscure domain usage. We would subtract
>      the sites from the Adult category and add all the subdomains of:
>
>      - google.com (e.g., drive.google.com)
>      - yahoo.com (e.g., games.yahoo.com)
>      - developer.mozilla.org, bugzilla.mozilla.org, wiki.mozilla.org (this
>        helps us understand how much our user base is Mozillians)
>      - tunes.apple.com
>
>    You can see the exact list here: DomainWhitelist.jsm
>
>    - The data will only be kept for 6 months and we plan to revisit this
>      collection in 6 months. We’ll evaluate at the end of this period if we
>      should carry on collecting the data (the data is still useful and will
>      help further shape the product) or just stop.
>
> This e-mail is intended to make everyone aware of the data we’re collecting
> in Hello in an effort to be as transparent as possible. We want make sure
> people get the full picture of what we are trying to achieve and what we’re
> putting in place to protect our users.
>
> Let me know if you have any questions.
>
> Implementation bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1211542
>
> Technical documentation:
> https://github.com/mozilla/loop/blob/master/docs/DataCollection.md
>
> -Romain
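
The client-side gating described above might look roughly like the
following (a sketch only -- the set contents and the naive base-domain
logic are illustrative, not the actual DomainWhitelist.jsm):

    // Report a shared tab's domain only if its base domain is whitelisted;
    // everything else is dropped on the client and never leaves the browser.
    const WHITELIST: Set<string> = new Set([
      "google.com",
      "yahoo.com",
      // ... the rest of the top-2000 list, minus the Adult category
    ]);

    function reportableDomain(url: string): string | null {
      const host = new URL(url).hostname;
      const base = host.split(".").slice(-2).join("."); // naive eTLD+1
      return WHITELIST.has(base) ? base : null;         // null => not collected
    }

    reportableDomain("https://drive.google.com/doc");  // "google.com"
    reportableDomain("https://obscure-blog.example/"); // null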


Re: Implement multiple names for properties in the css system

2016-04-04 Thread adam
Thank you. Maybe the documentation at
https://developer.mozilla.org/en-US/docs/Mozilla/Adding_a_new_style_property
should be updated to include a section about aliases.

04.04.2016, 01:11, "L. David Baron" :
> On Monday 2016-04-04 00:52 +0100, a...@imgland.xyz wrote:
>>  How exactly would I go about implementing multiple names for the same
>>  css property? e.g.:
>>  -moz-property for compatibility with older sites
>>  property as a standard css property doing the same thing as the
>>  prefixed counterpart
>
> by adding entries to:
> https://mxr.mozilla.org/mozilla-central/source/layout/style/nsCSSPropAliasList.h
>
> -David
>
> --
> L. David Baron                        http://dbaron.org/
> Mozilla                         https://www.mozilla.org/
>  Before I built a wall I'd ask to know
>  What I was walling in or walling out,
>  And to whom I was like to give offense.
>    - Robert Frost, Mending Wall (1914)


Implement multiple names for properties in the css system

2016-04-03 Thread adam
How exactly would I go about implementing multiple names for the same CSS
property? e.g.:
-moz-property for compatibility with older sites
property as a standard CSS property doing the same thing as the prefixed
counterpart


Re: Intent to deprecate: MacOS 10.6-10.8 support

2016-03-10 Thread Adam Roach

On 3/10/16 5:17 PM, Trevor Saunders wrote:

On Thu, Mar 10, 2016 at 04:01:15PM -0700, Tyler Downer wrote:

The other thing to note is many of those users can still update to 10.11,
and I imagine that over the next year that number will continue to go down.

Given that they haven't upgraded from 10.6-10.8, why do you believe they
are likely to in the future?


Or even can? As I point out in my other message, a lot of the Intel Mac 
hardware cannot go past 10.6.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863


Re: Moving FirefoxOS into Tier 3 support

2016-01-26 Thread Adam Farden
If we jump on the Marshmallow bandwagon we can drop stlport, Google already
did that too.

https://bugzilla.mozilla.org/show_bug.cgi?id=1213259
https://bugzilla.mozilla.org/show_bug.cgi?id=1218802 - specifically patch
[07]


On Mon, Jan 25, 2016 at 6:34 PM, Nathan Froyd  wrote:

> On Mon, Jan 25, 2016 at 12:30 PM, Ehsan Akhgari 
> wrote:
>
>> For example, for a long time b2g partners held back our minimum supported
>> gcc.  Now that there are no such partner requirements, perhaps we can
>> consider bumping up the minimum to gcc 4.8?  (bug 1175546)
>>
>> I'm sure others have similar examples to fill in.
>>
>
> One current example is b2g's reliance on stlport and changing the build to
> support a modern C++ library like libc++.
>
> -Nathan
>


Re: HTML mozbrowser frames on desktop Firefox?

2016-01-08 Thread Adam Roach
Regardless of technical feasibility, I believe we're discouraging new 
uses of XUL in Firefox.


/a

On 1/8/16 04:55, Tim Guan-tin Chien wrote:

What prevents you from using <xul:browser>? Is it because the parent
frame is (X)HTML?

I don't know what prevents browser-element from being enabled on desktop
though -- it's tests are running on desktop, and the actual feature is
hidden behind a permission so we won't expose it to the web content even if
we turn it on.
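
For concreteness, a chrome-privileged HTML page would use it roughly like
this (a sketch assuming the pref is flipped and the embedder has the
browser permission; the URL is illustrative):

    // Create an out-of-process browser frame from chrome-scoped HTML.
    // "mozbrowser" and "remote" are the Browser API frame attributes;
    // everything else is ordinary DOM.
    const frame = document.createElement("iframe");
    frame.setAttribute("mozbrowser", "true");
    frame.setAttribute("remote", "true"); // request a remote (e10s) frame
    frame.src = "https://example.com/";
    document.body.appendChild(frame);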


On Fri, Jan 8, 2016 at 3:31 PM, J. Ryan Stinnett <jry...@gmail.com> wrote:


(CCing dev-platform as suggested on IRC.)

On Thu, Jan 7, 2016 at 9:58 PM, J. Ryan Stinnett <jry...@gmail.com> wrote:

DevTools is working on rebuilding the responsive design UI in an HTML,
chrome-scoped page. This page will want to manage child frames to show
the page content, which could be remote frames. So, I would want to
use <iframe mozbrowser> for cases like these.

However, I noticed mozbrowser frames are currently preffed off
(dom.mozBrowserFramesEnabled) on desktop. Is there a reason for this?
Can it be turned on, or is there some kind of work still needed before
it is usable?

I assume we would eventually want to enable this anyway, so that HTML
frames can be used in the primary browser UI.

- Ryan




--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Adam Roach

On 1/4/16 1:00 PM, Adam Roach wrote:
One of the points that Benjamin Smedberg has been trying to drive home 
is that data collection is everyone's job.


After sending, I realized that this is a slight misquote. It should have 
been "data is everyone's job" (i.e.: there's more to data than collection).


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Adam Roach

On 1/4/16 12:29 PM, Daniel Holbert wrote:

I had a similar thought, but I think it's too late for such telemetry to
be effective. The vast majority of users who are affected will have
already stopped using Firefox, or will immediately do so, as soon as
they discover that their webmail, bank, google, facebook, etc. don't work.


That's a valid point for the first batch of users that is hit with the 
issue on day one. (Aside: I wonder what the preponderant behavior will 
be when Chrome also starts choking on those sites.) It'll be interesting 
to see whether there's a detectable decline in user count that 
correlates with the beginning of the year.


At the same time, I know that Google tends to measure quite a bit about 
Chrome's behavior. Lacking our own numbers, perhaps we reach out to them 
and ask if they're willing to share what they know.


In any case, people install new things all the time. While it is too 
late to catch the large wave of users who are running into the problem 
this week, it would be nice to have data about this problem on an 
ongoing basis.



(We could have used this sort of telemetry before Jan 1 if we'd forseen
this potential problem.  I don't blame us for not forseeing this, though.)


You're correct: given our current habits, it's understandable that no 
one thought to measure this. I think there's an object lesson to be 
learned here.


Mozilla has a clear and stated intention to be more data driven in how 
we do things. One of the points that Benjamin Smedberg has been trying 
to drive home is that data collection is everyone's job. In the same way 
that we would never land code without thinking about how to test it, we 
need to develop a mindset in which we don't land code without 
considering whether and how to measure it. It's not a perfect analogy, 
since many things won't need specific new metrics, but it should be part 
of the mental checklist: "did I think about whether we need to measure 
anything about this feature?"


If just asking that question were part of our culture, I'm certain we 
would have thought of landing exactly this kind of telemetry as part of the 
same patch that disabled SHA-1; or, even better, in advance of it.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Heads-up: SHA1 deprecation (for newly issued certs) causes trouble with local ssl-proxy mitm spyware

2016-01-04 Thread Adam Roach

On 1/4/16 2:19 AM, Daniel Holbert wrote:

I'm not sure what action we should (or can) take about this, but for
now we should be on the lookout for this, and perhaps consider writing a
support article about it if we haven't already.


I propose that we minimally should collect telemetry around this 
condition. It should be pretty easy to detect: look for cases where we 
reject very young SHA-1 certs that chain back to a CA we don't ship. 
Once we know the scope of the problem, we can make informed decisions 
about how urgent our subsequent actions should be.
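
For concreteness, the check I have in mind is something along these 
lines (a sketch in chrome-privileged JS; the histogram name and helper 
predicates are invented for illustration):

    // Hypothetical names throughout; nothing here exists in the tree yet.
    // (Assumes Services.jsm has been imported.)
    if (usesSha1Signature(cert) &&              // SHA-1 signature algorithm
        isIssuedAfter(cert, "2016-01-01") &&    // "very young" cert
        !chainsToBuiltInRoot(cert)) {           // root isn't one we ship
      Services.telemetry.getHistogramById("CERT_YOUNG_SHA1_NON_BUILTIN").add(1);
    }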


It would also be potentially useful to know the cert issuer in these 
cases, since that might allow us to make some guesses about whether the 
failures are caused by malware, well-intentioned but kludgy malware 
detectors, or enterprise gateways. Working out how to do that in a way 
that respects privacy and user agency may be tricky, so I'd propose we 
go for the simple count first.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to unship: ISO-2022-JP-2 support in the ISO-2022-JP decoder

2015-11-30 Thread Adam Roach

On 11/30/15 09:38, Henri Sivonen wrote:

The only known realistic source of ISO-2022-JP-2 data is Apple's Mail
application under some circumstances, which may impact Thunderbird and
SeaMonkey.


Does this mean it might interact with webmail services as well? Or do 
they tend to do server-side transcoding from the received encoding to 
something like UTF8?


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement and ship

2015-08-26 Thread Adam Roach

On 8/26/15 08:36, Ehsan Akhgari wrote:
Have you considered the implications of making the alias falsey in 
conditions, similar to document.all?


The issue with doing so is that we see code in the wild that looks like 
this:


    var NativeRTCPeerConnection = (window.webkitRTCPeerConnection ||
                                   window.mozRTCPeerConnection);

And a falsey value would simply make things not work.

For all the cases I can think of (at least, in short order), making the 
alias falsey breaks as many things as simply removing it.
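
To make that concrete, consider what the typical guard that follows 
such an assignment would do (function names are hypothetical):

    // With a document.all-style falsey alias, the || chain above would
    // still select the moz object, but the usual guard then misfires:
    if (NativeRTCPeerConnection) {
      startCall();                  // never reached with a falsey alias
    } else {
      showUnsupportedNotice();      // hypothetical; what users would see
    }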


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: I think XUL overlays should also ignore query strings.

2015-08-16 Thread Adam Moore
Seems like this has more to do with the overlay system than XUL itself.
Losing the ability to add overlays to customize the browser chrome would be
brutal, and a move away from XUL shouldn't be done at the expense of what
the ecosystem provides today for people who need to customize the browser.

I think the ignorequery proposal would be really useful -- today, these
customizations are bypassed if the user adds an arbitrary param to the uri,
which rarely is what was intended.

On Sat, Aug 15, 2015 at 10:31 PM, Anne van Kesteren ann...@annevk.nl
wrote:

 On Sat, Aug 15, 2015 at 9:48 PM, Philip Chee philip.c...@gmail.com
 wrote:
  The first question that occurs to me is what is the rationale? Can we
  revisit this in 2015 to see if the original reason still holds?

 Well, we want to get rid of XUL. I'm not sure it makes much sense to
 revisit any of its design decisions at this point.


 --
 https://annevankesteren.nl/
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Busy indicator API

2015-07-13 Thread Adam Roach

On 7/13/15 10:36, smaug wrote:

On 07/13/2015 01:50 PM, Richard Barnes wrote:

Obligatory: Will this be restricted to secure contexts? 
But given that web pages can already achieve something like this using 
document.open()/close(), at least on Gecko, perhaps exposing the API 
to certainly-not-secure-contexts wouldn't be too bad. 


Going by memory, we're only considering "new features" to be those 
things that can't be achieved with a polyfill. Since we can polyfill a 
busy indicator, I don't think it qualifies as a new feature under our 
"all new features should be on secure origins only" policy.
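
For reference, the polyfill in question is roughly the 
document.open()/close() trick mentioned in the quoted message (a rough 
sketch; Gecko-specific behavior, and the names and markup are assumed):

    // Assumes a hidden <iframe id="busy-frame"> somewhere in the page.
    function setBusy(busy) {
      var doc = document.getElementById("busy-frame").contentDocument;
      if (busy) {
        doc.open();   // a never-completing load keeps the busy UI active
      } else {
        doc.close();  // "finishes" the load, so the indicator stops
      }
    }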


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Use-case for consideration, which will be difficult post-NPAPI

2015-06-26 Thread Adam Roach
I would look over the discussion in 
https://bugzilla.mozilla.org/show_bug.cgi?id=988781 regarding future SC 
support via the WebCrypto JS APIs. I would hope that having a W3C spec 
for a smartcard API would encourage a common, cross-browser way to do 
this without plugins or addons.


/a

On 6/25/15 22:29, James May wrote:

Have you considered using a local web server? That way you can use any
native code you want, and it's a reasonably common approach.

On many platforms you can even use socket activation to avoid the need for
a always running server process.



On 25 June 2015 at 21:04, Alex Taylor alex.tay...@referencepoint.co.uk
wrote:


Good morning.

I have a use-case which will be difficult to reproduce in the post-NPAPI
world:

The use-case is a Java/NPAPI applet which uses the javax.smartcardio
library to communicate with USB-connected contactless smartcard readers,
from a web-page. Extremely useful functionality for our customers.

Currently the applet will work in Firefox, Chrome and IE.

With the deprecation of NPAPI, we are looking into ways to continue
offering that functionality, and need to continue to target all three of
those browsers if possible.


For Chrome, I have looked into re-implementing the Java applet as a Chrome
App, or using NaCl/PPAPI etc. I have not found any equivalent technology
for Firefox as yet.

Chrome Apps can connect to USB ports via the chrome.usb API, but there is
currently no implementation of PC/SC for it (the smartcard access
specifications that javax.smartcardio is also built on). Due to time
constraints, re-implementing PC/SC ourselves is an option we would only
choose as a last resort. In any case, that would only solve the problem for
Chrome, not Firefox.

Unfortunately, no technology I have looked into so far to solve this
problem is able to offer the cross-browser support that Java/NPAPI enjoyed,
and has an available PC/SC library.


I flag this use-case for consideration in a future web-platform. I am sure
we are not the only company who have combined smartcard io functionality
with the web, and wish to continue doing so.


If anyone knows of any technology or open-source project which might be
useful for this situation, please let me know.


Alex Taylor | Lead Developer


T: +44 (0)1753 27 99 27 | DD: +44 (0)1753 378 144
E: alex.tay...@referencepoint.co.uk | Lync: alex.tay...@referencepoint.co.uk
W: www.referencepoint.co.uk

A: Reference Point Limited, Technology House, 2-4 High Street, Chalfont
St. Peter, Gerrards Cross, SL9 9QA

Right People. Right Skills. Right Place. Right Time.

Registered in England No. 02156356

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform



--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposed W3C Charter: WebRTC Working Group

2015-06-12 Thread Adam Roach

On 6/12/15 13:27, L. David Baron wrote:

The W3C is proposing a revised charter for:

   Web Performance Working Group
   http://www.w3.org/2015/05/webperf-charter.html
   https://w3c.github.io/charter-webperf/
   https://lists.w3.org/Archives/Public/public-web-perf/2015Jun/0066.html



I think the subject line may have confused things here. Do you mean 
WebRTC or WebPerf?


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Voting in BMO

2015-06-09 Thread Adam Roach

On 6/9/15 17:00, Justin Dolske wrote:

On 6/9/15 2:24 PM, Chris Peterson wrote:


I vote for bugs as a polite (sneaky?) way to watch a bug's bugmail
without spamming all the other CCs by adding myself to the bug's real
CC list.


I think if Bugzilla, with its long and complex history, ever has a 
hope of being untangled into something better, we can't keep every 
feature because of all the possible ways it might be used. :) 

OBxkcd: http://xkcd.com/1172/

/a
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Replacing PR_LOG levels

2015-05-22 Thread Adam Roach

On 5/22/15 15:51, Eric Rahm wrote:

I agree, we shouldn't make it harder to turn on logging. The easy solution is 
just to add a separate logger for verbose messages (if we choose not to add 
Verbose/Trace).


I don't know why we wouldn't just add a more verbose log level (Verbose, 
Trace... I don't care what we call it). The presence of "DEBUG + 1" in 
our code is evidence of a clear, actual need.


Making this a separate mechanism implies that the means of controlling 
these more verbose messages is going to change at some point, and it 
would be a change with no clear benefit. This means that, for example, 
web pages intended to facilitate bug reporting [1] will need to be 
updated to have variant instructions depending on the version of the 
browser; and some such instructions are virtually guaranteed to be missed.



[1] See, for example, https://wiki.mozilla.org/Media/WebRTC/Logging: 
For frame-by-frame logging, use mediamanager:6
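
For anyone following along, the instructions in [1] boil down to 
environment settings like these before launching the browser (the 
module name is taken from the wiki page; the file path is just an 
example):

    NSPR_LOG_MODULES=mediamanager:6
    NSPR_LOG_FILE=/tmp/media.log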


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Changing the style guide's preference for loose over strict equality checks in non-test code

2015-05-14 Thread Adam Roach

On 5/14/15 16:33, Gijs Kruitbosch wrote:
Can you give a concrete example where you had to change a 
contributor's patch in frontend gaia code to prefer === to prevent 
real bugs? 


From what I've seen, it's typically a matter of making the results 
unsurprising for subsequent code maintainers, because the rules of what 
gets coerced to what are not intuitive.


I'll crib from Crockford's examples (cf. Appendix B: The Bad Parts 
from JavaScript: The Good Parts). How many of these can you correctly 
predict the result of?


1. '' == '0'
2. 0 == ''
3. 0 == '0'

4. false == 'false'
5. false == '0'

6. false == undefined
7. false == null
8. null == undefined

9. '\t\r\n' == 0


I've posted the answers at https://pastebin.mozilla.org/8833537
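
In case the pastebin doesn't outlive this thread, here's what a 
conforming engine gives (easy to verify in any JS console):

    '' == '0'           // false
    0  == ''            // true  ('' coerces to 0)
    0  == '0'           // true  ('0' coerces to 0)

    false == 'false'    // false ('false' coerces to NaN)
    false == '0'        // true  (both sides coerce to 0)

    false == undefined  // false
    false == null       // false
    null  == undefined  // true  (special-cased by the spec)

    '\t\r\n' == 0       // true  (whitespace-only string coerces to 0)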

If you had to think for more than a few moments to reach the right 
conclusion about any of these -- or, heaven forbid, actually got one 
wrong -- then I think you need to ultimately concede that the use of == 
is more confusing than it needs to be.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-05-07 Thread Adam Roach
 On May 6, 2015, at 22:51, Eric Shepherd esheph...@mozilla.com wrote:

 would have been nice to have more notice


The plan that has been outlined involves a staged approach, with new
JavaScript features being withheld after some date, followed by a
period during which select older JavaScript features are gradually
removed. I'll note that actually turning off http isn't part of the
outline.

Most importantly, all of these steps are to be taken at dates that are
still under discussion. You can be part of that discussion.

Which leaves us with a conundrum regarding your plea for more notice:
it's a bit hard to seriously consider complaints that "at some future
date yet to be determined" is too soon.

/a
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement and ship: document.execCommand(cut/copy)

2015-05-06 Thread Adam Roach

On 5/6/15 13:32, Jonas Sicking wrote:

Like Ehsan, I don't see what advantages limiting this to https brings?


In some ways, that depends on what we decide to define new features to 
mean, and the release date of this feature relative to the date we 
settle on in the announced security plan [1] of "Setting a date after 
which all new features will be available only to secure websites."


If we use the example definition of "new features" to mean "features 
that cannot be polyfilled," then this would qualify.


Keep in mind the thesis of that plan isn't that we restrict 
security-sensitive features to https -- it's that /all new stuff/ is 
restricted to https. If this falls under the definition of a new 
feature, and if it's going to be released after the embargo date, then 
the security properties of clipboard manipulation don't really enter 
into the evaluation.



[1] 
https://blog.mozilla.org/security/2015/04/30/deprecating-non-secure-http/


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement and ship: document.execCommand(cut/copy)

2015-05-06 Thread Adam Roach

On 5/6/15 13:13, Gervase Markham wrote:

On 06/05/15 18:36, Tom Schuster wrote:

I think the ribbon would be really useful if it allowed the user to
restore the previous clipboard content. However this is probably not
possible for all data that can be stored in clipboards, i.e. files.

Which is why we wouldn't overwrite the clipboard until the permission
was granted :-)



Well, that makes it scantly better than a doorhanger, which is what 
Martin was objecting to (and I agree with him). The model that we really 
want here is "this thing happened, click here to undo it" rather than 
"this thing is about to happen, but won't unless you take additional 
action." I think this position is pretty strongly bolstered by Dave 
Graham's message about GitHub behavior: "Although IE 11 supports this 
API as well, we have not enabled it yet. The browser displays a popup 
dialog asking the user for permission to copy to the clipboard. 
Hopefully this popup is removed in Edge so we can start using JS there too."


Basically, requiring the extra step of requiring the user to click on an 
okay, do it button is high enough friction that the function loses its 
value.


In any case, we should have a better technical exploration of the 
assertion that restoring a clipboard isn't possible in all cases before 
we take it as given. A cursory examination of the OS X clipboard API 
leads me to believe that this would be trivially possible (I believe we 
can just store the array of pasteboardItems from the NSGeneralPBoard off 
somewhere so that they can be moved back if necessary). I'd be a little 
surprised if this weren't also true for Linux and Windows.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement and ship: document.execCommand(cut/copy)

2015-05-06 Thread Adam Roach

On 5/6/15 10:49, Martin Thomson wrote:

On Wed, May 6, 2015 at 8:42 AM, Doug Turnerdo...@mozilla.com  wrote:

This is important.  We could mitigate by requiring https, only allowing the 
top-level document to access these clipboard APIs, and doorhangering the API.  
Thoughts?

A doorhanger seems like overkill here.  Making this conditional on an
engagement gesture seems about right.  I don't believe that we
should be worried about surfing - and interacting with - strange sites
while there is something precious on the clipboard.

Ask forgiveness, not permission seems about the right balance here.
If we can find a way to revoke permission for a site that abuses the
privilege, that's better.  (Adding this toabout:permissions  with a
default on state seems about right, which leads me to think that we
need the same for the fullscreen thing.)


Going fullscreen also gives the user UI at the time of activation, 
allowing them to manipulate permissions in an obvious way:


https://www.dropbox.com/s/c0sbknrlz4pbybk/Screenshot%202015-05-06%2011.33.42.png?dl=0

Perhaps an analogous yellow ribbon informing the user that the site has 
copied data onto their clipboard, with buttons to allow them to prevent 
it from happening in the future, would be a good balance (in particular 
if denying permission restored the clipboard to its previous state) -- 
it informs the user and provides clear recourse without *requiring* 
additional action.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement and ship: document.execCommand(cut/copy)

2015-05-06 Thread Adam Roach

On 5/6/15 20:32, Ehsan Akhgari wrote:

If this falls under the definition of a new
feature, and if it's going to be released after the embargo date, then
the security properties of clipboard manipulation don't really enter
into the evaluation.


I admit that I didn't read the entire HTTP deprecation plan thread 
because of the length and the tone of some of the participants, so 
perhaps I missed this, but reading 
https://blog.mozilla.org/security/2015/04/30/deprecating-non-secure-http/ 
seems to suggest that there is going to be a date and criteria for 
what new features mean, but I see no mention of what that date is, or 
what the definition of new features is. 


That's why there were two predicates qualifying the statement.

My point is that the answer to Jonas' question may -- and I'll emphasize 
may -- turn on an overarching strategic security policy, rather than 
the security properties of the feature itself.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-05-04 Thread Adam Roach

On 5/2/15 05:25, Florian Bösch wrote:
I now mandate that you (and everyone you know) shall only do ethernet 
through pigeon carriers. There are great advantages to doing this, and 
I can recommend a number of first rate pigeon breeders which will sell 
you pigeons bred for that purpose. I will not discuss with you any 
notion that pigeons shit onto everything and that cost might rise 
because pigeons are more expensive to keep than a copper line. 
Obviously you're a pigeon refusenik and preventer of great progress. 
My mandate for pigeons is binding and will come into effect because I 
happen to have a controlling stake in all utility companies and come 
mid 2015 copper lines will be successively cut. Please refrain from 
disagreeing my mandate in vulgar terms, also I refuse any notion that 
using pigeons for ethernet by mandate is batshit insane (they're 
pigeons, not bats, please).


It's clear you didn't see it as such, but Nicholas was trying to do you 
a favor.


You obviously have input you'd like to provide on the topic, and the 
very purpose of this thread is to gather input. If you show up with 
well-reasoned arguments in a tone that assumes good faith, there's a 
real chance for a conversation here where people reach a common 
understanding and potentially change certain aspects of the outcome.


If all you're willing to do is hurl vitriol from the sidelines, you're 
not making a difference. Even if you have legitimate and 
well-thought-out points hidden in the venom, no one is going to hear 
them. Nicholas, like I, would clearly prefer that the time of people on 
this mailing list be spent conversing with others who want to work for a 
better future rather than those who simply want to be creatively 
abusive. You get to choose which one you are.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-05-04 Thread Adam Roach

On 5/4/15 11:24, Florian Bösch wrote:
On Mon, May 4, 2015 at 3:38 PM, Adam Roach a...@mozilla.com 
mailto:a...@mozilla.com wrote:


others who want to work for a better future

A client of mine whom I polled if they can move to HTTPS with their 
server stated they do not have the time and resources to do so. So the 
fullscreen button will just stop working. That's an amazing better 
future right there.


You have made some well-thought-out contributions to conversations at 
Mozilla in the past. I'm a little sad that you're choosing not to 
participate in a useful way here.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-05-01 Thread Adam Roach

On 5/1/15 05:03, Matthew Phillips wrote:

All mandatory https will do is discourage people from participating in
speech unless they can afford the very high costs (both in dollars and
in time) that you are now suggesting be required.


Let's be clear about the costs and effort involved.

There are already several deployed CAs that issue certs for free. And 
within a couple of months, it will take users two simple commands, zero 
fiscal cost, and several tens of seconds to obtain and activate a cert:


https://letsencrypt.org/howitworks/
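
For the record, the two commands advertised on that page as of this 
writing are along these lines (client and command names may well change 
before launch):

    $ sudo apt-get install lets-encrypt
    $ lets-encrypt example.com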

There is great opportunity for you to update your knowledge about how 
the the world of CAs has changed in the past decade. Seize it.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-05-01 Thread Adam Roach

On 5/1/15 02:54, 王小康 wrote:

P.S.: And finally, accept CAcert or an easy-to-use CA.


CAs can only be included at their own request. As it stands, CACert has 
withdrawn its request to be included in Firefox until they have 
completed an audit with satisfactory results. If you want CACert to be 
included, contact them and ask what you can do to help.


In the meanwhile, as has been brought up many times in this thread 
already, there are already deployed or soon-to-be-deployed easy to use 
CAs in the world.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-16 Thread Adam Roach

On 4/16/15 07:16, david.a.p.ll...@gmail.com wrote:

For example:
- You say there is only secure/not secure.  Traditionally, we have things like defense 
in depth, and multiple levels of different sources of authentication.  I am hearing: You will 
either have a Let's Encrypt certificate or you don't.  Heck, let's get rid of EV certificate 
validation too while we are at it: we don't want to have to do special vetting for banking and 
medical websites, because that doesn't fit in with Let's Encrypt's business model.


You're pretty far off in the weeds here. I'll try to help you with some 
of your misconceptions.


First, no one is proposing that Let's Encrypt should become the sole 
source of TLS certificates. Let's Encrypt was started to solve a 
specific set of valid complaints about the complexity and financial 
issues surrounding acquiring a TLS certificate for certain individuals.


Second, Let's Encrypt is run by ISRG, not Mozilla -- Mozilla is one of 
several supporters for ISRG, but we are separate entities.


Finally, ISRG is a 501(c)(3) non-profit public benefit corporation. 
There's no business model in the traditional sense, since the goal is 
not profit. The goal is to fulfill its mission, which is to reduce 
financial, technological, and education barriers to secure communication 
over the Internet. Accusing ISRG of having a pro-TLS agenda is akin to 
accusing a soup kitchen of having a pro-soup agenda: it shows a 
fundamental misunderstanding of what they're doing and why.



- You don't want to hear about non-centralized security models.  DANE...


...is a centralized security model. The difference is that you're 
trading a set of predominantly commercial CA entities for a different 
set of governmental or governmentally-contracted entities. It is 
arguably more centralized than the current CA system.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Adam Roach

On 4/14/15 10:53, justin.kru...@gmail.com wrote:

Dynamic DNS might be difficult to run on HTTPS as the IP address needs to 
change when say your cable modem IP updates.  HTTPS only would make running 
personal sites more difficult for individuals, and would make the internet 
slightly less democratic.


I'm not sure I follow. I have a cert for a web site running on a dynamic 
address using DynDNS, and it works just fine. Certs are bound to names, 
not addresses.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Adam Roach

On 4/14/15 16:32, northrupthebandg...@gmail.com wrote:

*By logical measure*, the [connection] that is encrypted but unauthenticated is 
more secure than the one that is neither encrypted nor authenticated, and the 
fact that virtually every HTTPS-supporting browser assumes the precise opposite 
is mind-boggling.


That depends on what kind of resource you're trying to access. If the 
resource you're trying to reach (in both circumstances) isn't demanding 
security -- i.e., it is an http URL -- then your logic is sound. 
That's the basis for enabling OE.


The problem here is that you're comparing:

 * Unsecured connections working as designed

with

 * Supposedly secured connections that have a detected security flaw


An https URL is a promise of encryption _and_ authentication; and when 
those promises are violated, it's a sign that something has gone wrong 
in a way that likely has stark security implications.


Resources loaded via an http URL make no such promises, so the 
situation isn't even remotely comparable.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: Insecure HTTP

2015-04-14 Thread Adam Roach

On 4/14/15 15:35, emmanueldeloge...@gmail.com wrote:

Will Mozilla start to offer certificates to every single domain name owner ?


Yes [1].

https://letsencrypt.org/



[1] I'll note that Mozilla is only one of several organizations involved 
in making this effort happen.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: persistent permissions over HTTP

2015-03-12 Thread Adam Roach

On 3/12/15 12:26, Aryeh Gregor wrote:

Because unless things have changed a lot in the last three years or
so, HTTPS is a pain for a few reasons:

1) It requires time and effort to set up.  Network admins have better
things to do.  Most of them either are volunteers, work part-time,
computers isn't their primary job responsibility, they're overworked,
etc.

2) It adds an additional point of failure.  It's easy to misconfigure,
and you have to keep the certificate up-to-date.  If you mess up,
browsers will helpfully go berserk and tell your users that your site
is trying to hack their computer (or that's what users will infer from
the terrifying bright-red warnings).  This is not a simple problem to
solve -- for a long time, https://amazon.com would give a cert error, 
and I'm pretty sure I once saw an error on a Google property too.  I
think Microsoft too once.

3) Last I checked, if you want a cert that works in all browsers, you
need to pay money.  This is a big psychological hurdle for some
people, and may be unreasonable for people who manage a lot of small
domains.

4) It adds round-trips, which is a big deal for people on high-latency
connections.  I remember Google was trying to cut it down to one extra
round-trip on the first connection and none on subsequent connections,
but I don't know if that's actually made it into all the major
browsers yet.

These issues seem all basically fixable within a few years


As an aside, the first three are not just fixable, but actually fixed 
within the next few months: https://letsencrypt.org/



--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: http-schemed URLs and HTTP/2 over unauthenticated TLS

2014-11-19 Thread Adam Roach

On 11/19/14 04:50, Patrick McManus wrote:

There are basically 2 arguments against OE here: 1] you don't need OE
because everyone can run https and 2] OE somehow undermines https

I don't buy them because [1] remains a substantial body of data and [2] is
unsubstantiated speculation and borders on untested FUD.


I agree, and find the assertion of [2] to be further perplexing: it 
completely discounts the fact that OE can (and ideally will) be opt-out 
for most server configurations, while HTTPS remains opt-in -- even for 
the Let's Encrypt setup.


There's a radical difference in penetration between opt-in and opt-out, 
and we base substantial portions of our privacy decisions on this fact. 
I'm a bit baffled that it's not immediately obvious to everyone in this 
conversation that this distinction translates to the deployment of 
encryption.


I'm all for the drive to have authenticated encryption everywhere, and 
am very excited about the Let's Encrypt initiative. But there's no 
reason to leave traffic gratuitously unencrypted while we drive towards 
100% HTTPS penetration.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Deprecate geolocation and getUserMedia() for unauthenticated origins

2014-09-29 Thread Adam Roach

On 9/29/14 03:02, Anne van Kesteren wrote:

On Mon, Sep 29, 2014 at 2:02 AM, Adam Roach a...@mozilla.com wrote:

Yes, I saw that. Your proposal didn't see a lot of support in that venue.

So far for geolocation there is nobody that is opposed.


I'm responding on the topic of gUM, but I'll point out that a response 
of "this is nonsense as stated" (Richard Barnes) does sound like an 
objection to me. I also read Karl Dubost's response as being less than 
favorable, and Chris Peterson 
(https://bugzilla.mozilla.org/show_bug.cgi?id=1072859#c2) seems to be 
worried about how your proposal breaks existing websites.


Based on your statement, I do wonder what constitutes opposition in your 
mind. For clarity, I am opposed to your proposal for gUM.



For getUserMedia() there are claims of extensive discussion that is
not actually recorded in text. There was also a lot of pointing to
geolocation which does not seem like a valid argument. I don't think
they've made their case.


Sure, but determination of consensus is the job of the chairs. What goes 
into a spec is based on the direction of the working group, not on the 
criterion of "Anne van Kesteren thinks the working group has made its case."


Fundamentally, the problem is that you're coming to the conversation 
late, and you're not willing to do the work to catch up on the 
conversations that have already occurred. There's no WG historian whose 
job it is to research decision rationale for newcomers.


The reason you're not seeing anyone responding with compelling reasons 
on the working group mailing list is that the issue has been discussed 
and closed. No one has much energy to relitigate old issues, especially 
when objections are brought forth in the rather weak form of I'm not 
sure this was a good idea. If you wish to challenge the status quo, 
then the burden of proof is on you to come up with arguments that are 
both compelling and lucid enough to change the minds of the working 
group participants, not on the working group to justify its past work to 
you.


I note that one of the chairs has taken the exceptionally gracious step 
of inviting you to make your arguments in a more sensible, 
threat-analysis-based form. There's your opening -- take his suggestion, 
and make your case.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Deprecate geolocation and getUserMedia() for unauthenticated origins

2014-09-28 Thread Adam Roach

On 9/27/14 02:24, Anne van Kesteren wrote:

On Fri, Sep 26, 2014 at 11:11 PM, Adam Roach a...@mozilla.com wrote:

This is a matter for the relevant specification, not some secret cabal.

I was not proposing doing anything in secret.

I also contacted the relevant standards lists.




Yes, I saw that. Your proposal didn't see a lot of support in that 
venue. And that's why taking it to a Mozilla mailing list rather than 
continuing the discourse that you already started feels like an 
attempted end-run around the standards process. Surely you understand 
why that appears unseemly, right?


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Deprecate geolocation and getUserMedia() for unauthenticated origins

2014-09-26 Thread Adam Roach

On 9/26/14 14:58, Anne van Kesteren wrote:

Exposing geolocation on unauthenticated origins was a mistake. Copying
that for getUserMedia() is too. I suggest that to protect our users we
make some noise about deprecating this practice.


There have already been extensive discussions on this specific topic 
within the W3C, and the conclusion that has been reached does not match 
what you are proposing. I would be extremely loathe to propose that we 
implement outside the spec on a security issue that's already received 
adequate discussion in the relevant venue.



More immediately we should make it impossible to make persistent
grants for these features on unauthenticated origins.


Our implementation of getUserMedia already does this, and the 
getUserMedia spec has RFC 2119 MUST strength language requiring such 
behavior.



I can reach out to Google (and Apple & Microsoft I suppose, though I
haven't seen much from them on the pro-TLS front) to see if they would
be on board with this and help us spread the message.



The email address you're looking for is public-media-capt...@w3.org. 
This is a matter for the relevant specification, not some secret cabal.



--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-09-15 Thread Adam Roach

On 9/15/14 11:08, Anne van Kesteren wrote:

Google seems to have the right trade off
and the IETF consensus seems to be unaware of what is happening
elsewhere.


You're confused.

The whole line of argumentation that web browsers and servers should be 
taking advantage of opportunistic encryption is explicitly informed by 
what's actually happening elsewhere. Because what's *actually* 
happening is an overly-broad dragnet of personal information by a wide 
variety of both private and governmental agencies -- activities that 
would be prohibitively expensive in the face of opportunistic encryption.


Google's laser focus on preventing active attackers to the exclusion of 
any solution that thwarts passive attacks is a prime example of 
insisting on a perfect solution, resulting instead in substantial 
deployments of nothing. They're naïvely hoping that finding just the 
right carrot will somehow result in mass adoption of an approach that 
people have demonstrated, with fourteen years of experience, significant 
reluctance to deploy universally.


This is something far worse than being simply unaware of what's 
happening elsewhere: it's an acknowledgement that pervasive passive 
monitoring is taking place, and a conscious decision not to care.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: http-schemed URLs and HTTP/2 over unauthenticated TLS (was: Re: WebCrypto for http:// origins)

2014-09-12 Thread Adam Roach

On 9/12/14 10:07, Trevor Saunders wrote:

[W]hen it comes to the NSA we're pretty much just not going to be able
to force everyone to use something strong enough they can't beat it.


Not to get too far off onto this sidebar, but you may find the following 
illuminating; not just for potentially adjusting your perception of what 
the NSA can and cannot do (especially in the coming years), but as a 
cogent analysis of how even the thinnest veneer of security can temper 
intelligence agencies' overreach into collecting information about 
non-targets:


http://justsecurity.org/7837/myth-nsa-omnipotence/

While not the thesis of the piece, a highly relevant conclusion the 
author draws is: "[T]hose engineers prepared to build defenses against 
bulk collection should not be deterred by the myth of NSA omnipotence.  
That myth is an artifact of the post-9/11 era that may now be outdated 
in the age of austerity, when NSA will struggle to find the resources to 
meet technological challenges."


(I'm hesitant to appeal to authority here, but I do want to point out 
the About the Author section as being important for understanding 
Marshall's qualifications to hold forth on these matters.)


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: WebCrypto for http:// origins

2014-09-11 Thread Adam Roach

On 9/11/14 11:08, Anne van Kesteren wrote:

On Thu, Sep 11, 2014 at 5:56 PM, Richard Barnes rbar...@mozilla.com wrote:

Most notably, even over non-secure origins, application-layer encryption can 
provide resistance to passive adversaries.

See https://twitter.com/sleevi_/status/509723775349182464 for a long
thread on Google's security people not being particularly convinced by
that line of reasoning.



The brief detour into discussing opportunistic encryption in that 
rambling thread [1] highlights a place where Ryan differs from the 
growing consensus, at least within the IETF, that something is better 
than nothing. He is out of step with the recognition that our historic 
stance of "perfect or absent" is counterproductive. Theodore actually 
puts it pretty succinctly in one of the IETF mailing list messages that 
Henri cites: "For too long, I think, we've let the perfect be the enemy 
of the good."


When you force people into an all or nothing situation regarding 
security, nothing is the easy choice. If you provide tools for much 
easier incremental improvement, people will be far more likely to deploy 
something. Absolutism isn't the way to make progress: a transition path 
with small, incremental steps that yield small, incremental improvements 
gets you to where you want to be eventually.


By contrast, forcing people to swallow everything all at once only 
serves to discourage adoption of any security at all.



[1] Which is now my favorite example of Twitter's shortcomings as a 
communications medium.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Intent to land: Voice/video client (Loop)

2014-05-30 Thread Adam Roach

Summary:

The Loop project aims to create a user-visible real-time communications 
service for existing Mozilla products, leveraging the WebRTC platform. 
One version of the client will be integrated with Firefox Desktop. It is 
intended to be interoperable with a Firefox OS application (not part of 
Gaia) that will be shipped by a third party on the Firefox OS 2.0 platform.


The implementation of the client has already reached a proof-of-concept 
stage in a set of github repositories. This announcement of intent is 
being sent in advance of integration into the mozilla-central 
repositories. This integration will be a two-step process. The first 
step, which has already taken place, is to land the existing 
implementation in the Elm tree to validate proper integration into the 
RelEng systems. After testing, the code will be merged into m-c, and 
subsequent development will take place in the m-i/m-c trees.


The code is currently controlled by the MOZ_LOOP preprocessor 
definition, which is only enabled for Nightly builds. The feature will 
iterate on Nightly until it is considered complete enough to ride the 
trains out.



For more details:

https://wiki.mozilla.org/Loop
https://wiki.mozilla.org/Media/WebRTC
https://blog.mozilla.org/futurereleases/2014/05/29/experimenting-with-webrtc-in-firefox-nightly/

Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=loop_mlp

Link to standard: N/A

Platform coverage: Firefox on Desktop.

Estimated or target release: Firefox 33 or 34 (33 is a stretch goal for 
the team, 34 is a committed date).


Preference behind which this will be implemented: For initial landing on 
Nightly, none. Will be behind loop.enabled before riding the trains.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to land: Voice/video client (Loop)

2014-05-30 Thread Adam Roach

On 5/30/14 10:14, Anne van Kesteren wrote:

On Fri, May 30, 2014 at 5:03 PM, Adam Roach a...@mozilla.com wrote:

Link to standard: N/A

I take it this means there's no web-exposed API?




That is correct. This is a browser feature, not accessible from content.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Mozilla style guide issues, from a JS point of view

2014-01-08 Thread Adam Roach

On 1/8/14 12:03, Martin Thomson wrote:

On 2014-01-08, at 09:57, Adam Roach a...@mozilla.com wrote:


Automated wrapping to a column width is less than optimal. If you look back at 
bz's example about how he would chose to wrap a specific conditional, it's 
based on semantic intent, not the language syntax. By and large, this goes to 
author's intent and his understanding of the problem at hand. It's not the kind 
of thing that can be derived mechanically.

 From that I infer that you would prefer to leave wrapping choices to 
individuals.  That’s in favour of the latter option: enforce line length, but 
don’t reformat to it.


My observation has two key implications. That's the first.

The second is that we need to be careful if we decide to run a 
reformatter over the code wholesale, since you can actually lose useful 
information about author's intent. I'm not the first to raise that point 
in this discussion; I'm simply agreeing with it.



--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Mozilla style guide issues, from a JS point of view

2014-01-07 Thread Adam Roach

On 1/7/14 03:07, Jason Duell wrote:
Yes--if we jump to 80 chars per line, I won't be able to keep two 
columns open in my editor (vim, but emacs would be the same) on my 
laptop, which would suck.


(Yes, my vision is not what it used to be--I'm using 10 point font. 
But that's not so huge.) 


I'm not just sympathetic to this argument; I've made it myself in other 
venues. Put me down as emphatically agreeing.



--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Mozilla style guide issues, from a JS point of view

2014-01-07 Thread Adam Roach

On 1/7/14 12:16, Martin Thomson wrote:

On 2014-01-06, at 19:28, Patrick McManus mcma...@ducksong.com wrote:


I strongly prefer at least a 100 character per line limit. Technology
marches on.

Yes.  I’ve encountered too many instances where 80 was not enough.


Since people are introducing actual research information here, let's run 
some numbers. According to Paterson et al. [1], reading comprehension 
speed is actively hindered by lines that are either too short or too 
long, which they define as 9 picas (1.5 inches) and 43 picas (~7 
inches), respectively. Comprehension is significantly faster at 19 picas 
(~3 inches).


Using the default themes that ship with the OS X Terminal app, an 
80-character-wide terminal is on the order of 4 inches wide on a 15-inch 
monitor. 100 columns pushes this to nearly 5 inches.


Now, I'm not arguing for a 60-character line length here. However, it 
would seem that moving from 80 to 100 is going in the wrong direction 
for comprehension speed.



[1] http://psycnet.apa.org/journals/xge/27/5/572/
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Mozilla style guide issues, from a JS point of view

2014-01-07 Thread Adam Roach

On 1/7/14 14:23, Boris Zbarsky wrote:
One reason I've seen 2 preferred to 4 (apart from keeping line lengths 
down)...


Thanks. I was just about to raise the issue that choosing four over two 
has no identified benefits, and only serves to exacerbate *both* sides 
of the argument over line length limits.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: A proposal to reduce the number of styles in Mozilla code

2014-01-06 Thread Adam Roach

On 1/6/14 12:22, Axel Hecht wrote:
In the little version control archaeology I do, I hit "breaks blame" 
for no good reason pretty often already. I'd not underestimate the 
cost for the project of doing changes just for the sake of changes. 


Do you have a concrete reason to believe that Martin's workaround for 
retaining blame information doesn't work, or did you just miss that part 
of his message?



--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Persistent tabs + Home App Tabs (see bug 551849)

2013-11-18 Thread Adam Baxter
Hi,
I'm wondering if there's any further development going on for persistent tabs 
in Firefox that can't be accidentally closed - it's mentioned here 
https://bugzilla.mozilla.org/show_bug.cgi?id=551849 but Pinned Tabs aren't 
quite what I'm looking for.

In fact, it looks like the original project in that bug has stalled - the 
submitter now works at Google.

Does anyone know the status of the project or where I could help move it along?

Thanks,
Adam Baxter
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: java click to run problem on Firefox

2013-10-10 Thread Adam Roach

On 10/10/13 11:09, Benjamin Smedberg wrote:
We encourage you to transition your site away from Java as soon as 
possible. If there are APIs which you need in the web platform in 
order to make that possible, please let me know and we will try to 
make adding those a priority.


I haven't personally seen it done, but it seems to me that you could do 
something like:


[java source] --javac--> [java bytecode] --llvm--> [llvm bitcode] 
--emscripten--> [javascript]


I'm not claiming that this would be possible without some porting 
effort; my point is that one does not need to start from scratch to make 
this transition.


Has anyone here played around with the toolchain I describe above?

/a
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: java click to run problem on Firefox

2013-10-10 Thread Adam Roach

On 10/10/13 12:06, Thierry Milard wrote:
There are still a few things, like speedy 3D, that HTML/JavaScript do not 
do well enough


http://www.unrealengine.com/html5/


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Removing xml:base

2013-09-16 Thread Adam Kowalczyk

On 2013-08-09 15:32, Boris Zbarsky wrote:

There is a proposal in
https://bugzilla.mozilla.org/show_bug.cgi?id=903372 to remove xml:base
support.

Do we actually use this for anything?  I thought we used to set it for
xbl stuff, but I don't see obvious code doing that.

If we can, it would be great to rip this out: it would significantly
simplify a bunch of things.

-Boris


For what it's worth, I find xml:base very useful in my extension. It is 
a feed reader and it displays content from many third-party sources on a 
single page, so there's a need for multiple base URIs in order to 
resolve relative URIs correctly.
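
To illustrate the pattern (URIs are made up for the example):

    <!-- Each third-party entry carries its own base URI, so relative
         links inside it resolve against the originating site rather
         than against my page's own URI. -->
    <div xml:base="http://feeds.example.com/site-a/">
      <a href="story.html">resolves to http://feeds.example.com/site-a/story.html</a>
    </div>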


The arguments so far have focused on code simplicity, lack of support in 
other browsers, and Mozilla itself not using the feature. I haven't seen 
anyone address the arguably most important question: is the feature 
useful for the web at large? Perhaps we should improve our 
implementation and push for its adoption, rather than jump on the bandwagon?


In principle, functionality provided by xml:base seems useful for web 
applications that deal with third-party content. Maybe someone more 
knowledgeable can estimate how much need there is in practice, though.


- Adam
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Removing xml:base

2013-09-16 Thread Adam Kowalczyk

On 2013-09-17 02:52, Boris Zbarsky wrote:

On 9/16/13 8:06 PM, Adam Kowalczyk wrote:

and it displays content from many third-party sources on a
single page


You probably want iframes for that


I'm using a resource:// URI loaded in a browser with type=content, so 
the content is unprivileged and untrusted. Putting each feed entry into 
its own iframe would probably carry a significant performance penalty.


Websites don't have the means to do it safely, though...


I haven't seen anyone address the arguably most important question: is
the feature
useful for the web at large?


It's not if we're the only one who ever supports it...


Alright then, it *would* be useful if supported more widely, is what I 
should have said.





Perhaps we should improve our
implementation and push for its adoption


The other UAs have flat out refused to ever implement something like
this.  I can understand why.  I wouldn't implement it in a new UA either
(e.g. servo).


If there's no hope of getting traction with other vendors, then that 
pretty much settles it. But what were their motivations? If it was lack 
of good use cases, then see below.





In principle, functionality provided by xml:base seems useful for web
applications that deal with third-party content.


I think using seamless/sandboxed iframes is the right way to deal with
third-party content.  Certainly pulling in untrusted third-party content
directly is a security hole.


Unless something like Content Security Policy is implemented, which 
would make it possible to inject untrusted content without XSS risks, 
thus making the above use case more legitimate.


- Adam
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Firefox and Notes and Productivity Software

2013-09-10 Thread Adam Sobieski
Firefox Developers,

 

Greetings.  I would like to describe some ideas pertaining to software 
interoperability between Web browsers and notes and productivity software.  The 
ideas are described at the new e-Governance Community Group at the W3C 
(http://www.w3.org/community/egovernance/), specifically "Digital forms, 
questionnaires, surveys and opinion polls" 
(http://www.w3.org/community/egovernance/2013/08/22/digital-forms-questionnaires-surveys-and-opinion-polls/)
 and "The Web, notes and civic participation" 
(http://www.w3.org/community/egovernance/2013/09/03/the-web-notes-and-civic-participation/).

 

Abstractly, the ideas pertain to the use of notes in human multitasking, 
including notes made while browsing the Web on certain topics and notes 
made in meetings or with software applications, as well as the use of 
browsers to make note of content on the Web. They also pertain to 
interoperability between desktop applications, including Web browsers, 
and notes and productivity software.

Kind regards,

 

Adam Sobieski
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Detection of unlabeled UTF-8

2013-09-06 Thread Adam Roach

On 9/6/13 04:25, Henri Sivonen wrote:

We do surface such UI for https deployment errors, inspiring academic
papers about how bad it is that users are exposed to such UI.


Sure. It's a much trickier problem (and, in any case, the UI is 
necessarily more intrusive than what I'm suggesting). There's no good 
way to explain the nuanced implications of security decisions in a way 
that is both accessible to a lay user and concise enough to hold the 
average user's attention.



On Thu, Sep 5, 2013 at 6:15 PM, Adam Roach a...@mozilla.com wrote:

As to the why, it comes down to balancing the need to let the publisher
know that they've done something wrong against punishing the user for the
publisher's sins.

Two problems:
  1) The complexity of the platform increases in order to address a fringe case.
  2) Making publishers' misdeeds less severe in the short term makes it
more OK for publishers to engage in the misdeeds, which in the light
of #1 leads to long-term problems. (Consider the character encoding
situation in Japan and how HTML parsing in Japanese Firefox is worse
than in other locales as the result.)


To the first point: the increase in complexity is fairly minimal for a 
substantial gain in usability. Absent hard statistics, I suspect we will 
disagree about how fringe this particular exception is. Suffice it to 
say that I have personally encountered it as a problem as recently as 
last week. If you think we need to move beyond anecdotes and personal 
experience, let's go ahead and add telemetry to find out how often this 
arises in the field.
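
(As a sketch of how small that change could be: the probe name below is 
hypothetical and would need a matching Histograms.json entry, but the 
accumulation call is the standard Gecko one.)

    #include "mozilla/Telemetry.h"

    // Called at the point where the detector sees an unlabeled document
    // whose bytes scan as valid UTF-8 under a non-UTF-8 fallback encoding.
    static void NoteUnlabeledUtf8(bool isValidUtf8) {
      mozilla::Telemetry::Accumulate(
          mozilla::Telemetry::UNLABELED_DOC_IS_VALID_UTF8,  // hypothetical probe
          isValidUtf8);
    }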


Your second point is an argument against automatic correction. Don't get 
me wrong: I think automatic correction leads to innocent publisher 
mistakes that make things worse over the long term. I absolutely agree 
that doing so trades short-term gain for long-term damage. But I'm not 
arguing for automatic correction.


But it's not our job to police the web.

It's our job to... and I'm going to borrow some words here... give users 
the ability to shape their own experiences on the Internet. You're 
arguing _against_ that for the purposes of trying to control a group of 
publishers who, for whatever reason, either lack the ability or don't 
care enough to fix their content even when their tools clearly tell them 
that their content is broken.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Detection of unlabeled UTF-8

2013-09-05 Thread Adam Roach

On 9/5/13 09:10, Henri Sivonen wrote:

Why should we surface this class of authoring error to the UI in a way 
that asks the user to make a decision considering how rare this class 
of authoring error is?


It's not a matter of the user judging the rarity of the condition; it's 
the user being able to, by casual observation, look at a web page and 
tell that something is messed up in a way that makes it unusable for them.



Are there other classes of authoring errors
that you think should have UI for the user to second-guess the author?
If yes, why? If not, why not?


In theory, yes. In practice, I can't immediately think of any instances 
that fit the class other than this one and certain Content-Encoding issues.


If you want to reduce it to principle, I would say that we should 
consider it for any authoring error that is (a) relatively common in the 
wild; (b) trivially detectable by a lay user; (c) trivially detectable 
by the browser; (d) mechanically reparable by the browser; and (e) has 
the potential to make a page completely useless.


I would argue that we do, to some degree, already do this for things 
like Content-Encoding. For example, if a website attempts to send 
gzip-encoded bodies without a Content-Encoding header, we don't simply 
display the compressed body as if it were encoded according to the 
indicated type; we pop up a dialog box to ask the user what to do with 
the body.
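
(Criterion (c) really is trivial in this case; as a sketch, a gzip 
stream always begins with the two magic bytes 0x1F 0x8B, so a check 
along these lines suffices:)

    #include <cstdint>
    #include <cstddef>

    // Sketch: does this response body look like a gzip stream, regardless
    // of what the Content-Encoding header claims?
    bool LooksLikeGzip(const uint8_t* body, size_t len) {
      return len >= 2 && body[0] == 0x1F && body[1] == 0x8B;
    }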


I'm proposing nothing more radical than this existing behavior, except 
in a more user-friendly form.


As to the why, it comes down to balancing the need to let the 
publisher know that they've done something wrong against punishing the 
user for the publisher's sins.



--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Detection of unlabeled UTF-8

2013-09-04 Thread Adam Roach

On 9/2/13 13:36, Joshua Cranmer wrote:
I don't think there *is* a sane approach that satisfies everybody. 
Either you break UTF8-just-works-everywhere, you break legacy 
content, you make parsing take inordinate times...


I want to push on this last point a bit. Using a straightforward UTF-8 
detection algorithm (which could probably stand some optimization), it 
takes my laptop somewhere between 0.9 ms and 1.4 ms to scan a _Megabyte_ 
buffer in order to check whether it consists entirely of valid UTF-8 
sequences (the speed variation depends on what proportion of the 
characters in the buffer are higher than U+007F). That hardly even rises 
to the level of noise.
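
(For reference, a minimal sketch of the kind of scan I mean; the 
function name is illustrative, and production code would want 
incremental operation and probably some vectorization:)

    #include <cstdint>
    #include <cstddef>

    // True iff buf[0..len) is entirely well-formed UTF-8: shortest-form
    // sequences only, no surrogates, nothing above U+10FFFF.
    bool IsValidUtf8(const uint8_t* buf, size_t len) {
      size_t i = 0;
      while (i < len) {
        uint8_t lead = buf[i];
        if (lead < 0x80) { ++i; continue; }             // ASCII fast path

        size_t trail;                                   // continuation bytes
        uint32_t cp;                                    // decoded code point
        if ((lead & 0xE0) == 0xC0)      { trail = 1; cp = lead & 0x1F; }
        else if ((lead & 0xF0) == 0xE0) { trail = 2; cp = lead & 0x0F; }
        else if ((lead & 0xF8) == 0xF0) { trail = 3; cp = lead & 0x07; }
        else return false;              // stray continuation or invalid lead

        if (i + trail >= len) return false;             // truncated sequence
        for (size_t k = 1; k <= trail; ++k) {
          uint8_t c = buf[i + k];
          if ((c & 0xC0) != 0x80) return false;         // not a continuation
          cp = (cp << 6) | (c & 0x3F);
        }

        if (trail == 1 && cp < 0x80) return false;      // overlong
        if (trail == 2 && (cp < 0x800 ||
                           (cp >= 0xD800 && cp <= 0xDFFF))) return false;
        if (trail == 3 && (cp < 0x10000 || cp > 0x10FFFF)) return false;

        i += trail + 1;
      }
      return true;
    }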



--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Detection of unlabeled UTF-8

2013-08-30 Thread Adam Roach

On 8/30/13 14:11, Adam Roach wrote:
...helping the user understand why the headline they're trying to read 
renders as Ð' ÐоÑ?дÑfме пÑEURедложили 
оÑ,обÑEURаÑ,ÑOE Ð?обелÑ? Ñf Ðz(Ð±Ð°Ð¼Ñ  rather than ? 
??? ??  ?? ? ?.


Well, *there's* a heavy dose of irony in the context of this thread. I 
wonder what rules our mailing list server applies for character set 
decimation.


When I sent that out, the question marks were a perfectly readable 
string of Cyrillic characters.


Which provides a strong object lesson in the fact that character set 
configuration is hard. If we can't get this right internally, I think 
we've lost the moral ground in saying that others should be able to, and 
tough luck if they can't.


/a
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Detection of unlabeled UTF-8

2013-08-30 Thread Adam Roach

On 8/30/13 12:24, Mike Hoye wrote:

On 2013-08-30 11:17 AM, Adam Roach wrote:

It seems to me that there's an important balance here between (a) 
letting developers discover their configuration error and (b) 
allowing users to render misconfigured content without specialized 
knowledge.


For what it's worth, Internet Explorer handled this (before UTF-8 and 
caring about JS performance were a thing) by guessing what encoding to 
use, comparing a letter-frequency analysis of a page's content to a 
table of which bytes are most common in what encodings of whatever 
languages.

...
From both the developer and user perspectives, it amounted to "something 
went wrong because of bad magic."


I'd like to clarify two points about what I'm proposing.

First, I'm not proposing that we do anything without explicit user 
intervention, other than present an unobtrusive bar helping the user 
understand why the headline they're trying to read renders as Ð' 
ÐоÑ?дÑfме пÑEURедложили оÑ,обÑEURаÑ,ÑOE Ð?обелÑ? 
Ñf Ðz(Ð±Ð°Ð¼Ñ  rather than ? ??? ??  ?? ? 
?. (No political statement intended here -- that's just the leading 
headline on Pravda at the moment).


If the user is happy with the encoding, they do nothing and go about 
their business.


If the user determines that the rendering is, in fact, not what they 
want, they can simply click on the Yes button and (with high 
probability), everything is right with the world again.


Also note that I'm not proposing that we try to do generic character set 
and language detection. That's fraught with the perils you cite. The 
topic we're discussing here is UTF-8, which can be easily detected with 
extremely high confidence.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Detection of unlabeled UTF-8

2013-08-30 Thread Adam Roach

On 8/30/13 13:41, Anne van Kesteren wrote:

Where did the text file come from? There's a source somewhere... And
these days that's hardly how people create content anyway.


Maybe not for the content _you_ consume, but the Internet is a bit 
larger than our ivory tower.


Check out, for example:

https://www.rfc-editor.org/rse/wiki/lib/exe/fetch.php?media=design:future-unpag-20130820.txt

In particular, when you look at that document, tell me what you think 
the parenthetical phrase after the author's name is supposed to look 
like -- because I can guarantee that Firefox isn't doing the right thing 
here.



And again, it has already been pointed out we cannot scan the entire byte stream


Sure we can. We just can't fix things on the fly: we'd need something 
akin to a user prompt and probably a page reload. Which is what I'm 
proposing.



--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Rethinking build defaults

2013-08-16 Thread Adam Roach
I think the key argument against this approach is that system components 
are never truly isolated. Sure, some of them can be compiled out and 
still produce a working system. That doesn't mean that testing without 
those components is going to have good test coverage.


What I'm worried about, if we start disabling various modules, is that 
we're going to have regressions that go unnoticed on developer systems, 
blow up on m-i, and then take a _long_ time to track down. We already 
have m-i closed for about four hours a day as it is, frequently during 
prime working hours for a substantial fraction of Mozilla's 
contributors. Further varying developers' local build environments from 
those of the builders will only make this problem worse.


/a

On 8/16/13 04:32, Mike Hommey wrote:

Hi everyone,

There's been a lot of traction recently about our builds getting slower
and what we could do about it, and what not.

Starting with bug 904979, I would like to change the way we're thinking
about default flags and options. And I am therefore opening a discussion
about it.

The main thing bug 904979 does is to make release engineering builds (as
well as linux distros, palemoon, icecat, you name it) use a special
--enable-release configure flag to use flags that we deem necessary for
a build of Firefox, the product. The flip side is that builds without
this flag, which matches builds from every developer, obviously, would
use flags that make the build faster. For the moment, on Linux systems,
this means disabling identical code folding and dead code removal (which,
while they make the binary size smaller, increase link time), and
forcing the use of the gold linker when it's available but is not system
default. With bug 905646, it will mean enabling -gsplit-dwarf when it's
available, which make link time on linux really very much faster (4s
on my machine instead of 30s). We could and should do the same kind
of things for other platforms, with the goal of making linking
libxul.so/xul.dll/XUL faster, making edit-compile-edit cycles faster.
If that works reliably, for instance, we should for instance use
incremental linking. Please feel free to file Core::Build Config bugs
for what you think would help on your favorite build platform (and if
you do, for better tracking, make them depend on bug 904979).
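
(As a sketch, a local developer mozconfig under this proposal stays 
short: simply omitting --enable-release is what selects the fast-link 
defaults described above. Apart from that flag, the lines below are 
illustrative, and --disable-webrtc stands in for the kind of module 
opt-out discussed further down.)

    # developer build: no --enable-release, so the fast defaults apply
    # (gold where available, no identical code folding, -gsplit-dwarf)
    mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/obj-dev
    ac_add_options --enable-debug
    # hypothetical opt-out of a module you don't work on:
    ac_add_options --disable-webrtc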

That being said, this is not the discussion I want to have here, that
was merely an introduction.

The web has grown in the past few years, and so has our code base, to
support new technologies. As Nathan noted on his blog[1] disabling
webrtc calls for great build time improvements. And I think it's
something we should address by a switch in strategy.

- How many people are working on webrtc code?
- How many people are working on peripheral code that may affect webrtc?
- How many people are building webrtc code they're not working on and
   not using?

I'm fairly certain the answer to the above is that the latter population
is much bigger than the other two, by probably more than an order of
magnitude.

So here's the discussion opener: why not make things like webrtc (I'm
sure we can find many more[2]) opt-in instead of opt-out, for local,
developer builds? What do you think are good candidates for such a
switch?

Mike

1. 
https://blog.mozilla.org/nfroyd/2013/08/15/better-build-times-through-configury/
2. and we can already start with ICU, because it's built and not even
used. And to add insult to injury, it's currently built without
parallelism (and the patch to make it not do so was backed out).
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform



--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


OS X: deprecate Apple clang 4.1?

2013-08-14 Thread Adam Roach
Over the past few weeks, I've had the build completely break three times 
due to issues with Apple clang 4.1, which tells me that we're not doing 
any regular builds with Apple clang 4.1 (cf. Bug 892594, Bug 904108, 
and the fact that the current tip of m-i won't link with Apple clang 4.1).


I'll note that the bugs I mention above are both working around actual 
bugs in clang, not missing features.


Any time I ask in #developers, the answer seems to be that our minimum 
version for Apple clang is still 4.1. I would propose that (unless we're 
adapting some of our infra builders to check that we can at least 
compile and link with 4.1), we formally abandon 4.1 as a supported compiler.


--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Making proposal for API exposure official

2013-06-21 Thread Adam Roach

On 6/21/13 15:45, Andrew Overholt wrote:
I'd appreciate your review feedback.  Thanks. 



I'm having a hard time reconciling these two passages, which seem to be 
in direct contradiction:


1. Note that at this time, we are specifically focusing on /new/ JS
   APIs and not on CSS, WebGL, WebRTC, or other existing
   features/properties

2. This policy is new and we have work to do to clean up previous
   divergences from it.


I expect that the first statement is the correct one, given that the 
goal here is to prevent wholesale breaking of deployed sites. It also 
aligns with what I've heard from parties who have been involved in the 
conversations to date.


Of course, I could just be misreading the second statement, in which 
case I'd think that a clarification might be in order.


/a

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Standalone GTests

2013-05-08 Thread Adam Roach

On 5/8/13 12:10, Gregory Szorc wrote:
I think this is more a question for sheriffs and people closer to 
automation. Generally, you need to be cognizant of timeouts enforced 
by our automation infrastructure and the scorn people will give you 
for running a test that isn't efficient. But if it is efficient and 
passes try, you're generally on semi-solid ground.


The issue with the signaling tests is that there are necessarily a lot 
of timers involved, so they'll always take a long time to run. They're 
pretty close to optimally efficient inasmuch as there's really no 
faster way to do what they're doing. I suspect you mean "runs in a very 
short amount of time" rather than "efficient."


It should be noted that not running the signaling_unittests on checkin 
has bitten us several times, as we'll go for long periods of time with 
regressions that would have been caught if they'd been running (and then 
it's a lot more work to track down where the regression was introduced).


/a
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


RE: Firefox Social API

2013-01-06 Thread Adam Sobieski
Firefox Developers and Community,

There's also a class action lawsuit about Facebook "sponsored stories":

http://epic.org/privacy/vppa/Fraley%20v.%20Facebook%20(appropriation%20case)%20Order%2012-16.pdf


http://docs.fraleyfacebooksettlement.com/docs/notice.pdf

http://www.fraleyfacebooksettlement.com/



Kind regards,

Adam Sobieski 
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


RE: Firefox Social API

2012-11-24 Thread Adam Sobieski
Firefox Developers and Community,

Furthermore, there are concerns that Facebook's controversial privacy policies 
are making major news in the weeks approaching the upcoming and contentious 
2012 International Telecommunication Union World Conference on International 
Telecommunications, a treaty-level conference, and that the related news events 
and discussions could contribute to a certain reactionary mood at the 
conference 
(http://en.wikipedia.org/wiki/International_Telecommunication_Union#World_Conference_on_International_Telecommunications_2012_.28WCIT-12.29).
  Telecommunications ministers from 193 countries will be attending the 
conference from December 3rd to December 14th.  As you might be aware, a large 
number of technologists and organizations have serious concerns about Internet 
regulation and about precedent, including with regard to the International 
Telecommunication Union.

Ralph Giles, thank you for the link to the project.
 
David Dahl, informationally, WebRTC includes and abstracts key NAT and firewall 
traversal technology using STUN (http://tools.ietf.org/html/rfc5389), ICE 
(http://tools.ietf.org/html/rfc5245), TURN (http://tools.ietf.org/html/rfc5766), 
RTP-over-TCP and support for proxies.  The P2PSIP WG at the IETF is exploring 
NAT traversal topics.  With regard to the REsource LOcation And Discovery 
(RELOAD) protocol, NAT traversal is a fundamental service and the 
interoperation of ICE and STUN with RELOAD is described in the RELOAD base 
protocol draft (http://tools.ietf.org/html/draft-ietf-p2psip-base-23).  
Additionally, WebRTC and STUN extensions are described in another IETF draft 
document: 
http://tools.ietf.org/html/draft-reddy-rtcweb-stun-auth-fw-traversal-00 .
 
P2P video calling and conferencing as well as P2P messaging services, or P2P 
answering machines, are upcoming topics and technologies.

Combinations of upcoming P2P architectures, a number of software options for 
video calling and conferencing, advancements in distributed messaging 
technologies, P2P answering machines, the interoperability of browsers with 
components from multiple providers, and numerous other technological topics 
could each contribute to defusing what might otherwise have been a climate of 
reactionary policy, a climate of reaction to instantaneous and ephemeral topics 
specific to one or a few popular websites, a climate of reaction concurrent to 
discussions of durative and substantive matters, matters of tremendous 
importance, matters which could have impacted the nature of the Internet, of 
the Web, and of freedom for generations to come.



Kind regards,

Adam Sobieski 
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Firefox Social API

2012-11-22 Thread Adam Sobieski
Mozilla Firefox Team and Community,
 
Greetings.  I would like to comment on the new Firefox Social API 
(http://blog.mozilla.org/blog/2012/11/20/firefox-introduces-new-social-api-and-previews-integration-with-facebook/).
 
I would like to list a few events that might indicate some concerns that a 
number of scientists and technologists might have about centralized 
socialization, or a socialization industry.

November 2, 2010. The 2010 United States Elections.

November 22, 2010. Tim Berners-Lee in Scientific American indicates that some 
large networking sites are not congruent with the principles of the Web. 
http://www.scientificamerican.com/article.cfm?id=long-live-the-web .

June 14, 2011. Iceland makes use of Facebook for e-democracy. 
http://www.zdnet.com/blog/facebook/iceland-taps-facebook-to-rewrite-its-constitution/1600
 .
 
January 12, 2012. Facebook gives Politico, and possibly others, deep access to 
users' political sentiments. http://mashable.com/2012/01/12/politico-facebook/ 
, 
http://allthingsd.com/20120112/facebook-gives-politico-deep-access-to-users-political-sentiments/
 .
 
November 6, 2012. The 2012 United States Elections.
 
November 22, 2012. Facebook proposes to end voting on privacy issues. 
http://abcnews.go.com/Technology/wireStory/facebook-proposes-end-voting-privacy-issues-17787954
 .

November 22, 2012. Nordic countries express frustration with Facebook. 
"Facebook should stop unsolicited advertising to users in Nordic countries or 
face legal action," the Norwegian consumer agency said on Thursday. 
http://phys.org/news/2012-11-nordic-countries-facebook-ads.html .
 
Those events indicate some of the concerns that a number of scientists and 
technologists might have about large social networking websites, centralized 
socialization, or a socialization industry.

On to technical topics: the Firefox Social API could be made scalable for 
modular components and for P2P solutions. WebRTC is a contemporary technology 
that pertains to video calls, conferences, and potentially video forums, and 
it includes P2P technologies. I would like to describe a scenario with P2P 
distributed storage for hypertext, audio and video calls, and some new 
features, working towards P2P social networking.

Scenario: Person A calls Person B; Person A might know whether Person B is 
online or offline before they commence a communication activity. If Person B is 
online, the data motion is simplified. If Person B is offline, they could have 
an answering machine multimedia clip available on a group of nodes which they 
have designated, e.g. per social network graphs. Person A can watch Person B's 
streaming answering machine clip or skip to leaving a message. If Person A 
leaves a message, that streaming video message is stored on a group of nodes, 
possibly the union of the two groups of nodes designated by both Person A and 
Person B. When Person B comes online, within a system-specific duration of 
time, e.g. 90 days or 1 year, the portions of data are downloaded by them, 
segmented downloading, and possibly with something like a BITS 4.0+ technology. 
If Person B chooses to view any of the streamable media during that initial 
phase, which might not be uncommon, a "log on and check messages" 
pattern, the segmented downloading can toggle to a streaming variety of 
download, including variable bitrate streaming. Even after Person B might watch 
real-time segmented downloads of variable-bitrate streaming multimedia, the 
entirety of their high-bitrate messages could be downloaded and stored by 
Person B unless or until Person B indicated otherwise.

We can envision and develop features for P2P video communication systems, P2P 
hypertext, audio and video systems, multimedia systems, including those 
described. Video calls and video conferencing have been illustrated with 
WebRTC; video forums may be realized soon. Other social media features, 
P2P multimedia social networking features, could be implemented as modular 
components on scalable platforms.  A scalable Social API can facilitate the 
capability for more developers to create Firefox-integrated solutions, 
Web-based solutions, including with decentralized and distributed P2P social 
networking solutions.



Kind regards,

Adam Sobieski 
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform