Re: mach logspam

2020-08-03 Thread Andrew Sutherland

On 8/3/20 6:45 PM, Eric Rahm wrote:

*What's next?*
If folks want to improve the tool I'm happy to land patches in my 
repo, or if someone wants to support it officially in the mozilla repo 
I'm happy to transfer ownership.


This is a most excellent tool!

I wonder how hard it would be to get it in-tree and create a taskcluster 
job that produces JSON artifacts searchfox could use to badge source 
lines/warning invocations with the number of times the warning was seen, 
possibly reporting incidence counts as low as 1, since even a single 
violation of a weak invariant is potentially notable and useful.  
Interacting with the badge[1] could use the tool's "most_common" 
mechanism to list the tests 
that trigger the problem the most and provide direct log links so people 
can directly investigate the context.  (Most useful would be if we could 
then request/lookup a pernosco trace of the given test, but that would 
presumably be more involved since pernosco's self-serve API doesn't 
currently think of these warnings as failures.)
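The badging flow described above could be sketched roughly like this: scan log lines for warning sites, count them, and emit a JSON artifact ordered by frequency (the "most_common" mechanism). The regex and the output shape here are illustrative assumptions, not the actual logspam tool's format.

```python
import json
import re
from collections import Counter

# Hypothetical warning-line shape; the real tool's parsing differs.
WARNING_RE = re.compile(r"WARNING: (?P<msg>.+?): file (?P<file>\S+):(?P<line>\d+)")

def summarize(log_lines):
    """Aggregate warning lines into per-site counts, most frequent first."""
    counts = Counter()
    for line in log_lines:
        m = WARNING_RE.search(line)
        if m:
            counts[(m.group("file"), int(m.group("line")), m.group("msg"))] += 1
    # One record per warning site, ordered by Counter.most_common().
    return [
        {"file": f, "line": n, "message": msg, "count": c}
        for (f, n, msg), c in counts.most_common()
    ]

log = [
    "WARNING: NS_ENSURE_TRUE(aFoo) failed: file dom/base/Thing.cpp:42",
    "WARNING: NS_ENSURE_TRUE(aFoo) failed: file dom/base/Thing.cpp:42",
    "WARNING: unexpected state: file dom/base/Other.cpp:7",
]
print(json.dumps(summarize(log), indent=2))
```

A searchfox consumer could then badge `dom/base/Thing.cpp:42` with its count and link back to the logs.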


Andrew

1: UX would be TBD and likely subject to iteration.

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Searchfox now gives info about tests at the top of test files!

2020-07-29 Thread Andrew Sutherland

On 7/29/20 6:28 AM, James Graham wrote:
As an aside/reminder for people, for web-platform-tests in particular 
the dashboard at https://jgraham.github.io/wptdash/ can give you 
information about all tests in a component and is designed to answer 
questions like "which tests are failing in Firefox but passing in both 
Chrome and Safari", since knowing that can help prioritise issues that 
are likely web-compat hazards.


Apologies for not mentioning this invaluable resource in the email!  
I've filed https://bugzilla.mozilla.org/show_bug.cgi?id=1656010 on 
exposing wptdash from searchfox in a contextually useful way.


And, as less of an aside, if there's more information people want, we 
can definitely add it to the summary task. The current featureset was 
based on the requirements of wptdash, but if there's more things that 
would be useful we should go ahead and add them.


Yes, to be clear, this initial landing is basically an MVP (Minimal 
Viable Product) for information about tests.  It would be fantastic if 
we could add more directly useful information and links out to other 
existing (or new) tools.  I would be very happy to mentor anyone through 
the searchfox aspects of any enhancements here, but am not an expert in 
the underlying production of the taskcluster artifacts.


Aside: For those reading this and concerned about more information being 
added which directly displaces the contents of the source file, please 
be aware:


- The searchfox contributors are aware of these concerns and concerned 
about them too.  There are delicate trade-offs to be made and none of 
the existing searchfox contributors are UX experts!  We appreciate input 
and especially specific proposals for how to improve things, especially 
if a mockup can be made by permuting the existing searchfox UI with 
devtools and a screenshot taken!
  - As an example of the power of teamwork and iteration, the current 
cool-looking position:sticky styling you see on searchfox is thanks to 
:heycam iterating on my initial styling with the help of :kats!
  - As another example of the power of teamwork and iteration, a 
related effort stemming from changes in the HTML hierarchy of the 
searchfox code listing was improving the accessibility tree produced by 
the searchfox source listing thanks to :Jamie!
- https://bugzilla.mozilla.org/show_bug.cgi?id=1655952 tracks current 
discussion related to this (meta-)issue, but there's also the #searchfox 
chat.mozilla.org channel



Andrew



Searchfox now gives info about tests at the top of test files!

2020-07-28 Thread Andrew Sutherland
Have you ever been looking at a test file in Searchfox and wondered how 
the test could possibly work?  Then you do some more investigation and 
it turns out that the test, in fact, does not work and is disabled?  
Perhaps you even melodramatically threw your hands up in the air in 
frustration?


Well, prepare to throw your hands up in excitement, because 
https://bugzilla.mozilla.org/show_bug.cgi?id=1653986 has landed, 
bringing with it the gift of info boxes at the top of every test page!


Now you can know:

- How long your tests take to run (on average)!
- How many times your tests were run in the preceding 7 days[1]!
- How many times your tests were skipped in the preceding 7 days!
- What skip-patterns govern the skips from your test's 
mochitest.ini-style files!
- What WPT disabled patterns govern the skips from your WPT test's meta 
files!

- How many WPT subtests defy expectations!
- The wpt.fyi URL of the test you're looking at... by clicking on the 
"Web Platform Tests Dashboard" link in the "Navigation" panel on the 
right side of the searchfox page!


This information is brought to you by:

- The "source-test-file-metadata-test-info-all" taskcluster job defined 
at 
https://searchfox.org/mozilla-central/source/taskcluster/ci/source-test/file-metadata.yml 
that provides statistics on runs, skipped runs, and (non-WPT) skip 
conditions.
- The "source-test-wpt-metadata-summary" taskcluster job defined at 
https://searchfox.org/mozilla-central/source/taskcluster/ci/source-test/wpt-metadata.yml 
that derives its data from `mach wpt-metadata-summary`.  (Thanks :jgraham!)
- The people who make taskcluster and the taskcluster jobs and the 
testing infrastructure and the test data pipeline go! Having been around 
for the tail end of the days of the tinderbox waterfall, it is so 
incredibly fantastic that adding this data to searchfox is just a matter 
of expressing a few task dependencies, adding some calls to curl using a 
normalized URL structure that is exposed by taskcluster/treeherder's 
introspection capabilities, some JSON plumbing, and some HTML formatting!
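For the curious, the "normalized URL structure" amounts to the taskcluster index letting you fetch the latest artifact of a named task without knowing its task id. A minimal sketch; the namespace and artifact path below are illustrative guesses, not verified names:

```python
import json
from urllib.request import urlopen

# Taskcluster index root for Firefox CI; artifact lookups hang off of it.
INDEX_ROOT = "https://firefox-ci-tc.services.mozilla.com/api/index/v1/task"

def latest_artifact_url(namespace, artifact):
    """Build the index URL for the latest artifact of an indexed task.

    e.g. namespace="gecko.v2.mozilla-central.latest.source.test-info-all"
         artifact="public/test-info-all-tests.json"
    (both example names are assumptions for illustration)
    """
    return f"{INDEX_ROOT}/{namespace}/artifacts/{artifact}"

def fetch_json(namespace, artifact):
    """Fetch and parse a JSON artifact (network access required)."""
    with urlopen(latest_artifact_url(namespace, artifact)) as resp:
        return json.load(resp)

print(latest_artifact_url(
    "gecko.v2.mozilla-central.latest.source.test-info-all",
    "public/test-info-all-tests.json"))
```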


Do you think these info boxes are good, but not yet worth putting your 
hands up for?  But you'd like to put your hand up to volunteer to help 
improve the info boxes so that you can put both your hands up in 
excitement?  Then put your hands up in excitement at the prospect of 
putting your hand up to volunteer because that is a thing you can do!  
Because:


- Anyone can contribute to Searchfox!
- Bugs and enhancements are tracked at bugzilla.mozilla.org under 
Webtools::Searchfox.  You can see existing open bugs at 
https://bugzilla.mozilla.org/buglist.cgi?product=Webtools&component=Searchfox&bug_status=__open__ 
and file new bugs at 
https://bugzilla.mozilla.org/enter_bug.cgi?product=Webtools&component=Searchfox
- The source can be found at https://github.com/mozsearch/mozsearch and 
the production configurations at 
https://github.com/mozsearch/mozsearch-mozilla.
- You can spin up a Searchfox VM easily[2] with Vagrant and then build 
the test repo quickly with a single `make build-test-repo` that indexes 
the repo and starts the web server in as little as 3.4 seconds!
- There's a cool "Searchfox" chatroom on https://chat.mozilla.org/ where 
you can discuss potential enhancements to searchfox, how you use 
searchfox, how you'd like to use searchfox, and where the point of 
diminishing returns was probably located on the hands up shtick.



Andrew

1: Searchfox indexing jobs run once a day for branches that are expected 
to change and only on demand for historical branches. Infrequently, the 
indexing jobs fail, which could put Searchfox a day or two behind the 
current state of things.  More info at 
https://github.com/mozsearch/mozsearch-mozilla#how-searchfoxorg-stays-up-to-date
2: Fingers crossed, but we tried really hard to sand down all the rough 
edges.  Computers are still involved though so YMMV, but the members of 
the "Searchfox" chatroom are happy to help you out if you run into problems!




Re: New and improved stack-fixing

2020-03-06 Thread Andrew Sutherland

On 3/6/20 2:24 AM, Gabriele Svelto wrote:

Now for the long answer: we're leveraging the Sentry crates to replace
our current crash reporting machinery. Not only are they faster and
better than what we have now, but their functionality is far richer. So
it will be possible - and possibly even desirable - to fold all of this
functionality into a single tool that knows how to do both stack-walking
and symbolication.


Thank you both very much for the clarifications and your excellent work 
here!


In terms of the Sentry crates, I presume that means the crates in 
https://github.com/getsentry/symbolic repo?  Are there still reasons to 
use/pay attention to Ted's https://github.com/luser/rust-minidump repo?  
For example, I use slightly modified versions of 
`get-minidump-instructions` and `minidump_dump` from the latter, but I 
want to make sure I'm hitching my metaphorical tooling wagon to the 
right metaphorical tooling horse.


Thanks again!
Andrew



Re: New and improved stack-fixing

2020-03-05 Thread Andrew Sutherland
Does this eliminate the need documented at 
https://developer.mozilla.org/en-US/docs/Mozilla/Projects/Mochitest#stacks 
to acquire a `minidump_stackwalk` binary and then expose it via 
MINIDUMP_STACKWALK environment variable to get symbolicated backtraces 
when local test runs crash?  Or is that part of the future work to "use 
`fix-stacks` on test outputs"?


Thanks!
Andrew

On 3/5/20 5:52 PM, Nicholas Nethercote wrote:

Hi all,

I have written a new stack-fixing tool, called `fix-stacks`, which 
symbolizes stack traces produced by Firefox.


`fix-stacks` is intended to replace our existing stack-fixing scripts, 
`fix_{linux,macosx}_stack.py` and `fix_stack_using_bpsyms.py`, which 
are used (a) on many test outputs, both locally and on automation, and 
(b) by DMD, Firefox's built-in heap profiler.


`fix-stacks` is now installed by `mach bootstrap` on Windows, Mac, and 
Linux. It is usable from Python code via the `fix_stacks.py` wrapper 
script. Its code is at https://github.com/mozilla/fix-stacks/.


In bug 1604095 I replaced the use of `fix_{linux,macosx}_stack.py` 
with `fix_stacks.py` for DMD, with the following benefits.


* On Linux, stack-fixing of DMD output files (which occurs when you 
run `dmd.py`) is roughly 100x faster. On my Linux box I saw reductions 
from 20-something minutes to ~13 seconds.


* On Mac, stack-fixing of DMD output files is roughly 10x faster. On 
my Mac laptop I saw reductions from ~7 minutes to ~45 seconds.


* On Windows, stack-fixing of DMD output files now occurs. (It 
previously did not because there is no `fix_window_stacks.py` script.) 
This means that DMD is now realistically usable on Windows without 
jumping through hoops to use breakpad symbols.


There is more work to be done. Soon, I plan to:

* use `fix-stacks` on test outputs (in `utils.py` and 
`automation.py.in`);


* re-enable stack fixing on Mac test runs on local builds, which is 
currently disabled because it is so slow;


* add breakpad symbol support to `fix-stacks`;

* remove the old scripts.

The tree of relevant bugs can be seen at
https://bugzilla.mozilla.org/showdependencytree.cgi?id=1596292&hide_resolved=1.

The stack traces produced by `fix-stacks` are sometimes different to 
those produced by the old stack-fixers. In my experience these 
differences are minor and you won't notice them if you aren't looking 
for them. But let me know if you have any problems.


Nick

___
firefox-dev mailing list
firefox-...@mozilla.org
https://mail.mozilla.org/listinfo/firefox-dev



Re: Searchfox at All-Hands: Who's interested?

2020-01-29 Thread Andrew Sutherland
Following up, there are now 2 identical Searchfox sessions scheduled 
hosted by some permutation of myself and Emilio and any other Searchfox 
experts who show up!  If you are interested, hopefully you can attend 
part of one of the sessions.  It's okay to show up late, the goal is to 
help people get more involved in Searchfox, not lecture you about it, so 
you won't be missing much if you're late[1].  You also don't have to 
stay for the whole time!


- Thursday: 4pm-6pm: 
https://berlinallhandsjanuary2020.sched.com/event/ZWHz/searchfox-session-1-learncontributediscuss
- Friday: 10:30am-12pm: 
https://berlinallhandsjanuary2020.sched.com/event/ZDRs/searchfox-session-2-learncontributediscuss


All subsequent coordination will be happening in #searchfox-berlin-2020 
on https://chat.mozilla.org/  which I think is 
#searchfox-berlin-2020:mozilla.org canonically.  (Note that previously I 
had mentioned an identically named channel on Slack.  We'll simulcast 
any important announcements there too, but I'd suggest bailing on the 
Slack room in favor of the Matrix room.)


Andrew

1: I will prepare a small slide deck to provide a quick overview of 
things which I'll make available by Thursday morning which should help 
you come up to speed pre-emptively or just-in-time if you miss the 
not-a-lecture at the beginning of the session.



On 1/9/20 1:43 PM, Andrew Sutherland wrote:
Are people interested in a session(s) at the All-Hands on Searchfox? 
If you're interested in any of the following things, please email me 
here or at as...@mozilla.com or let me know via other channels, and 
let me know which of the following you'd be interested in.  My goal is 
to get a rough count so I can try and book a room if there's interest.



1. Contributing to Searchfox.  Want to improve something about 
Searchfox?  You can!


We can help you get set up with a local VM and credentials to try your 
changes on mozilla-central in the cloud without your laptop melting 
down!  Already tried contributing and tried to melt your own laptop 
down out of frustration with setting up VirtualBox?  We can help with 
that too!  (Also, you can now use libvirt and save yourself a bundle 
in new laptops!)


Already have the VM setup and appreciate the extensive Searchfox 
documentation at https://github.com/mozsearch/mozsearch/ and 
https://github.com/mozsearch/mozsearch-mozilla/ but want some guidance 
on how to implement the thing you want to do?  We can help with that 
double too!



2. Talking Searchfox UX, especially as it relates to upcoming 
features/possibilities on the "fancy" branch.


I've been doing some hacking to support a more structured 
representation of data to support creating diagrams[1] for both 
documentation purposes and to make code exploration and understanding 
easier.


This potentially opens up a bunch of new features like 
https://clicky.visophyte.org/files/screenshots/20190820-174954.png 
demonstrates, providing both the type of a thing you've clicked on, 
plus being able to see its documentation or uses without having to 
click through.  But the more features you try and cram into something, 
the more potential for them to get in the way of what the user 
actually wanted to do.  For example, the helpful popup also probably 
hides the code you were trying to look at. Should the information be 
in a big box at the bottom of the screen like cs.chromium.org?  The 
top?  Configurable?


Also, for the diagrams, there's the question of how to make them as 
accessible as possible.  My current 
approach[2] attempts to leverage the inherent hierarchy into a ul/li 
tree-structure that directly mirrors the clustering used in the 
graphviz diagram, with in and out edges indicated at each node.  
Planned work includes figuring out how to best get NVDA to make those 
edges traversable so that the traversal is possible with more than 
manually using ctrl-f.



3. Talking Searchfox data exposure for your own tools, especially as 
it relates to the new data available on the "fancy" branch.


Do you have a tool that uses Searchfox and wish its result format 
wasn't clearly just a data structure pre-baked for presentation 
purposes that the receiving JS performs a light HTML-ization on?



Andrew


1: Here are some examples of diagrams created during prototyping:

- Manual creation by clicking on calls/called-by edges in iterative 
search results exploration: 
https://clicky.visophyte.org/files/screenshots/20190503-133733.png
- Automatic diagram from heuristics based on local same-file control 
flow: https://clicky.visophyte.org/files/screenshots/20190821-165907.png
- Blockly based diagramming without rank overrides or colors applied: 
https://clicky.visophyte.org/files/screenshots/20191231-214320.png


2: 
https://github.com/asutherland/mozsearch/blob/00a60f899936559ed4d158999278660eb5c98df5/ui/src/grokysis/frontend/diagramming/class_diagram.js#L480



Re: Visibility of disabled tests

2020-01-11 Thread Andrew Sutherland

On 1/8/20 12:50 PM, Geoffrey Brown wrote:

Instead of changing the reviewers, how about:
 - we remind the sheriffs to needinfo
 - #intermittent-reviewers check that needinfo is in place when 
reviewing disabling patches.


To try and help address the visibility issue, we could also make 
searchfox consume the test-info-disabled-by-os taskcluster task[1] and:


- put banners at the top of test files that say "Hey!  This is 
(sometimes) disabled on [android/linux/mac/windows]"


- put collapsible sections at the top of the directory listings that 
explicitly call out the disabled tests in that directory. The idea would 
be to avoid people needing to scroll through the potentially long list 
of files to know which are disabled and provide some ambient awareness 
of disabled tests.
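The directory-listing idea could be sketched as follows, assuming a hypothetical {path: [skip conditions]} mapping rather than the actual test-info-disabled-by-os artifact format:

```python
import os
from collections import defaultdict

def disabled_by_directory(disabled):
    """Group disabled tests by containing directory so a listing page can
    call them out without making people scroll the whole file list."""
    by_dir = defaultdict(list)
    for path, conditions in disabled.items():
        by_dir[os.path.dirname(path)].append((os.path.basename(path), conditions))
    return dict(by_dir)

# Hypothetical input shape for illustration only.
disabled = {
    "dom/tests/test_a.html": ["os == 'android'"],
    "dom/tests/test_b.html": ["os == 'win' && debug"],
    "layout/tests/test_c.html": ["os == 'linux'"],
}
print(disabled_by_directory(disabled))
```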


If there's a way to get a similarly pre-built[2] mapping that would 
provide information about the orange factor of tests[3] or that it's 
been marked as needswork, that could also potentially be surfaced.


Andrew

1: 
https://searchfox.org/mozilla-central/rev/be7d1f2d52dd9474ca2df145190a817614c924e4/taskcluster/ci/source-test/file-metadata.yml#62


2: Emphasis on pre-built in the sense that searchfox's processing 
pipeline really doesn't want to be issuing a bunch of dynamic REST 
queries that would add to its processing latency.  It would want a 
taskcluster job that runs as part of the nightly build process so it can 
fetch a JSON blob at full network speed.


3: I guess test-info-all at 
https://searchfox.org/mozilla-central/rev/be7d1f2d52dd9474ca2df145190a817614c924e4/taskcluster/ci/source-test/file-metadata.yml#91 
does provide the "failed runs" count and "total runs" which could 
provide the orange factor?  The "total run time, seconds" scaled by the 
"total runs" would definitely be interesting to surface in searchfox.




Searchfox at All-Hands: Who's interested?

2020-01-11 Thread Andrew Sutherland
Are people interested in a session(s) at the All-Hands on Searchfox?  If 
you're interested in any of the following things, please email me here 
or at as...@mozilla.com or let me know via other channels, and let me 
know which of the following you'd be interested in.  My goal is to get a 
rough count so I can try and book a room if there's interest.



1. Contributing to Searchfox.  Want to improve something about 
Searchfox?  You can!


We can help you get set up with a local VM and credentials to try your 
changes on mozilla-central in the cloud without your laptop melting 
down!  Already tried contributing and tried to melt your own laptop down 
out of frustration with setting up VirtualBox?  We can help with that 
too!  (Also, you can now use libvirt and save yourself a bundle in new 
laptops!)


Already have the VM setup and appreciate the extensive Searchfox 
documentation at https://github.com/mozsearch/mozsearch/ and 
https://github.com/mozsearch/mozsearch-mozilla/ but want some guidance 
on how to implement the thing you want to do?  We can help with that 
double too!



2. Talking Searchfox UX, especially as it relates to upcoming 
features/possibilities on the "fancy" branch.


I've been doing some hacking to support a more structured representation 
of data to support creating diagrams[1] for both documentation purposes 
and to make code exploration and understanding easier.


This potentially opens up a bunch of new features like 
https://clicky.visophyte.org/files/screenshots/20190820-174954.png 
demonstrates, providing both the type of a thing you've clicked on, plus 
being able to see its documentation or uses without having to click 
through.  But the more features you try and cram into something, the 
more potential for them to get in the way of what the user actually 
wanted to do.  For example, the helpful popup also probably hides the 
code you were trying to look at. Should the information be in a big box 
at the bottom of the screen like cs.chromium.org?  The top?  Configurable?


Also, for the diagrams, there's the question of how to make them as 
accessible as possible.  My current 
approach[2] attempts to leverage the inherent hierarchy into a ul/li 
tree-structure that directly mirrors the clustering used in the graphviz 
diagram, with in and out edges indicated at each node.  Planned work 
includes figuring out how to best get NVDA to make those edges 
traversable so that the traversal is possible with more than manually 
using ctrl-f.



3. Talking Searchfox data exposure for your own tools, especially as it 
relates to the new data available on the "fancy" branch.


Do you have a tool that uses Searchfox and wish its result format wasn't 
clearly just a data structure pre-baked for presentation purposes that 
the receiving JS performs a light HTML-ization on?



Andrew


1: Here are some examples of diagrams created during prototyping:

- Manual creation by clicking on calls/called-by edges in iterative 
search results exploration: 
https://clicky.visophyte.org/files/screenshots/20190503-133733.png
- Automatic diagram from heuristics based on local same-file control 
flow: https://clicky.visophyte.org/files/screenshots/20190821-165907.png
- Blockly based diagramming without rank overrides or colors applied: 
https://clicky.visophyte.org/files/screenshots/20191231-214320.png


2: 
https://github.com/asutherland/mozsearch/blob/00a60f899936559ed4d158999278660eb5c98df5/ui/src/grokysis/frontend/diagramming/class_diagram.js#L480




Re: Upcoming changes to hg.mozilla.org access

2019-11-01 Thread Andrew Sutherland

On 11/1/19 4:39 PM, Kim Moir wrote:

On Nov 14, 2019, we intend to change the permissions associated with Level
3 access to revoke direct push access to hg.mozilla.org on mozilla-inbound,
mozilla-central, mozilla-beta, mozilla-release and esr repos.


For mozilla-beta, mozilla-release, and esr... does lando know how to 
land to these, or is it the case that landings on these branches are 
done based on the approval flags by people other than the patch author?


I ask because if I create a branch based on the hg unified repo 
"release" tag and then use `moz-phab` to create a review, I assume what 
happens if I try and land with "lando" is that it will try and land the 
commit against mozilla-central and it may succeed if the file hasn't 
changed too much in central versus where it was on the release branch.


Andrew



Re: Coding style: Naming parameters in lambda expressions

2019-09-06 Thread Andrew Sutherland

On 9/6/19 7:31 AM, David Teller wrote:

For what it's worth, I recently spent half a day attempting to solve a
bug which would have been trivial if `a` and `m` prefixes had been
present in that part of the code.

While I find these notations ugly, they're also useful.



Is this something searchfox could have helped with by annotating the 
symbol names via background-color, iconic badge, or other means?  Simon 
and I have been discussing an optional emacs glasses-mode style of 
operation which so far would allow for:


- expansion of "auto" to the actual underlying inferred type. "auto" 
would still be shown, and the expanded type would be shown in a way that 
indicates it's synthetic like being placed in parentheses and rendered 
in italics.


- inlining of constants.


Searchfox does already highlight all instances of a symbol when it's 
hovered over, or optionally made sticky from the menu (thanks, :kats!), 
but more could certainly be done here.  The question is frequently how 
to provide the extra information without making the interface too busy.


But of course, if this was all being done from inside an editor or a 
debugger, no matter what tricks searchfox can do, they can't help you 
elsewhere.



Andrew



Re: Help: why won't Thunderbird display mail in mochitest conditions?

2019-08-27 Thread Andrew Sutherland
Can you elaborate on the execution context?  Are these more like 
"browser" mochitests or "plain" mochitests? 
(https://developer.mozilla.org/en-US/docs/Mozilla/Projects/Mochitest#Running_flavors_and_subsuites) 
Is the Thunderbird tabbed UI in its own normal XUL window, or is it 
framed inside some type of iframe that it's not normally framed by?


Andrew

On 8/26/19 6:40 PM, Geoff Lankow wrote:

Hi

Over the past year or so, I've been adding mochitests for new 
Thunderbird features. It's recently occurred to me that in a 
mochitest, Thunderbird does not display mail messages. Not even the 
message header list, just a blank rectangle where the message should be.


Obviously this is quite important as displaying messages is 
Thunderbird's primary function. But I don't understand the reason.


I expect that it has something to do with message URLs, which are of 
the form mailbox:///path/to/mailbox?number=1234.


I know that mochitest does things to network access to prevent tests 
from accessing the internet, but that doesn't seem to be the reason as 
I can load the URL using fetch.


Is there some logic in docshell that behaves differently in a test? As 
far as I can work out, this code [1] is a part of the loading process, 
and if docShell->LoadURI fails that would explain why nothing further 
happens. (IANA core hacker, excuse my ignorance!)


GL

[1] 
https://searchfox.org/comm-central/rev/753e6d9df9d7b9a989d735b01c8b280eef388bab/mailnews/local/src/nsMailboxService.cpp#205



Intent to ship: navigator.storage on Firefox for Android (Fennec)

2018-06-20 Thread Andrew Sutherland
As of Firefox 63 we're hoping to turn navigator.storage 
(StorageManager[1]) on by default for Firefox for Android (Fennec).  It 
is controlled by pref "dom.storageManager.enabled" and provides both the 
storage.estimate() and storage.persist() APIs[2].


We shipped navigator.storage for Desktop in Firefox 57 in 
https://bugzilla.mozilla.org/show_bug.cgi?id=1399038.  We did not enable 
it for Android, probably over concerns about filling up mobile storage 
because navigator.storage.persist() exempts an origin from QuotaManager 
eviction and there is no UI on Fennec to show how much data various 
origins are using and clear data on a per-origin basis.  (The 
permission, however, can be revoked when viewing a page on the origin.)


The previous Intent to Ship for storage.estimate() can be found at 
https://groups.google.com/d/msg/mozilla.dev.platform/-stfRvwwEHU/J5P1mzlHAwAJ


In https://bugzilla.mozilla.org/show_bug.cgi?id=1457973 we plan to 
enable the pref for Android, potentially with mitigation.  The minimal 
mitigating actions I've proposed are:


- Have navigator.storage.persist() resolve with false without prompting 
the user, similar to how an internal error or user disapproval would be 
treated.  We would enable actual prompting and the actual permission 
once there is UI in place to help users manage origin storage.


- Potentially use a lower per-group cap than the 2 GiB currently used, 
at least as reported by storage.estimate(). 
https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API/Browser_storage_limits_and_eviction_criteria#Storage_limits 
covers our existing logic which was a reasonable first-pass at capping 
usage, but is not great as an advisory value for sites that might 
proactively engage in speculative precaching.  There are almost 
certainly smarter ways to handle this if anyone has time to think about 
this and/or gather data!
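For illustration, one way to compute a lower advisory cap, loosely following the documented heuristic that the global limit is derived from free disk space and the per-group limit is a clamped fraction of that. The constants here are assumptions for the sketch, not the actual QuotaManager values:

```python
GiB = 1024 ** 3
MiB = 1024 ** 2

def group_limit(free_disk_bytes):
    """Assumed heuristic: global limit ~50% of free disk, per-group limit
    ~20% of the global limit, clamped to [10 MiB, 2 GiB]."""
    global_limit = free_disk_bytes // 2
    return max(10 * MiB, min(2 * GiB, global_limit // 5))

# A phone with 8 GiB free would report a smaller advisory cap than the
# 2 GiB ceiling a roomy desktop disk hits.
print(group_limit(8 * GiB) / GiB)
```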


Your thoughts and other low-effort alternatives are appreciated!

Andrew

1: https://developer.mozilla.org/en-US/docs/Web/API/StorageManager
2: https://developer.mozilla.org/en-US/docs/Web/API/StorageManager/persist
https://developer.mozilla.org/en-US/docs/Web/API/StorageManager/estimate



Re: Intent to unship: "storage" attribute in options for indexedDB.open()

2018-03-06 Thread Andrew Sutherland
QuotaManager already has a notion of "internal" origins[1] that 
explicitly white-lists devtools' synthetic "indexeddb" fake scheme.  
It's this check that avoids the prompt when persistent storage is requested.


It might make sense to expand the existing logic that acts like all 
system-principaled origins are asking for persistent storage[2] to use 
the check as well, or at least for the IndexedDB protocol until we get 
to the next steps for changing how QuotaManager handles persistent 
data.  We could do this in the same commit that removes devtools' use of 
the storage property.


Andrew

1: 
https://searchfox.org/mozilla-central/rev/bffd3e0225b65943364be721881470590b9377c1/dom/quota/ActorsParent.cpp#5524


2: 
https://searchfox.org/mozilla-central/rev/bffd3e0225b65943364be721881470590b9377c1/dom/indexedDB/IDBFactory.cpp#681



On 03/06/2018 01:18 PM, J. Ryan Stinnett wrote:

DevTools is one chrome caller that might be impacted. We craft a custom
principal and pass `storage: persistent`[1] when using IndexedDB in the
tools.

DevTools uses this storage for developer settings that should be retained
over time. It sounds like with the proposed change here, DevTools storage
might lose the persistent status. If that's true, what's the right approach
to maintain the status quo? I don't think it makes sense to show a
permission prompt when DevTools access its storage, since it is a browser
feature.

Should we add an exception for the DevTools iDB principal? Should DevTools
use the system principal and migrate existing data?

[1]:
https://searchfox.org/mozilla-central/rev/bffd3e0225b65943364be721881470590b9377c1/devtools/shared/indexed-db.js#34

- Ryan

On Tue, Mar 6, 2018 at 9:58 AM, Johann Hofmann  wrote:


I would like to unship the proprietary "storage" attribute in
indexedDB.open()[0]. It allows developers to prevent their indexedDB
storage from being evicted as part of quota management[1]. However,
there is a web standard which specifies a better persistent storage
mechanism and has broader vendor support[2].

There are several issues with the old proprietary version:

- It's only supported by Firefox.
- It can be used over insecure HTTP, which is against the persistent
storage spec.
- Its internal mechanism is only concerned with indexedDB and does not
integrate with other quota managed storage.
- We actually need to maintain two separate permissions that do more or
less the same thing ("indexedDB" and "persistent-storage"). The UI for
managing these looks almost exactly the same and it's impossible to
clarify the difference. It's a pretty annoying security/privacy UX issue
and difficult to justify to users.

The plan is to ignore the "storage" attribute and label all databases
opened from iDB.open as default (i.e. dependent on the persistent
storage mechanism).

Before doing this, we will issue a deprecation warning in the browser
console and write a blog post on Mozilla Hacks. Affected websites could
lose their indexedDB data (equivalent to the user clearing their
cookies), unless they migrate to the new storage model.
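As a sketch of what site migration code could feature-detect (the helper name here is mine, not part of any Mozilla or web API): prefer the standard navigator.storage.persist() mechanism, and treat everything else as best-effort storage subject to eviction.

```javascript
// Hypothetical helper: decide which persistence mechanism a page should
// use, preferring the standard StorageManager API over the deprecated
// Firefox-only `storage: "persistent"` option to indexedDB.open().
function choosePersistence(navigatorLike) {
  if (navigatorLike &&
      navigatorLike.storage &&
      typeof navigatorLike.storage.persist === 'function') {
    return 'persistent-storage-api';  // https://storage.spec.whatwg.org/
  }
  return 'best-effort';               // data remains subject to eviction
}
```

A site would call choosePersistence(navigator) once at startup and request persistence via the standard API when available, instead of relying on the proprietary attribute.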

We are tracking this work in bug 1354500
(https://bugzilla.mozilla.org/show_bug.cgi?id=1354500).

We have seen very low usage on beta[3], with the exception of a spike in
November which we attribute to about:home usage from a/b testing. After
we stopped counting usage from about: pages on Nightly, the telemetry
probe stopped signaling completely[4].

I personally consider these numbers (prior to any evangelism or console
warnings) low enough to unship within the Firefox 62 timeframe, without
migration.

Chrome callers should not be affected by this, since we upgrade the
system principal to persistent storage automatically.

Please let me know what you think.

Thanks,

Johann

[0] https://developer.mozilla.org/en-US/docs/Web/API/IDBFactory/
open#Syntax
[1]
https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_
API/Browser_storage_limits_and_eviction_criteria
[2] https://storage.spec.whatwg.org/
[3] https://mzl.la/2GVyQ7g
[4] https://mzl.la/2FqW0FH
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform






Change to MOZ_LOG output formatting coming: Thread identifiers will include the PID (process id)

2018-01-10 Thread Andrew Sutherland
Right now MOZ_LOG's default output does not include process identifiers, 
so messages from different processes look misleadingly similar:


[Main Thread]: D/nsBlah Message Message
[Main Thread]: D/nsBlah Message Message

They will soon instead look like:

[4372:Main Thread]: D/nsBlah Message Message
[4452:Main Thread]: D/nsBlah Message Message

If you mainly interact with things through mach, you may be thinking, 
"Hey!  What about those PIDs things get tagged with?":


GECKO(3972) | [Main Thread]: D/nsBlah Message Message
GECKO(3972) | [Main Thread]: D/nsBlah Message Message

AFAICT, mach and/or its helpers are doing a best-effort tagging of the 
stdout/stderr of the spawned parent process.  Since the content 
processes inherit the descriptors of the parent, mach/friends are unable 
to magically figure out the true writer without additional machinery to 
give each content process its own descriptors that are pushed out to 
mach via sendmsg or something else fancy.  So everything gets labeled 
with the parent PID.  It would be cool to have a mechanism so that the 
structured logging output could be properly labeled, but that arguably 
is a different bug.


The change to add the PID is tracked on 
https://bugzilla.mozilla.org/show_bug.cgi?id=1428979 so if you have 
thoughts/proposals/patches, please chime in there.  This change will 
likely land sometime late today or tomorrow otherwise.  This may break 
log parsers that use very specific regexps, although the location was 
chosen to leverage the enclosing "[]" and the one I know about is fine[1].
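For log-parser authors, one way to cope with both line shapes is to make the PID an optional capture inside the brackets. This is a hypothetical parser, not logan's actual rules:

```javascript
// Accepts both the old "[Main Thread]: D/nsBlah msg" and the new
// "[4372:Main Thread]: D/nsBlah msg" MOZ_LOG line shapes.  Because the
// PID lands inside the existing "[]", only the bracket interior needs
// an optional "pid:" prefix.
const LOG_LINE = /^\[(?:(\d+):)?([^\]]+)\]: ([A-Z])\/(\S+) (.*)$/;

function parseLogLine(line) {
  const m = LOG_LINE.exec(line);
  if (!m) {
    return null;
  }
  return {
    pid: m[1] ? Number(m[1]) : null,  // null for pre-change logs
    thread: m[2],
    level: m[3],
    module: m[4],
    message: m[5],
  };
}
```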


If you would like to learn more about Gecko logging in general, please 
see 
https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Gecko_Logging


If you're wondering why this didn't seem to be a problem for Necko's 
cool about:networking logging as documented at 
https://developer.mozilla.org/en-US/docs/Mozilla/Debugging/HTTP_logging 
it's because MOZ_LOG did already have a mechanism for partitioning log 
files by labeled process, see 
https://searchfox.org/mozilla-central/rev/03877052c151a8f062eea177f684a2743cd7b1d5/xpcom/base/Logging.cpp#136


Andrew

1: https://github.com/mayhemer/logan/blob/master/logan-rules.js#L5




Changing remote type of xpcshell Content processes from "" to "web"

2017-09-01 Thread Andrew Sutherland
Right now if you have an e10s xpcshell test, the remote type of the 
resulting child (content) process will be NO_DEFAULT_TYPE="". This is 
not a real type, this is not a type you should ever see in Firefox.  In 
https://bugzilla.mozilla.org/show_bug.cgi?id=1395827 I propose changing 
the type to DEFAULT_REMOTE_TYPE="web".  It's not a sure thing, but I 
figure if I make it sound likely to happen and you have an opinion on 
this, you're more likely to speak up. Please chime in on the bug.  Thanks!


Andrew



Re: Shipping Headless Firefox on Linux

2017-06-15 Thread Andrew Sutherland
On Thu, Jun 15, 2017, at 09:37 PM, ISHIKAWA,chiaki wrote:
> Interesting.
> But this covers only modal prompts generated by/in JavaScript modules, 
> and not C++ code?
> If so, maybe I should re-think my previous error/warning dialog to see 
> if the generation can be moved to JavaScript code.
> It was generated in C++ code.

It will work from C++ (and JS).  If you look at the XPCOM bits at the
bottom and the MockFactory it is dependent upon, it provides a JS XPCOM
implementation of nsIPromptService (via XPConnect) that replaces the
"real" implementation that uses UI widgets.  The only limitation of the
helper being implemented in JS is that all calls to the interfaces must
be performed on the main thread.  However, UI widgets already impose a
main-thread-only restriction, so things should be fine.
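For anyone writing a similar helper outside comm-central, the general pattern — record calls and return scripted answers instead of showing widgets — looks roughly like this. This is illustrative only, not the actual XPCOM/MockFactory registration code:

```javascript
// Build a stand-in for a UI-backed prompt service so code under test
// can run headless.  Each call is recorded, and answers come from a
// pre-canned queue instead of a real dialog.
function makeMockPromptService(scriptedAnswers) {
  const calls = [];
  return {
    calls,  // exposed so the test can assert on what was prompted
    confirm(title, text) {
      calls.push({ method: 'confirm', title, text });
      return scriptedAnswers.shift();  // next scripted user response
    },
  };
}
```

In the real helper this object would be registered via XPConnect as the nsIPromptService implementation; the recording/scripting idea is the same.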

Andrew


Re: Shipping Headless Firefox on Linux

2017-06-15 Thread Andrew Sutherland
On Thu, Jun 15, 2017, at 03:27 AM, ishikawa wrote:
> On local machine, when I added a modal error dialog as a response to an
> error condition, which happens to be artificially created and tested in
> xpcshell test, the test fails because the screen or whatever is not
> available. Fair enough, but it was rather disappointing to see such a
> feature needed to be disabled due to xpcshell test failing.

You may want to check out
https://searchfox.org/comm-central/source/mailnews/test/resources/alertTestUtils.js
which provides for intercepting various modal prompts in Thunderbird
xpcshell tests.  Whatever the test is testing likely wants to also be
verifying the dialog pops up, which that helper library provides.

Andrew


Re: Avoiding jank in async functions/promises?

2017-05-22 Thread Andrew Sutherland
On Sun, May 21, 2017, at 09:29 PM, Mark Hammond wrote:
> As I mentioned at the start of the thread, in one concrete example we 
> had code already written that we identified being janky - 
> http://searchfox.org/mozilla-central/rev/f55349994fdac101d121b11dac769f3f17fbec4b/toolkit/components/places/PlacesUtils.jsm#2022

Is this a scenario where we could move this intensive logic off the
(parent process) main thread by fulfilling the dream of the "SQLite
interface for Workers" bug[1] by using WebIDL instead of js-ctypes to
let the Sqlite.jsm abstraction operate on ChromeWorkers?  The only
XPConnect leakage on the Sqlite.jsm API surface is
mozIStorageStatementRow[2], and although it's a bit unwieldy in terms of
method count, we never exposed any of the nsIXPCScriptable magic on it
that we did on statements.  (And thankfully SQLite.jsm neither uses or
otherwise exposes the API.)

It wouldn't be a trivial undertaking, but it's also not impossible
either.  And if Sync is chewing up a lot of main thread time both
directly (processing) and indirectly (generating lots of garbage that
lengthens parent-process main-thread GC's), it may be worth considering
rather than trying to optimize the time-slicing of Sync.  This does, of
course, assume that Sync can do meaningful work without access to
XPConnect and that there aren't major gotchas in coordinating with
Places' normal operation.

Note: I'm talking exclusively about using the existing asynchronous
Sqlite.jsm API on top of the existing async mozStorage API usage.
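To make the shape of that concrete, here is a hypothetical sketch of the plain-data request/response protocol such a worker-hosted API implies: the main thread only builds structured-cloneable messages, and all statement execution happens off-main-thread. All names are invented for illustration:

```javascript
// Main-thread side of an imagined worker-hosted Sqlite protocol.
let nextId = 0;

function makeQueryMessage(sql, params) {
  // Everything here must be structured-cloneable to cross the
  // worker boundary.
  return { id: ++nextId, type: 'execute', sql, params };
}

function matchResponse(pending, response) {
  // Resolve the pending entry whose id matches the worker's response.
  const entry = pending.get(response.id);
  if (entry) {
    pending.delete(response.id);
    entry.resolve(response.rows);
  }
  return entry !== undefined;
}
```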

Andrew

1: https://bugzilla.mozilla.org/show_bug.cgi?id=853438
2:
https://searchfox.org/mozilla-central/source/storage/mozIStorageRow.idl
subclasses
https://searchfox.org/mozilla-central/source/storage/mozIStorageValueArray.idl


Re: Please do NOT hand-edit web platform test MANIFEST.json files

2017-03-18 Thread Andrew Sutherland
On Sun, Mar 19, 2017, at 12:36 AM, Nils Ohlmeier wrote:
> Wouldn’t it make more sense to let the build system detect and
> reject/warn about (?) such a manual modification?
> My assumption here is that mailing list archives are not a good place to
> document processes or systems.

I think some additional forward progress may now have been made. 
There's some extra IRC discussion of this issue at:
http://logs.glob.uno/?c=mozilla%23content=17+Mar+2017=17+Mar+2017#c430993

In the discussion, https://bugzilla.mozilla.org/show_bug.cgi?id=1333433
"Consider making the wpt manifest a product of the build system" was
referenced.  And as a result of the discussion,
https://bugzilla.mozilla.org/show_bug.cgi?id=1333433#c8 was logged on
trying to use build artifacts to deal with the current 1+ minutes
slowness of manifest generation (which is why it gets checked into the
tree and leads to conflicts in the first place).

Andrew





Re: Doxygen output?

2017-02-27 Thread Andrew Sutherland
On Tue, Feb 21, 2017, at 04:42 PM, Andrew Sutherland wrote:
> A sketch first steps implementation would be:

I took a shot at this.  The pull request with details is at
https://github.com/bill-mccloskey/searchfox/pull/11 and hopefully
self-explanatory screenshots are here:
https://clicky.visophyte.org/examples/searchfox/20170227/static-field-decl-comments.png
https://clicky.visophyte.org/examples/searchfox/20170227/static-field-uses.png
https://clicky.visophyte.org/examples/searchfox/20170227/method-def-decl-comment.png

Andrew


Re: Doxygen output?

2017-02-21 Thread Andrew Sutherland
On Tue, Feb 21, 2017, at 03:13 PM, Bill McCloskey wrote:
> I've been thinking about how to integrate documentation into Searchfox.
> One
> obvious thing is to allow it to display Markdown files and
> reStructuredText. I wonder if it could do something useful with Doxygen
> comments though? Is this something people would be interested in?

I think an amazing thing searchfox could do would be to make it easier
to find the useful documentation that exists.  Having joined the
platform team with a somewhat pessimistic guess at the amount of useful
documentation, I've been pleasantly surprised by what's available. 
However, it's not always easy to know where the documentation is for
code until you've spent a lot of time with it, especially with the
amount of plumbing code we have with IPC.  That arguably defeats some of
the benefit of the docs.

For example, when looking at a C++ method and trying to understand what
its purpose is/why it's doing something, there could be useful comments:
- Preceding the C++ definition or just inside the method.
- Preceding the C++ declaration.
- Preceding the .idl definition of the method.
- Preceding the webidl definition of the method.
- In the spec that gives us the webidl definition.
- Related to another method that's just a thin wrapper around the method
we're looking at.  (And which can be pierced by hints or following the
call-graph with heuristics like noticing the callee's name is within
some edit distance of the caller's name and the caller does very little
else, or that the caller just seems to set up a runnable, etc.)
- Referenced by in-tree overview markdown docs like devtools has in
http://searchfox.org/mozilla-central/source/devtools/docs
- Referenced by in-tree overview pandoc docs like SpiderMonkey has in
http://searchfox.org/mozilla-central/source/js/src/doc (and which are
reflected to MDN)
- Referenced by in-tree sphinx-friendly docs that are reflected to
https://gecko.readthedocs.io/en/latest/
- Referenced/documented by something on wiki.mozilla.org
- Referenced/documented by something on developer.mozilla.org (ignoring)

A sketch first steps implementation would be:
- Have the file indexers use heuristics to detect block comments and
associate them with the following definition/declaration.  (Maybe the
clang AST already does this?)  No need to try and parse the doxygen at
first, just be able to display it in a human-readable fashion.  Log the
comment plus its kind (decl, def, idl, etc.).  Well, maybe bless a few
things like @see/@sa or @related to allow explicit doc-references to
other identifiers to exist.
- Aggregate a "docs" file similar to "crossrefs" that stores
per-identifier doc links and excerpts.
- Do not bake the comments into the syntax-highlighted source to avoid
combinatorial explosion.
- Have router.py expose an API to lookup the documentation associated
with one or more identifiers, possibly following inheritance (when
implemented in searchfox) and override relationships as appropriate.
- Have hovering over a symbol prefetch/cache the docs for a symbol.
- Transform the menu displayed when clicking on a symbol to a mega menu
(a la
http://bjk5.com/post/44698559168/breaking-down-amazons-mega-dropdown). 
Have a top-level entry displaying hints about what docs are available,
like "Docs (h idl webidl...)" or greyed out "No Docs".  Have the
expanded submenu be the excerpts displayed right there so they can be
read in their entirety if they're terse.  The goal is that the user
doesn't need to click through to a full docs search or to follow the
links to the docs to read them unless they want to.  This idiom could
also be used for inheritance information and overrides.
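To illustrate the first step, here is a deliberately naive sketch of associating a block comment with the following declaration. A real indexer would lean on the clang AST rather than a regex:

```javascript
// Heuristic: find a /** ... */ block comment whose next line mentions
// the identifier we're looking up, and return the comment body.
function extractDocComment(source, declName) {
  const re = /\/\*\*([\s\S]*?)\*\/\s*\n\s*([^\n]*)/g;
  let m;
  while ((m = re.exec(source)) !== null) {
    if (m[2].includes(declName)) {
      return m[1].trim();  // comment text, ready for display/indexing
    }
  }
  return null;  // no preceding doc comment found
}
```

The aggregated "docs" file would then store (identifier, kind, excerpt) tuples produced by passes like this one.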

Andrew


Would "Core :: Debugging: gdb" be a good bugzilla component?

2016-09-16 Thread Andrew Sutherland
Right now the gecko gdb pretty-printers have been tracked using bugs in 
"Core :: Build Config" because that's where the bug that first added 
them lived.  That component currently has 1773 bugs in it, 3 of which 
involve gdb, and with daily activity of ~10 bugs a day.  I think a more 
targeted component could be useful since gdb users could watch the 
component without getting the comparative fire hose of Build Config.  
Right now notifications would need to come via explicit CC or watching 
for changes in dependencies of the existing bugs.


The component would presumably cover not just the gdb pretty printers 
but also other gdb related issues.


The triggering factor is that I made a foolish mistake with the 
nsTHashtable pretty-printer.  It has been telling lies, see 
https://bugzilla.mozilla.org/show_bug.cgi?id=1303174 (fix on inbound, 
with apologies to anyone misled by the broken pretty-printer).


Andrew



netwerk and media experts; feedback requests for background upload/download WebAPI video download use-case

2016-09-07 Thread Andrew Sutherland
There's a proposal at https://github.com/jakearchibald/background-cache 
for enabling persistently-tracked uploads/downloads in the background.  
It might change names to background-fetch.


I'm interested in getting some informed feedback about the following 
use-case on https://github.com/jakearchibald/background-cache/issues/3


A motivating use-case is for a site that wants to download 
movies/podcasts in the background and keep them around for offline 
purposes.  Once the file is downloaded, it seems clear that the 
(ServiceWorker [DOM]) Cache API[1] is a great place to store the 
result.  What's less clear is how best to handle allowing the user to 
begin watching the movie when the download is still in-progress.  A 
potential straw-man is:


* Start the download with background-fetch
* Use the background-fetch API to track the status of the download.  
When there's "enough" downloaded, start playing the video via its online 
URL with the confidence that the netwerk cache2 can unify the two 
requests as much as possible.
* When the download completes, stick the movie in the DOM Cache API and 
use Cache-furnished Responses for all future playback requests.
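The "when there's enough downloaded" gate in the straw-man might look like the following sketch. The threshold policy is invented for illustration; a real player would also account for bitrate, seek position, and unknown content lengths:

```javascript
// Decide whether enough of the background download has arrived to
// start playback via the online URL.
function canStartPlayback(bytesDownloaded, totalBytes, minFraction = 0.1) {
  if (!Number.isFinite(totalBytes) || totalBytes <= 0) {
    return false;  // unknown length: wait for the download to finish
  }
  return bytesDownloaded / totalBytes >= minFraction;
}
```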


The greatest concern right now is around range requests and redundantly 
downloaded data.  We've seen interest in the Service Worker repo issues 
for the ability to track/match in-flight fetch requests[2] that I hope 
should be entirely mooted by the HTTP cache.  Depending on the HTTP 
cache until the download is complete ideally avoids redundant downloads, 
but it would be great to have reassurance and an understanding of how a 
background-fetch download mechanism might interact with the HTTP cache.  
Could the download entirely use the HTTP cache for storage with the 
cache entries pinned to avoid eviction until the download notification 
extendable event completes?


Andrew

1: 
https://w3c.github.io/ServiceWorker/spec/service_worker/index.html#cache-objects

2: https://github.com/w3c/ServiceWorker/issues/959



Re: DXR: How to encourage people to link to perma-links instead of plain ones?

2016-08-08 Thread Andrew Sutherland
On Mon, Aug 8, 2016, at 09:35 AM, Boris Zbarsky wrote:
> When you click a link to a particular line in dxr, that would normally 
> navigate from https://dxr/foo to https://dxr/foo#lineno.  Instead, have 
> it navigate to https://dxr/rev/foo#lineno.

It would be great if DXR could expose a truncated revision hash by
default (and of course be able to use that too).  I intentionally use
non-revision URLs because while the following is a little bit unwieldy
to paste in a bug (99 chars):
https://dxr.mozilla.org/mozilla-central/source/netwerk/protocol/http/HttpChannelParentListener.h#30
the permalink definitely feels unwieldy and sometimes causes my brain to
want to skip the URL because the lengthy hash looks a lot like line
noise to it (137 chars):
https://dxr.mozilla.org/mozilla-central/rev/d42aacfe34af25e2f5110e2ca3d24a210eabeb33/netwerk/protocol/http/HttpChannelParentListener.h#30
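A sketch of the truncated form, assuming the conventional 12-hex-digit short hash is unique enough in practice (DXR would of course need to resolve it back to the full revision):

```javascript
// Build a permalink using a truncated revision hash instead of the
// full 40 characters.
function makeShortPermalink(base, rev, path, lineno) {
  const shortRev = rev.slice(0, 12);  // conventional short-hash length
  return `${base}/rev/${shortRev}/${path}#${lineno}`;
}
```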

It might also be feasible to have bugzilla try and do something like
github does where it recognizes the raw link and exposes it as mozilla-central/netwerk/protocol/http/HttpChannelParentListener.h
to help with that.  Now that we have a "preview" tab, the rewriting
wouldn't be so bad.

I think the truncated hash would still be nice for other venues that
don't specialize for dxr.

Andrew


Gecko gdb pretty printers now understand nsTHashtable/friends

2016-07-18 Thread Andrew Sutherland
https://bugzilla.mozilla.org/show_bug.cgi?id=1286467 has landed, so if 
you have the following in your .gdbinit:

add-auto-load-safe-path ~/some/parent/dir/of/where/you/keep/gecko

You're now going to see stuff like the following for a hashtable with 
entries:


mRegistrationInfos = nsClassHashtable = {
["https://www.pokedex.org^userContextId=2"] = 
[(mozilla::dom::workers::ServiceWorkerManager::RegistrationDataPerPrincipal *) 
0x7f4a6a4a4c80],
["https://googlechrome.github.io"] = 
[(mozilla::dom::workers::ServiceWorkerManager::RegistrationDataPerPrincipal *) 
0x7f4a57ea0c80],
["https://jakearchibald.github.io"] = 
[(mozilla::dom::workers::ServiceWorkerManager::RegistrationDataPerPrincipal *) 
0x7f4a4f3f3100],
["https://offline-news-service-worker.herokuapp.com"] = 
[(mozilla::dom::workers::ServiceWorkerManager::RegistrationDataPerPrincipal *) 
0x7f4a4f7fef00]
  }


and for an empty hashtable:

mControlledDocuments = nsRefPtrHashtable,


Where you previously saw something like (this is without "set print 
pretty on"):


  mRegistrationInfos = {> = 
{ >> = {mTable = {
  mOps = 0x7fffecfe9e80  
>::Ops()::sOps>, mHashShift = 29, mEntrySize = 32, mEntryCount = 5, mRemovedCount = 0, 
mEntryStore = {
mEntryStore = 0x7fffd51d6200 "", mGeneration = 1}, static 
kMaxCapacity = 67108864, static kMinCapacity = 8,
  static kMaxInitialLength = 33554432, static kDefaultInitialLength = 
4, static kHashBits = 32, static kGoldenRatio = 2654435769,
  static kCollisionFlag = 1}}, }, },


I mention this:
- Because if you're already using the pretty printers, the change might 
otherwise worry you if it's not immediately obvious what's going on.
- As a reminder that you really want that "add-auto-load-safe-path PATH" 
in your .gdbinit because python-powered pretty printers are 
alliteratively awesome.



If you would like to help add pretty printers or make the existing 
pretty printers more awesome:


- The gecko ones live at:
https://dxr.mozilla.org/mozilla-central/source/python/gdbpp/gdbpp
And get into your gdb session via .gdbinit_python at either the root of 
the source tree or the root of your dist/bin tree:

https://dxr.mozilla.org/mozilla-central/source/.gdbinit_python
https://dxr.mozilla.org/mozilla-central/source/build/.gdbinit_python.in

- The SpiderMonkey ones (with tests!) live at:
https://dxr.mozilla.org/mozilla-central/source/js/src/gdb
And get into your gdb session on dynamic load of libxul:
https://dxr.mozilla.org/mozilla-central/source/toolkit/library/libxul.so-gdb.py.in

Andrew


Re: Service Worker - Offline fallback not working

2016-05-31 Thread Andrew Sutherland
Stack Overflow or other programmer discussion Q&A sites are probably
better venues for questions like this, such as under the stack overflow
service-worker tag, see:
http://stackoverflow.com/questions/tagged/service-worker .  Being able
to browse the questions and answers of others may also help shed insight
on your problem.  dev-platform, as described at
https://www.mozilla.org/en-US/about/forums/#dev-platform, is for
discussion of the implementation of the underlying functionality in the
context of Gecko.  I had responded previously because I thought it might
be a simple thing where having someone else look at it helps and writing
these "sorry, you're in the wrong place" messages can take a while to
write and be frustrating to read.
 
When posting there, I'd suggest being very explicit about the exact
situation you're testing and what you're expecting in terms of success
or failure.  For example, seeing that you're using the gh-pages branch,
I tried accessing the page via the github.io url of
https://mbj36.github.io/My-Blog/ but that redirected to
http://mohitbajoria.com/ which, as a non-https origin (and apparently
not listening on the https port), is not eligible to use service workers
at all.  That may be your problem, or it could be something else.
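A rough approximation of that eligibility check follows. The real rules are the Secure Contexts spec's; this sketch ignores file: URLs and other corner cases:

```javascript
// Service workers require a secure context: https origins, plus
// localhost loopback for development.
function canUseServiceWorker(urlString) {
  const url = new URL(urlString);
  return url.protocol === 'https:' ||
         url.hostname === 'localhost' ||
         url.hostname === '127.0.0.1';
}
```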
 
It's probably worth experimenting with the devtools on a service-worker
enabled site such as https://www.pokedex.org/ that is known to work and
then comparing with your (test) site.  Also, instrumenting your service
worker with console.log/etc.
 
Andrew
 
 
On Tue, May 31, 2016, at 09:45 PM, Mohit Bajoria wrote:
> Hello
>
> I added "offline.html" in filesToCache but still the same problem
> arises. When i refresh the page in absence of internet connection
> "offline.html" doesn't show up.
>
> Github project link -
> https://github.com/mbj36/My-Blog/commits/gh-pages
>
> Please help
> Thanks
> Mohit
>
> On 31 May 2016 at 23:45, Andrew Sutherland
> <asutherl...@asutherland.org> wrote:
>> "offline.html" does not appear to be in filesToCache that is
>> passed to
>>  cache.addAll().
>>
>>
>> On Mon, May 30, 2016, at 03:02 PM, Mohit Bajoria wrote:
>>  > Hello
>>  >
>>  > There is offline.html page in repository. When there is no
>>  > internet
>>  > connection then it should show up reading from cache.
>>  > Right now it doesn't read from cache and doesn't show up.
>>  >
>>  >
>>  >
>>  > On 31 May 2016 at 00:20, Ben Kelly <bke...@mozilla.com> wrote:
>>  >
>>  > > On May 30, 2016 1:55 PM, "Mohit Bajoria" <mohitbaj...@gmail.com>
>>  > > wrote:
>>  > > > Offline fallback event is not working.
>>  > > > Can anyone please let me know the error and help me solving
>>  > > > the issue ?
>>  > >
>>  > > Can you describe what you expect and what you are actually
>>  > > seeing happen?
>>  > >
>>  > > There is no "offline fallback event", so not sure exactly what
>>  > > you mean.
>>  > >
>>  > > Thanks.
>>  > >
>>  > > Ben
>>  > >
>> > ___
>>  > dev-platform mailing list
>>  > dev-platform@lists.mozilla.org
>>  > https://lists.mozilla.org/listinfo/dev-platform
 


Re: Service Worker - Offline fallback not working

2016-05-31 Thread Andrew Sutherland
"offline.html" does not appear to be in filesToCache that is passed to
cache.addAll().
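A minimal sketch of the fix — the file list and cache name below are illustrative, mirroring the thread rather than the actual repository:

```javascript
// "offline.html" must be part of the list handed to cache.addAll()
// during install, or it can never later be served from the cache.
const cacheName = 'my-blog-v1';   // hypothetical cache name
const filesToCache = [
  '/',
  '/index.html',
  '/offline.html',                // <-- the missing entry
];

// Guarded so this snippet also loads outside a service worker scope.
if (typeof self !== 'undefined' &&
    typeof self.addEventListener === 'function') {
  self.addEventListener('install', event => {
    event.waitUntil(
      caches.open(cacheName).then(cache => cache.addAll(filesToCache))
    );
  });
}
```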

On Mon, May 30, 2016, at 03:02 PM, Mohit Bajoria wrote:
> Hello
> 
> There is offline.html page in repository. When there is no internet
> connection then it should show up reading from cache.
> Right now it doesn't read from cache and doesn't show up.
> 
> 
> 
> On 31 May 2016 at 00:20, Ben Kelly  wrote:
> 
> > On May 30, 2016 1:55 PM, "Mohit Bajoria"  wrote:
> > > Offline fallback event is not working.
> > > Can anyone please let me know the error and help me solving the issue ?
> >
> > Can you describe what you expect and what you are actually seeing happen?
> >
> > There is no "offline fallback event", so not sure exactly what you mean.
> >
> > Thanks.
> >
> > Ben
> >
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to unship: ISO-2022-JP-2 support in the ISO-2022-JP decoder

2015-11-30 Thread Andrew Sutherland
On Mon, Nov 30, 2015, at 01:24 PM, Adam Roach wrote:
> Does this mean it might interact with webmail services as well? Or do 
> they tend to do server-side transcoding from the received encoding to 
> something like UTF8?

They do server-side decoding.  It would take a tremendous amount of
effort to try and expose the underlying character set directly to the
browser given that the MIME part also has transport-encoding occurring
(base64 or quoted-printable), may have higher level things like
format=flowed going on, and may need multipart/related cid-protocol
transforms going on.
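As a small illustration of why, here is just the transport-encoding layer for quoted-printable. Note it only recovers raw bytes; interpreting those bytes as ISO-2022-JP, UTF-8, etc. is the separate charset-decoding layer on top, and format=flowed and cid: rewriting are further layers still:

```javascript
// Tiny quoted-printable decoder: undoes soft line breaks and =XX
// escapes.  The output is raw bytes exposed as code points; charset
// decoding is deliberately NOT done here.
function decodeQuotedPrintable(input) {
  return input
    .replace(/=\r?\n/g, '')  // soft line breaks vanish
    .replace(/=([0-9A-Fa-f]{2})/g,
             (_, hex) => String.fromCharCode(parseInt(hex, 16)));
}
```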

Andrew


Re: Merging comm-central into mozilla-central

2015-10-23 Thread Andrew Sutherland
On Fri, Oct 23, 2015, at 01:02 PM, Joshua Cranmer  wrote:
> Actually, the b2g email app does reuse JSMime (or at least will be
> shortly).

Clarifying: Yes, library reuse is happening and it's good and awesome
and we want more of it.

But: b2g gaia email does not now and is unlikely to ever care about the
canonical source location of any of those libraries[1].

Andrew

1: b2g gaia email checks in its library dependencies from their
upstreams.  It of course would be great to have changes to JSMime (and
other mail libraries consumed by gaia mail) tested against gaia mail's
tests, but this would likely be accomplished by having taskcluster spin
up gaia mail with the candidate JSMime versions and run its tests
against that, not by having gaia mail build against trunk JSMime.


Re: How to avoid mozilla firefox remote debugger to ignore specific files

2015-08-22 Thread Andrew Sutherland

On 08/22/2015 10:57 AM, 罗勇刚(Yonggang Luo) wrote:

I want to ignore some files when debugging.


https://developer.mozilla.org/en-US/docs/Tools/Debugger/How_to/Black_box_a_source



Web API equivalent of nsIEffectiveTLDService / publicsuffix.org database?

2015-08-08 Thread Andrew Sutherland
Are there any plans to surface the contents of 
https://developer.mozilla.org/en-US/docs/Mozilla/Tech/XPCOM/Reference/Interface/nsIEffectiveTLDService 
from https://publicsuffix.org/ via a web-facing URI?  Or is the long 
term expectation that web content will just embed the list in the app, 
possibly using one of the libraries at https://publicsuffix.org/learn/ ?
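For completeness, a toy sketch of the embed-the-list option against a tiny hard-coded excerpt. A real implementation must handle the list's wildcard and exception rules and keep the data updated:

```javascript
// Deliberately tiny excerpt of the public suffix list.
const SUFFIX_EXCERPT = new Set(['com', 'org', 'uk', 'co.uk']);

// Return the registrable domain (eTLD+1) for a host, or null if the
// host is itself a public suffix or nothing matches.
function registrableDomain(host) {
  const labels = host.split('.');
  // Scan from the full host down, so the first hit is the longest
  // matching suffix (the list's prevailing-rule order).
  for (let i = 0; i < labels.length; i++) {
    const suffix = labels.slice(i).join('.');
    if (SUFFIX_EXCERPT.has(suffix)) {
      return i > 0 ? labels.slice(i - 1).join('.') : null;
    }
  }
  return null;
}
```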


Thanks,
Andrew

PS: I believe this is the most appropriate Mozilla list after the 
intentional deprecation of the dev-webapi list into this list.



Re: Web API equivalent of nsIEffectiveTLDService / publicsuffix.org database?

2015-08-08 Thread Andrew Sutherland

On 08/08/2015 10:00 PM, Andrew Sutherland wrote:
Are there any plans to surface the contents of 
https://developer.mozilla.org/en-US/docs/Mozilla/Tech/XPCOM/Reference/Interface/nsIEffectiveTLDService 
from https://publicsuffix.org/ via a web-facing URI?


And of course I meant API here.  Most specifically, content-facing API.

Andrew


Re: Switch to Google C++ Style Wholesale (was Re: Proposal to remove `aFoo` prescription from the Mozilla style guide for C and C++)

2015-07-15 Thread Andrew Sutherland
Would it be crazy for us to resort to a poll on these things?  I propose
abusing the mozillians.org skills field in profiles.

For example, I have created the following sets of skills on
mozillians.org by question, and which should autocomplete if you go to
the edit page for your profile at
https://mozillians.org/en-US/user/edit/

a prefixing of arguments:
* style-args-no-a
* style-args-yes-a
* style-args-dont-care

Switching wholesale to the google code style:
* style-google-no
* style-google-yes
* style-google-dont-care

My rationale is:
* Everyone should have a mozillians.org account and if you don't and
this provides the motivation... hooray!
* This avoids vote stuffing, more or less
* I guess someone could easily filter it down to valid committer
accounts?
* This requires no work on my part after this point.
* The autocomplete logic should let people add other options if they're
quick on their feet.

Andrew


Re: Strange Error: Reentrancy error: some client attempted to display a message to the console while in a console listener

2015-07-01 Thread Andrew Sutherland
This is probably due to
https://dxr.mozilla.org/comm-central/source/mailnews/test/resources/logHelper.js
that registers itself as a console listener and its low-tech feedback
loop prevention.  (NB: The quippy file-level comment should be
s/aweswome/Andrew/ for the third instance of awesome.)  See
http://www.visophyte.org/blog/tag/logsploder/ for context on the various
logsploder references and the purpose of most of the machinery in there.
 It might be easier to remove the functionality than to try and make it
smarter since I rather doubt anyone uses any of the fancy logging stuff
anymore.

Andrew




Re: Changing the style guide's preference for loose over strict equality checks in non-test code

2015-05-14 Thread Andrew Sutherland

On 05/14/2015 04:22 PM, Gijs Kruitbosch wrote:
== is not any less explicit than ===. Both versions have an exact, 
specified meaning.


They both have exact meanings.  But people, especially new contributors 
new to JS, frequently use == without understanding the nuances and 
footguns involved.  In Firefox OS/Gaia I think the emergent 
convention[1] is that you should always use === unless you have a 
specific need to use == and in that case you should have a comment 
explaining exactly why you want the coercing behaviour.  I, at least, do 
this in all my reviews.  And I find this disambiguates things nicely and 
helps me as a reviewer in understanding the expected data-flow of the 
code I'm reviewing.
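A few of the coercion footguns in question (illustrative examples, not from the original mail):

```javascript
// == coerces operand types before comparing; === never does.
console.log(0 == '');            // true  -- '' coerces to 0
console.log(0 == '0');           // true  -- '0' coerces to 0
console.log('' == '0');          // false -- both strings, compared as-is
console.log(null == undefined);  // true  -- special-cased by ==
console.log(null == 0);          // false -- yet null >= 0 is true!

// === requires identical types, so intent is unambiguous:
console.log(0 === '');            // false
console.log(null === undefined);  // false
```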


Andrew

1: I could swear we have a style-guide for Gaia but I can't find it.


Re: ES6 features (let, yield) and DOM workers and B2G

2015-04-29 Thread Andrew Sutherland

On 04/29/2015 03:10 PM, Jonas Sicking wrote:

But has anyone reached out to the JS
team and let them know that we really need let and yield in workers to
see if A is an option?


This is the reaching-out stage now* :).  When I filed bug 1151739 the 
only impacted app was the email app, and that was for a long-lived 
branch that is unlikely to be merged for 3 months. FxOS Calendar is now 
undergoing refactoring/cleanup in an iterative fashion where a 
long-lived branch is not required/desired and so the issue is forced.



F) Just use the Babel transpiler for now for workers at the cost of not
being able to provide feedback to the JS engine in cases where it's not
practical to flip the preference.  Many FxOS apps already use AMD-style
module loaders that support loader plugins
(http://requirejs.org/docs/plugins.html) that support both dynamic
transpiling and optimized build-time transpiling, and one exists for Babel
already (https://github.com/mikach/requirejs-babel).

I don't have a sense for how much extra work this is for you guys, so
I don't have a strong opinion either way on this.


It's not particularly a lot of extra work; all the tooling exists 
already.  My main concerns would be:
- Arguably it's better if we're using SpiderMonkey's ES6 features that 
are already in there so we can help find and file bugs, etc.
- There will be a temptation to use fancy proposed ES7 features like 
async/await, potentially resulting in the code never actually being pure 
ES6 code.  (And ES7 transpilers are a gateway transpiler to 
coffeescript/dart/etc!. ;)


Andrew

* I trust the JS team reads dev-platform and I think dev-tech-js-engine 
would potentially leave out other important voices.



Re: ES6 features (let, yield) and DOM workers and B2G

2015-04-29 Thread Andrew Sutherland
Update/data after talking with sfink/jorendorff/Waldo/shu of the JS team 
and some follow-up investigation with thanks to sfink to initiating the 
conversation:


- yield inside function* is working fine inside a worker with 
JSVERSION_DEFAULT; we suspect the error I was relaying was not inside a 
function* usage during the initial debugging cycle where this all came 
up.  My apologies to anyone temporarily misled.


- lexicals (both let and const) are expected to be in good shape 
at the end of Q2 (end of June) if all goes well, although these things 
are complex and it would not be surprising for complications to arise 
that cause it to take longer.


- Aspects of let/const that seem like they work with JS1.7/JS1.8 enabled 
do not in fact work yet, and this constitutes a bit of a foot-gun that 
would be a bad idea to enable until they work.


For example, given the function:
  function danger(bob) { for (let foo of bob) { window.setTimeout(() => 
{ console.log(foo) }, 0); } }

and then doing:
  danger([1, 2])
prints: 2 followed by 2, just like if you had used var.

But the older-school for(let;;) that you may be used to from JS1.7/1.8 
works and latches appropriately:
  function safe(bob) { for (let iFoo = 0; iFoo < bob.length; iFoo++) { 
window.setTimeout(() => { console.log(bob[iFoo]); }, 0); } }

so doing:
  safe([1, 2])
prints: 1 followed by 2

- The recommendation for now is that if one must use the lexicals 
let/const, then transpiling probably is the way to go.  However, if you 
can get by with var (which is faster/more optimized for now) and all 
the stuff that works on main thread without specifying version=1.7/1.8, 
then you should be fine and just stick with var.


Andrew


ES6 features (let, yield) and DOM workers and B2G

2015-04-29 Thread Andrew Sutherland
All content DOM workers use JSVERSION_DEFAULT unless the 
dom.workers.latestJSVersion pref is set to true.  (Note: you need to 
manually add the pref in about:config.)  Chrome workers get 
JSVERSION_LATEST.  There is no way to tell the worker to use a specific 
JS version like we can do in page content by using the version parameter 
on the MIME type for script tags.


JSVERSION_DEFAULT on trunk currently makes it impossible to use nice 
things like let and apparently at least some uses of yield. This is 
problematic for some Firefox OS/B2G work where we are faced with needing 
to use a transpiler like Babel or not having nice things.  (I am 
speaking for the email and calendar apps at this time on work targeted 
for v3 or whatever the next major release ends up being.)


I'm writing for guidance on how to address this problem in the near 
term.  It seems like the things that can happen are:


A) The JS concept of versions goes away, at least for web content and 
ES6 features (in the next few weeks).  This covers things like 
https://bugzil.la/855665 on making let work without requiring JS 1.7.


B) Have DOM workers use some JS version other than DEFAULT.  It seems 
like there could be web-compat issues with JS1.7/JS1.8 in their entirety, 
but perhaps a stop-gap version could be introduced.


C) Let DOM workers inherit the version of their calling context as 
proposed in https://bugzil.la/1151739 so the version parameter is 
effectively propagated.


D) Some other means of allowing explicit DOM worker versions to be 
specified.


E) B2G-Specific:
E1) Set dom.workers.latestJSVersion for only B2G's trunk, which has no 
releases planned for the foreseeable future.
E2) On B2G have certified apps get JSVERSION_LATEST since that's where 
we hide dangerous things where there's an explicit burden on the apps to 
stay up-to-date with potentially breaking changes.


F) Just use the Babel transpiler for now for workers at the cost of not 
being able to provide feedback to the JS engine in cases where it's not 
practical to flip the preference.  Many FxOS apps already use AMD-style 
module loaders that support loader plugins 
(http://requirejs.org/docs/plugins.html) that support both dynamic 
transpiling and optimized build-time transpiling, and one exists for 
Babel already (https://github.com/mikach/requirejs-babel).


Thanks,
Andrew


Re: May I execute mozIStorageStatement.executeAsync() at the same time in a single thread?

2015-04-24 Thread Andrew Sutherland

On 04/24/2015 12:48 PM, Yonggang Luo wrote:

I am currently using executeAsync to do async sqlite operation
in main thread, and running multiple executeAsync  in parallel, and it's 
crashed,
I am not so sure if multiple executeAsync can be executed at the same time.


This is fine.  The executeAsync calls aren't run in parallel.  Each 
database connection gets its own async thread.  All async operations for 
that connection are run serially on the async thread.
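That serialization guarantee can be modeled as a per-connection queue: each executeAsync-style call chains onto the previous one, so overlapping calls from the main thread never race. A sketch of the pattern (this models the behaviour; it is not the mozStorage API itself):

```javascript
// Per-connection serial queue: mimics how mozStorage runs all async
// statements for one connection in order on a single async thread.
class SerialQueue {
  constructor() { this.tail = Promise.resolve(); }
  // Enqueue `task`; it runs only after every previously enqueued task.
  run(task) {
    const result = this.tail.then(task);
    // Keep the chain alive even if a task rejects.
    this.tail = result.catch(() => {});
    return result;
  }
}

const conn = new SerialQueue();
const order = [];
// Issued "in parallel" from the caller's perspective, executed serially:
const done = Promise.all([
  conn.run(() => { order.push('stmt1'); }),
  conn.run(() => { order.push('stmt2'); }),
  conn.run(() => { order.push('stmt3'); }),
]);
```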


I'd suggest getting a backtrace from any crash to determine where things 
are actually going wrong.  But especially if you're using an older 
version of gecko/firefox, if mozStorage is the source of the crashes, 
you'll want to make sure:

- You're correctly using asyncClose to shut down the connection.
- You're not trying to invoke things after calling asyncClose.

Andrew


Re: IndexedDB transactions are no longer durable by default, and other changes

2015-04-01 Thread Andrew Sutherland
On Wed, Apr 1, 2015, at 05:12 PM, Andrew Sutherland wrote:
 On Wed, Apr 1, 2015, at 03:02 PM, ben turner (bent) wrote:
  If a crash or power loss occurs at just
  the right moment then the transaction will be lost/rolled back. It should
  still be impossible to ever see database corruption though. This will
  mean faster delivery of complete events, and more closely aligns with
  the performance vs. stability tradeoffs other browser vendors have
  chosen.
 
 Can you clarify the risk profile a little more?  Specifically:

And I forgot one other thing:

- Wake lock interaction.  Could there be a problem if an application
drops its wake-lock in the oncomplete notification for the transaction? 
(Or does IndexedDB hold a wake-lock to cover this and/or the kernel take
care to flush dirty pages to disk before suspending/etc.?)

Andrew
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: IndexedDB transactions are no longer durable by default, and other changes

2015-04-01 Thread Andrew Sutherland
On Wed, Apr 1, 2015, at 03:02 PM, ben turner (bent) wrote:
 In the meantime, if the dom.indexedDB.experimental pref is set
 (defaults to |false| in Firefox and |true| in B2G, I think)

I don't think it's set to true for B2G right now.  I don't see such a
mapping in
https://dxr.mozilla.org/mozilla-central/source/b2g/app/b2g.js, or
elsewhere with a search of
https://dxr.mozilla.org/mozilla-central/search?q=dom.indexedDB.experimental&case=false.
 And there doesn't seem to be code in gaia/build that messes with the
pref either.  I also attached the WebIDE to my trunk Flame device and
did Runtime...Preferences and searched for the pref, and I found it to
have the value false there.

Andrew


Re: Evaluating the performance of new features

2015-02-01 Thread Andrew Sutherland
On Sun, Feb 1, 2015, at 11:13 PM, Jonas Sicking wrote:
 On Sun, Feb 1, 2015 at 3:28 AM, Kyle Huey m...@kylehuey.com wrote:
  Do we have actual evidence that indexeddb performance is a problem?  I've
  never seen any.
 
 Yup. I think this is a very good question.
 
 Did the people writing
 https://wiki.mozilla.org/Performance/Avoid_SQLite_In_Your_Next_Firefox_Feature
 gather any data to substantiate the complaints about IndexedDB? Or is
 the performance problems inherent due to using SQLite? If so, it would
 be good to understand why.

It looks like Vladan wrote the original draft of the page that included
the only comment on IndexedDB in there, IndexedDB is implemented on top
of SQLite and has additional issues
(https://wiki.mozilla.org/index.php?title=Performance/Avoid_SQLite_In_Your_Next_Firefox_Feature&oldid=953848),
so he can probably speak better, but there is a known current schema
deficiency that the awesome :bent is planning to address (as I
understand it).

IndexedDB's lexicographically ordered keys potentially provide for
locality benefits of keys/values located close together in lexicographic
key space.  However, older versions of SQLite only supported ordering
tables by numeric rowids (which could be a user-provided integer primary
key.)  This meant that btree locality for small keys/values is/was
time-of-insert-based rather than lexicographic-key-based.

SQLite has since added support for ordering table btrees by any type of
primary key you want with the WITHOUT ROWID feature
(http://www.sqlite.org/withoutrowid.html).  This will allow the
IndexedDB schema to be overhauled and potentially significant benefits
to be gained for small keys/values.  (Pages in the database are expected
to be randomly scattered, so spinning media does not particularly
benefit when each key is as big as a page.  Similarly, SSD's only care
about page size, so if your row is as big as the page, there is no
wasted I/O that could have been more efficiently utilized.)
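The ordering in question is the IndexedDB key comparison from the spec (numbers sort before strings, strings before arrays, arrays element-wise); a simplified sketch of that comparator, ignoring dates and binary keys, might look like:

```javascript
// Simplified IndexedDB-style key comparison: numbers < strings < arrays,
// with arrays compared element-wise (like indexedDB.cmp, minus the
// date and binary key types).
function keyType(k) {
  if (typeof k === 'number') return 0;
  if (typeof k === 'string') return 1;
  if (Array.isArray(k)) return 2;
  throw new Error('unsupported key');
}

function cmpKeys(a, b) {
  const ta = keyType(a), tb = keyType(b);
  if (ta !== tb) return ta - tb;          // cross-type: number < string < array
  if (ta === 0) return a - b;             // numbers: numeric order
  if (ta === 1) return a < b ? -1 : a > b ? 1 : 0; // strings: code-unit order
  for (let i = 0; i < Math.min(a.length, b.length); i++) {
    const c = cmpKeys(a[i], b[i]);
    if (c !== 0) return c;
  }
  return a.length - b.length;             // shorter array sorts first
}

// Keys adjacent in this order become btree neighbors in a WITHOUT ROWID
// table keyed on the encoded IndexedDB key, which is where the locality
// benefit comes from.
const keys = [['inbox', 2], 'zzz', 10, ['inbox', 1], 3];
keys.sort(cmpKeys);
```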

Another potential benefit log-structured-merge databases like LevelDB
can take advantage of is being able to compress keys/values
lexicographically close together based on the contents of their
neighbors whereas our current IndexedDB can only compress for patterns
within a value.  But I think SQLite3 has this covered with things like
ZipVFS (http://www.sqlite.org/zipvfs/doc/trunk/www/howitworks.wiki).

Andrew

PS: I understand :bent has other significant cleanups/performance
improvements under way as well.


Sane/possible to implement/standardize opt-in (user and page) mobile show password behavior for HTML input type=password?

2014-12-12 Thread Andrew Sutherland
One of the UI polish issues that is facing Firefox OS apps is inclusion 
of a show password mechanism.  Although the adoption of Web Components 
makes this something that can be addressed in a somewhat unified 
fashion, this seems like an affordance that is probably universally 
desired on (at least) mobile/touch devices, and not just web apps/pages 
explicitly targeted at mobile/touch devices.


Is it reasonable to try and standardize support for an 
allow-show-password boolean attribute and corresponding 
allowShowPassword property on HTML inputs with type=password?  There 
would also be a showPassword property.  When allowShowPassword is true, 
a checkbox with Show password would be displayed.  When showPassword 
is true, the contents of the password field would be displayed.  (Note: 
I've done some preliminary web searches (bugzilla, whatwg lists) and 
haven't seen real discussion on this, but I could be wrong/bad at 
searching.)
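The checkbox behaviour itself is implementable today by flipping the input's type; a sketch (the showPassword name mirrors the proposal, the helper itself is hypothetical):

```javascript
// Toggle an input between obscured and visible text, mirroring the
// proposed showPassword property. `input` is any object with a `type`
// field (an HTMLInputElement in a real page).
function setShowPassword(input, show) {
  input.type = show ? 'text' : 'password';
  return input.type;
}

// Wiring the proposed checkbox to it would look like (in a real page):
//   checkbox.addEventListener('change', e =>
//     setShowPassword(passwordInput, e.target.checked));
```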


Content would need to explicitly set the attribute to true to have the 
UI displayed (if available).  If omitted, long-press context UI for 
enabled input fields could perhaps show a menu option for Show 
password that would make the checkbox visible and automatically check 
it.  The reason for not showing the UI by default is that it's possible 
a lot of existing web content may have already implemented their own 
show password mechanism and there likely would be layout breakage in 
many pages.  Because of the breakage concern, it might also make sense 
to just have the long-press menu option perform the toggle directly but 
not introduce the checkbox UI.  The main UI concern there would be users 
who accidentally turn on show password and don't know how to turn it 
back off again and are in a hostile-ish situation.


The primary security/privacy concern is the case where insecure web 
pages initially populate the password field with the user's actual 
password instead of dummy/placeholder characters.  In that case a 
limited attacker with access to the browser's UI but not any type of 
devtools (including view source) might newly have the ability to see 
the password.  I'm not sure that's a situation worth protecting against, 
but in that case it could make sense to require show password to clear 
the password while making it visible.


Andrew


Re: Sane/possible to implement/standardize opt-in (user and page) mobile show password behavior for HTML input type=password?

2014-12-12 Thread Andrew Sutherland

On 12/12/2014 06:24 PM, Ehsan Akhgari wrote:

On 2014-12-12 6:19 PM, Tanvi Vyas wrote:

A touch event or mouseclick-and-hold on the eye icon could show the
password, and as soon as the user releases the password can go back to
being obfuscated.  That would prevent accidental leakage through screen
sharing.  The tricky part is adding such an icon next to the password
field (same issue with adding the show passwords checkbox).


I'm not sure what's tricky about that, it's very simple implementation 
wise at least.


One small wrinkle for Firefox OS is that we already put an x icon on 
the right side of input and password fields to let the user clear the 
contents of the text-box in an entire go 
(http://buildingfirefoxos.com/building-blocks/input-areas.html). Since 
there needs to be minimum hit size targets, if both were present at the 
same time, I don't think you could smoosh them together.


Having said that, we now have that text-selection UI stuff on trunk/v2.2 
that includes both:

- long pressing logic that selects words nicely
- some type of select all affordance as the first icon on the pop-up 
toolbar from when you long press

so perhaps that dedicated 'x' can go away entirely.

Andrew


Re: Moratorium on new XUL features

2014-10-13 Thread Andrew Sutherland

On 10/13/2014 07:06 PM, Joshua Cranmer  wrote:

I nominally agree with this sentiment, but there are a few caveats:
1. nsITreeView and xul:tree exist and are usable in Mozilla code 
today. No HTML-based alternative to these are so easily usable.


There are many lazy-rendering infinite tree/table/infinite list 
implementations out there:


For example:
- Dojo: TreeGrid/LazyTreeGrid 
http://dojotoolkit.org/reference-guide/1.10/dojox/grid/TreeGrid.html

- dgrid (Dojo related): http://dojofoundation.org/packages/dgrid/
  - tree examples
  - column hider extension: 
https://github.com/SitePen/dgrid/blob/v0.3.15/doc/components/extensions/ColumnHider.md

- the things that the dgrid website provides in its comparison table
- Sencha (which is GPLv3 but has a weird library style exception it 
seems): http://dev.sencha.com/ext/5.0.1/examples/tree/locking-treegrid.html
- React (just a list, but an example of how easy such a component is in 
React): https://github.com/orgsync/react-list


2. The main rationale for this feature (bug 213945, although I doubt 
the approach currently taken in bug 441414 is actually usable for that 
anyways) involves a tree view that is currently performance-sensitive 
and comprises 13,446 lines of C++ code, so it can't easily move to JS. 
Which means some sort of XPIDL API is going to be necessary anyways, 
and that API would probably look vaguely similar to nsITreeView.


The performance-sensitive part of the XUL tree view widget is that it 
paints synchronously and does not do any smart caching on its own.  
These are deficiencies of the C++ XUL tree implementation that an HTML 
implementation need not share.  (In fact, I've never seen any that would 
have the same performance problems; you'd have to write your own widget 
from the ground up in Canvas to reproduce them.)
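The core trick those HTML implementations share is only materializing rows in or near the viewport; the visible-window arithmetic is small (a sketch assuming fixed row heights):

```javascript
// Compute which rows a virtualized list must materialize for the current
// scroll position, plus `overscan` extra rows on each side so fast
// scrolling doesn't flash blank rows. Assumes a fixed rowHeight.
function visibleRange(scrollTop, viewportHeight, rowHeight, rowCount, overscan = 3) {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const last = Math.min(
    rowCount - 1,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) - 1 + overscan
  );
  return { first, last };
}

// A 10,000-row list scrolled to 4000px in a 600px-tall viewport with
// 20px rows only ever renders ~36 DOM rows at a time.
const r = visibleRange(4000, 600, 20, 10000); // { first: 197, last: 232 }
```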


For the bug 213945 Thunderbird use-case, the existing C++ nsITreeView 
interface implemented by the mailnews DBView could easily be used by a 
Dojo Store adapter/etc.  Any calls made by C++ code to update the tree 
view can (and already do) call JS code to cause proper invalidation of 
the JS/HTML-exposed data-model.  I say this as someone with expensive 
experience working with that area of the Thunderbird codebase and who 
considered our options and decided that an HTML-based widget was the way 
to go.


3. I've never done a lot of UI coding myself, so I don't know if my 
recollection is correct anymore or even if it ever was, but I recall 
hearing that XUL and HTML didn't get together terribly well.


So if we had an HTML tree implementation that:
a) supported lazy generation of row data
b) supported column pickers, rearrangeable columns, etc.
c) supported CSS queries similar to what xul:tree does already,


The examples I give above I believe all support these features.


d) could be used or made usable from C++ without too much effort


As noted above, I don't think Thunderbird's use-case actually requires 
C++ code to meaningfully interact with the tree widget in a way that 
can't be compensated for in JS.



e) already existed and were generally maintained in toolkit/


This is a weird, NIH-ish requirement.  Why should Mozilla create and 
maintain an HTML tree widget when there are so many open source 
implementations that already exist?



f) were properly accessible


A good concern and something that should be used for evaluation of any 
library options.  Since the widgets above all use the HTML DOM, it's a 
question of making sure they are using the appropriate HTML and ARIA 
attributes (ex: aria-posinset).


Andrew


Re: Intent to implement: Prerendering API

2014-08-12 Thread Andrew Sutherland

On 08/12/2014 04:04 PM, Ilya Grigorik wrote:

In short, seems like this is inventing a derivative single-page app model for 
building pages/apps, and that makes me wonder because it doesn't seem to make 
it any easier. Everything described here can be achieved with current tools, 
but perhaps could be made better with some smart(er) prefetch/prerender 
strategies?


:vingtetun can probably speak to it better than I can, but my 
understanding was that for Firefox OS and the Haida UX effort 
(https://wiki.mozilla.org/FirefoxOS/Haida) targeting mobile phones, 
there were some practical benefits foreseen:


- Consistent in-app page/card transitions.  Each Gaia app has a 
custom/ad-hoc solution right now.  The effort to make more reusable 
components will likely address this, but then a consistent platform UX 
still depends on apps effectively making themself Firefox OS specific 
by using the Firefox building blocks/etc.


- Consistent support of/use of edge gestures inside the app. 
https://wiki.mozilla.org/FirefoxOS/Haida#Edge_Gestures_Between_Open_Content


- Improved resource usage patterns by every page being an iframe with 
only a little stuff in it.  It's harder to accidentally leak memory or 
get burnt by heap fragmentation if you're only doing a few things and 
cleaned up entirely as you are superseded.  However the assumption was 
that these iframes would be talking to SharedWorkers which presumably 
would have the same problem.


While Haida apps designed for the phone form factor would never be 
usable on a desktop device as-is, it's neat that the pre-render 
mechanism makes the same underlying implementation workable for both 
cases, especially since it makes a many simple pages approach viable.  
Right now if you want a performant implementation, you are inevitably 
driven to the single-page app implementation approach.


Andrew


Re: Intent to implement: webserial api

2014-07-16 Thread Andrew Sutherland

On 07/16/2014 02:03 PM, Dave Hylands wrote:

But phones, and devices like the Raspberry Pi, and BeagleBone Black, also have 
native serial ports (i.e. non-USB, non-Bluetooth), and the people that use 
these types of devices are the very one which are extremely frustrated by the lack of 
support for access to serial.


AIUI the Raspberry Pi/BeagleBone Black/Arduino have GPIO pins that may 
be hooked up to shields or custom things done by the user and not 
generic RS232 ports.  It seems like in the mapping/definition process, 
usable identifiers/metadata could be provided that could in turn be 
surfaced into Firefox/Firefox OS so that authorization could be done in 
terms of specific things.  If there is some emerging serial meta-data 
protocol so that Firefox OS can send some bytes over the serial port and 
have the serial port report back what's connected, that's even better.


For example: Do you want to allow http://superblinkylights.example.org 
access to your NeoPixel Shield?


I do agree that it would make sense to lump this under the auspices of 
WebSerial.  I think my main point is just that the UX and permissions 
needs to be about the devices/endpoints.  This also seems desirable 
since otherwise you potentially have to have every app/webpage being 
smart enough not to use the serial ports that are not hooked up to what 
it actually wants to talk to.


One possibility for this would be for the WebSerial API to have a 
super-dangerous API surface (that requires app store/configuration pain) 
and the friendly/safe API.  A limited utility app with the 
super-dangerous permission helps the user define what the random 
non-self-describing serial ports on their system are.  Then all the 
random show a pretty LED light show on your arduino app can still just 
ask for the NeoPixel serial protocol/etc.


Andrew





Re: Intent to implement: webserial api

2014-07-15 Thread Andrew Sutherland

On 07/13/2014 11:55 AM, Jonas Sicking wrote:

Sadly I don't think that is very safe. I bet a significant majority of our
users have no idea what a serial port is or what will happen if they allow
a website to connect to it.


Agreed.  It seems like the concept users are most likely to reliably 
understand are physical devices. 
https://github.com/whatwg/serial/issues/23 indicates that that the 
expected supported underlying layers are USB, Bluetooth, and the random 
motherboard that still has RS232 ports.  As noted in a comment on issue 
20 at https://github.com/whatwg/serial/issues/20#issuecomment-28333090 
it seems counterproductive to place so much importance on the legacy case.



I think an important statement for the spec to make is why it needs to 
exist at all?  Specifically, it seems like both the WebUSB 
https://bugzil.la/674718 and WebBluetooth https://bugzil.la/674737 specs 
should both be equally capable of producing the standard stream 
abstractions supported by the protocols.


And then the security and UX can both benefit from the appropriate 
models.  This includes features that the WebSerial API currently can't 
really offer, like triggering a notification/wake-up/load of the app 
when the device is reconnected via USB or comes into range of the 
device, etc.  This is arguably a net UX win.  Additionally, if the 
security model involved enumerating vendor/product, not only would it 
simplify the wake-up notification, but the Firefox OS app marketplace 
could even suggest apps.  (Ex: a system notification could notice you 
plugged in a specific vendor/product pair for the first time and offer 
to launch a search.  Or tell you what it already found, etc.)



Note that I'm not saying the spec/implementation doesn't need to exist.  
However I do think that from a security/user comprehension perspective 
WebUSB/WebBluetooth should handle the friendly/easy-to-use stuff and 
WebSerial needs to be something that needs to be vouched-for by a 
marketplace or requires the user performing a series of manual steps 
that would make most people think twice about why they're doing it.


Andrew


Re: B2G, email, and SSL/TLS certificate exceptions for invalid certificates

2014-05-29 Thread Andrew Sutherland

On 05/28/2014 09:30 PM, Brian Smith wrote:

On Wed, May 28, 2014 at 5:13 PM, Andrew Sutherland

  I agree this is a safe approach and the trusted server is a significant
complication in this whole endeavor.  But I can think of no other way to
break the symmetry of am I being attacked or do I just use a poorly
configured mail server?


It would be pretty simple to build a list of mail servers that are known to
be using valid TLS certificates. You can build that list through port
scanning, in conjunction with the auto-config data you already have. That
list could be preloaded into the mail app and/or dynamically
retrieved/updated. Even if we seeded this list with only the most common
email providers, we'd still be protecting a lot more users than by doing
nothing, since email hosting is heavily consolidated and seems to be
becoming more consolidated over time.


This is a good proposal, thank you.  To restate my understanding, I 
think the key points of this versus the proposal I've made here or the 
variant in the https://bugzil.la/874346#c11 ISPDB proposal are:


* If we don't know the domain should have a valid certificate, let it 
have an invalid certificate.


* Preload more of the ISPDB on the device or maybe just an efficient 
mechanism for indicating a domain requires a valid certificate.


* Do not provide strong (any?) guarantees about the ISPDB being able to 
indicate the current invalid certificate the server is expected to use.


It's not clear what decision you'd advocate in the event we are unable 
to make a connection to the ISPDB server.  The attacker does end up in 
an interesting situation where if we tighten up the autoconfig mechanism 
and do not implement subdomain guessing (https://bugzil.la/823640), an 
attacker denying access to the ISPDB ends up forcing the user to perform 
manual account setup.  I'm interested in your thoughts here.


Implementation-wise I understand your suggestion to be leaning more 
towards a static implementation, although dynamic mechanisms are 
possible.  The ISPDB currently intentionally uses static files checked 
into svn for implementation simplicity/security, a decision I agree 
with.  The exception is our MX lookup mechanism at 
https://mx.thunderbird.net/dns/mx/mozilla.com


I should note that the current policy for the ISPDB has effectively been 
try and get people to host their own autoconfig entries with an 
advocacy angle which includes rejecting submissions.  What you've 
suggested here (and I on comment 11) implies a substantive change to 
that.  This seems reasonable to me and when I raised the question about 
whether such changes would be acceptable to Thunderbird last year, 
people generally seemed to either not care or be on board:


https://mail.mozilla.org/pipermail/tb-planning/2013-August/002884.html
https://mail.mozilla.org/pipermail/tb-planning/2013-September/002887.html
https://mail.mozilla.org/pipermail/tb-planning/2013-September/002891.html

I should also note that I think the automation to populate the ISPDB is 
still likely to require sizable engineering effort but is likely to have 
positive externalities in terms of drastically increasing our autoconfig 
coverage and allowing us to reduce the duration of the autoconfig 
probing process.  For example, we could establish direct mappings for 
all dreamhost mail clusters.


Andrew


Re: B2G, email, and SSL/TLS certificate exceptions for invalid certificates

2014-05-29 Thread Andrew Sutherland

On 05/29/2014 07:12 PM, Brian Smith wrote:

On Thu, May 29, 2014 at 2:03 PM, Andrew Sutherland 
asutherl...@asutherland.org wrote:

It seems like you would be able to answer this as part of the scan of the
internet, by trying to retrieve the self-hosted autoconfig file if it is
available. I suspect you will find that almost nobody is self-hosting it.


I agree with your premise that the number of people self-hosting 
autoconfig entries is so low as to not be a concern other than not 
breaking them and allowing that to be an override mechanism for the ISPDB.


Also, https://scans.io/ has a number of useful internet scans we can use 
already, so I don't think we need to do the scan ourselves for our first 
round.  While the port 993/995 scans at https://scans.io/study/sonar.cio 
are somewhat out-of-date (2013-03-30), the DNS dumps and port 443 scans 
are modern and should be sufficient to achieve a fairly comprehensive 
database. Especially if we make the simplifying assumption that all 
relevant mail servers have been operational at the same domain name 
since at least then.  (Obviously the IP addresses may have changed so 
we'll need to use a reverse DNS dump from the appropriate time period.)




Autopopulating all the autoconfig information is a lot of work, I'm sure.
But, it should be possible to create good heuristics for deciding whether
to accept certs issued by untrusted issuers in an email app. For example,
if you don't have the (full) autoconfig data for an MX server, you could
try creating an SMTP connection to the server(s) indicated in the MX
records and then use STARTTLS to switch to TLS. If you successfully
validate the certificate from that SMTP server, then assume that the
IMAP/POP/LDAP/etc. servers use valid certificates too, even if you don't
know what those servers are.


Very interesting idea on this!  Thanks!

Andrew


Re: B2G, email, and SSL/TLS certificate exceptions for invalid certificates

2014-05-29 Thread Andrew Sutherland

On 05/28/2014 06:30 PM, Andrew Sutherland wrote:

== Proposed solution for exceptions / allowing connections

There are a variety of options here, but I think one stands above the 
others.  I propose that we make TCPSocket and XHR with mozSystem take 
a dictionary that characterizes one or more certificates that should 
be accepted as valid regardless of CA validation state. Ideally we 
could allow pinning via this mechanism (by forbidding all certificates 
but those listed), but that is not essential for this use-case.  Just 
a nice side-effect that could help provide tighter security guarantees 
for those who want it.
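
For illustration, an option dictionary along the proposed lines might 
look like the following; none of these property names exist in any 
shipped API, they are purely hypothetical:

```javascript
// Hypothetical option-object shape for the proposal above; all of
// these property names are illustrative only.
const socketOptions = {
  useSecureTransport: true,
  // Certificates to accept even though CA validation would fail:
  acceptedCertificates: [
    { sha256Fingerprint: "00:11:22:33", allowSelfSigned: true },
  ],
  // The optional pinning variant: reject anything not listed above.
  pinToListedCertificatesOnly: false,
};
console.log(socketOptions.acceptedCertificates.length);
```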


Note: I've sent an email to the W3C sysapps list (the group 
standardizing http://www.w3.org/2012/sysapps/tcp-udp-sockets/) about 
this.  It can be found in the archive at 
http://lists.w3.org/Archives/Public/public-sysapps/2014May/0033.html


Andrew


B2G, email, and SSL/TLS certificate exceptions for invalid certificates

2014-05-28 Thread Andrew Sutherland
tl;dr: We need to figure out how to safely allow for invalid 
certificates to be used in the Firefox OS Gaia email app.  We do want 
all users to be able to access their email, but not by compromising the 
security of all users.  Read on if you work in the security field / care 
about certificates / invalid certificates.



== Invalid Certificates in Email Context

Some mail servers out there use self-signed or otherwise invalid SSL 
certificates.  Since dreamhost replaced their custom CA with valid 
certificates 
(http://www.dreamhoststatus.com/2013/05/09/secure-certificate-changes-coming-for-imap-and-pop-on-homiemail-sub4-and-homiemail-sub5-email-clusters-on-may-14th/) 
and StartCom started offering free SSL certificates 
(https://www.startssl.com/?app=1), the incidence of invalid certificates 
has decreased.  However, there are still servers out there with invalid 
certificates.  With deeply regrettable irony, a manufacturer of Firefox 
OS devices and one of the companies that certifies Firefox OS devices 
both run mail servers with invalid certificates and are our existing 
examples of the problem.


The Firefox OS email app requires encrypted connections to servers. 
Unencrypted connections are only legal in our unit tests or to 
localhost.  This decision was made based on a risk profile of devices 
where we assume untrusted/less than 100% secure wi-fi is very probable 
and the cellular data infrastructure is only slightly more secure 
because there's a higher barrier to entry to setting up a fake cell 
tower for active attacks.


In general, other email apps allow both unencrypted connections and 
adding certificate exceptions with a friendly/dangerous flow.  I can 
speak to Thunderbird as an example.  Historically, Thunderbird and its 
predecessors allowed certificate exceptions.  Going into Thunderbird 
3.0, Firefox overhauled its exception mechanism and for a short time 
Thunderbird's process required significantly greater user intent to add 
an exception.  (Preferences, Advanced, Certificates, View Certificates, 
Servers, Add Exception.)  At this time DreamHost had invalid 
certificates, free certificates were not available, invalid certificates 
were fairly common, Thunderbird's autoconfig security model already 
assumed a reliable network connection, Thunderbird could only run on 
desktops/laptops which were more likely to have a secure network, etc.  
We relented and Thunderbird ended up where it is now.  Thunderbird 
immediately displays the "Add Security Exception" UI; the user only 
needs to click "Confirm Security Exception".  (Noting that Thunderbird's 
autoconfig process is significantly more multi-step.)



== Certificate Exception Mechanisms in Platform / Firefox OS

Currently, the only UI affordance to add certificate exceptions is 
exposed by the browser app/subsystem for HTTPS sites.  I assert that this 
is a different risk profile and we wouldn't want to follow it blindly 
and don't actually want to follow it at all[1].


There are general bugs filed on being able to import a new CA or 
certificate at https://bugzil.la/769183 and https://bugzil.la/867899.  
Users with adb push access can also manually import 
certificates from the command line; see 
https://groups.google.com/forum/#!msg/mozilla.dev.b2g/B57slgVO3TU/G5TA-PiFI_EJ


It is my understanding that:
* there is only a single certificate store on the device and therefore 
that all exceptions are device-wide

* only one exception can exist per domain at a time
* the exception is per-domain, not per-port, so if we add an exception 
for port 993 (imaps), that would also impact https.


And it follows from the above points that exceptions added by the email 
app/on the behalf of the email app affect and therefore constitute a 
risk to all other apps on the device.  This is significant because 
creating an email account may result in us wanting to talk to a 
different domain than the user's email address because of the 
autoconfiguration process and vanity domains, etc.



== The email app situation

In bug https://bugzil.la/874346 the requirement that is coming from 
partners is that:

- we need to imminently address the certificate exception problem
- the user needs to be able to add the exception from the account setup 
flow.  (As opposed to requiring the user to manually go to the settings 
app and add an exception.  At least I think that's the request.)


Taking this as a given, our goal then becomes to allow users to connect 
to servers using invalid certificates without compromising the security 
of the users who do use servers with valid certificates or of other apps 
on the phone.


There are two main problems that we need solutions to address:

1) Helping the user make an informed and safe decision about whether to 
add an exception and what exception to add.  I strongly assert that in 
order to do this we need to be able to tell the user with some 
confidence whether we believe the server actually has an 

Re: B2G, email, and SSL/TLS certificate exceptions for invalid certificates

2014-05-28 Thread Andrew Sutherland

On 05/28/2014 07:22 PM, Karl Tomlinson wrote:

(I recall having to add an exception for a Mozilla Root CA to
access email at one time.)


It's fairly common that there exist multiple aliases to access a mail 
server but the server does not have certificates available for all of 
them.  In the specific Mozilla case, this was probably 
https://bugzil.la/815771.



Andrew Sutherland writes:

I propose that we use a certificate-observatory-style mechanism to
corroborate any invalid certificates by attempting the connection
from 1 or more trusted servers whose identity can be authenticated
using the existing CA infrastructure.

Although this can identify a MITM between the mail client and the
internet, I assume it won't identify one between the mail server
and the internet.


I understand your meaning to be that we won't detect if the mail 
server's outbound SMTP connections to other domains and inbound SMTP 
connections from other SMTP servers either support, strongly request, or 
require use of TLS (likely via STARTTLS upgrade).


I confirm the above and that the issue is somewhat orthogonal.  This is 
something we would probably want to do as in-app advocacy via 
extension/opt-in either by scraping transmission headers or downloading 
a prepared database and cross-checking.



*** it looks like you are behind a corporate firewall that MITMs
you, you should add the firewall's CA to your device.  Send the
user to a support page to help walk them through these steps if
that seems right.
*** it looks like the user is under attack

I wonder how to distinguish these two situations and whether they
really should be distinguished.


What I imagined here was that the certificate would identify itself as 
allegedly originating from the given vendor.  We could treat that as a 
sufficient hint using RegExps, or analyze the entire chain to cover 
cases where the vendor uses their own trust root that we can add to a 
small database.  In the very bad cases where all of the vendor's devices 
use the same certificate, that's also easy to identify.
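
The vendor-hint heuristic could be sketched roughly like this; both the 
vendor patterns and the cert object shape are made up for illustration, 
and a real implementation would inspect the full chain:

```javascript
// Illustrative only: vendor patterns and cert shape are invented.
const knownProxyVendors = [/Example Corporate Proxy/i, /AcmeFirewall/i];
function classifyCertFailure(cert) {
  if (knownProxyVendors.some((re) => re.test(cert.issuer))) {
    // Walk the user through importing the firewall's CA.
    return "corporate-mitm";
  }
  // Default: warn the user that they may be under attack.
  return "possible-attack";
}
console.log(classifyCertFailure({ issuer: "CN=AcmeFirewall Root CA" }));
```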


I think it's a meaningful distinction to make since we are able to tell 
the user You should be able to talk privately with the mail server, but 
the network you are using won't let you and wants to hear everything you 
say.  Your options are to use a different network or configure your 
device to use the proxy.  For example, you might want to use cellular 
data rather than wi-fi or pick a different wi-fi network, like a guest 
network.


I'm not sure it's a must-have-on-first-landing feature, especially since 
I don't think Firefox OS devices are particularly enterprise friendly 
right now.  For example, in order to actually authenticate MoCo's 
non-guest wi-fi you need to be able to import the certificate (or add an 
exception! :) which are what two of those bugs I linked to are about.  
But I'd want to make sure we could evolve towards supporting that 
direction better.


Andrew


Re: B2G, email, and SSL/TLS certificate exceptions for invalid certificates

2014-05-28 Thread Andrew Sutherland

On 05/28/2014 08:37 PM, Karl Dubost wrote:

On 29 May 2014 at 09:13, Andrew Sutherland asutherl...@asutherland.org wrote 
:

My imagined rationale for why someone would use a self-signed certificate 
amounts to laziness.

being one of those persons using a self-signed certificate, let's enrich your 
use cases list ;)
I use a self-signed certificate because the server that I'm managing is used by 
a handful of persons which are part of community. This community can be friends 
and/or family. The strong link here is the *human trust* in between people, 
which is higher than the trust of a third party.


Trusting you as a human doesn't translate into protecting the users of 
your server from man-in-the-middle attacks.  How do you translate the 
human trust into the technical trust infrastructure supported by Firefox 
and Thunderbird and the rest of the Internet?


Andrew


Re: Using rr to track down intermittent test failures

2014-04-16 Thread Andrew Sutherland

On 04/15/2014 07:14 PM, Gijs Kruitbosch wrote:
1) Is anyone working on something similar that works for frontend code 
(particularly, chrome JS)? I realize we have a JS debugger, but 
sometimes activating the debugger at the wrong time makes the bug go 
away, and then there's timing issues, and that the debugger doesn't 
stop all the event loops and so stopping at a breakpoint sometimes 
still has other code execute in the same context... AIUI your post, 
because the replay will replay the same Firefox actions, firing up the 
JS debugger is impossible because you can't make the process do anything.


While clearly unsuitable for debugging Firefox chrome or 
Firefox-specific issues since it's based on safari/webkit, it's worth 
calling out Brian Burg's timelapse project as prior art in this field:

https://github.com/burg/timelapse/
http://homes.cs.washington.edu/~burg/projects/timelapse/

Andrew


Re: Warning about mutating the [[Prototype]] of an object ?

2014-03-28 Thread Andrew Sutherland
The code should be fixed.  It's my understanding that the existing idiom 
used throughout the Thunderbird tree is still okay to do since the 
prototype chain is created at object initialization time and so there's 
no actual mutation of the chain:


function Application() {
}
Application.prototype = {
  __proto__: extApplication.prototype,
};
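
For reference, the same chain can be built without __proto__ at all via 
Object.create, which likewise constructs the chain once at definition 
time; ExtApplication here is a runnable stand-in for extApplication:

```javascript
// ExtApplication stands in for extApplication from the real code.
function ExtApplication() {}
ExtApplication.prototype.quit = function () { return "quit"; };

function Application() {}
// Object.create builds the chain at definition time, just like the
// __proto__-in-an-object-literal idiom, so nothing is mutated later.
Application.prototype = Object.create(ExtApplication.prototype);
Application.prototype.constructor = Application;

const app = new Application();
console.log(app.quit()); // prints "quit", inherited through the chain
```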

Andrew



Re: Spring cleaning: Reducing Number Footprint of HG Repos

2014-03-27 Thread Andrew Sutherland

On 03/27/2014 10:10 AM, Joshua Cranmer  wrote:
It's worth noting that hg-git is having some performance issues with 
github right now. A basic clone of a 1MB repository takes well over a 
minute before it starts doing anything.


When I was converting my repositories last night I found that although 
the push to github from hg(-git) was hanging, it in fact had completed 
all of its work already.  After a second or two you could control-C, 
re-push, and it would say there was nothing to do. If you checked on 
github, the commits would in fact be all there, and they would be there 
before the second push attempt or hitting control-C.


Obviously, if you are pushing something huge like a clone of 
mozilla-central, you may need to legitimately wait a long time.  But for 
clones of mozilla-central it's probably most advisable and polite to 
fork the gecko-dev repo and either do a light-weight import of any 
branches using git fast-import or the fancy tooling used to produce 
gecko-dev in the first place.  A very cursory exploration shows 
http://repo.or.cz/w/fast-export.git provides fast-export from hg for 
fast-import to git, but it's probably best to read the blog-posts for 
the gecko-dev conversion instead.


Andrew



Re: Too many system compartments at start-up

2014-03-21 Thread Andrew Sutherland

On 03/21/2014 12:06 PM, Bill McCloskey wrote:

The problem with doing measurements is that the per-compartment overhead is 
very dependent on what's going on in the compartment. I tried to enable the B2G 
compartment merging stuff in desktop Firefox to get a sense of how much of a 
change there would be [1]. It's hard to tell if it actually saved anything. It 
broke a lot of stuff, and I had a hard time getting consistent measurements of 
memory use at startup since the numbers really fluctuate wildly. My best guess 
is that explicit went from 90MB currently to 70MB with merging. That's a lot, 
but keep in mind that a bunch of exceptions are thrown all over the place, and 
we might just be failing to load some important stuff. Hopefully someone can 
follow up on this.


There may indeed be a lot of breakage.  See https://bugzil.la/980752 
about the reuseGlobal=true situation with the addon-sdk loader 
Object.freeze()-ing Object.prototype.  (Which is bad for idioms like 
Foo.prototype.toString = function() {}.)
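
A small demonstration of why a frozen prototype breaks that idiom, 
regardless of strict versus sloppy mode:

```javascript
// Foo stands in for any consumer class whose prototype got frozen.
function Foo() {}
Object.freeze(Foo.prototype);
let threw = false;
try {
  // Strict mode throws a TypeError here; sloppy mode silently ignores it.
  Foo.prototype.toString = function () { return "patched"; };
} catch (e) {
  threw = true;
}
// Either way the patch never takes effect:
const patched = new Foo().toString() === "patched";
console.log(patched); // false
```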


Andrew


HTML sanitization, CSP, nsIContentPolicy, ServiceWorkers (was: Re: How to efficiently walk the DOM tree and its strings)

2014-03-05 Thread Andrew Sutherland

On 03/05/2014 01:52 AM, nsm.nik...@gmail.com wrote:

On Tuesday, March 4, 2014 1:26:15 AM UTC-8, somb...@gmail.com wrote:

While we have a defense-in-depth strategy (CSP and iframe sandbox should
be protecting us from the worst possible scenarios) and we're hopeful
that Service Workers will eventually let us provide
nsIContentPolicy-level protection, the quality of the HTML parser is of
course fairly important[1] to the operation of the HTML sanitizer.

Sorry to go off-topic, but how are ServiceWorkers different from normal Workers 
here? I'm asking without context, so forgive me if I misunderstood.


Context in short: Thunderbird does not use an HTML sanitizer in the 
default case when displaying HTML emails because it can turn off 
JavaScript execution, network accesses, and other stuff via 
nsIContentPolicy.  iframe sandboxes let the Gaia email app which runs in 
content turn off JavaScript but do nothing to stop remote image 
fetches/etc.  We want to be able to stop network fetches for both 
bandwidth and privacy reasons.


I am referring to the dream of being able to skip sanitization and 
instead just enforce greater control over the iframe through either use 
of CSP or ServiceWorkers.  ServiceWorker's onfetch capability doesn't 
actually work for this purpose because of the origin restrictions, but 
the mooted allowConnectionsTo CSP 1.1 API from Alex Russell's blog post 
http://infrequently.org/2013/05/use-case-zero/ (about CSP and an early 
NavigationController/ServiceWorker proposal) would have been perfect.


In the event CSP grew an API like that again in the future, I assume 
ServiceWorker is where it would end up.  It doesn't seem super likely 
since it seems like CSP 1.1 generally covers the required use-cases.  If 
we are (eventually) able to specify a more strict CSP on an iframe than 
the CSP in which the e-mail app already lives, we may be able to use 
img-src/media-src/etc. for our fairly simple "stop this iframe from 
accessing any resources" control purposes.
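
If that became possible, the tightened iframe policy would look roughly 
like the following header; this is illustrative only, since no mechanism 
to apply it per-iframe existed at the time:

```
Content-Security-Policy: default-src 'none'; style-src 'unsafe-inline'; img-src 'none'; media-src 'none'
```

(Strictly, default-src 'none' already implies the img-src/media-src 
entries; they are spelled out only to mirror the directives named above.)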


More context is available at the 
https://groups.google.com/d/msg/mozilla.dev.webapi/wDFM_T9v7Tc/_yTofMrjBk4J 
thread.


Andrew


Possible to make all (CSS) transitions 0/1ms to speed up FxOS Gaia tests?

2014-02-16 Thread Andrew Sutherland
In Gaia, the system and many of the apps use transitions/animations with 
a non-trivial duration, particularly for card metaphor stuff where logic 
may be blocked on the completion of the animation. (Values seem to vary 
between 0.3s and 0.5s... no one tell UX!)


Our Gaia tests currently take an absurdly long amount of time for what 
is being tested.  There are varying reasons for this (gratuitous 
teardown of the b2g-desktop process, timeouts that time out but don't 
fail, etc.).  I believe one thing we could do to speed things up would 
be to make all transitions super short.  We still want transitionend to 
fire for logic reasons, but otherwise the unit test infrastructure 
probably does not really care to actually watch the transition happen.


Is it possible / advisable to have Gecko support some preference or 
other magical mechanism to cause transitions and non-infinite animations 
to effectively complete in some extremely small time interval / on the 
next turn of the event loop, at least for durations originally less than 
1second/other short time?  I briefly looked for such an existing 
mechanism, but was unable to find one.


The alternative would be to use a build-time mechanism in Gaia to 
transform all CSS to effect this change in that fashion.  Gaia already 
has build steps like this so introducing it would not be particularly 
difficult, but it's always nice if what the tests run is minimally 
different from what the devices run.


(Additionally, an in-Gecko mechanism could produce slightly more correct 
results since it could realistically emulate the ordering of when the 
transitionend events would fire before disregarding those numbers and 
firing them all in succession.  Although an automated mechanism I 
suppose could just map observed values to sequentially ordered small 
timeouts.)
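
The build-time transform could be as simple as a regex pass over each 
stylesheet; this is a hedged sketch, not Gaia's actual build code, and 
the 1ms floor is an arbitrary choice:

```javascript
// Rewrite any duration under 1s down to capMs milliseconds so that
// transitionend still fires, just almost immediately.
function shortenTransitions(css, capMs = 1) {
  return css.replace(/(\d*\.?\d+)(m?s)/g, (match, num, unit) => {
    const ms = unit === "s" ? parseFloat(num) * 1000 : parseFloat(num);
    return ms > 0 && ms < 1000 ? `${capMs}ms` : match;
  });
}
console.log(shortenTransitions("transition: transform 0.3s ease;"));
// → "transition: transform 1ms ease;"
```

Keeping durations non-zero (rather than 0s) preserves the transitionend 
firing behavior the logic depends on.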


Andrew


Re: #ifdefing out mailnews-only code but keeping it in m-c

2013-11-13 Thread Andrew Sutherland

On 11/12/2013 03:39 AM, Henri Sivonen wrote:

On Mon, Nov 11, 2013 at 10:08 PM, Andrew Sutherland
asutherl...@asutherland.org wrote:

On 11/11/2013 01:33 PM, Joshua Cranmer  wrote:

Actually, I believe you need to keep the x-imap4-modified-utf7 converters
in B2G, if you don't want to break Gaia Email's tests. They use the
fakeservers as well, which specifically use this charset.

This is minor/easy breakage for us to fix.  I wouldn't keep the code around
for that reason.

Do you mean it's something you'd fix reactively or is there something
that needs to be handled proactively before x-imap4-modified-utf7 goes
away from B2G?


Reactively.  It's a small fix but the IMAP fake-server in question is 
slated to be replaced by the node-based 
https://github.com/andris9/hoodiecrow in the near/mid-term, and 
depending on the strategy/timeline for impacting Thunderbird I feel like 
we might win the race to change over to that.  (And we can mitigate 
failures by pinning our build-runner to a b2g-desktop build that 
precedes the change for a few days if we can't fix it immediately.)


Andrew


Re: #ifdefing out mailnews-only code but keeping it in m-c

2013-11-11 Thread Andrew Sutherland

On 11/11/2013 01:33 PM, Joshua Cranmer  wrote:

By far the easiest solution would be leaving the code in m-c but
#ifdefing it out of Firefox builds. Is there a compelling reason not
to do so? If there is no compelling reason against #ifdefing it out in
m-c, what's the right variable to #ifdef on (needs to work in
moz.build and the preprecessor)?


Actually, I believe you need to keep the x-imap4-modified-utf7 
converters in B2G, if you don't want to break Gaia Email's tests. They 
use the fakeservers as well, which specifically use this charset.


This is minor/easy breakage for us to fix.  I wouldn't keep the code 
around for that reason.


(Character-set-wise, all the Gaia e-mail app wants is for 
TextEncoder/TextDecoder to conform to the spec.)


Andrew


Re: Best way to make xpcshell provide consistent stack limits?

2013-06-28 Thread Andrew Sutherland

On 06/28/2013 06:52 PM, Benjamin Smedberg wrote:
The JS stack limit should be basically the native stack limit minus a 
little bit of headroom (maybe 512 bytes?). Whether we implement that 
dynamically or just use ifdefs to hardcode the correct values seems 
like just an implementation question.


Unless I misunderstand the questions and you're saying that the 
inconsistency in *native* stack size is part of the problem, in 
which case I think we need to understand why any production code would 
be getting particular close to any of the native thread size stack 
limits.


I conflated 2 issues a little.

1) Primary concern: JS that runs fine in some xpcshell builds dies on 
other xpcshell builds based on platform and build type.  The same JS 
runs totally fine in node.js on all platforms, but use of node.js is 
forbidden by the module owner in this case.  Greater consistency in 
failures is desired, or at least not dying because we are being 
artificially stingy with JS stack size.  This could be accomplished by 
changing stack sizes or making it possible to adjust the stack 
used/supported by xpcshell.


The problem in this case was due to the 1 MiB JS stack limit on linux 
(with 8 MiB native stacks) killing an out-of-date esprima JS parser 
which was a very literal recursive descent parser with separate 
functions for each of the operator precedence hierarchy and which 
bounced all calls through a wrapper that used apply().  See 
https://gist.github.com/asutherland/5871825 for the stack.  Newer 
esprima builds have improved stack behaviour by consolidating binary 
expression parsing into a single function that uses a while loop. 
Besides upgrading, stack usage for our specific use case has also been 
improved by using SpiderMonkey's parsing API instead in cases where 
esprima's extended feature set is not required.  (Comments, I think.)
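
The stack-depth contrast between the two esprima strategies can be shown 
with a toy grammar of digit ("+" digit)* terms; this is a sketch of the 
contrast, not esprima's actual code:

```javascript
// Toy grammar: digit ("+" digit)*; both functions count the terms.
function parseRecursive(s, i = 0) {
  // One stack frame per "+", i.e. O(n) stack depth (the failure mode
  // in the linked stack trace).
  if (i + 1 < s.length && s[i + 1] === "+") {
    return 1 + parseRecursive(s, i + 2);
  }
  return 1;
}
function parseIterative(s) {
  // Constant stack depth, like newer esprima's while-loop consolidation.
  let terms = 1;
  for (let i = 1; i < s.length; i += 2) {
    if (s[i] === "+") terms++;
  }
  return terms;
}
console.log(parseRecursive("1+1+1")); // 3
console.log(parseIterative("1" + "+1".repeat(5000))); // 5001
```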


I don't believe this problem is completely rare however, given the 
existence of bugs like 
https://bugzilla.mozilla.org/show_bug.cgi?id=762618 for one of the 
add-on manager's unit tests.



2) Since xpcshell is just relying on the default XPConnect behaviour, 
there's the question of whether what XPConnect is doing is really what 
we want it to do.  I agree that it seems like the JS stack limit should 
be consistent with the actual stack size limit, especially given that 
there still exist numerous code paths in Firefox that can spin a nested 
event loop and effectively rob the JS stack of arbitrarily capped 
available stack space.


In this case, since the hard-coded values for Windows are out-of-date by 
a factor of two, the linux ones seem arbitrary and unrelated to default 
ulimit -s values (or actual values), it does seem like the #define 
solution might not be perfect.



It seems like going with the dynamic option would address #1 and works 
for #2.  If our linux/OS X scripts want to to ensure that xpcshell can 
run obscenely recursive code, they can set the ulimit before running 
xpcshell and all will be well.  (Windows remains a problem but with a 2 
meg stack for 32-bit builds, there's a lot of head-room.)


I'll file a bug on this and try to provide a patch unless I hear 
otherwise.  (I am very happy for someone else to do all the work!)


Andrew


Best way to make xpcshell provide consistent stack limits?

2013-06-26 Thread Andrew Sutherland
For B2G's gaia repository we are currently using xpcshell[1] as our 
command line JS runner.  I was noticing some horrible inconsistencies in 
terms of blowing the JS stack when we were trying to use the esprima JS 
parser[2] that varied by the platform and build type.


The nutshell is that the #defines at 
http://dxr.mozilla.org/mozilla-central/source/js/xpconnect/src/XPCJSRuntime.cpp#l2687 
net us the following JS stack limits.  (These are the limits when the JS 
engine starts throwing "InternalError: too much recursion" and are 
somewhat separate from actual native stack limits.)


linux (32 bit, non-debug): 512 KiB
windows (all builds): 900 KiB [3]
linux (32 bit, debug): 1 MiB
linux (64 bit, non-debug): 1 MiB
linux (64-bit, debug): 2 MiB
OS X (all): 7 MiB
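
The limits above were measured with a probe along these lines; absolute 
numbers depend on the engine, build type, and frame size, so only 
relative comparisons between builds are meaningful:

```javascript
// Count frames until the engine throws: "InternalError: too much
// recursion" in SpiderMonkey, RangeError in V8.
function measureRecursionLimit() {
  let depth = 0;
  function recurse() {
    depth++;
    recurse();
  }
  try {
    recurse();
  } catch (e) {
    // Swallow the recursion error; depth holds the limit we reached.
  }
  return depth;
}
console.log(measureRecursionLimit() > 1000);
```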


For gaia's purposes, this inconsistency is not great.  How should we 
resolve this?


a) Make xpcshell try and pick more consistent values itself

b) Make xpcshell take a command line argument and call 
JS_SetNativeStackQuota plus doing whatever it needs to back that up with 
an appropriate platform stack (possibly by re-exec()ing). (Similar to 
node.js supporting --stack_size which defaults to 984 for me right now.)


c) Change xpconnect's hard-coded defaults

d) Change xpconnect to use jsnativestack.cpp's ability to tell us what 
the platform stack size actually is and/or nsThread's knowledge of the 
stack size it was created with.  So on linux we'd use whatever 'ulimit 
-s' is telling us to do, etc.


e) Don't use xpcshell for that!  Are you crazy?!

f) Other

Thanks,
Andrew

1: Although in the future this might end up just being any xulrunner 
-app capable build.
2: Since the esprima JS parser produces a parse-tree that's supposed to 
conform to what SpiderMonkey's Reflect API produces, we do have a 
work-around, but I expect this stack issue to come up again in the future.
3: The windows limit could be higher by its current rationale; 
https://bugzilla.mozilla.org/show_bug.cgi?id=582910 increased the 
windows stack to 2 MiB from 1MiB.



Re: Code Review Session

2013-05-24 Thread Andrew Sutherland

On 05/24/2013 11:05 AM, Mike Conley wrote:

Sounds like we're talking about code review.


But I want to qualify integration into bugzilla: I explicitly do not
want a tool that is tightly coupled to bugzilla.  In fact, I want a
tool that has as little to do with bugzilla as feasible.


I'm a contributor to the Review Board project[1], which is not coupled 
with Bugzilla whatsoever.


Noting that although it's not coupled, Review Board makes it very 
possible to do things that make it an abandon-able solution.  Back in 
2009 (before there was extension/plugin support) I modified review board 
so it could pull your review queue out of bugzilla and automatically 
generate reviews.  To address long-term archival needs, I added export 
functionality that formatted the review as ASCII text that could be 
copied and pasted into bugzilla.


Here's an example excerpt from my blog post at 
http://www.visophyte.org/blog/2009/06/20/review-board-and-bugzilla-reviews-take-2/:

===

on file: mailnews/base/src/nsMessenger.cpp line 635

// if we have a mailbox:// url that points to an .eml file, we have to read
// the file size as well


what a pretty comment

on file: mailnews/base/src/nsMessenger.cpp line 642

NS_ENSURE_SUCCESS(rv, rv);


please rename rv ARR-VEE

===

I personally worry about introducing another piece into the existing 
bugzilla/github interaction, but even 4 years ago, review board had a 
reviewing experience that is still ahead of both splinter and github.  
(Although I do have high hopes that github will improve their review 
experience.)  Specifically:


- Side-by-side diff (github-only problem)
- Syntax highlighted code based on syntax highlighting the entire file, 
so it's not just a line-centric/keyword centric highlighting.
- Expandable context! Don't get stuck with only 3 or 8 lines of 
context.  Expand to see the whole file!
- Detects when code is just moved around.  This came towards the tail 
end of my use of it, but it was a killer feature for refactored code.  
No trying to play the game of match up the added hunk and the removed 
hunk.  It figured it out for you, greatly reducing complexity.


Also nice (versus github):
- Your review isn't published until you push a button, allowing you to 
make persistent notes to yourself that you can later remove when you see 
they aren't an issue without bothering anyone else.  This also saves you 
from proactive people fixing things and pushing fixes as you type them 
and causing massive confusion.

- Reviews don't get nuked by rebases

Right now, for tricky reviews, I find myself having to checkout the code 
locally then use git difftool with the meld 3-way merge tool to diff 
the branch against its base in order to get syntax-highlighted 
side-by-side diff with full context and smart intra-line diffing.


Andrew


Re: Storage in Gecko

2013-04-26 Thread Andrew Sutherland

On 04/26/2013 03:21 PM, Dirkjan Ochtman wrote:

Also, I wonder if SQLite 4 (which is more like a key-value store)


SQLite 4 is not actually more like a key-value store.  The storage 
model underlying the SQL interface (and the SQL interface is still the 
interface) changed from a page-centric btree structure to a key-value 
store more akin to a log-structured merge implementation, but it will 
still seem very familiar to anyone who knows the page-centric vfs 
implementation that preceded it.  Specifically, it does not look like 
IndexedDB's model; it still does a lot of fsyncs in order to maintain 
the requisite SQL ACID semantics.


Unless we exposed that low-level key-value store, SQLite 4 would look 
exactly the same to consumers.  The main difference would be that 
because records would actually be stored in their (lexicographic) 
PRIMARY KEY order, performance should improve in general, especially on 
traditional (non-SSD) hard drives.  Our IndexedDB implementation, for 
one, could probably see a good performance boost from a switch to 
SQLite 4.


Andrew


Re: Storage in Gecko

2013-04-26 Thread Andrew Sutherland

On 04/26/2013 03:30 PM, Gregory Szorc wrote:

However, before that happens, I'd like some consensus that IndexedDB is
the best solution here. I'd especially like to hear what Performance
thinks: I don't want to start creating a preferred storage solution
without their blessing. If they have suggestions for specific ways we
should use IndexedDB (or some other solution) to minimize perf impact,
we should try to enforce these through the preferred/wrapper API.


I'm not on the performance team, but I've done some extensive 
investigation into SQLite performance[1] and a lot of thinking about how 
to efficiently do disk I/O for various workloads from my work with 
mozStorage for Thunderbird's global database.


I would say that IndexedDB has a very good API[2] that allows for very 
efficient back-end implementations.  Our existing implementation could 
do a lot of things to go faster, especially on non-SSDs, but that can 
be done as an enhancement and does not need to happen yet.  I think 
LevelDB broadly has the right idea, although Chrome's IndexedDB 
implementation has some surprising limitations (no File/Blob storage) 
which suggest it's not there yet.


The API can indeed be a bit heavy-weight for simple needs; shims over 
IndexedDB like gaia's asyncStorage helper are the way to go:

https://github.com/mozilla-b2g/gaia/blob/master/shared/js/async_storage.js
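To show the shape such a shim exposes, here is a toy, in-memory stand-in with an asyncStorage-style callback API (getItem/setItem/removeItem).  This is only an illustration of the interface; the real gaia helper backs these calls with an IndexedDB object store, which this sketch replaces with a Map.

```javascript
// Toy stand-in for an asyncStorage-style shim over IndexedDB.
// The Map below stands in for an IndexedDB object store; results are
// always delivered asynchronously, as they would be with real
// IndexedDB requests.
const toyAsyncStorage = (() => {
  const backing = new Map();
  const later = (fn) => queueMicrotask(fn); // force async delivery
  return {
    getItem(key, callback) {
      later(() => callback(backing.has(key) ? backing.get(key) : null));
    },
    setItem(key, value, callback) {
      later(() => { backing.set(key, value); if (callback) callback(); });
    },
    removeItem(key, callback) {
      later(() => { backing.delete(key); if (callback) callback(); });
    },
  };
})();

toyAsyncStorage.setItem("pref", "dark", () => {
  toyAsyncStorage.getItem("pref", (value) => {
    console.log(value); // "dark"
  });
});
```

The appeal of this style is that a consumer with simple key-value needs never touches transactions, object stores, or request objects directly.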

Andrew

1: 
http://www.visophyte.org/blog/2010/04/06/performance-annotated-sqlite-explaination-visualizations-using-systemtap/


2: The only enhancement I would like is non-binding hinting of desired 
batches so that IndexedDB could pre-fetch data that the consumer knows 
it is going to want anyway, avoiding ping-ponging fetch requests back 
and forth between the async thread and the main thread.  (Right now 
mozGetAll can be used to accomplish similar results, if in a 
non-transparent and dangerously foot-gunny way.)
