Re: [dev-servo] Pre-alpha of libservo available

2017-09-20 Thread Till Schneidereit
On Wed, Sep 20, 2017 at 11:36 PM, Nicholas Nethercote <
n.netherc...@gmail.com> wrote:

> What is the backwards compatibility story? Are the APIs stable?
>
> I ask because, as I understand it, API instability has always been the
> problem with embedding Firefox.
>

I agree that backwards compatibility is important. We haven't stabilized the
APIs yet, but we plan to do so before anything approaching a stable release.
___
dev-servo mailing list
dev-servo@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-servo


[dev-servo] Pre-alpha of libservo available

2017-09-20 Thread Till Schneidereit
Hello friends of Servo!

We're happy to announce the availability of a pre-alpha version of
libservo, a Rust embedding API for Servo.

You can find documentation for the API and a tutorial for using it at the
following locations:
https://doc.servo.org/servo/struct.Servo.html
http://github.com/paulrouget/servo-embedding-example

Binaries for demo applications for the three desktop platforms can be found
here:
https://github.com/paulrouget/servoshell/releases


Please take a look, play around with the API and the demo applications, and
let us know what you think.

With the TL;DR out of the way, let's take a look at the details.


## What is this announcement about?

To some extent, Servo has always been embeddable. Over the past year, a lot of
effort has gone into the design and implementation of a clean low-level API
- libservo. The API is designed for maximal flexibility, with the goal of
covering all use cases for Servo - ranging from simple webview-like uses to
fully-fledged browsers to server-side use as a headless utility.

We do not expect most embedders to use this API directly: to address the use
cases of the majority of embedders, we intend to build higher-level APIs on
top of it. Those will range from higher-level Rust APIs to an implementation
of CEF[1] to drop-in replacements for the Android WebView component[2].

Because most embedders won't be using this low-level API directly, we are
choosing flexibility over usability when adding functionality. We want the
API to be as usable as possible - but not more so. The API requires embedders
to provide a GL rendering context and to manually drive the event loop to
ensure rendering and input processing. It also allows fine-grained influence
on behavior such as navigation. Going forward, it'll grow more capabilities
along those lines, such as rendering to buffers, output of raw display lists
to be consumed by compositing systems such as a customized WebRender, or
delegating all I/O to the embedder.
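
To make the embedder's responsibilities concrete, here is a minimal sketch of
the overall control flow. Every type and method name in it is a placeholder
invented for illustration - the actual types and signatures are documented at
the struct.Servo link above.

```rust
// Placeholder sketch only: these are not the real libservo types or
// signatures (see the struct.Servo docs for those). It just illustrates
// the division of labour: the embedder owns the GL context and the native
// event loop, and repeatedly hands control to the engine.

struct GlWindow; // stands in for an embedder-provided GL context / native window

enum EmbedderEvent {
    Resize(u32, u32),
    Quit,
}

impl GlWindow {
    fn new() -> GlWindow {
        GlWindow
    }
    fn poll_event(&self) -> EmbedderEvent {
        EmbedderEvent::Quit // a real embedder would pull from the OS event queue
    }
}

struct Servo; // stands in for the engine handle created from the window

impl Servo {
    fn new(_window: &GlWindow) -> Servo {
        Servo
    }
    fn handle_event(&mut self, _event: EmbedderEvent) {
        // forward resize / input / navigation requests to the engine
    }
    fn perform_updates(&mut self) {
        // let the engine run layout, script callbacks and rendering
    }
}

fn main() {
    let window = GlWindow::new(); // the embedder creates the GL context
    let mut servo = Servo::new(&window); // and hands it to the engine
    loop {
        // the embedder drives the loop: there is no hidden thread inside the engine
        match window.poll_event() {
            EmbedderEvent::Quit => break,
            event => servo.handle_event(event),
        }
        servo.perform_updates();
    }
}
```

The important point is simply that the embedder owns the GL context and the
native event loop, and repeatedly hands control to Servo for rendering and
input processing.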



[1] https://en.wikipedia.org/wiki/Chromium_Embedded_Framework
[2] https://developer.android.com/reference/android/webkit/WebView.html


## What's the current status?

At this point libservo is far from complete, but ready for early
experimentation. Crate-level documentation is here:
https://doc.servo.org/servo/struct.Servo.html

A How To guide for setting up a project to use libservo can be found here:

http://github.com/paulrouget/servo-embedding-example

In addition to this example application, Paul has built two demo apps: a
cross-platform minimalist browser with as reduced a UI as possible, and a
more fully-fledged macOS version with a native tabbed UI and a proper
location bar. Both are implemented as ports of the ServoShell project:
https://github.com/paulrouget/servoshell

Releases for the three desktop platforms are available as GitHub releases:
https://github.com/paulrouget/servoshell/releases

We're happy with the overall shape of the API, but expect at least cosmetic
changes to most of it. Additionally, there is a lot of functionality
missing, so you should see this as the kernel of an API to be fleshed out.

There are also a number of issues that make getting a project up and running
harder than it should be. From a build system and dependency management
perspective, embedding Servo should be as simple as adding it as a dependency
to a crate's Cargo.toml, but we're not there yet. For now, Paul's
above-mentioned embedding example has a detailed explanation of the required
steps.


## What's next?

Our highest priority right now is to streamline that process: reducing the
build system complexity and making the above-mentioned cosmetic changes.

We're also working on more examples and demos. Specifically, Paul is
experimenting with a simple C API and with an integration into a Xamarin
Forms-based C# application.
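
As a purely illustrative aside, a "simple C API" over libservo would likely
take roughly the following shape: an opaque handle plus create/tick/destroy
entry points. None of the names below come from Paul's experiment; they are
made up for this sketch.

```rust
// Invented names for illustration only - this is not Paul's actual C API.
// The idea: expose an opaque handle plus create / tick / destroy entry
// points that a C (or C#, via P/Invoke) embedder can call.

use std::ffi::CStr;
use std::os::raw::c_char;
use std::ptr;

#[allow(dead_code)]
pub struct ServoHandle {
    url: String,
}

/// Create an engine instance pointed at `url`. Returns null on invalid input.
#[no_mangle]
pub extern "C" fn servo_new(url: *const c_char) -> *mut ServoHandle {
    if url.is_null() {
        return ptr::null_mut();
    }
    let url = unsafe { CStr::from_ptr(url) }.to_string_lossy().into_owned();
    Box::into_raw(Box::new(ServoHandle { url }))
}

/// Advance the engine by one iteration of its event loop.
#[no_mangle]
pub extern "C" fn servo_tick(handle: *mut ServoHandle) {
    if handle.is_null() {
        return;
    }
    let _handle = unsafe { &mut *handle };
    // rendering and input processing would be driven from here
}

/// Destroy an instance previously returned by servo_new.
#[no_mangle]
pub extern "C" fn servo_free(handle: *mut ServoHandle) {
    if !handle.is_null() {
        drop(unsafe { Box::from_raw(handle) });
    }
}
```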

We'll share more information about those experiments and about our plans for
higher-level ways of embedding Servo, such as bindings for languages like
Java and JavaScript.

For now, please download the demo applications, take a look at the
documentation, play around with the example code, and let us know what you
think: ping Paul or me in #servo on IRC, file issues in the servo/servo
repository or in Paul's ServoShell or Servo Embedding Example repositories,
or simply reply to this email.

We'd be particularly interested in experiments around integrating Servo
into platform-native toolkits. E.g., a Gtk-based Linux equivalent to Paul's
macOS ServoShell demo would be fantastic.


Thank you,
the Servo Embedding team
___
dev-servo mailing list
dev-servo@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-servo


Re: [dev-servo] Zhitin Zhu's intern presentation on Magic DOM

2017-07-26 Thread Till Schneidereit
On Wed, Jul 26, 2017 at 11:03 AM, Peter Hall  wrote:

> Is it still possible to watch these somewhere?
>

They'll be live-streamed on air.mozilla.org at the mentioned time, and will
then be available as recordings a short while after that, also on
air.mozilla.org.
___
dev-servo mailing list
dev-servo@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-servo


Re: [dev-servo] Embedding Servo - features of interests

2017-04-12 Thread Till Schneidereit
On Wed, Apr 12, 2017 at 2:56 PM, Boris Zbarsky  wrote:

> On 4/12/17 2:32 AM, Paul Rouget wrote:
>
>> - Prerendering: we will want to prerender documents. A special kind of
>> pipeline with black-listed functionalities.
>>
>
> I suspect that at least for a web browser this is a huge amount of pain.
> Note that Chrome is giving up and removing their implementation, last I
> checked.
>

Interesting. Do you have a link for more information?
___
dev-servo mailing list
dev-servo@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-servo


[dev-servo] Fwd: Progressive Web Metrics (PWM) in Gecko

2016-09-26 Thread Till Schneidereit
Forwarding to dev-servo; adding these metrics seems like it could be
interesting for making useful comparisons between Servo and other engines.

-- Forwarded message --
From: Dominik Strohmeier 
Date: Mon, Sep 26, 2016 at 3:46 PM
Subject: Progressive Web Metrics (PWM) in Gecko
To: dev-platf...@lists.mozilla.org


Greetings,

in an effort to enable a modern, user-centric metrics suite that will allow
us to track measurements representing user-perceived performance, we aim to
get Progressive Web Metrics (PWM) into Gecko (meta bug 1298380). Beyond
improving our measurements around the responsiveness of Gecko, PWM is also
meant to provide developers with better performance measurements in the
future.

The current efforts around PWM have mainly been driven by the Chromium team,
which worked on drafting the first specs for measurements and probes. There
is a presentation from Paul Irish here: bit.ly/pwmetricsdeck. Shubhie
Panicker is working on the spec for the W3C Web Perf WG.

From a user's point of view, we see four key moments of experience when a
user intends to explore a new web page/app (adopted from here):

- *Time to First Meaningful Paint (TTFMP): "Is it happening and useful?"*
  This first part focuses on measuring, from a user's perspective, whether
  navigation started successfully (the server has responded) and whether the
  page has painted enough critical content to engage with. Technically,
  TTFMP measures the time from navigation start to the first paint in which
  the page's primary content is visible. Currently it is defined as the
  paint that follows the most significant layout. TTFMP is tracked in bug
  1299117.
- *Time to Interaction (TTIx): "Is it working?"* This probe will track when
  users start interacting with pages after navigation via click, touch or
  scroll events. During user research by the UX team, we have learned that
  users use scrolling to test whether a page is fully loaded. In other
  words, user interaction with new content marks the end of the
  user-perceived page load process and defines the transition to interaction
  and content exploration.
- *Time to Interactive (TTI): "Is it usable?"* This defines the transition
  from page load to ready-for-user-interaction from the engine's point of
  view. TTI marks the point at which the thread executing JavaScript is
  available enough to handle user input. In the current spec for TTI,
  nothing should block the render process for more than 50ms within a
  10-second window, so that input can be handled (a rough sketch of this
  heuristic follows the list). Further discussion on TTI calculation can
  also be found here. TTI is tracked in bug 1299118.
- *Interaction probes: "Is it delightful?"* Beyond page load, these probes
  track the responsiveness of the engine during users' content
  exploration/browsing.
  - *Estimated Input Latency (EIL)*, aka Risk to Responsiveness (see bug
    1303296): Given that user input can happen anytime, EIL estimates the
    input latency for new user input.
  - *Actual Input Latency:* Input latency starts at the event timestamp and
    measures time to glass. In Telemetry, this is currently tracked via
    INPUT_EVENT_RESPONSE_MS.
  - *Frame Throughput*, aka jank-free scrolling and animation (see the
    Chromium working draft and bug 1303313).
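
For readers who want the TTI heuristic above spelled out, here is a
simplified, non-spec-accurate sketch of the "quiet window" search (no task
longer than 50ms within a 10-second window after first meaningful paint).
All names and numbers in it are illustrative only.

```rust
// Illustrative only: a simplified version of the "quiet window" heuristic
// the TTI draft describes, applied to (start, duration) task records in
// milliseconds, sorted by start time. Not a spec-accurate implementation.

const LONG_TASK_MS: f64 = 50.0;
const QUIET_WINDOW_MS: f64 = 10_000.0;

/// Returns the estimated Time To Interactive, or None if no quiet window
/// is found before `trace_end`.
fn time_to_interactive(fmp: f64, tasks: &[(f64, f64)], trace_end: f64) -> Option<f64> {
    let mut candidate = fmp; // TTI can never precede first meaningful paint
    for &(start, duration) in tasks {
        if start + duration <= candidate {
            continue; // task finished before the current candidate
        }
        if duration > LONG_TASK_MS {
            // A long task interrupts the quiet window; restart the search
            // at the end of that task.
            candidate = start + duration;
        }
        if start - candidate >= QUIET_WINDOW_MS {
            break; // found a 10-second window with no long tasks
        }
    }
    if trace_end - candidate >= QUIET_WINDOW_MS {
        Some(candidate)
    } else {
        None
    }
}

fn main() {
    // First meaningful paint at 1200ms, one 80ms long task at 3000ms,
    // trace ends at 20s: TTI is estimated at the end of that long task.
    let tasks = [(1500.0, 30.0), (3000.0, 80.0), (4000.0, 10.0)];
    println!("{:?}", time_to_interactive(1200.0, &tasks, 20_000.0)); // Some(3080.0)
}
```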

We are seeking engineering input and hope to get a discussion started here
about specs and implementation.
Thanks,
Harald and Dominik

--
Dominik Strohmeier | Staff Product Manager, Platform Metrics | Mozilla
___
dev-platform mailing list
dev-platf...@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform
___
dev-servo mailing list
dev-servo@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-servo


Re: [dev-servo] What should be the unit of debugging in Servo?

2016-09-06 Thread Till Schneidereit
On Tue, Sep 6, 2016 at 4:35 AM, Boris Zbarsky <bzbar...@mit.edu> wrote:

> On 9/5/16 8:17 AM, Till Schneidereit wrote:
>
>> I don't think it makes too much sense to be able to pause completely
>> independent browsing contexts that can't possibly interact with each
>> other.
>>
>
> There is no such thing.  They can always interact, just async.
>

That is indeed an important point. While I meant sync interactions, that
doesn't cover worker threads, so it can't be the whole story.
___
dev-servo mailing list
dev-servo@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-servo


Re: [dev-servo] Questions about constellation, sandboxing and multiprocess

2016-08-03 Thread Till Schneidereit
On Wed, Aug 3, 2016 at 5:35 PM, Jack Moffitt <j...@metajack.im> wrote:

> I asked ekr how much this mattered, and he thought it was important. I
> don't think anyone has pointed me to a documented attack, but it
> definitely seems like the kind of thing that could be done somehow.
>

I guess I left out an important point: multiple content processes only
improve security if we can reliably ensure that attacking code is never run
in the same process with potential target content.

That means either really spawning a new process for every origin, at least,
or (for some value of "reliably ensure") separating content based on some
kind of trustworthiness score.

I would argue that the first isn't really feasible. I think (but might be
mistaken) that all browsers start combining tabs in a process after a certain
number, to avoid gobbling up too much memory. In the second case, we might
just as well use a single process for each trustworthiness group right away.
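
To make that second option concrete, here is a toy sketch - not anything
implemented in Servo - of bucketing origins into a small, fixed set of
content-process groups based on a trustworthiness classification (e.g. EV
certificates or tracking-protection lists, as in my earlier message quoted
below). The names and groups are invented for illustration.

```rust
// Toy sketch, not Servo code: allocate origins to a small fixed set of
// content-process groups instead of spawning one process per origin.

use std::collections::HashSet;

#[derive(Debug, PartialEq)]
enum ProcessGroup {
    HighValue, // e.g. origins with EV certificates (banking etc.)
    LowTrust,  // e.g. origins on a tracking-protection blocklist (ads)
    Default,   // everything else shares one ordinary content process
}

struct AllocationPolicy {
    high_value: HashSet<String>,
    blocklisted: HashSet<String>,
}

impl AllocationPolicy {
    fn group_for(&self, origin: &str) -> ProcessGroup {
        if self.high_value.contains(origin) {
            ProcessGroup::HighValue
        } else if self.blocklisted.contains(origin) {
            ProcessGroup::LowTrust
        } else {
            ProcessGroup::Default
        }
    }
}

fn main() {
    let policy = AllocationPolicy {
        high_value: vec!["bank.example".to_string()].into_iter().collect(),
        blocklisted: vec!["ads.example".to_string()].into_iter().collect(),
    };
    assert_eq!(policy.group_for("bank.example"), ProcessGroup::HighValue);
    assert_eq!(policy.group_for("news.example"), ProcessGroup::Default);
    // Memory stays bounded by the number of groups (three processes here),
    // not by the number of open tabs or distinct origins.
}
```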


> How we allocate domains to content processes is an open question. It's
> not clear whether we want to segregate high value targets or low value
> targets. But the infrastructure required is the same either way pretty
> much. The only strategy we know won't work is round-robin/random,
> since the attacker could just keep creating domains until they land in
> the right process.
>
> To be clear, I don't think there is very much code complexity here
> over the normal 2 process (chrome + content) solution. We already have
> to have process spawning and IPC. The only thing that changes here is
> code to decide where to spawn new pipelines.
>

I'm not concerned about code complexity, but about memory usage. Memory
usage in many-tab scenarios is one of the measures where Firefox is still
vastly superior to the competition, and I think we should aim for roughly
matching that.


> Implementation wise, we currently spawn a new process per script
> thread. I think we should change this to spawn a single, sandboxed
> content process that contains all the pipelines. Later we can expand
> this once it's more clear how we should allocate pipelines to
> processes.
>
> jack.
>
> On Wed, Aug 3, 2016 at 2:53 AM, Till Schneidereit
> <t...@tillschneidereit.net> wrote:
> > I wonder to which extent this matters. I'm not aware of any real-world
> > instances of the mythical cross-tab information harvesting attack. Sure,
> in
> > theory the malvertising ad from one tab would be able to read information
> > from your online banking session. In practice, it seems like attacks that
> > gain control of the machine are so much more powerful that that's where
> all
> > the focus is.
> >
> > Additionally, it seems like two content processes, one for normal sites,
> > one for high-security ones (perhaps based on EV certificates), should
> give
> > much of the benefits. Or perhaps an additional one for low-security ones
> > such as ads (perhaps based on tracking blocking lists).
> >
> > On Wed, Aug 3, 2016 at 5:43 AM, Jack Moffitt <j...@metajack.im> wrote:
> >
> >> Each process is a sandboxing boundary. Without security as a concern
> >> you would just have a single process. A huge next step is to have a
> >> second process that all script/layout threads go into. This however
> >> still leaves a bit of attack surface for one script task to attack
> >> another. How many processes you want is a tradeoff of overhead vs.
> >> security.
> >>
> >> So really it should say "more process more security".
> >>
> >> jack.
> >>
> >> On Tue, Aug 2, 2016 at 9:09 PM, Patrick Walton <pwal...@mozilla.com>
> >> wrote:
> >> > It's not a stupid question :) I actually think we should gather all
> >> script
> >> > and layout threads together into one process. Maybe two, one for
> >> > high-security sites and one for all other sites.
> >> >
> >> > Patrick
> >> >
> >> >
> >> > On Aug 2, 2016 6:47 PM, "Paul Rouget" <p...@mozilla.com> wrote:
> >> >>
> >> >> On Tue, Aug 2, 2016 at 6:47 PM, Jack Moffitt <j...@metajack.im>
> wrote:
> >> >> >> First, is multiprocess and sandboxing actively supported?
> >> >> >
> >> >> > I tested this right before the nightly release, and it was working
> >> >> > fine and didn't seem to have bad performance. Note that you can
> run -M
> >> >> > or -M and -S, but not -S by itself (which doesn't make sense). Also
> >> >> > note that -M and -S probably don't work on Windows or Android
> >> >> > curr

Re: [dev-servo] Proposed work for upcoming sharing of Servo components with Firefox

2016-06-22 Thread Till Schneidereit
On Wed, Jun 22, 2016 at 5:17 PM, Manish Goregaokar 
wrote:

> On Wed, Jun 22, 2016 at 7:38 PM, Boris Zbarsky  wrote:
>
> > As a matter of dealing with test breakage and how it's resolved, do you
> > also prefer doing a bunch of commits and only then running tests, and if
> > they fail closing down the tree until it's all resolved?  If not, how is
> > this substantively different?  I realize it's different in a practical
> > sense because the costs are higher than normal per-commit testing, of
> > course.
> >
> >
> We don't close down the tree except for CI fires (and in the past, large
> bitrot-prone rustups). We currently have tests run on every approved PR
> (one at a time) before they are merged. We like for individual commits to
> pass tests for bisecting to work well but it is not a hard requirement. If
> the tests fail for a PR, it is taken out of the queue until it is fixed and
> reapproved. In the meantime, the next PR in the queue is tested. We also
> have travis perform a reduced set of tests on PRs so you can catch some
> things that may cause it to fail before approving. Additionally, we have
> the ability to "try" a PR so that it will be tested against the full suite
> but not merged. We never have backouts, because master always is green.
>

I'm somewhat surprised by the optimism about keeping this model indefinitely.
I'm not aware of any truly large-scale project that has been able to do so,
and I don't see why Servo would have any intrinsic advantage over those other
projects in this regard.

Hence, I think we'll have to make compromises on this sooner or later anyway.
Whether that means roll-ups or something else, I'm not sure. Assuming this is
true, it might make sense to factor it in when thinking about the integration
with Gecko: if we now go out of our way to preserve a workflow that we'll
have to abandon for other reasons in the not-too-distant future anyway, then
that might not be worth it.


Till
___
dev-servo mailing list
dev-servo@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-servo


Re: [dev-servo] Rooting APIs and idiomatic Rust

2015-09-23 Thread Till Schneidereit
CC-ing some SpiderMonkey GC people.

Terrence, I know that you did some Rust stuff recently, so maybe you have
the required bits of information to comment on this?
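
To distill the hazard described in the quoted message below (with made-up
types, not the actual rust-mozjs or SpiderMonkey API): a rooter that stores
a raw pointer to a value is left dangling as soon as that value moves, which
is exactly what happens when a fallible constructor returns it inside a
Result and the caller unwraps it.

```rust
// Minimal illustration of the hazard, with invented types (not rust-mozjs):
// a "rooter" that remembers the address of a value becomes dangling as soon
// as the value moves.

struct Rooter {
    traced: *const u32, // stands in for the pointer an AutoGCRooter-style list keeps
}

fn fallible_create(rooter: &mut Rooter) -> Result<u32, ()> {
    let value = 42u32;
    rooter.traced = &value; // register the stack slot with the "GC"
    Ok(value)               // moving the value out invalidates that address
}

fn main() {
    let mut rooter = Rooter { traced: std::ptr::null() };
    let value = fallible_create(&mut rooter).unwrap();
    // `rooter.traced` now points at a dead stack frame, not at `value`: a GC
    // walking the rooter list would trace garbage. This is why constructors
    // returning Result interact poorly with an AutoGCRooter-style scheme.
    println!("value = {}, rooted address = {:?}", value, rooter.traced);
}
```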

On Wed, Sep 23, 2015 at 7:15 PM, Josh Matthews 
wrote:

> I have a branch of rust-mozjs where I attempted to implement a Rust
> version of CustomAutoRooter/AutoGCRooter from SpiderMonkey:
> https://github.com/jdm/rust-mozjs/commit/ff89d9609e17e7727c038c8ea8deab88bae4333e
>
> This compiles fine, but I hit problems when I attempted to integrate it
> with my previous work for a safe Rust interface to SM's typed arrays:
> https://github.com/servo/servo/pull/6779
>
> Specifically, my problems stem from this - given that creating a typed
> array is fallible, as is wrapping an existing typed array reflector, we
> decided to make the constructors for the typed array APIs return Result
> values. Unfortunately this interacts poorly with AutoGCRooter, since it
> stores pointers to Rust objects that end up moving when unwrapping the
> Result (
> https://github.com/jdm/rust-mozjs/commit/ff89d9609e17e7727c038c8ea8deab88bae4333e#diff-5f5de62bd671c7658b23fd6be60ce6bfR41
> ).
>
> Can anybody think of a useful way around this problem, or are we doomed to
> non-idiomatic APIs when we're trying to directly integrate with SM's lists
> of rooted values?
> ___
> dev-servo mailing list
> dev-servo@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-servo
>
___
dev-servo mailing list
dev-servo@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-servo