[dev-servo] Creating a JSObject -> id map for the Debugger IPC server.

2016-11-21 Thread Eddy Bruel
For the Servo debugger, we need some kind of IPC layer:

The debugger server (i.e. the thing that speaks the Chrome debugging
protocol) needs to live in the same process (and perhaps even the same
thread) as the constellation; it needs to work closely with the
constellation to figure out which script threads exist, and to route
messages to individual script threads.

At the other end, the debugger API (i.e. the thing we use to examine the
execution state of the JS engine) needs to live in the same thread as the
one it is debugging (i.e. the script thread). In multiprocess mode, there
will be a process boundary between the constellation and the script
threads. Consequently, we need an IPC layer between the debugger server and
the debugger API in each script thread.

Since the debugger API consists of a set of shadow objects (each of which
represents some part of the execution state, such as stack frames,
environments, scripts, etc.), the obvious way to implement such an IPC
layer is as a set of proxies to those shadow objects: to serialize a shadow
object over IPC, we assign it a unique ID. In the other direction, any
message addressed to that ID will be routed to the appropriate object.

To route messages addressed to a specific ID to the corresponding object,
we need to maintain some form of ID -> object map. Moreover, because the
same shadow object can be obtained via different API calls, we also need an
object -> ID map; this allows us to ensure that we never create more than
one proxy for the same shadow object.
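
To make that concrete, here is roughly the shape of the registry this
implies (a sketch only; ShadowObject is a hypothetical stand-in for a
rooted shadow object, and by_object is exactly the part that turns out to
be problematic below):

    use std::collections::HashMap;

    /// Hypothetical stand-in for a (rooted) shadow object from the
    /// debugger API; the real type would wrap a *mut JSObject.
    type ShadowObject = usize;

    /// Routes incoming messages (ID -> object) and deduplicates proxies
    /// (object -> ID).
    struct ProxyRegistry {
        next_id: u64,
        by_id: HashMap<u64, ShadowObject>,
        // This reverse map is the problematic part: with *mut JSObject as
        // the key type it would be unsound, because the GC can move the
        // object out from under us.
        by_object: HashMap<ShadowObject, u64>,
    }

    impl ProxyRegistry {
        fn new() -> ProxyRegistry {
            ProxyRegistry {
                next_id: 0,
                by_id: HashMap::new(),
                by_object: HashMap::new(),
            }
        }

        /// Return the existing ID for this shadow object, or mint one.
        fn id_for(&mut self, object: ShadowObject) -> u64 {
            if let Some(&id) = self.by_object.get(&object) {
                return id; // reuse the existing proxy
            }
            let id = self.next_id;
            self.next_id += 1;
            self.by_id.insert(id, object);
            self.by_object.insert(object, id);
            id
        }

        /// Route a message addressed to `id` to its shadow object.
        fn object_for(&self, id: u64) -> Option<ShadowObject> {
            self.by_id.get(&id).cloned()
        }
    }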

Creating the object -> ID map is problematic: since the GC can move
objects, a *mut JSObject does not have a stable address, and so cannot be
used as a key in a HashMap. The obvious answer here is to use a WeakMap.
WeakMaps are available in JSAPI but, as far as I can tell, not usable
through the Rust bindings for JSAPI.

Adding WeakMap support to the Rust bindings for JSAPI is probably
non-trivial, so I'm looking for some kind of temporary workaround. The
simplest option is perhaps to store the ID directly on the JSObject: the
debugger API is designed so that defining arbitrary properties on shadow
objects is well-defined behavior. Since we control how these JSObjects are
used (i.e. they are not exposed to arbitrary client JS), this should not
lead to name conflicts.
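
Continuing the sketch above, option one might look roughly like this;
get_id_property and define_id_property are hypothetical stand-ins for the
JSAPI property calls (JS_GetProperty / JS_DefineProperty in C++ terms),
not actual mozjs signatures:

    /// Hypothetical name of the property holding the proxy ID. Safe only
    /// because shadow objects are never exposed to client JS, so no
    /// script can collide with (or tamper with) this name.
    const ID_PROPERTY: &'static str = "__proxyId";

    // Hypothetical wrappers over the JSAPI property calls; real code
    // would need a JSContext, rooted handles, and value conversions.
    fn get_id_property(_object: ShadowObject, _name: &str) -> Option<u64> {
        unimplemented!()
    }
    fn define_id_property(_object: ShadowObject, _name: &str, _id: u64) {
        unimplemented!()
    }

    impl ProxyRegistry {
        /// Option one: the object -> ID direction lives on the objects
        /// themselves, so no HashMap keyed by an unstable address is
        /// needed at all.
        fn id_for_via_property(&mut self, object: ShadowObject) -> u64 {
            if let Some(id) = get_id_property(object, ID_PROPERTY) {
                return id;
            }
            let id = self.next_id;
            self.next_id += 1;
            define_id_property(object, ID_PROPERTY, id);
            self.by_id.insert(id, object);
            id
        }
    }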

Another option is to create a WeakMap indirectly, by doing the equivalent
of evaluating "new WeakMap()", storing the resulting JSObject, and then
providing some strongly typed Rust API on top of it. Note that this is very
similar to what we do for the shadow objects in the debugger API.
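
For comparison, a sketch of the shape the second option could take, again
with hypothetical stand-ins (evaluate, weakmap_get, weakmap_set) for the
underlying JSAPI calls (JS::Evaluate and JS_CallFunctionName, roughly):

    // Hypothetical wrappers; real code would evaluate JS source and call
    // methods on the resulting object through JSAPI.
    fn evaluate(_source: &str) -> ShadowObject { unimplemented!() }
    fn weakmap_get(_map: ShadowObject, _key: ShadowObject) -> Option<u64> {
        unimplemented!()
    }
    fn weakmap_set(_map: ShadowObject, _key: ShadowObject, _id: u64) {
        unimplemented!()
    }

    /// A strongly typed Rust API over a WeakMap created on the JS side.
    struct WeakIdMap {
        /// The JSObject obtained by evaluating "new WeakMap()", which
        /// would have to be kept rooted.
        map: ShadowObject,
    }

    impl WeakIdMap {
        fn new() -> WeakIdMap {
            WeakIdMap { map: evaluate("new WeakMap()") }
        }

        fn get(&self, key: ShadowObject) -> Option<u64> {
            weakmap_get(self.map, key)
        }

        fn set(&mut self, key: ShadowObject, id: u64) {
            weakmap_set(self.map, key, id);
        }
    }

The appeal of this version is that entries die together with their keys,
so no Rust-side bookkeeping can leak or go stale.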

I am currently leaning towards the first option, but perhaps someone has a
better idea?
___
dev-servo mailing list
dev-servo@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-servo


Re: [dev-servo] Using IPC channels with Tokio?

2016-11-16 Thread Eddy Bruel
After having talked with Alex Crichton, my understanding is that it's not
possible to use IPC channels with Tokio in their current form.

To make IPC channels usable with Tokio, we'd have to take whatever
low-level IO primitive IPC channels use under the hood, and then
reimplement the logic of the existing IPC channels on top of it, but
exposed through a streams-based API (by streams, I mean the streams from
the futures-rs crate).

I'm not quite sure how much work this would be, since I don't know how
complex IPC channels are under the hood. In any case, I am skeptical that
this is worth the effort just so that we can use futures in the IPC
client. Perhaps we should rethink the client API so that it doesn't use
futures at all.
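
That said, one stopgap that avoids reworking the internals (my own sketch,
not something settled in this thread) is to dedicate a thread per channel
that forwards messages into an in-process futures channel; the futures end
is then an ordinary Stream that Tokio can drive. Names and bounds below
assume recent versions of ipc-channel, serde, and futures 0.1:

    extern crate futures;
    extern crate ipc_channel;
    extern crate serde;

    use std::thread;

    use futures::{Future, Sink};
    use futures::sync::mpsc;
    use ipc_channel::ipc::IpcReceiver;
    use serde::{Deserialize, Serialize};

    /// Bridge an IpcReceiver onto a futures Stream by parking one thread
    /// in a blocking recv() loop and forwarding into an in-process
    /// channel. The returned mpsc::Receiver implements Stream, so a
    /// Tokio event loop can drive it like any other stream.
    fn stream_from_ipc<T>(receiver: IpcReceiver<T>) -> mpsc::Receiver<T>
        where T: for<'de> Deserialize<'de> + Serialize + Send + 'static
    {
        let (tx, rx) = mpsc::channel(16);
        thread::spawn(move || {
            let mut tx = tx;
            while let Ok(msg) = receiver.recv() {
                // send() consumes the sender and yields it back; wait()
                // blocks this forwarding thread, giving us backpressure.
                match tx.send(msg).wait() {
                    Ok(sender) => tx = sender,
                    Err(_) => break, // the Tokio side hung up
                }
            }
        });
        rx
    }

It costs a thread per channel, which is exactly the kind of thing a real
streams-based reimplementation would avoid, but it needs no changes to
ipc-channel itself.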

On Wed, Nov 16, 2016 at 2:47 PM, David Teller <dtel...@mozilla.com> wrote:

> Having worked on e10s, just a quick note on multi-process: it is really
> important to make the default use of IPC resilient to communication
> problems, in particular the other process not responding correctly,
> because it has crashed, frozen or is otherwise too busy. In Firefox,
> e10s fails to provide this and the result is very fragile.
>
> So whichever tool you use for this, please make sure that the API
> encourages the correct behavior.
>
> Cheers,
>  David
>
> On 16/11/16 14:16, Eddy Bruel wrote:
> > For the Servo debugger, now that we've begun landing the Rust interface
> > to the Debugger API, we can also start working on the next step, which
> > is an IPC client/server on top of that Rust interface. This IPC server
> > will allow the main debugger server, which lives in a separate process,
> > to talk to the debugger API of one or more script threads.
> >
> > Since the IPC client/server is inherently asynchronous in nature, I was
> > hoping to implement the client side API with futures. Essentially, for
> > each synchronous function on the server side, there will be a
> > corresponding asynchronous function on the client side that returns a
> > future.
> >
> > The futures on the client side will typically be resolved when we
> > receive a message from an IPC channel (which will typically be either a
> > reply to a pending request, or some kind of notification). For that to
> > work, we need to be able to hook IPC channels into a Tokio event loop.
> >
> > Given the above, I wondered if there is currently any support for using
> > IPC channels with Tokio? And if not, would it be possible to implement
> > this?
___
dev-servo mailing list
dev-servo@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-servo


[dev-servo] Using IPC channels with Tokio?

2016-11-16 Thread Eddy Bruel
For the Servo debugger, now that we've begun landing the Rust interface to
the Debugger API, we can also start working on the next step, which is an
IPC client/server on top of that Rust interface. This IPC server will allow
the main debugger server, which lives in a separate process, to talk to the
debugger API of one or more script threads.

Since the IPC client/server is inherently asynchronous in nature, I was
hoping to implement the client side API with futures. Essentially, for each
synchronous function on the server side, there will be a corresponding
asynchronous function on the client side that returns a future.

The futures on the client side will typically be resolved when we receive a
message from an IPC channel (which will typically be either a reply to a
pending request, or some kind of notification). For that to work, we need
to be able to hook IPC channels into a Tokio event loop.
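
To make that correspondence concrete, here is a minimal sketch of the
pattern (all names invented, and assuming the oneshot channels from
futures-rs): each request registers a oneshot sender under a fresh request
ID, and whatever pumps the IPC channel completes the matching oneshot when
the reply arrives.

    extern crate futures;

    use std::collections::HashMap;

    use futures::sync::oneshot;

    /// Hypothetical reply payload for one debugger API call (e.g. the
    /// stack frames of a paused script thread).
    struct FrameList;

    /// Client-side bookkeeping: one pending oneshot per outstanding
    /// request.
    struct DebuggerClient {
        next_request_id: u64,
        pending: HashMap<u64, oneshot::Sender<FrameList>>,
    }

    impl DebuggerClient {
        /// Asynchronous counterpart of a hypothetical synchronous
        /// server-side get_frames. The returned oneshot::Receiver
        /// implements Future.
        fn get_frames(&mut self) -> oneshot::Receiver<FrameList> {
            let (tx, rx) = oneshot::channel();
            let id = self.next_request_id;
            self.next_request_id += 1;
            self.pending.insert(id, tx);
            // ... here we would send, e.g., a GetFrames { request_id: id }
            // message over the IPC channel to the script thread ...
            rx
        }

        /// Called by whatever pumps the IPC channel when a reply arrives.
        fn on_reply(&mut self, request_id: u64, frames: FrameList) {
            if let Some(tx) = self.pending.remove(&request_id) {
                let _ = tx.send(frames); // resolves the client's future
            }
        }
    }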

Given the above, I wondered if there is currently any support for using IPC
channels with Tokio? And if not, would it be possible to implement this?
___
dev-servo mailing list
dev-servo@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-servo


[dev-servo] Landing a Rust interface for the SpiderMonkey debugger API.

2016-11-04 Thread Eddy Bruel
Hello!

I am sure most of you are already aware of this, but for those of you who
are not: the devtools team is taking its first steps towards implementing a
JS debugger for Servo!

As part of this process, we started working on a Rust interface for the
SpiderMonkey debugger API a couple of weeks ago. I just finished the
initial prototype of this interface in this repo on my personal Github
account.

For our next step, we'd like to gradually pull this code into this repo on
the Mozilla Github account. During
this process we should also document, test, and review it, so that by the
time we're done we'll have something that's production ready.

Because this code will end up being used in Servo, and because we'd like
the Servo people to be aware of what we're doing, it would be great if
someone from the Servo team could help us review this. I talked to ajeffrey
on irc yesterday, and he agreed to help us with this. Ajeffrey and I
already had many conversations on how the debugger in Servo should work, so
he should be the perfect reviewer for this. Of course, this code should
also be reviewed by someone from the devtools team. Jimb will be taking up
that role.

After we land this, our plan is to create an IPC server on top of this
interface. This will allow us to talk to the debugger API for a script
thread from a separate process. If you have any further questions and/or
comments concerning our plans for the debugger, don't hesitate to ping me
on irc!

Cheers,


Eddy
___
dev-servo mailing list
dev-servo@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-servo


[dev-servo] Fwd: Maintaining a list of debugging targets in the debugger.

2016-09-19 Thread Eddy Bruel
-- Forwarded message --
From: Eddy Bruel <ejpbr...@mozilla.com>
Date: Mon, Sep 19, 2016 at 5:45 PM
Subject: Maintaining a list of debugging targets in the debugger.
To: Jim Blandy <j...@mozilla.com>, se...@mozilla.com
Cc: Patrick Brosset <pbros...@mozilla.com>


Over the past few days, I've tried to come up with a way to maintain a list
of debugging targets in the debugger server, and to route messages to an
individual debugging target. This has turned out to be rather non-trivial,
particularly because the notion of what constitutes a debugging target is
complex, and can change over time.

Below you will find an overview of what I've learned so far, partially to
force myself to order my own thoughts, partially to ask for your feedback
on the problem:

The unit of debugging should be a *unit of related similar-origin browsing
contexts
<https://html.spec.whatwg.org/multipage/browsers.html#unit-of-related-browsing-contexts>.*

A unit of related SOB contexts forms a tree of browsing contexts. The root
of this tree is either a tab, a worker, or a cross-origin iframe.

If a browsing context has one or more similar-origin children, these are
part of the same unit of related SOB contexts. For instance, if a tab has
one or more similar-origin iframes, both the tab and the iframes are part
of the same unit of related SOB contexts. Similarly, if one of these
iframes has one or more similar-origin sub-iframes, these too are part of
the same unit of related SOB contexts.

If a browsing context has one or more cross-origin children, these form
their own unit of related SOB contexts, with the child at the root. For
instance, if a tab has a cross-origin iframe, the iframe forms the root of
a separate unit of related SOB contexts. Similarly, if this cross-origin
iframe has one or more similar-origin sub-iframes, these too are part of
the same unit of related SOB contexts.

Navigation can cause a browsing context to move to a different unit of
related SOB contexts: for instance, if a same-origin iframe navigates to a
cross-origin URL, it will become the root of a separate unit of related SOB
contexts. Conversely, if a cross-origin iframe navigates to a same-origin
URL, it becomes part of an existing unit of related SOB contexts.


What does all this mean for the debugger server? First off, the debugger
server needs to maintain a list of units of related SOB contexts. Each of
these is a valid debugging target for the debugger.

How do we maintain a list of units of related SOB contexts in the debugger?
Unfortunately, the constellation doesn’t know about units of related SOB
contexts; it only knows about frames. We need some way to group frames that
belong to the same unit of related SOB contexts. One way to accomplish this
could be to group frames that have the same effective top-level domain
together.
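
Sketched in code, the grouping could look like this; FrameId and
registrable_domain are invented for illustration (a real implementation of
the latter would consult the public suffix list rather than naively taking
the last two labels of the host):

    use std::collections::HashMap;

    /// Hypothetical stand-in for the constellation's frame identifier.
    type FrameId = u32;

    /// Hypothetical: reduce a frame's URL to its registrable domain
    /// (eTLD+1). Placeholder logic: keep the last two labels of the host.
    fn registrable_domain(url: &str) -> String {
        let host = url.split('/').nth(2).unwrap_or("");
        let labels: Vec<&str> = host.split('.').collect();
        let start = labels.len().saturating_sub(2);
        labels[start..].join(".")
    }

    /// Group frames into candidate debugging targets by effective TLD.
    fn group_frames(frames: &[(FrameId, String)])
                    -> HashMap<String, Vec<FrameId>> {
        let mut groups = HashMap::new();
        for &(id, ref url) in frames {
            groups.entry(registrable_domain(url))
                  .or_insert_with(Vec::new)
                  .push(id);
        }
        groups
    }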

When a client connects to the debugger server, it does so using a URL that
determines a particular debugging target (i.e. a unit of related SOB
contexts). Because the debugger API needs to run in the same thread as the
debuggee, the debugger server cannot use the debugger API directly;
instead, it needs to ask the script thread for the particular debugging
target to use the debugger API on its behalf.

Here is where things get complicated: so far, I assumed that each unit of
related SOB contexts has a unique script thread, and conversely, that each
script thread corresponds to a unique unit of related SOB contexts. If this
were true, the debugger could send messages to the script thread for a
particular debugging target by sending a message to an arbitrary frame in
the corresponding unit of related SOB contexts, presumably using the
constellation to do the routing.

However, when a tab navigates, it is apparently assigned a new script
thread; therefore, even though a unit of related SOB contexts has a unique
script thread at any particular moment in time, the current script thread
can change over time. If we use the constellation to do the routing, it can
probably take care of this, but we need to make sure it doesn’t lead to any
nasty races (for instance, when a message is sent to the script thread by
the debugger in the middle of a navigation).

Furthermore, due to a bug in Servo, when a same-origin iframe navigates to
a same-origin URL, it is assigned a new script thread. That means there is
no 1-1 relationship between units of related SOB contexts and script
threads: there can be more than one script thread per unit of related SOB
contexts. This significantly complicates the routing. However, since this
is a problem that's going to be fixed, we can probably ignore it for now
(and simply always select the script thread of the topmost browsing context
within a particular unit of related SOB contexts).

More seriously, according to a conversation on irc I just had with
ajeffrey: “Making document.domain settable means we need to share script
threads among document


[dev-servo] Upgrading HTTP requests to WebSocket with rust-websockets.

2016-09-12 Thread Eddy Bruel
I am currently writing a prototype of a debugger server for Servo that
works with the Chrome Debugging Protocol:
https://github.com/ejpbruel/servo/tree/acceptor

The Chrome Debugging Protocol works by first sending an HTTP request to
http://localhost:<port>/json (where <port> is the remote debugging port).
This returns a JSON array of objects, each of which describes a debugging
target. Among other things, this description includes a WebSocket URL for
the target.
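
For reference, each entry in that array looks roughly like the following,
modeled here as a serde-derived Rust struct (field set abbreviated, and
the derive syntax assumes a current serde rather than the 2016 one):

    extern crate serde;

    use serde::Deserialize;

    /// One entry in the JSON array served at http://localhost:<port>/json.
    /// Field set abbreviated; Chrome also sends a description, a DevTools
    /// frontend URL, and so on.
    #[derive(Deserialize)]
    struct TargetDescriptor {
        id: String,
        title: String,
        #[serde(rename = "type")]
        target_type: String, // e.g. "page"
        url: String,
        /// The per-target endpoint that a debugger client attaches to.
        #[serde(rename = "webSocketDebuggerUrl")]
        web_socket_debugger_url: String,
    }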

Both the HTTP URL for the initial request and the WebSocket URLs for the
individual targets use the same remote debugging port. This is a problem,
since the rust-websocket library that we use in Servo currently does not
support handling HTTP requests on the same port as the WebSocket server.

I did find the following pull request that seems to implement the
functionality we need, but for some reason it never landed (the last
comment is over a month old).

The way I see it, we have two options. Personally, I'm leaning towards
option two, but I wanted to get your opinion:

1. Ignore the problem for now. Most debugger clients for the Chrome
Debugging Protocol can attach to a WebSocket URL directly; since Servo only
supports a single tab right now anyway, we don't really need that initial
HTTP request.

2. Poke the author of the rust-websocket library and see if we can get that
pull request landed.
___
dev-servo mailing list
dev-servo@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-servo