Thanks all for the feedback.
Responses are inline, and an updated design document has been posted.

@robertshield:

> Regarding the class naming in the Out of process design, the convention I've
> seen most often is to have FooHost classes run in the browser with Foo classes
> in child processes.

Yeah, I always get confused about which endpoint should be called "Host".

My thinking was from the perspective of the consumer -- the local
(in-process) component we talk to is named XXX. And then XXX vends the
functionality of an out-of-process component named XXXHost. (At least
that is how I always understood the relationship between
ResourceDispatcher / ResourceDispatcherHost).
I could be on crack, in which case I will flip the names.
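
(Purely to illustrate, with hypothetical names -- these aren't classes
from the design doc -- here is the shape my reading implies:)

  // In-process endpoint: what consumers actually call. Forwards each
  // request over IPC to the out-of-process side.
  class ProxyResolver {
   public:
    // Sends an IPC to the ProxyResolverHost and waits for the reply.
    int GetProxyForURL(const std::string& url, std::string* pac_result);
  };

  // Out-of-process counterpart that vends the real functionality
  // (i.e. actually runs the PAC script).
  class ProxyResolverHost;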

> Proxy resolver process / crashes - the document mentions fallback if proxy
> config is set to auto-detect. If the proxy config is instead set to use a
> fixed PAC URL and the proxy resolver process repeatedly crashes or becomes
> unresponsive, could we offer a UI toast in the browser along the lines of
> "Your network configuration is bad, would you like me to try a different
> one?" upon which we would switch to auto-detect or direct connections? Or
> would that be requiring too much of the user?

Right now we will silently fall back to direct, so this isn't an issue.
However, I don't feel that policy is correct: <http://crbug.com/12303>.
Once/if 12303 is addressed, I agree that on PAC failure we should
display a richer error page, describing that the proxy configuration
is borked.

> IPC messages from proxy resolver process to the browser - What functionality
> does Msg_Alert provide that Msg_OnError doesn't?

Functionally, not much -- I could combine them into a single IPC
message. I chose to keep them as separate messages so that the IPC
messages correspond exactly with the
ProxyResolverV8::JSBindings interface
<http://src.chromium.org/viewvc/chrome/trunk/src/net/proxy/proxy_resolver_v8.h>
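
Concretely, the shape I'm going for is one message per binding,
something like this (a sketch: the macro usage and exact signatures
here are illustrative; the design doc has the real definitions):

  // One IPC message per JSBindings method.
  IPC_MESSAGE_CONTROL1(Msg_Alert,
                       std::string /* message */)
  IPC_MESSAGE_CONTROL2(Msg_OnError,
                       int /* line_number */,
                       std::string /* error */)
  IPC_MESSAGE_CONTROL1(Msg_DnsResolve,
                       std::string /* host */)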

> Proxy resolver process / Alive - the proxy resolver process may decide that a
> PAC script is invalid while doing auto-detect. I interpreted the next step to
> be notifying the browser process of the invalid script. Does it signal to the
> browser that the script is invalid through Msg_OnError or Msg_Alert or through
> some special invocation of Msg_GetProxyForURL_Complete?

The latter. Msg_GetProxyForURL_Complete() will contain a network error
code for |result_code|, rather than |net::OK|.
ProxyService already knows to look for this failure and do its thing.
(We don't have a separate error for "the script is invalid", since
running the PAC with different URLs can lead to different runtime
errors; validity is only meaningful on a per-URL basis.)
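
On the browser side that looks roughly like this (a sketch --
OnGetProxyForURLComplete and CompleteRequest are assumed names, not
from the doc):

  void OnGetProxyForURLComplete(int request_id,
                                int result_code,
                                const std::string& pac_string) {
    if (result_code != net::OK) {
      // PAC execution failed for this particular URL; hand the error
      // code back and let ProxyService's existing fallback take over.
      CompleteRequest(request_id, result_code, std::string());
      return;
    }
    CompleteRequest(request_id, net::OK, pac_string);
  }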

> Proxy resolver process / Unresponsive - the section just below on Request
> timeouts seems like it answers the TODO(eroman): How do we measure
> "unresponsive"? question. Request timeouts on the browser process -> proxy
> resolver process requests seem pretty important to have for v1.

D'oh!
Combined those two sections into a single "Unresponsive" section.
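
For flavor, the timeout mechanics amount to something like this
(sketch; the names and timer plumbing are my assumptions, not from the
doc):

  base::OneShotTimer<ProxyResolverHost> timeout_timer_;

  // Arm a deadline when a request is dispatched to the resolver process.
  void OnRequestDispatched(int request_id) {
    timeout_timer_.Start(
        base::TimeDelta::FromSeconds(kRequestTimeoutSeconds),
        this, &ProxyResolverHost::OnRequestTimeout);
  }

  // No Msg_GetProxyForURL_Complete before the deadline: treat the
  // process as unresponsive, kill it, and apply the failure policy.
  void OnRequestTimeout() {
    TerminateResolverProcess();
    FailPendingRequests(net::ERR_TIMED_OUT);
  }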

> Performance - additional latency is (1 + <number of DNS resolves>) *
> kIpcRoundtripLatency.

Done -- reworded as a formula, per your suggestion.

In fact, I re-wrote the entire performance section, and added an
"Optimizations" subsection.

> Totally premature question tainted by a dash of sandboxing ignorance: is it
> possible to embed the proxy resolver process in a sandbox that _does_ allow
> network requests (but is otherwise similar to the renderer sandbox)? This
> would chop down the latency although it would require maintaining a separate
> dns resolver cache from the browser.

I would really like to avoid this, and keep the sandbox pure
computation. (In addition to network privileges, there are other
subtle system dependencies, like "/etc/resolv.conf" and
"c:\windows\system32\drivers\etc\hosts", which would suck to punch
holes for.)

If these sequential IPCs end up being a performance bottleneck, we
have some good options for fixing it.

I have enumerated these options in a new "Optimizations" section of
the design document.
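
To give a flavor of one such option (this particular sketch is mine
and hypothetical -- the doc has the authoritative list): memoize
dnsResolve() answers for the duration of a single FindProxyForURL()
call, so repeated lookups of the same host cost one round trip
instead of N:

  // Hypothetical per-request DNS memoization in the resolver process.
  std::map<std::string, std::string> dns_cache_;  // host -> IP;
                                                  // cleared per request.

  bool DnsResolve(const std::string& host, std::string* ip_address) {
    std::map<std::string, std::string>::const_iterator it =
        dns_cache_.find(host);
    if (it != dns_cache_.end()) {
      *ip_address = it->second;  // Hit: no IPC needed.
      return true;
    }
    // Miss: one synchronous round trip to the browser's resolver.
    // (SendDnsResolveToBrowser is an assumed helper.)
    if (!SendDnsResolveToBrowser(host, ip_address))
      return false;
    dns_cache_[host] = *ip_address;
    return true;
  }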

@jorlow:

> If so, I'd propose making the design for this new process a bit more general
> purpose.  Honestly, I don't think there's much to do.  And I think it'd be OK 
> to
> say that all work done by this process would need to be stateless (so we can
> kill it and spin it back up at will).
>
> I'm not necessarily saying you need to do the work to make it general purpose
> now, but I definitely think it should be kept in mind while working on this.
> That way we don't need to worry about finding ourselves designed into a corner
> (and needing to create another helper process).

Yeah, that's reasonable.

For the existing use cases it's not a big deal, since the utility
processes are very short-lived (whereas the proxy resolver process is
long-lived).

I could do some sort of ref-counted deal to expose a handle to the
"utility" process (where the handle controls creation/destruction of
the process).
So we definitely aren't "designing ourselves into a corner" by adding
a proxy resolver process.
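
Something along these lines (sketch; names assumed):

  // A ref-counted handle owning the utility process lifetime: the
  // first reference launches the process, the last release tears it
  // down.
  class UtilityProcessHandle
      : public base::RefCountedThreadSafe<UtilityProcessHandle> {
   public:
    // Returns the live handle, launching the process on first use.
    static scoped_refptr<UtilityProcessHandle> Get();

   private:
    friend class base::RefCountedThreadSafe<UtilityProcessHandle>;
    ~UtilityProcessHandle();  // Kills the utility process.
  };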

My concerns with merging the utility process(es) into the proxy resolver process:

(1) The proxy resolver process is latency-sensitive. Consequently, any
other denizens of it MUST NOT freeze the IPC thread (see the sketch
after these two points). I haven't looked carefully at the current
utility process stuff, but my expectation is that it runs code on the
IPC thread (since that is the simplest thing for it to do).

(2) The name for this process would suck more. It is easy enough to
guess what a process called "Proxy Resolver Process" does when you see
it in the "about:memory" tab. But if you see an always-resident
process just named "Utility", you might scratch your head.
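
Re (1), the fix is cheap if we're disciplined about it -- anything
slow gets bounced off the IPC thread, roughly (sketch; UtilityParams
and DoSlowWorkAndReply are assumed names):

  // Keep slow utility work off the IPC thread so latency-sensitive
  // proxy messages are never queued behind it.
  void OnSlowUtilityRequest(const UtilityParams& params) {
    worker_thread_.message_loop()->PostTask(FROM_HERE,
        NewRunnableFunction(&DoSlowWorkAndReply, params));
  }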

@linus:

> I realize this is not a small request, but it would be better if we could move
> to a model where the browser was sandboxed and talked to a much simpler
> process to carry out trusted operations on its behalf.

What I am aiming to do with this design is extract the processing of
*unsanitized user input* (the PAC script) from our privileged (albeit
not-so-simple) broker process (the browser process).

I believe the direction you are suggesting is to continue down this
path, moving more functionality out of the browser process to make it
slimmer.
(But as John mentions, a major difficulty is that the UI's use of
HWNDs makes sandboxing somewhat all-or-nothing).

I don't have a better answer to this; I will have to confer with the
wizards of multiprocess and see what else can be done.

On Wed, Jul 29, 2009 at 2:44 PM, Erik Kay <[email protected]> wrote:
> On Wed, Jul 29, 2009 at 1:42 PM, Linus Upson <[email protected]> wrote:
>>
>> Sorry. The point of my "sandbox the browser" suggestion wasn't to increase
>> robustness in the case of failure. It was only to limit the damage an
>> exploit might cause. The browser process is complicated enough that it is
>> hard to secure. A small broker would be easier to understand and make
>> resistant to attack.
>
> Right. Utility processes and a secure broker for a sandboxed browser solve
> different problems.  My guess is that in an ideal world, we'd wind up with
> both, with places that have direct contact with untrusted data (like Eric's
> PAC proposal here) being put into isolated processes.  Given that this was
> the thread you replied to, it seemed like you were suggesting that we'd be
> better off keeping this in process with a sandboxed browser (assuming we had
> time to do so, etc.).
> Erik
>
>>
>> Linus
>>
>> On Wed, Jul 29, 2009 at 1:40 PM, Erik Kay <[email protected]> wrote:
>>>
>>> On Wed, Jul 29, 2009 at 9:44 AM, Linus Upson <[email protected]> wrote:
>>>>
>>>> I realize this is not a small request, but it would be better if we
>>>> could move to a model where the browser was sandboxed and talked to a much
>>>> simpler process to carry out trusted operations on its behalf.
>>>
>>> While I like the general goal of what you're proposing, the downside of
>>> this approach is that if there's a crash, you lose the browser process.  If
>>> we still wind up storing much of the state of the browser in this main
>>> process, then we lose a lot of state and potentially cause much of the
>>> browser to become unusable (I'm assuming that the whole browser wouldn't
>>> crash since the 'microkernel' would likely keep everything running).  When
>>> one of these utility processes crashes, we only lose that operation.
>>> Erik
>>>
>>>>
>>>> Linus
>>>>
>>>> On Wed, Jul 29, 2009 at 3:29 AM, Eric Roman <[email protected]> wrote:
>>>>>
>>>>> Here is a design document for http://crbug.com/11746
>>>>>
>>>>>
>>>>> http://sites.google.com/a/chromium.org/dev/developers/design-documents/out-of-process-v8-pac
>>>>>
>>>>> Feedback welcome.
>>>>>
>>>>>
>>>>
>>>>
>>>
>>
>
>
