Sure.  But let's also put this into perspective.  This is a small bit
of code that is already wrapped by an API to hide the implementation
details (which themselves already differ between platforms).  It
shouldn't be difficult either to implement or to re-implement as
conditions or requirements change (or, in this case, simply become
clearer).  Both the costs and the benefits of sharing or not sharing
the implementation between Mac OS X and Linux are low.

So we're approaching a bikeshed discussion, where the people with a
lot of Mac experience see one clear answer, and the people with a lot
of Linux experience see another.  A couple of people raised questions
about the performance differential that some of us asserted in favor
of Mach IPC, so we went and generated some objective data (which was
quite worthwhile).  The other factors you mention also point to using
Mach IPC on the Mac--the ease of exchanging shared memory objects is
in fact where its speed advantage comes from for larger messages.  For
waitable events, there's no direct equivalent to a Windows waitable
object, so some other mechanism (semaphores, condition variables,
etc.) will be necessary on both the Mac and Linux, regardless.
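
To make the event case concrete, here's the sort of thing I mean by
"some other mechanism": a manual-reset event built on a pthread mutex
and condition variable.  This is only a sketch (and single-process as
written; a cross-process version would need process-shared attributes
or a semaphore), not a proposed implementation:

  #include <pthread.h>

  // Minimal manual-reset "waitable event" built on a mutex and a
  // condition variable.  Works on both Mac OS X and Linux, but only
  // within a single process as written.
  class WaitableEvent {
   public:
    WaitableEvent() : signaled_(false) {
      pthread_mutex_init(&lock_, NULL);
      pthread_cond_init(&cond_, NULL);
    }
    ~WaitableEvent() {
      pthread_cond_destroy(&cond_);
      pthread_mutex_destroy(&lock_);
    }

    void Signal() {
      pthread_mutex_lock(&lock_);
      signaled_ = true;
      // Wake all waiters, like SetEvent() on a manual-reset event.
      pthread_cond_broadcast(&cond_);
      pthread_mutex_unlock(&lock_);
    }

    void Wait() {
      pthread_mutex_lock(&lock_);
      while (!signaled_)  // guard against spurious wakeups
        pthread_cond_wait(&cond_, &lock_);
      pthread_mutex_unlock(&lock_);
    }

   private:
    pthread_mutex_t lock_;
    pthread_cond_t cond_;
    bool signaled_;
  };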

I'd like not to get into a long, drawn-out "design by parade"
discussion on such a small module.  If there are known requirements
that haven't made it into the Windows code or design docs, a summary
from someone who knows them would be great and would help all of us.
In the absence of that, I think that going on general principles and
prior experience, and doing an implementation bake-off, is a
reasonable course of action.
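
Since the OOL measurements keep coming up: the constant-time behavior
for large messages comes from Mach's out-of-line descriptor path,
where the kernel remaps (or copies on write) the sender's pages into
the receiver instead of copying the bytes through the message buffer.
In outline, a send looks something like the sketch below--this is not
the code Jeremy benchmarked, just an illustration, with port setup
and error handling omitted:

  #include <mach/mach.h>
  #include <string.h>

  // Message layout: header + body + one out-of-line memory descriptor.
  struct OolMessage {
    mach_msg_header_t         header;
    mach_msg_body_t           body;
    mach_msg_ool_descriptor_t data;
  };

  kern_return_t SendOutOfLine(mach_port_t dest, void* buffer,
                              mach_msg_size_t size) {
    OolMessage msg;
    memset(&msg, 0, sizeof(msg));

    // COMPLEX marks the message as carrying descriptors.
    msg.header.msgh_bits = MACH_MSGH_BITS(MACH_MSG_TYPE_COPY_SEND, 0) |
                           MACH_MSGH_BITS_COMPLEX;
    msg.header.msgh_size = sizeof(msg);
    msg.header.msgh_remote_port = dest;
    msg.header.msgh_local_port = MACH_PORT_NULL;

    msg.body.msgh_descriptor_count = 1;

    msg.data.address    = buffer;
    msg.data.size       = size;
    msg.data.deallocate = FALSE;                  // keep our mapping
    msg.data.copy       = MACH_MSG_VIRTUAL_COPY;  // copy-on-write, no memcpy
    msg.data.type       = MACH_MSG_OOL_DESCRIPTOR;

    return mach_msg(&msg.header, MACH_SEND_MSG, sizeof(msg), 0,
                    MACH_PORT_NULL, MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);
  }

Because the payload travels as a descriptor rather than as inline
bytes, the send time stays roughly flat as the payload grows.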

--Amanda

On Thu, Nov 13, 2008 at 1:12 AM, Darin Fisher <[EMAIL PROTECTED]> wrote:
> Keep in mind that pipes are not really the fastest IPC mechanism on
> Windows.  Mike had a much faster shared-memory-based solution.  However, we
> found that the pipe-based solution was easiest to integrate with the
> sandbox, and it was also fast enough that other factors outweighed the
> performance differential between the two mechanisms.  I guess what I'm
> saying is that we should probably not get too caught up in the performance
> differences here unless we think that is the dominant factor.  Other things
> might be more important, such as how easy it is to exchange shared memory
> and waitable events (the equivalent of a Windows event object).
> -Darin
>
> On Wed, Nov 12, 2008 at 7:07 PM, Jeremy Moskovich <[EMAIL PROTECTED]>
> wrote:
>>
>> Hi Dan,
>>
>> Looking at the current IPC behavior is definitely on my list of things to
>> do.
>>
>> I don't think that should change the interpretation of the data though.
>> According to our measurements, Mach messages are always faster *.
>>
>> So the question becomes not whether it's faster, but by how many orders
>> of magnitude.
>>
>> Best regards,
>> Jeremy
>>
>> * Using inline messages, there's a break-even point where the cost of
>> copying the data takes over, but as discussed we can use OOL messages to
>> get very fast (~30 µs) constant-time sends for messages >5K.
>>
>> On Wed, Nov 12, 2008 at 5:08 PM, Dan Kegel <[EMAIL PROTECTED]>
>> wrote:
>>>
>>> On Wed, Nov 12, 2008 at 4:46 PM, Jeremy Moskovich <[EMAIL PROTECTED]>
>>> wrote:
>>> > We ran some benchmarks of Mach ports vs. FIFOs on OS X; you can find the
>>> > results in the "Performance Considerations" section of the Design doc.
>>>
>>> I don't see any measurements showing what typical Chrome
>>> IPC traffic looks like.  Without that, it's hard to interpret
>>> your results.  If I missed it, please point me to it.



-- 
--Amanda

"I have never seen anything fill up a vacuum so fast and still suck."
--Rob Pike on the X Win...
