On 11/10/06, Roland Kuhn <[EMAIL PROTECTED]> wrote:
Hi Ken!
On 9 Nov 2006, at 21:21, Ken Hornstein wrote:
>>> Hm, I am wondering if that's a win; that means two extra syscalls.
>>
>> Here two extra syscalls (a few microseconds nowadays?) can save
>> processing
>> a packet (or more), and incurring network latency (a hundred
>> microseconds
>> on my Ether).
>
> The syscalls (context switch, a good chunk of cache getting flushed)
> add up. Why do you think pread() and pwrite() were created? My point
> was that the extra syscalls may end up killing the advantage you get
> from sendfile() (the Linux sendfile() ... other sendfiles can add
> in header data). But ... that's just thinking out loud. Maybe it
> will be fine.
>
> I'm not a guru, but I think that's not correct. pread/pwrite are
> there to prevent some races, not to save time.

The history of pread/pwrite stretches back before preemptive
multithreading existed on unix platforms. Syscalls weren't always
fast, and some platforms still have slow syscalls. Even when syscalls
are fast, it's still often advantageous to coalesce multiple syscalls
which are frequently called in succession into a new complex
operation.

> And at least in Linux syscalls are _fast_, because there is no
> context switch (that term is reserved for the switch between
> processes, as that is a _lot_ slower than entering/exiting kernel
> space). That's the whole point of mapping kernel address space into
> each vm (with proper protection, of course).
>
> A syscall is of the order of 1usec, which is much shorter than the
> network stack latencies which are being talked about in this thread.
I think you may be missing the point Ken was trying to make. With
RxTCP, Ken is profiling code and trying to optimize code-path
latencies. For this type of optimization, a few microseconds can be
a huge win. Remember that 1us of latency on a modern processor means
losing on the order of 500 to ~15,000 instruction retirements.
--
Tom Keiser
[EMAIL PROTECTED]
_______________________________________________
OpenAFS-devel mailing list
[email protected]
https://lists.openafs.org/mailman/listinfo/openafs-devel