> The first set of fetches start at 11:38:07.890567 and continue until
> 11:41:08.004459.  That's 60.113892 seconds.

This just means that's how long the entire write of the file took,
does it not?  What is the accumulated time for each FetchData RPC?

But apart from that, I couldn't stop thinking about this after I left work
yesterday, because what I was saying didn't quite ring true to me either.
I think we did break this performance hack some time back. I just don't 
think the performance loss is as great as you might think.

11:41:07.986469 janeway.afscb > q.afsfs: rx data fs call fetch-data fid
536872319/16262/19711 offset 409468928 length 16384 (52)
11:41:07.987155 q.afsfs > janeway.afscb: rx data fs reply fetch-data (152)
11:41:08.003740 janeway.afscb > q.afsfs: rx data fs call fetch-data fid
536872319/16262/19711 offset 409534464 length 16384 (52)
11:41:08.004459 q.afsfs > janeway.afscb: rx data fs reply fetch-data (152)
11:41:08.746114 janeway.afscb > q.afsfs: rx ack (65)

These are two of the FetchData RPCs from the end of the trace above. The
first takes .000686 seconds and the next takes .000719 seconds, so for
the sake of argument, say each takes .001 seconds. Now suppose you've
got a 64K chunk size and this is a 700Meg file: if we did a bogus fetch
of every chunk, that would be 11,200 bogus fetches. At .001 seconds
each, that is 11.2 seconds of FetchData RPCs.
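Spelling that arithmetic out, in case anyone wants to plug in their own
chunk size or per-RPC cost (all three inputs below are just the assumed
numbers from the paragraph above):

    # back-of-the-envelope cost of one bogus FetchData per chunk
    file_size  = 700 * 1024 * 1024    # "700Meg" file, in bytes
    chunk_size = 64 * 1024            # 64K chunks
    rpc_cost   = 0.001                # assumed seconds per FetchData RPC

    fetches = file_size // chunk_size
    print('%d fetches, %.1f seconds' % (fetches, fetches * rpc_cost))
    # prints: 11200 fetches, 11.2 seconds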

Now there's also a modest amount of processing in the client for each
RPC that is not accounted for here (all of the file server's time is).
But that is still a relatively small amount of time.

I'd suggest that the overhead is elsewhere, but without knowing how the
700Meg file is generated or what the test environment looks like, I
couldn't even begin to guess where.

But I agree that we shouldn't be doing the extra RPCs if possible. One 
problem is that the original hack didn't work once part of the file was
written to the server.
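For anyone who hasn't seen it, here is the shape of the hack as I
understand it, in illustrative pseudologic only -- this is not the
actual cache manager code, and all of the names are made up:

    # A FetchData before a write is "bogus" when the write is going to
    # overwrite the entire chunk anyway, so the fetched data would just
    # be thrown away.
    def need_fetch(write_offset, write_len, chunk_size, chunk_cached):
        chunk_start = (write_offset // chunk_size) * chunk_size
        covers_whole_chunk = (write_offset == chunk_start
                              and write_len >= chunk_size)
        return not (chunk_cached or covers_whole_chunk)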

Bill

