Steven,

After reading some of your complaints about the science standards on your website, I felt compelled to elaborate a little more on the problem. You seem like someone who enjoys the details.

Three time stamps are needed for benchmarking:

1) time of initial request
2) start time of transfer
3) end time of transfer

The initial request for advice was generated from the following situation:

AlertSite, Gomez, and Keynote [internet web service benchmarking and monitoring services] all charge approximately $500 per month to periodically hit a single web page (or a set of page requests comprising a "use case") from 10-12 locations around the United States or the world. They generate a graph of the entire transfer time for the page and its resources using the three time stamps mentioned above.

Requirements:

1) The tool should be easily available from OSS/Free Software repositories (apt-get install etc.).
2) It can recurse through the site and pull all necessary resources (or at least the lion's share).
3) It can produce all three of the previously mentioned time stamps for each resource transferred.
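As a rough sketch of what requirements 2 and 3 amount to, here is a minimal same-host crawler in Python that records the three time stamps for every resource it pulls. This is not the patched wget under discussion, just an assumed breadth-first fetcher to make the requirements concrete:

```python
import time
import urllib.parse
import urllib.request
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href/src attribute values from a fetched HTML page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                self.links.append(value)

def crawl(start_url, limit=50):
    """Breadth-first fetch of same-host resources, recording the three
    time stamps (request, first byte, end of transfer) for each one."""
    host = urllib.parse.urlparse(start_url).netloc
    queue, seen, results = [start_url], {start_url}, []
    while queue and len(results) < limit:
        url = queue.pop(0)
        t0 = time.monotonic()
        try:
            with urllib.request.urlopen(url) as resp:
                first = resp.read(1)
                t1 = time.monotonic()
                body = first + resp.read()
                t2 = time.monotonic()
                ctype = resp.headers.get_content_type()
        except OSError:
            continue                      # unreachable or error resource
        results.append((url, t0, t1, t2, len(body)))
        if ctype == "text/html":          # recurse only into HTML pages
            parser = LinkExtractor()
            parser.feed(body.decode("utf-8", errors="replace"))
            for link in parser.links:
                absolute = urllib.parse.urljoin(url, link)
                if (urllib.parse.urlparse(absolute).netloc == host
                        and absolute not in seen):
                    seen.add(absolute)
                    queue.append(absolute)
    return results
```

The per-resource tuples are exactly what the monitoring services graph: from them you can derive availability, error rates, response times, and transfer times.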

Such a system would easily be capable of tracking availability, error rates, response times, transfer times, and broken links at a fraction of the cost of the full monitoring services. Multi-threading wget's behaviour would get me closer to approximating an end user's real page-load times; however, I'm not so foolish as to request that. This could save me thousands of dollars a year and, at the same time, provide me with a much more flexible system.
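For what it's worth, the multi-threaded behaviour described above is straightforward to sketch outside of wget. This is an assumed thread-pool approach (the function names are mine), fetching a page's resources concurrently the way a browser would, to approximate real end-user load time:

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def timed_fetch(url):
    """Fetch one resource, returning (url, request, first_byte, end) times."""
    t0 = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        resp.read(1)                      # wait for the first byte
        t1 = time.monotonic()
        resp.read()                       # drain the rest of the body
        t2 = time.monotonic()
    return url, t0, t1, t2

def parallel_load(urls, workers=6):
    """Fetch a set of resources concurrently, as a browser would, and
    return the wall-clock time for the whole set plus per-resource timings."""
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        timings = list(pool.map(timed_fetch, urls))
    return time.monotonic() - start, timings
```

The wall-clock figure for the whole set is the number that approximates what an end user actually experiences, since browsers download a page's resources over several connections at once.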

Hopefully I have not embarrassed myself with the details. Thanks again for your advice.


Linwood Ma
CTO

773 435 3210 Voice
773 435 3232 Fax

thinkorswim, inc.
600 West Chicago Ave., Suite 100
Chicago, IL 60610

thinkorswim is a member of the NASD | SIPC
All contents of this transmittal are considered private
and should not be shared, retransmitted or stored without
prior consent


On Mar 28, 2007, at 9:22 PM, Steven M. Schweda wrote:

   Assuming that you're using wget version 1.10.2 (or similar), it
appears (src/ptimer.c) that the program already uses a time resolution
of a millisecond (or better), given underlying run-time library support
at that resolution.  The formatted output (retr_rate(): src/retr.c) is
limited to a form which is more convenient for most users.

   If you want the results to have any meaning, you should examine the
wget code to see exactly what is being timed (at which events the timer
starts and stops), to see if wget is measuring what you want measured.
A term like "response time" is pretty vague all by itself.

   It should be easy enough to modify the formatted output code to
provide more digits than the existing code does.  (Whether these would
be _significant_ figures would depend on the underlying OS timer
resolution.) I don't see how you could get this sort of output without
changing the code, so you'd need to decide whether you wanted to add a
command-line option (or to use some other method) to enable the new
elapsed time format, or if you just wanted to maintain a separate code
stream for a modified wget program which always uses the new format.

   Getting changes like this into the main product code stream is
someone else's decision. If I were you, I'd expect to have to make the
changes and maintain the different code myself into the indefinite
future.

------------------------------------------------------------------------

   Steven M. Schweda               [EMAIL PROTECTED]
   382 South Warwick Street        (+1) 651-699-9818
   Saint Paul  MN  55105-2547
