Hi Thane,

On Thursday 02 February 2012 20:39:09 Thane Sherrington wrote:
> At 04:26 PM 02/02/2012, Gaffer wrote:
> >Whilst I would agree that some web pages contain excessive code that
> > can tax a web browser, not all that code is required to render a
> > page.
> >
> >Ever looked at all the stuff that is in there collecting data about
> > who, what and where you browse, not to mention placing code on your
> > machine that is used to record where you have been and how long you
> > were there !
>
> Well, sure a basic page isn't taxing.  But in the real world,
> browsing is snappier on a faster computer.  I'm looking for a way to
> put a number on that so that I can give useful advice to my clients
> about upgrading.
>
> T

I think the problem is that there are a number of points where 
measurements can be taken, and each of them can give misleading results.

1st:
How fast the remote machine can feed data into the pipe at the far end.  
In my case, since I only pay for a maximum 1 Mb/s upload speed, you are 
not going to get data from me any faster than that.
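If you want to put an actual number on this, a quick way is simply to 
time a download and divide.  A minimal Python sketch, assuming you point 
the placeholder URL at a reasonably large file on the server you want to 
test:

import time
import urllib.request

# Placeholder URL -- replace with a large file on the server under test.
url = "http://example.com/testfile.bin"

start = time.time()
with urllib.request.urlopen(url) as response:
    data = response.read()
elapsed = time.time() - start

print("Received %d bytes in %.2f s" % (len(data), elapsed))
print("Effective throughput: %.2f Mb/s" % (len(data) * 8 / elapsed / 1e6))

Whatever figure that prints against my server will top out around 
1 Mb/s, however fast your own machine happens to be.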

2nd:
Because of the way traffic is handled, any data that is lost or damaged 
in transit triggers a request for retransmission.  These retries slow 
the overall transfer and are a result of the quality of the pipe, which 
changes constantly with the contention, i.e. the number of connections 
using the pipe.
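There is a well-known back-of-envelope estimate (the Mathis formula) for 
how badly loss caps TCP throughput: roughly MSS / (RTT * sqrt(loss)).  
A small sketch, where the segment size, round-trip time and loss rates 
are illustrative assumptions rather than measurements:

import math

mss_bytes = 1460     # typical Ethernet TCP segment size (assumption)
rtt = 0.050          # assumed 50 ms round-trip time
for loss in (0.0001, 0.001, 0.01):
    # Mathis approximation: throughput ceiling falls with sqrt of loss rate
    ceiling_bps = (mss_bytes * 8 / rtt) / math.sqrt(loss)
    print("loss %5.2f%%: ceiling roughly %5.1f Mb/s" % (loss * 100, ceiling_bps / 1e6))

Even a one percent loss rate drags the ceiling down to a couple of Mb/s, 
no matter what the machine at either end can manage.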

3rd:
As touched on above, the quality of the pipe and the contention on that 
pipe.  I'm sure we are all aware that even on the Ethernet link between 
our own machines, the effective transmission speed per machine drops as 
we add more machines.
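As a crude illustration, assume a 100 Mb/s link shared fairly between 
whoever happens to be busy at the time (real Ethernet contention is 
messier than a straight division, so treat this as an upper bound):

link_mbps = 100      # assumed shared link speed, for illustration only
for machines in (1, 2, 5, 10):
    # Fair-share approximation: each active machine gets an equal slice
    print("%2d active machines -> roughly %d Mb/s each" % (machines, link_mbps / machines))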

Note that I'm not making any distinction between data and content here; 
it's simply a stream of data.

At this point we haven't even considered the content of the data!

-- 
Best Regards:
             Derrick.
             Running Open SuSE 11.1 KDE 3.5.10 Desktop.
             Pontefract Linux Users Group.
             plug @ play-net.co.uk
