Carl Youngblood said the following on 01/24/2011 01:28 AM:
> Thanks for the advice Anton. Pages are coming up fast when they are
> accessed with a referer header and when they have recently been
> generated by Radiant. But they are taking 30s when I type the URL in
> directly and when I click refresh in a browser. 

This is not what I observe when I visit your site.
I find it very responsive.

It is for this reason I suggested the browser-side tools for analysis.
When I was developing my own sites I found that the biggest hit I had
was that I had broken my CSS up "logically" into parts that dealt
with colour, fonts, the sidebar and the footer, and all of those parts
lived in the database.  When I merged them into one external file the
load on radiant decreased dramatically - Apache was now serving the CSS.

When I turned on browser caching (of CSS and images) the server was no
longer hit for the background and logo.
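For reference, that kind of browser caching can be switched on with
something like the following in Apache's config.  This is only a sketch
assuming mod_expires is available; adjust the content types and
lifetimes to suit your site:

```apache
# Sketch only: requires mod_expires to be loaded.
<IfModule mod_expires.c>
  ExpiresActive On
  # Let browsers cache the merged stylesheet and the images.
  ExpiresByType text/css   "access plus 1 week"
  ExpiresByType image/png  "access plus 1 month"
  ExpiresByType image/jpeg "access plus 1 month"
</IfModule>
```

With that in place the background and logo are fetched once and then
served out of the browser's cache until they expire.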

I later changed the background to a plain colour rather than an image
since that rendered faster, and made the sizes of the images exact so
that the browser did not have to resize them.  Again, an improvement in
rendering speed.

I repeat, I see no initial delay in accessing your site and I see no
delay when I do a refresh or when I click for another page.

> All this 30s is spent by
> Radiant generating the page. No content has yet been sent to the browser
> at that point, so I am certain that it is not a browser rendering issue.

As I said, I see no initial delay when I access your site.
If this were a problem with Radiant rendering the page then it would
surely apply to every page that had to be newly brought out of the
database and rendered?

Have you considered this, which I ran into with one site:
it ran on a shared service that wasn't terribly Rails-friendly, and the
time to start up Passenger, then Ruby, then Rails, and connect to the
(remote) SQL database was... yes, about 30 seconds.  Once it was up and
running things ran better.
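If that is what is happening to you, keeping the application warm may
help.  On Passenger, something along these lines stops idle application
processes being shut down between visits - a sketch only, so check your
Passenger version's documentation for the exact directives:

```apache
# Keep at least one application process alive at all times,
# so the first visitor does not pay the full start-up cost.
PassengerMinInstances 1
# Never shut application processes down for being idle.
PassengerPoolIdleTime 0
```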

> As soon as the radiant process finished parsing and compiling the
> content, the CPU load drops to 1% and the page is displayed immediately
> in my browser. All of which points to an issue on the server side.

As I said, as a remote user who can only monitor what my browser is
doing, I'm observing something very different.

> The 30 second load time is a secondary problem, but far worse is the
> fact that direct hits and browser refreshes cause Radiant to go through
> all that unnecessary parsing again. I would like them to merely check
> when the page was generated last and not do anything if it's been less
> than an hour. 

It's the browser refresh that seems odd to me, and that is why I
suggested the browser tools.

Let me emphasise once again: as a remote user I am *not* seeing the
kind of poor start-up and poor refresh that you are describing.

Let me also emphasise that if it were a problem with Radiant rendering
and not caching then it would apply to all pages every time, under all
circumstances: initial visit, directly entered URL, or stepping through
links on other pages.
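For what it's worth, the hour-based check you describe is simple to
express.  A minimal Ruby sketch of the idea - the names here are
illustrative only, not Radiant's actual cache API:

```ruby
# Hypothetical freshness check: serve the cached copy if the page was
# generated less than an hour ago, no matter how the request arrived
# (direct URL, refresh, or a link from another page).
CACHE_TTL = 3600 # seconds

def fresh?(generated_at, now = Time.now)
  (now - generated_at) < CACHE_TTL
end

# A page generated 10 minutes ago is still fresh; one generated two
# hours ago must be re-rendered.
puts fresh?(Time.now - 600)    # => true
puts fresh?(Time.now - 7200)   # => false
```

The point is that the decision depends only on the server-side
timestamp, so a refresh or a direct hit should be no different from
following a link.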

Before working on the server I would eliminate the possibility that what
you are observing is due to a browser configuration issue.   When I'm
testing my sites I use a number of different browsers - different
versions of Firefox, different versions of IE, Chromium, Opera and
Konqueror - to make sure that even if the results are not quite the
same then they are acceptable.   I run those browsers on different
hardware, though I use a high-speed cable connection to eliminate that
kind of problem.

On the server side, I'd look to your logs to see how much *work* the
server has to do for each page; how many database hits, file references;
what the database latency is.
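If your Rails logs carry a timing breakdown, a few lines of Ruby will
split a request's time into view and database portions.  A rough sketch,
assuming a Rails 2.x-style "Completed in" line - check what your
version actually writes before relying on the pattern:

```ruby
# Assumed log line format; yours may differ.
LINE = 'Completed in 30120ms (View: 45, DB: 29800) | 200 OK'

if LINE =~ /Completed in (\d+)ms \(View: (\d+), DB: (\d+)\)/
  total, view, db = $1.to_i, $2.to_i, $3.to_i
  # "other" is time spent outside view rendering and the database -
  # for you that would be Radiant's parsing and compiling.
  puts "total=#{total}ms view=#{view}ms db=#{db}ms " \
       "other=#{total - view - db}ms"
end
```

Run over a whole log file, that quickly shows whether the 30 seconds is
going into the database or into the application itself.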

The browser tools will tell you the latency of the various components
being served up.  Some browsers will only fetch components (CSS,
graphics, text) one-by-one, others support parallel fetch if your server
can handle that.   THIS IS IMPORTANT.

One point which emerges from that is how your HTML Layout is done.
If the browser knows the size and position of everything beforehand then
it can lay them out and fill them in as they arrive.  If not, it has to
recalculate - and hence delay - until all the parts are available.

I found that with one of my layouts having the sidebar and its embedded
graphics specified BEFORE the main central body eliminated a 3 second
delay.  It allowed the browser to request the components that made up
the sidebar earlier.  That alteration in the scheduling of the parallel
download made a big difference.   Although the "hit" was on the server
and server 'latency', the problem was uncovered by looking at things
from the POV of the browser, NOT the server.

"The greatest of all faults is to be conscious of none. Recognizing our
limitations & imperfections is the first requisite of progress. Those
who believe they have "arrived" believe they have nowhere to go. Some
not only have closed their minds to new truth, but they sit on the lid."
   -- Dale Turner.
