Matt - Very interesting information about squid effectiveness, thanks.
Martin,
You mean your site had no images? No CSS files? No JavaScript files? Nearly
everything is dynamic?
I've found that our CMS spends more time sending a 23KB image to a dial-up
user than it does generating and serving dynamic pages.
November 05, 2004 3:50 AM
To: [EMAIL PROTECTED]
Subject: Re: [PERFORM] Restricting Postgres
[...]
Now is there an administrative command in PostgreSQL that will cause it
to move into some sort of maintenance mode? For me that could be
exceedingly useful as it would still allow for an admin
On Thu, Nov 04, 2004 at 23:32:57 +,
Matt Clark <[EMAIL PROTECTED]> wrote:
> >
> >I think in the future there will be a good bit of presentation
> >logic in the client...
>
> Not if Bruno has his way ;-)
Sure there will, but it will be controlled by the client, perhaps taking
suggestions
Matt Clark wrote:
Pierre-Frédéric Caillaud wrote:
check this marvelous piece of 5 minutes' work:
http://boutiquenumerique.com/test/iframe_feed.html
that made me smile :-)
(apologies for bad french)
M
Javascript is not an option for the scripts; one of the mandates of the
project is to su
Pierre-Frédéric Caillaud wrote:
check this marvelous piece of 5 minutes' work:
http://boutiquenumerique.com/test/iframe_feed.html
that made me smile :-)
(apologies for bad french)
M
A note, though: you'll have to turn off HTTP persistent
connections in your server (not in your proxy) or you're back to
square one.
I hadn't considered that. On the client side it would seem to be up to
the client whether to use a persistent connection or not. If it does,
then yeah, a
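In Apache, the note above about persistent connections comes down to a single directive. This is only a sketch of the idea, not anyone's actual configuration from the thread:

```apache
# httpd.conf on the backend/origin server (NOT on the proxy):
# with keepalive off, each long-running response frees its child
# process as soon as the response completes
KeepAlive Off
```

The proxy in front can still keep its client-side connections alive; only the origin server needs to drop them.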
check this marvelous piece of 5 minutes' work:
http://boutiquenumerique.com/test/iframe_feed.html
Yup. If you go the JS route then you can do even better by using JS to
load data into JS objects in the background and manipulate the page
content directly, no need for even an Iframe. Ignore t
Javascript is too powerful to turn on for any random web page. It is only
essential for web pages because people write their web pages to only
work with javascript.
Hmm... I respectfully disagree. It is so powerful that it is impossible
to ignore when implementing a sophisticated app. And it is
On Thu, Nov 04, 2004 at 22:37:06 +,
Matt Clark <[EMAIL PROTECTED]> wrote:
> >...
>
> Yup. If you go the JS route then you can do even better by using JS to
> load data into JS objects in the background and manipulate the page
> content directly, no need for even an Iframe. Ignore the dul
In your webpage, include an iframe with JavaScript to refresh it
every five seconds. The iframe fetches a page from the server which
brings in the new data in the form of generated JavaScript which
writes into the parent window. Thus, you get a very short request every 5
seconds to fetch new data.
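A minimal sketch of the server side of that trick, assuming a CGI-style handler. The function name, the element id 'feed', and the escaping scheme are all invented for illustration; the posts don't show actual code:

```python
import html
import json

def render_update_script(new_lines):
    """Build the iframe response: JavaScript that, when loaded in the
    hidden iframe, appends each new line to an element in the parent
    window (the element id 'feed' is an assumption)."""
    payload = json.dumps([html.escape(line) for line in new_lines])
    return (
        "<script>\n"
        f"var lines = {payload};\n"
        "for (var i = 0; i < lines.length; i++) {\n"
        "  parent.document.getElementById('feed').innerHTML\n"
        "      += lines[i] + '<br>';\n"
        "}\n"
        "</script>\n"
    )
```

Because the iframe reloads every few seconds, each request stays tiny and the main page is never re-rendered.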
I'm guessing (2) - PG doesn't give the results of a query in a stream.
In 1- I was thinking about a cursor...
but I think his problem is more like 2-
In that case one can either code a special purpose server or use the
following hack :
In your webpage include an iframe with a Java
These are CGI scripts at the lowest level, nothing more and nothing
less. While I could probably embed a small webserver directly into
the perl scripts and run that as a daemon, it would take away the
portability that the scripts currently offer.
If they're CGI *scripts* then they just use the
1- You have a query that runs for half an hour and you spoon-feed
the results to the client?
(argh)
2- Your script looks for new data every few seconds, sends a
packet, then sleeps, and loops?
If it's 2 I have a ready-made solution for you, just ask.
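Pattern 2- can be sketched as a polling generator. Here `fetch_new_rows` stands in for whatever query the real script runs; this is a sketch of the loop shape, not the solution being offered in the thread:

```python
import time

def feed(fetch_new_rows, poll_seconds=5, max_polls=None):
    """Yield one chunk of output per polling cycle; the caller writes
    and flushes each chunk so the browser sees data as it arrives."""
    polls = 0
    while max_polls is None or polls < max_polls:
        rows = fetch_new_rows()
        if rows:
            # one small payload per cycle, not one giant response
            yield "\n".join(rows) + "\n"
        polls += 1
        if max_polls is None or polls < max_polls:
            time.sleep(poll_seconds)

# usage (hypothetical query function):
#   for chunk in feed(query_fn):
#       sys.stdout.write(chunk)
#       sys.stdout.flush()
```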
I'm guessing (2) - PG doesn't give the results of a query in a stream.
On Thu, Nov 04, 2004 at 03:30:19PM -0500, Martin Foster wrote:
> This should be my last question on the matter: does squid report the
> proper IP address of the clients themselves? That's a critical
> requirement for the scripts.
AFAIK it's in some header; I believe they're called "X-Forwarded-For".
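Pulling the original client address out of that header might look like the following (the helper name is mine; the leftmost entry of X-Forwarded-For is the address the proxy saw on the incoming connection):

```python
def client_ip(environ):
    """Return the client address when running behind a proxy like squid.

    X-Forwarded-For is a comma-separated list; the first entry is the
    original client, later entries are intermediate proxies.
    """
    forwarded = environ.get("HTTP_X_FORWARDED_FOR", "")
    if forwarded:
        return forwarded.split(",")[0].strip()
    # no proxy header: fall back to the direct peer address
    return environ.get("REMOTE_ADDR", "")
```

Note the usual caveat: the header is client-supplied unless the proxy strips and rewrites it, so it should only be trusted when requests can reach the scripts solely through squid.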
Matt Clark wrote:
Correct - 75% of all hits are on a script that can take anywhere
from a few seconds to half an hour to complete. The script
essentially auto-flushes to the browser so they get new information
as it arrives, creating the illusion of on-demand generation.
This is more li
On Thu, 4 Nov 2004 18:20:18 -, Matt Clark <[EMAIL PROTECTED]> wrote:
Correct - 75% of all hits are on a script that can take anywhere
from a few seconds to half an hour to complete. The script
essentially auto-flushes to the browser so they get new information
as it arrives, creating the illusion of on-demand generation.
> Correct - 75% of all hits are on a script that can take anywhere
> from a few seconds to half an hour to complete. The script
> essentially auto-flushes to the browser so they get new information
> as it arrives, creating the illusion of on-demand generation.
This is more like a s
Matt Clark wrote:
Apache::DBI overall works better for what I require, even if it is
not a pool per se. Now if pgpool supported variable-rate pooling like
Apache does with its children, it might help to even things out. That
and you'd still get the spike if you have to start the webserver
Matt Clark wrote:
Case in point: A first time visitor hits your home page. A
dynamic page is generated (in about 1 second) and served
(taking 2 more seconds) which contains links to 20 additional
The gain from an accelerator is actually even more than that, as it takes
essentially zero seconds
Myself, I like a small Apache with few modules serving static files (no
dynamic content, no db connections), and with a mod_proxy on a special
path directed to another Apache which generates the dynamic pages (few
processes, persistent connections...)
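Wired up with mod_proxy, that split might look like the following (the path and port number are made up for illustration):

```apache
# Front Apache: many cheap children serve static files directly;
# only the dynamic path is forwarded to the heavyweight backend
ProxyPass        /app/ http://localhost:8081/app/
ProxyPassReverse /app/ http://localhost:8081/app/
```

The backend Apache listening on 8081 then runs only a few processes, each holding a persistent database connection.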
You get the best of both, static files
> Case in point: A first time visitor hits your home page. A
> dynamic page is generated (in about 1 second) and served
> (taking 2 more seconds) which contains links to 20 additional
The gain from an accelerator is actually even more than that, as it takes
essentially zero seconds for Apache
> Apache::DBI overall works better for what I require, even if it is
> not a pool per se. Now if pgpool supported variable-rate pooling like
> Apache does with its children, it might help to even things out. That
> and you'd still get the spike if you have to start the webserver and
>
Kevin Barnard wrote:
I am generally interested in a good solution for this. So far our
solution has been to increase the hardware to the point of allowing
800 connections to the DB.
I don't have the mod loaded for Apache, but we haven't had too many
problems there. The site is split pretty good b
Simon Riggs wrote:
All workloads are not created equally, so mixing them can be tricky.
This will be better in 8.0 because seq scans don't spoil the cache.
Apache is effectively able to segregate the workloads because each
workload is "in a directory". SQL isn't stored anywhere for PostgreSQL
to sa
Matt Clark wrote:
I have a dual processor system that can support over 150 concurrent
connections handling normal traffic and load. Now suppose I set up
Apache to spawn all of its children instantly, what will
...
This will spawn 150 children in short order, and as
this takes
"Doct
On Wed, 2004-11-03 at 21:25, Martin Foster wrote:
> Simon Riggs wrote:
> > On Tue, 2004-11-02 at 23:52, Martin Foster wrote:
> >
> >>Is there a way to restrict how much load a PostgreSQL server can take
> >>before dropping queries in order to safeguard the server? I was
> >>looking at the log
> I have a dual processor system that can support over 150 concurrent
> connections handling normal traffic and load. Now suppose I set up
> Apache to spawn all of its children instantly, what will
...
> This will spawn 150 children in short order, and as
> this takes
"Doctor, it h
John A Meinel wrote:
Martin Foster wrote:
Simon Riggs wrote:
On Tue, 2004-11-02 at 23:52, Martin Foster wrote:
[...]
I've seen this behavior before when restarting the web server during
heavy loads. Apache goes from zero connections to a solid 120,
causing PostgreSQL to spawn that many childre
Martin Foster wrote:
Simon Riggs wrote:
On Tue, 2004-11-02 at 23:52, Martin Foster wrote:
[...]
I've seen this behavior before when restarting the web server during
heavy loads. Apache goes from zero connections to a solid 120,
causing PostgreSQL to spawn that many children in a short order of
Simon Riggs wrote:
On Tue, 2004-11-02 at 23:52, Martin Foster wrote:
Is there a way to restrict how much load a PostgreSQL server can take
before dropping queries in order to safeguard the server? I was
looking at the login.conf(5) man page and while it allows me to limit
by processor time t
On Tue, 2004-11-02 at 23:52, Martin Foster wrote:
> Is there a way to restrict how much load a PostgreSQL server can take
> before dropping queries in order to safeguard the server? I was
> looking at the login.conf(5) man page and while it allows me to limit
> by processor time this seems t
On Tue, Nov 02, 2004 at 11:52:12PM +, Martin Foster wrote:
> Is there a way to restrict how much load a PostgreSQL server can take
> before dropping queries in order to safeguard the server? I was
Well, you could limit the number of concurrent connections, and set
the query timeout to a reasonable value.
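Both knobs exist in stock PostgreSQL. The values below are only illustrative, and the role name `webuser` is invented:

```sql
-- postgresql.conf: cap the number of concurrent backends
-- max_connections = 150

-- cap query runtime (in milliseconds) for the web role; 0 disables it
ALTER USER webuser SET statement_timeout = 30000;
```

With the timeout set per role, interactive admin sessions are unaffected while runaway web queries get cancelled.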
Is there a way to restrict how much load a PostgreSQL server can take
before dropping queries in order to safeguard the server? I was
looking at the login.conf(5) man page and while it allows me to limit
by processor time, this seems not to fit my specific needs.
Essentially, I am looking fo