Thanks for the explanation. So what sort of changes need to be made to
the client/server protocol to fix this problem?
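As I understand it, the v3 protocol in 7.4 and later already lets a client
read a portal incrementally, since the Execute message carries a maximum row
count, so much of this is a question of driver support rather than of new
protocol messages. At the SQL level an explicit cursor gives the same chunked
behaviour today. A minimal sketch, assuming the "history" table discussed
later in the thread (the cursor name is just illustrative):

begin;
declare hist_cur cursor for
  select * from history where date='2004-09-07' and stock='ORCL';
fetch 1000 from hist_cur;  -- repeat until it returns no rows
close hist_cur;
commit;

Each fetch sends only that batch to the client, so nothing ever has to
buffer the whole 800,000-row result at once.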
On Thu, 23 Sep 2004 18:22:15 -0500 (EST), Kris Jurka <[EMAIL PROTECTED]> wrote:
>
>
> On Tue, 14 Sep 2004, Stephen Crowley wrote:
>
> > Problem solved
"Total runtime: 201009.000 ms"
So now this is all in proportion and works as expected. The question
is, why would the fact that it needs to be vacuumed cause such a huge
hit in performance? When I vacuumed it did free up nearly 25% of the
space.
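A quick way to get a feel for how much dead space was involved, as a sketch
only, assuming the table is named "history" as in the explain analyze query
quoted further down (relpages/reltuples are the stored estimates that
VACUUM/ANALYZE refresh):

vacuum verbose analyze history;
select relname, relpages, reltuples from pg_class where relname='history';

vacuum verbose reports how many dead row versions it removed, and comparing
relpages before and after shows the bloat. A table full of dead tuples makes
every scan read far more pages than the live rows alone would need, which is
the usual reason an un-vacuumed table performs so much worse.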
--Stephen
On Fri, 17 Sep 2004 2
Here are some results of explain analyze. I've included the LIMIT 10
because otherwise the result set would exhaust all available memory.
explain analyze select * from history where date='2004-09-07' and
stock='ORCL' LIMIT 10;
"Limit (cost=0.00..17.92 rows=10 width=83) (actual
time=1612.000..170
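Two things are worth keeping in mind here: explain analyze only returns the
plan text (the query's own rows are discarded on the server), so the
client-memory problem shouldn't apply to the unlimited form; and a LIMIT can
steer the planner to a different plan, so timings taken with LIMIT 10 may not
extrapolate to the full 800,000-row retrieval. A sketch of the unlimited run,
same query as above:

explain analyze select * from history where date='2004-09-07' and stock='ORCL';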
ECTED]> wrote:
> Stephen Crowley <[EMAIL PROTECTED]> writes:
> > On Mon, 13 Sep 2004 21:11:07 -0400, Tom Lane <[EMAIL PROTECTED]> wrote:
> >> Stephen Crowley <[EMAIL PROTECTED]> writes:
> >>> Does postgres cache the entire result set before it begins returning
> >>> data to the client?
On Mon, 13 Sep 2004 21:11:07 -0400, Tom Lane <[EMAIL PROTECTED]> wrote:
> Stephen Crowley <[EMAIL PROTECTED]> writes:
> > Does postgres cache the entire result set before it begins returning
> > data to the client?
>
> The backend doesn't, but libpq does.
Does postgres cache the entire result set before it begins returning
data to the client?
I have a table with ~8 million rows and I am executing a query which
should return about ~800,000 rows. The problem is that as soon as I
execute the query it absolutely kills my machine and begins swapping
for