"Gopal" <[EMAIL PROTECTED]> writes:
> Thanks for your suggestions. Here's the output of EXPLAIN ANALYZE.
What's the query exactly, and what are the schemas of the tables it
uses (psql \d descriptions would do)?
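For example, with "mytable" and "col" standing in for the real names:

    \d mytable
    EXPLAIN ANALYZE SELECT col, count(*) FROM mytable GROUP BY col;

A GROUP BY like that is the sort of thing that shows up as a
HashAggregate node in the plan.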
The actual runtime seems to be almost all spent in the hash aggregation
step.
Hi all,
I have a statistics table with more than 15 million rows. I'd like to
delete the oldest statistics, which may be about 7 million rows. Which
method would you recommend for doing this? I'd also be interested in
calculating some statistics about these deleted rows, like how many …
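One common approach when removing about half a table is to copy the
rows you want to keep into a new table and swap it in, collecting
statistics on the doomed rows first. A rough sketch, assuming a
hypothetical table "stats" with a "stat_time" timestamp column and a
made-up cutoff date (indexes, constraints and grants would have to be
recreated on the new table):

    BEGIN;
    -- statistics about the rows that are about to go away
    SELECT count(*) AS rows_to_delete,
           min(stat_time) AS oldest,
           max(stat_time) AS newest
      FROM stats
     WHERE stat_time < '2006-01-01';
    -- keep only the newer rows, then swap the tables
    CREATE TABLE stats_new AS
      SELECT * FROM stats WHERE stat_time >= '2006-01-01';
    DROP TABLE stats;
    ALTER TABLE stats_new RENAME TO stats;
    COMMIT;

A plain DELETE works as well, but deleting 7 of 15 million rows leaves
the table badly bloated until a VACUUM FULL or CLUSTER rewrites it.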
On Fri, 24 Nov 2006 09:22:45 +0100
Guido Neitzer <[EMAIL PROTECTED]> wrote:
> > effective_cache_size = 82728 # typically 8KB each
> Hmm. I don't know what the real effect of this might be as the doc
> states:
>
> "This parameter has no effect on the size of shared memory alloca
http://www.tpc.org/tpch/spec/tpch_20060831.tar.gz
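The tarball contains the dbgen and qgen tools. One thing to watch:
dbgen writes pipe-delimited .tbl files with a trailing '|' on every
line, which COPY rejects as an extra column, so strip it first. A
rough sketch, assuming the tables were already created from the TPC-H
schema DDL:

    ./dbgen -s 1                  # scale factor 1, about 1 GB of data
    sed -i 's/|$//' lineitem.tbl  # drop the trailing delimiter
    psql -d tpch -c "\copy lineitem from 'lineitem.tbl' with delimiter '|'"

Repeat for the other seven tables, then run the queries qgen produces.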
- Luke
On 11/24/06 8:47 AM, "Felipe Rondon Rocha" <[EMAIL PROTECTED]> wrote:
> Hi everyone,
>
> Does anyone have the TPC-H benchmark for PostgreSQL? Can you tell me
> where I can find the database and queries?
>
> Thanks,
> Felipe
>
Hi everyone,
Does anyone have the TPC-H benchmark for PostgreSQL? Can you tell me
where I can find the database and queries?
Thanks,
Felipe
Hi,
Thanks for your suggestions. Here's the output of EXPLAIN ANALYZE.
I'll change the shared_buffers and look at the behaviour again.
"Limit (cost=59.53..59.53 rows=1 width=28) (actual time=15.681..15.681
rows=1 loops=1)"
" -> Sort (cost=59.53..59.53 rows=1 width=28) (actual
time=15.678..
On 23.11.2006, at 23:37, Gopal wrote:
> hared_buffers = 2        # min 16 or max_connections*2, 8KB each
If this is not a copy & paste error, you should add the "s" at the
beginning of the line.
Also, you might want to set this to a higher number. You are setting
about …
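For reference, a sketch of what a higher setting looks like (the
number is illustrative, not a recommendation from this thread); on 8.x
shared_buffers is counted in 8 KB pages and changing it needs a
restart:

    shared_buffers = 25000   # 25000 * 8 KB ~= 200 MB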