Guy,
The application is fairly straightforward, but as you say, what is
working okay with BigDBMS isn't working as well under PG. I'm going to
try other configuration suggestions made by others before I attempt
logic changes. The core logic is unchangeable; millions of rows of data
in a sin
> Regarding shared_buffers=750MB, the last discussions I remember on this
> subject said that anything over 10,000 (8K buffers = 80 MB) had unproven
> benefits. So I'm surprised to see such a large value suggested. I'll
> certainly give it a try and see what happens.
>
That is old news :) A
Hi there, we've partitioned a table (using 8.2) by day due to the 50TB of
data (500k row size, 100G rows) we expect to store in a year.
Our performance on inserts and selects against the master table is
disappointing: 10x slower (with only 1 partition constraint) than we get by
going to the part
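For readers following along, here is a minimal sketch of what day-based partitioning with constraint exclusion looks like on 8.2. The table and column names are made up for illustration and are not from the original mail:

```sql
-- Parent table plus one day's child; the CHECK constraint is what lets
-- the planner skip partitions.
CREATE TABLE measurements (ts timestamptz NOT NULL, payload bytea);

CREATE TABLE measurements_20070105 (
    CHECK (ts >= '2007-01-05' AND ts < '2007-01-06')
) INHERITS (measurements);

-- Without this setting (off by default in 8.2), every child table is
-- scanned regardless of its constraints:
SET constraint_exclusion = on;

-- Comparisons against plan-time constants can now be pruned to one child:
SELECT count(*) FROM measurements
WHERE ts >= '2007-01-05' AND ts < '2007-01-06';
```

Note that constraint exclusion only kicks in when the WHERE clause compares the partition key against constants known at plan time, which may be part of the slowdown seen against the master table.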
Dave Cramer wrote:
The box has 3 GB of memory. I would think that BigDBMS would be hurt
by this more than PG. Here are the settings I've modified in
postgresql.conf:
As I said, you need to set shared_buffers to at least 750MB; this is the
starting point, and it can actually go higher. Addition
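Since several posters cite concrete numbers, a sketch of the relevant postgresql.conf lines for a 3 GB box may help; the exact values are a judgment call, not something from the original mails:

```
# Starting point for a 3 GB machine (adjust and benchmark):
shared_buffers = 750MB        # ~25% of RAM; needs a restart, and possibly
                              # a larger kernel SHMMAX
effective_cache_size = 2GB    # planner hint about the OS cache size;
                              # allocates no memory itself
```

On 8.2 and later you can write memory units directly; older releases take shared_buffers in 8 kB pages, so 750MB works out to roughly shared_buffers = 96000.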
Craig A. James wrote:
I don't know if you have access to the application's SQL, or the time to
experiment a bit, but unless your schema is trivial and your SQL is
boneheaded simple, you're not going to get equal performance from
Postgres until you do some analysis of your application under real-
Daryl Herzmann <[EMAIL PROTECTED]> writes:
> As the months have gone by, I notice many of my tables having *lots* of
> unused item pointers. For example,
> There were 31046438 unused item pointers.
> Total free space (including removable row versions) is 4894537260 bytes.
> 580240 pages are or w
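For anyone wanting to reproduce the numbers quoted above, they come from VACUUM VERBOSE output; a sketch, with an illustrative table name:

```sql
-- Per-table bloat report; the "unused item pointers" and page totals
-- appear in the trailing lines of the output:
VACUUM VERBOSE mytable;
```

On 8.1/8.2, a steadily growing unused-pointer count usually means plain vacuum isn't keeping up or max_fsm_pages is too small; the line pointers themselves are only fully reclaimed by a one-off VACUUM FULL (or CLUSTER).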
[Daryl Herzmann - Sat at 12:59:03PM -0600]
> As the months have gone by, I notice many of my tables having *lots* of
> unused item pointers. For example,
Probably not the issue here, but we had some similar issue where we had
many long-running transactions - i.e. some careless colleague entering
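A quick way to spot such offenders on 8.1/8.2 (this requires stats_command_string = on; later releases renamed some of these columns):

```sql
-- Backends sitting idle inside an open transaction prevent VACUUM from
-- reclaiming any row version newer than their snapshot:
SELECT procpid, usename, query_start, current_query
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
ORDER BY query_start;
```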
Greetings,
I've been running Postgresql for many years now and have been more than
happy with its performance and stability. One of those things I've never
quite understood was vacuuming. So I've been running 8.1.4 for a while
and enabled 'autovacuum' when I first installed 8.1.4 ... So in my
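For reference, autovacuum is off by default in 8.1 and also depends on the row-level stats collector; a sketch of the relevant postgresql.conf lines, with thresholds that are illustrative rather than taken from the original mail:

```
stats_start_collector = on     # required for autovacuum on 8.1
stats_row_level = on           # ditto
autovacuum = on
autovacuum_naptime = 60                  # seconds between cycles
autovacuum_vacuum_scale_factor = 0.2     # vacuum once ~20% of a table is dead
```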
On Fri, 2007-01-05 at 19:28 +0100, Rolf Østvik wrote:
> --- Tom Lane <[EMAIL PROTECTED]> skrev:
>
> > The number-of-matching-rows estimate has gone up by a factor of 10,
> > which undoubtedly has a lot to do with the much higher cost estimate.
> > Do you have any idea why that is ... is the table
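When the planner's rowcount estimate is off by an order of magnitude like this, a usual first step is to raise the statistics target on the column(s) in the qual and re-analyze; the table and column names below are illustrative:

```sql
-- The default statistics target was 10 in this era; 100 gives the
-- planner a much finer histogram for skewed columns:
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 100;
ANALYZE orders;
-- Then re-run EXPLAIN ANALYZE and compare estimated vs. actual rows.
```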