On 08/28/2012 12:33 PM, Nimesh Satam wrote:
Hi,
We have been using the current version of Postgres, i.e. 9.1.4, with
streaming replication on. While vacuuming we noticed that certain dead
rows are not getting removed and the following debug information is printed:
"DETAIL: 12560 dead row versions cannot be removed yet."
On Tue, Aug 28, 2012 at 10:03 AM, Nimesh Satam wrote:
> Hi,
>
> We have been using the current version of Postgres, i.e. 9.1.4, with
> streaming replication on. While vacuuming we noticed that certain dead rows
> are not getting removed and the following debug information is printed:
>
> "DETAIL: 12560 dead row versions cannot be removed yet."
There was no "hot standby" configuration, but the DB started growing fast after
restoring from a base backup as described in
http://www.postgresql.org/docs/8.3/static/continuous-archiving.html#BACKUP-BASE-BACKUP
The DB had been growing for a while, and now it seems to have become stable after
adju
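To tell whether post-restore growth like this is bloat concentrated in a few relations or genuinely new data, per-relation size queries are usually the quickest check; a minimal sketch, not specific to the original setup:

-- overall database size
SELECT pg_size_pretty(pg_database_size(current_database())) AS db_size;

-- ten largest tables, including their indexes and TOAST data
SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;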
Hi,
We have been using the current version of Postgres, i.e. 9.1.4, with
streaming replication on. While vacuuming we noticed that certain dead rows
are not getting removed and the following debug information is printed:
"DETAIL: 12560 dead row versions cannot be removed yet."
As per suggestion, we ma
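The usual reasons for "dead row versions cannot be removed yet" are an old open transaction, a forgotten prepared transaction, or cleanup being deferred for the standby (hot_standby_feedback on the standby, or vacuum_defer_cleanup_age on the primary). A minimal sketch of what to check on 9.1, using the 9.1 column names; the session details are illustrative, not from the original report:

-- oldest open transactions on the primary
SELECT procpid, usename, xact_start, current_query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start
LIMIT 5;

-- prepared (two-phase) transactions also hold back the cleanup horizon
SELECT gid, prepared, owner, database FROM pg_prepared_xacts;

-- setting on the primary that deliberately defers cleanup
SHOW vacuum_defer_cleanup_age;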
On Sun, Aug 26, 2012 at 5:46 AM, Liron Shiri wrote:
> Hi,
>
> We have a table whose TOAST table is 66 GB in size, and we believe it should
> be smaller.
>
> The table itself is 472 kB, and it has 4 columns, only one of which
> should be TOASTed.
>
> The table has only 8 dead tuples,
Jayadevan M wrote:
> I have a plpgsql function that takes a few seconds (less than 5) when
> executed from psql. The same function, when invoked from Java via a
> prepared statement, takes a few minutes. There are a few queries
> in the function. Out of these, the first query takes input parameters f
Hello all,
I have a plpgsql function that takes a few seconds (less than 5) when
executed from psql. The same function, when invoked from Java via a
prepared statement, takes a few minutes. There are a few queries in the
function. Out of these, the first query takes input parameters for
filter
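One way to reproduce the JDBC behaviour from psql is to prepare the call with parameter placeholders instead of literals, so the server sees the same parameterized statement; the function name, parameter types, and values below are placeholders, not taken from the original post:

-- mimic the prepared statement the Java driver sends
PREPARE slow_call(date, integer) AS
  SELECT * FROM my_report_function($1, $2);

EXPLAIN ANALYZE EXECUTE slow_call('2012-08-01', 42);

-- compare with the direct call that is fast from psql
EXPLAIN ANALYZE SELECT * FROM my_report_function('2012-08-01', 42);

If the time is spent inside the function's own queries, loading auto_explain with auto_explain.log_nested_statements = on will log their individual plans.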
Hi,
We have a table whose TOAST table is 66 GB in size, and we believe it should be
smaller.
The table itself is 472 kB, and it has 4 columns, only one of which should be
TOASTed.
The table has only 8 dead tuples, so apparently this is not the problem.
This table contains a column with b
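To confirm where the 66 GB actually lives, the TOAST relation can be sized directly through pg_class.reltoastrelid; a sketch, with 'my_table' standing in for the real table name:

-- heap only vs. heap + indexes + TOAST
SELECT pg_size_pretty(pg_relation_size('my_table'))       AS heap_size,
       pg_size_pretty(pg_total_relation_size('my_table')) AS total_size;

-- size of the TOAST table belonging to my_table
SELECT t.relname AS toast_relation,
       pg_size_pretty(pg_total_relation_size(t.oid)) AS toast_size
FROM pg_class c
JOIN pg_class t ON t.oid = c.reltoastrelid
WHERE c.relname = 'my_table';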
Hello List,
I've got a system at a customer's location which has a Xeon E5504 @ 2.00GHz
processor (HP ProLiant).
It's Postgres 8.4 on a Debian Squeeze system with 8 GB of RAM.
The Postgres performance on this system measured with pgbench is very poor:
transaction type: TPC-B (sort of)
sca
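For performance described as very poor on that class of hardware, the write-path settings are usually the first thing to look at; a sketch of the GUCs worth inspecting (the list is only a starting point, not a tuning recommendation):

SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'checkpoint_segments', 'wal_buffers',
               'synchronous_commit', 'fsync', 'effective_cache_size');

Since the TPC-B workload is commit-bound, whether the disk controller has a battery-backed write cache matters at least as much as these settings.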