Subject: Re: [PERFORM] Update performance degrades over time
Yes, we are updating one of the indexed timestamp columns, which gets a
unique value on every update. We tried setting
autovacuum_vacuum_scale_factor = 0.1 (down from the default) to make
autovacuum a bit more aggressive; we see bloating on both the table and
its indexes.
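For reference, the relevant postgresql.conf knobs look like this (the 0.1 value is the one tried above; note that on 8.2, per-table overrides go through the pg_autovacuum catalog rather than ALTER TABLE):

```
autovacuum = on                        # off by default in 8.2
autovacuum_vacuum_scale_factor = 0.1   # vacuum when dead tuples exceed ~10% of the table
```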
On Wed, May 14, 2008 at 6:31 PM, Subbiah Stalin-XCGF84
<[EMAIL PROTECTED]> wrote:
> Hi All,
>
> We are doing some load tests with our application running postgres 8.2.4. At
> times we see updates on a table taking longer (around
> 11-16 secs) than the expected sub-second response time. The table in question…
Hi All,
We are doing some load tests with our application running postgres
8.2.4. At times we see updates on a table taking longer (around
11-16 secs) than the expected sub-second response time. The table in
question is getting updated constantly through the load tests. In
checking the table size, incl…
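One way to watch table growth during a load test like this (a sketch; `mytable` is a placeholder, and the size functions shown are available on 8.2):

```sql
-- Heap size vs. heap + indexes + TOAST for the table under test
SELECT pg_size_pretty(pg_relation_size('mytable'))       AS heap_size,
       pg_size_pretty(pg_total_relation_size('mytable')) AS total_size;

-- Which tables are taking the most update traffic
SELECT relname, n_tup_upd, n_tup_del
FROM pg_stat_user_tables
ORDER BY n_tup_upd DESC;
```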
On Wed, 11 Feb 2004, stefan bogdan wrote:
> hello
> I have PostgreSQL 7.3.2 on Linux Red Hat 9.0, a database, and 20 tables;
> a lot of the fields are char(x). When I have to update all the fields
> except the index, Postgres works very hard. What should I change in the
> configuration to make it work faster?
hello
I have PostgreSQL 7.3.2 on Linux Red Hat 9.0, a database, and 20 tables;
a lot of the fields are char(x). When I have to update all the fields
except the index, Postgres works very hard. What should I change in the
configuration to make it work faster?
thanks
bogdan
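Not stated in the thread, but the usual first step on a 7.3-era install doing heavy updates is regular vacuuming, since dead row versions accumulate until vacuumed (`mytable` is a placeholder):

```sql
-- Reclaim space left by updated rows and refresh planner statistics
VACUUM ANALYZE;

-- For a single table, VERBOSE reports how many dead tuples were removed
VACUUM VERBOSE ANALYZE mytable;
```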
I have updated my hardware performance documentation to reflect the
findings during the past few months on the performance list:
http://candle.pha.pa.us/main/writings/pgsql/hw_performance/index.html
Thanks.
--
Bruce Momjian  |  http://candle.pha.pa.us
[EMAIL PROTECTED]
> shared_buffers = 128  # min max_connections*2 or 16, 8KB each
Try 1500.
> sort_mem = 65535  # min 64, size in KB
I'd pull this in. You only have 640MB of RAM, which means about 8 large
sorts would push you into swap.
How about 16000?
> fsync = false
I presume you understand the risks involved with turning fsync off.
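Pulling the suggestions above together into a postgresql.conf fragment (these are the values proposed in the thread, not benchmarked recommendations):

```
shared_buffers = 1500   # 8KB pages, ~12MB; up from the 128 default
sort_mem = 16000        # KB per sort; 65535 risks swapping with 640MB RAM
fsync = true            # leave on unless losing recent commits on a crash is acceptable
```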
Thanks to Greg Stark, Tom Lane and Stephan Szabo for their advice on
rewriting my query... the revised query plan claims it should take only
about half the time my original query did.
Now for a somewhat different question: how might I improve my DB
performance by adjusting the various parameters…
Erik Norvelle <[EMAIL PROTECTED]> writes:
> Here's the query I am running:
> update indethom
>    set query_counter = nextval('s2.query_counter_seq'),  -- just for keeping track of how fast the query is running
>        sectref = (select clavis from s2.sectiones where s2.sectio…
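The usual rewrite for a per-row correlated subselect like the one quoted is a joined UPDATE; a sketch (the join column is hypothetical, since the quoted query is cut off):

```sql
-- One scan of s2.sectiones instead of a subselect per updated row
UPDATE indethom
SET    sectref = s.clavis
FROM   s2.sectiones AS s
WHERE  s.sectno = indethom.sectno;  -- hypothetical join condition
```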
On Tue, 2 Dec 2003, Erik Norvelle wrote:
> ** My question has to do with whether or not I am getting maximal speed
> out of PostgreSQL, or whether I need to perform further optimizations.
> I am currently getting about 200,000 updates per hour, and updating the
> entire 10 million rows thus requi…
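The rate quoted above pins down the full-pass time:

```sql
-- 10 million rows at 200,000 updates/hour
SELECT 10000000 / 200000 AS hours;  -- 50
```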
Folks:
I'm running a query which is designed to generate a foreign key for a table of approx. 10 million records (I've mentioned this in an earlier posting). The table is called "indethom", and each row contains a single word from the works of St. Thomas Aquinas, along with grammatical data about…