Hi All,
I was wondering if there is a DB performance reduction if there are a lot of
IDLE processes.
30786 ?S 0:00 postgres: user1 gmadb 10.10.10.1 idle
32504 ?S 0:00 postgres: user1 gmadb 10.10.10.1 idle
32596 ?S 0:00 postgres: user1 gmadb 10.10.1
Hi All,
I'm really desperate about this. The problem has occurred at both of my
customer sites, first with Cygwin and now with FreeBSD 5.3.
After 2 months, Postgres performance starts to degrade, and simple queries
that should run in 100 ms now take about 15 seconds.
Another behaviour: the data is growing too much, for no reason, just like
the comparison.
On Fri, Feb 18, 2005 at 11:54:34AM -0300, Rodrigo Moreno wrote:
> 00 23 * * 1-5 /usr/local/pgsql/bin/psql supre -c "vacuum analyze;" >>/dev/null 2>&1
Isn't vacuum once a day a bit too little with heavy activity? You should
probably consider autovacuum.
> 00 23 * * 6 /usr/local/pgsql/bin/psql sup
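For reference, on releases where autovacuum is built into the server (8.1 and
later; on 8.0-era systems it was the contrib pg_autovacuum daemon), enabling
it is a postgresql.conf change along these lines (the naptime value below is
illustrative, not a recommendation):

autovacuum = on
autovacuum_naptime = 60    # seconds between checks for work to do

The daemon then vacuums and analyzes tables as dead rows accumulate, rather
than on a fixed schedule.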
JM <[EMAIL PROTECTED]> writes:
> I was wondering if there is a DB performance reduction if there are a
> lot of
> IDLE processes.
There will be some overhead, but I dunno if anyone's ever tried to
measure it.
regards, tom lane
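One way to at least count such backends, assuming PostgreSQL 9.2 or later
where pg_stat_activity has a state column (older releases exposed this as
current_query = '<IDLE>' instead):

SELECT count(*) FROM pg_stat_activity WHERE state = 'idle';

Each idle backend still occupies a process slot and a little shared memory,
which is presumably where any overhead would come from.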
"Rodrigo Moreno" <[EMAIL PROTECTED]> writes:
> After 2 months, Postgres performance starts to degrade, and simple queries
> that should run in 100 ms now take about 15 seconds.
> Another behaviour: the data is growing too much, for no reason, just like
> the comparison.
Are you vacuuming on a regular basis? Do you have the FSM settings high
enough to cover the database?
On Fri, Feb 18, 2005 at 09:32:25AM -0500, Tom Lane wrote:
> Are you vacuuming on a regular basis? Do you have the FSM settings high
> enough to cover the database?
He posted his cron settings ;-)
/* Steinar */
--
Homepage: http://www.sesse.net/
Hi,
This is only max 15 concurrent connections, and it is not a heavily loaded
database, so I don't think it is necessary to vacuum more than once a day.
Another customer has only 5 users and a 300 MB database, quite small, and it
shows the same behaviour (PostgreSQL's configuration hasn't been modified).
My first ins
> 00 23 * * 1-5 /usr/local/pgsql/bin/psql supre -c "vacuum analyze;"
Also, this is bad: you are not vacuuming all your databases, which will one
day cause data loss due to transaction ID wraparound. Use the vacuumdb
utility that comes with PostgreSQL instead.
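A crontab entry along these lines would cover every database in the cluster
(the schedule and redirect are carried over from the original entry; -a
vacuums all databases, -z also runs ANALYZE):

00 23 * * 1-5 /usr/local/pgsql/bin/vacuumdb -a -z >>/dev/null 2>&1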
Chris
"Rodrigo Moreno" <[EMAIL PROTECTED]> writes:
> max_fsm_pages = 40000
> max_fsm_relations = 2000
> But why, after 2 months, is the database 1.3 GB, when after a reimport it
> is only 900 MB?
40k pages = 320M bytes = 1/3rd of your database. Perhaps you need a
larger setting for max_fsm_pages.
However, 30% bloat
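One rough way to size max_fsm_pages (a sketch, not from this thread) is to
look at how many pages the largest relations occupy, as reported in pg_class
after a VACUUM or ANALYZE:

SELECT relname, relpages FROM pg_class ORDER BY relpages DESC LIMIT 10;

Note that max_fsm_pages and max_fsm_relations only exist through 8.3; from
8.4 onward the free space map is kept on disk and sized automatically.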
Thanks to all,
At this moment I can't stop the database and put the old one back, but
tonight I will run more analysis on the old database and the reimported
one, and I will post the results here.
Thanks a lot
Rodrigo
-----Original Message-----
From: Tom Lane [mailto:[EMAIL PROTECTED]]
Sent: Friday,
Magnus prepared a trivial patch which added the O_SYNC flag for Windows
and mapped it to FILE_FLAG_WRITE_THROUGH in win32_open.c. We ran pgbench
on it, and here are the results of our test on my WinXP workstation with a
10k Raptor:
Settings were pgbench -t 100 -c 10.
fsync = off: ~280 tps
fsync on,
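For anyone wanting to reproduce the numbers, the full invocation would look
roughly like this (the database name and initialization scale factor are
assumptions, not from the original post):

pgbench -i -s 10 bench      # initialize the test tables at scale factor 10
pgbench -t 100 -c 10 bench  # 10 clients, 100 transactions each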
Josh Berkus wrote:
> Tatsuo,
>
>
>>Yes. However it would be pretty easy to modify pgpool so that it could
>>cope with Slony-I. I.e.
>>
>>1) pgpool does the load balance and sends query to Slony-I's slave and
>> master if the query is SELECT.
>>
>>2) pgpool sends query only to the master if the
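As a rough illustration, this is essentially the mode that pgpool-II later
shipped; a minimal sketch of the relevant pgpool.conf lines (pgpool-II
parameter names, which the 2005-era pgpool under discussion did not have):

load_balance_mode = on
master_slave_mode = on
master_slave_sub_mode = 'slony'

With these set, plain SELECTs are load-balanced across the Slony-I master
and slaves, while writes go only to the master.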
Jim C. Nasby wrote:
> On Thu, Jan 20, 2005 at 10:08:47AM -0500, Stephen Frost wrote:
>
>>* Christopher Kings-Lynne ([EMAIL PROTECTED]) wrote:
>>
>>>PostgreSQL has replication, but not partitioning (which is what you want).
>>
>>It doesn't have multi-server partitioning. It's got partitioning
>>w
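For context, single-server partitioning at the time meant table inheritance
plus CHECK constraints; a minimal sketch (table and column names invented
for illustration):

CREATE TABLE measurement (logdate date NOT NULL, reading int);
CREATE TABLE measurement_2005 (
    CHECK (logdate >= DATE '2005-01-01' AND logdate < DATE '2006-01-01')
) INHERITS (measurement);

With constraint_exclusion enabled (added in 8.1), the planner can skip child
tables whose CHECK constraints exclude the query's range.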
Magnus Hagander wrote:
> I don't think that's correct either. Scatter/Gather I/O is used so that SQL
> Server can issue reads for several blocks from disk into its own
> buffer cache with a single syscall, even if these buffers are not
> sequential. It did make significant performance improvements when