Hi,
I'm not totally sure I should ask on this mailing list, so if you think
I should ask someplace else, please let me know.
The problem I have is that specific queries (inserts and updates) take a
long time to run.
Of course I vacuum analyze frequently. I also use EXPLAIN ANALYZE on ...
-----Original Message-----
From: Keith Worthington [mailto:[EMAIL PROTECTED]]
Sent: Monday, April 11, 2005 7:44 PM
To: Neil Conway
Cc: PostgreSQL Perform
Subject: Re: [PERFORM] 4 way JOIN using aliases

Neil Conway wrote:
Keith Worthington wrote:
->  Seq Scan on ...
hubert lubaczewski [EMAIL PROTECTED] writes:
and it made me wonder - is there a way to tell how much of the backend's
time was spent on triggers, index updates and so on?
In CVS tip, EXPLAIN ANALYZE will break out the time spent in each
trigger. This is not in any released version, but if you're ...
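For context, here is a sketch of how that per-trigger breakdown can be
exercised; the table and trigger names are hypothetical, and the timing
line shown is illustrative rather than verbatim output:

EXPLAIN ANALYZE UPDATE orders SET status = 'shipped' WHERE id < 1000;

-- In a build with the feature, per-trigger timing is reported
-- after the plan, along the lines of:
--   Trigger orders_audit: time=42.000 calls=999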
Bendik R. Johansen [EMAIL PROTECTED] writes:
I am having a bit of trouble updating a single integer column.
My table has around 10 columns and 260,000 records.

update no.records set uid = 2;

(uid is an integer. It has a btree index.)
This update takes more than 20 minutes to execute. Is this ...
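One way to see where such an update spends its time without committing
the change is to run it under EXPLAIN ANALYZE inside a transaction and
roll back (a sketch using the table name from the post; note that
EXPLAIN ANALYZE really executes the statement):

BEGIN;
-- EXPLAIN ANALYZE executes the UPDATE, so wrap it in a transaction
-- and roll back to leave the data untouched.
EXPLAIN ANALYZE UPDATE no.records SET uid = 2;
ROLLBACK;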
On Tue, Apr 12, 2005 at 10:18:31AM -0400, Alex Turner wrote:
Speaking of triggers...
Is there any plan to speed up plpgsql triggers? Fairly simple
cross-table insert triggers seem to slow my inserts to a crawl.

plpgsql is quite fast, actually. If some triggers slow inserts too much,
I guess you ...
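For reference, a minimal sketch of the kind of cross-table insert
trigger under discussion; the orders and audit_log tables are
hypothetical:

-- Hypothetical audit table populated by the trigger below.
CREATE TABLE audit_log (
    tbl    text,
    row_id integer,
    at     timestamptz DEFAULT now()
);

CREATE OR REPLACE FUNCTION log_order_insert() RETURNS trigger AS '
BEGIN
    -- One extra INSERT per inserted row; this per-row cost is what
    -- adds up on bulk loads.
    INSERT INTO audit_log (tbl, row_id) VALUES (''orders'', NEW.id);
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER orders_audit AFTER INSERT ON orders
    FOR EACH ROW EXECUTE PROCEDURE log_order_insert();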
On Tue, Apr 12, 2005 at 12:46:43PM +0200, hubert lubaczewski wrote:
the problem is that both the inserts and updates operate on
heavily triggered tables.
and it made me wonder - is there a way to tell how much of the backend's
time was spent on triggers, index updates and so on?
like:
total query ...
On Tue, Apr 12, 2005 at 08:43:59AM -0600, Michael Fuhr wrote:
8.1devel changes frequently (sometimes requiring initdb) and isn't
suitable for production, but if the trigger statistics would be
helpful then you could set up a test server and load a copy of your
database into it. Just beware ...
Bendik R. Johansen [EMAIL PROTECTED] writes:
Below is the schema for the table I will be using. I tried dropping the
index, but it did not help.

Indexes:
    records_pkey PRIMARY KEY, btree (id)
    records_category_idx gist (category)
    records_cid_idx btree (cid)
Yes, I tried dropping it but it did not make a difference.
Could the table be corrupt or something?
Well, the important thing is that I now know that this is not typical
for PostgreSQL, so I will not have to rethink my whole project.
Thanks so far.
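As an aside, a quick sanity check that helps distinguish dead-tuple
bloat from real corruption in a case like this (relation name taken
from the thread) is to compare the table's page count with its row
count:

-- relpages and reltuples are maintained by VACUUM/ANALYZE; a page
-- count far out of proportion to the row count after heavy updates
-- points to bloat, which VACUUM FULL can compact.
SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname = 'records';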
On Apr 12, 2005, at 17:16, Tom Lane wrote:
Bendik R. Johansen [EMAIL PROTECTED] writes:
Yes, I tried dropping it but it did not make a difference.
Could the table be corrupt or something?
You didn't directly answer the question about whether there were foreign
keys leading to this table. Checking foreign keys could be the problem, ...
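A catalog query along these lines would list foreign keys referencing
the table (a sketch; the regclass cast assumes PostgreSQL 7.4 or later,
and the table name is taken from the post):

-- List foreign-key constraints on other tables that reference
-- no.records; each one has to be checked when referenced key
-- values change.
SELECT c.conname,
       c.conrelid::regclass AS referencing_table
FROM pg_constraint c
WHERE c.contype = 'f'
  AND c.confrelid = 'no.records'::regclass;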
I'd like to create a fail-over server in case of a problem. Ideally, it
would be synchronized with our main database server, but I don't see any
major problem with having a delay of up to 4 hours between syncs.
My database is a little shy of 10 Gigs, with much of that data being in an
archived ...
So, my question is this: my server currently works great,
performance-wise. I need to add fail-over capability, but I'm afraid that
introducing a stressful task such as replication will hurt my server's
performance. Is there any foundation to my fears? I don't need to
replicate the archived log ...
I'm eager to hear your thoughts and experiences,
Well, with Replicator you are going to take a pretty big hit initially
during the full sync, but then you could use batch replication and only
replicate every 2-3 hours.

Sincerely,
Joshua D. Drake
Thanks, I'm looking at your product.
On Tuesday 12 April 2005 09:25, Matthew Nuzum wrote:
I'd like to create a fail-over server in case of a problem. Ideally, it
would be synchronized with our main database server, but I don't see any
major problem with having a delay of up to 4 hours between syncs.
My database is a little shy ...
Matthew Nuzum wrote:
I'm eager to hear your thoughts and experiences,

Well, with Replicator you are going to take a pretty big hit initially
during the full sync, but then you could use batch replication and only
replicate every 2-3 hours.

Sincerely,
Joshua D. Drake

Thanks, I'm looking ...
On Tue, 12 Apr 2005 08:41:55 -0500, Dave Held wrote:
-----Original Message-----
From: Keith Worthington [mailto:[EMAIL PROTECTED]]
Sent: Monday, April 11, 2005 7:44 PM
To: Neil Conway
Cc: PostgreSQL Perform
Subject: Re: [PERFORM] 4 way JOIN using aliases

Neil Conway wrote:
Keith ...
-----Original Message-----
From: Keith Worthington [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 12, 2005 1:14 PM
To: Dave Held; PostgreSQL Perform
Subject: Re: [PERFORM] 4 way JOIN using aliases

I run vacuumdb with the analyze option every morning via a
cron job. In my ignorance I ...
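For reference, vacuumdb with the analyze option simply issues the
following SQL against the target database:

-- Equivalent to vacuumdb --analyze: reclaims dead tuples and
-- refreshes the planner statistics in one pass.
VACUUM ANALYZE;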
[EMAIL PROTECTED] (Joshua D. Drake) writes:
So, my question is this: my server currently works great,
performance-wise. I need to add fail-over capability, but I'm
afraid that introducing a stressful task such as replication will
hurt my server's performance. Is there any foundation to my fears?
Hi all,
I've just noticed an interesting behaviour with PGSQL. My software is
made up of a few different modules that interact through a PGSQL database.
Almost every query they do is an individual transaction, and there is a
good reason for that. After every query there is some processing
done by ...
Slavisa Garic [EMAIL PROTECTED] writes:
... Now, the
interesting behaviour is this. I ran netstat on the machine where
my software is running and searched for TCP connections to my PGSQL
server. What I found was hundreds of lines like this:
tcp        0      0 remus.dstc.monash:43001 ...
Tom Lane [EMAIL PROTECTED] writes:
Slavisa Garic [EMAIL PROTECTED] writes:
... Now, the
interesting behaviour is this. I ran netstat on the machine where
my software is running and searched for TCP connections to my PGSQL
server. What I found was hundreds of lines like this:
Greg Stark [EMAIL PROTECTED] writes:
Tom Lane [EMAIL PROTECTED] writes:
This is a network-level issue: the TCP stack on your machine knows the
connection has been closed, but it hasn't seen an acknowledgement of
that fact from the other machine, and so it's remembering the connection
number ...
Hi Greg,
This is not a Windows server. Both server and client are the same
machine (done for testing purposes) and it is a Fedora Core 2 machine.
This also happens on a Debian server and client, in which case they were
two separate machines.
There are thousands (2000+) of these waiting around and each ...