Hi Tom,
I was profiling a case involving UPDATEs into a table with too many
indexes (brought to mind by mysql's sql-bench, about which more later) and
got this rather surprising result for routines costing more than 1% of the
total runtime:
(cut)
I suppose that the bulk of the CPU cycles
There's a requirement for the ODBC and JDBC drivers to be able to
return generated keys. AFAIR, implementing INSERT ... RETURNING would solve
this problem.
I think it still leaves us with a number of problems:
1) the code has to know in advance to issue the INSERT ... RETURNING.
2) it also has to know in
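For context, a minimal sketch of the kind of syntax under discussion (not
implemented at the time of this thread; table and column names here are
illustrative):

    CREATE TABLE orders (
        id   serial PRIMARY KEY,  -- the generated key the driver wants back
        item text NOT NULL
    );

    -- One round trip: insert and get the generated key back.
    INSERT INTO orders (item) VALUES ('widget') RETURNING id;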
Mark Cave-Ayland [EMAIL PROTECTED] writes:
Wow, a 64-bit CRC does seem excessive, especially when going back to Zmodem
days where a 50-100k file seemed to be easily protected by a 32-bit CRC. I'm
sure there are some error rates somewhere dependent upon the polynomial and
the types of error
Is anything wrong with anoncvs.postgresql.org (66.98.251.159) or
its connectivity? Attempts to reach it from two different network
locations elicit ICMP Host Unreachable from gphou-66-98-241-125.ev1.net
(66.98.241.125).
--
Michael Fuhr
http://www.fuhr.org/~mfuhr/
On Mon, Mar 07, 2005 at 08:40:08AM -0700, Michael Fuhr wrote:
Is anything wrong with anoncvs.postgresql.org (66.98.251.159) or
its connectivity? Attempts to reach it from two different network
locations elicit ICMP Host Unreachable from gphou-66-98-241-125.ev1.net
(66.98.241.125).
I forgot
On Sun, Mar 06, 2005 at 21:12:03 -0800,
Josh Berkus josh@agliodbs.com wrote:
Also, I think you should be recording the compile-time switches used on each
machine and indexing them individually. I'd hate to find out that, for
example, we'd broken --with-odbc and didn't know it because
On Mon, 7 Mar 2005, Michael Fuhr wrote:
On Mon, Mar 07, 2005 at 08:40:08AM -0700, Michael Fuhr wrote:
Is anything wrong with anoncvs.postgresql.org (66.98.251.159) or
its connectivity? Attempts to reach it from two different network
locations elicit ICMP Host Unreachable from
Tom Lane wrote:
Mark Cave-Ayland [EMAIL PROTECTED] writes:
Wow, a 64-bit CRC does seem excessive, especially when going back to Zmodem
days where a 50-100k file seemed to be easily protected by a 32-bit CRC. I'm
sure there are some error rates somewhere dependent upon the polynomial and
the
Is there a reason PostgreSQL doesn't record vacuum/analyze and dump times in
pg_class (or another table)? This seems
like it would be a very helpful feature.
For pg_dump I would add an option --record=YES|NO;
for vacuum and analyze I would add a NORECORD|RECORD option.
Jim
On Mon, 7 Mar 2005, Jim Buttafuoco wrote:
Is there a reason PostgreSQL doesn't record vacuum/analyze and dump times in
pg_class (or another table)? This seems
like it would be a very helpful feature.
For pg_dump I would add an option --record=YES|NO;
for vacuum and analyze I would add a
If there are many identical values in att0, are you sure a sequential
scan isn't more efficient? Also, are you sure the index isn't working
well? It seems to me since you have the table clustered, it might be
fairly efficient as-is (it would get a huge benefit from the spatial
locality of
This is what I was thinking about doing. It would be nicer if the system just
did it for me. I have 100+
databases with 100+ tables in each and run pg_autovacuum on them all. I
also do nightly dumps of any database
that has been modified (my application keeps track). I was just
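A minimal sketch of that client-side workaround (everything here is
illustrative; maint_log is a hypothetical user table, not a system catalog):

    CREATE TABLE maint_log (
        relname name        NOT NULL,
        action  text        NOT NULL,   -- 'vacuum', 'analyze', or 'dump'
        done_at timestamptz NOT NULL DEFAULT now(),
        PRIMARY KEY (relname, action)
    );

    -- After the nightly script runs VACUUM on a table, it records the time:
    BEGIN;
    DELETE FROM maint_log WHERE relname = 'mytable' AND action = 'vacuum';
    INSERT INTO maint_log (relname, action) VALUES ('mytable', 'vacuum');
    COMMIT;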
Jim Buttafuoco wrote:
Is there a reason PostgreSQL doesn't record vacuum/analyze and dump times in
pg_class (or another table)? This seems
like it would be a very helpful feature.
For pg_dump I would add an option --record=YES|NO;
for vacuum and analyze I would add a NORECORD|RECORD option.
In that case, a sequential scan is faster, but perhaps the planner doesn't
know that ahead of time. Try increasing the statistics target if you haven't
already, and then run ANALYZE again. If the planner sees a range,
perhaps it assumes that it is a highly selective range, when in fact, it
consists of
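For example (a sketch reusing the table and column from this thread; the
target value 100 is arbitrary):

    ALTER TABLE tc20 ALTER COLUMN att1 SET STATISTICS 100;
    ANALYZE tc20;

    -- then re-check the plan:
    EXPLAIN ANALYZE SELECT att0 FROM tc20 WHERE att1 <= 9 AND att1 >= 0;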
But what happens if I go in and manually vacuum a table (either because I just
deleted a bunch of records or whatever)?
This is why I think the backend should record the date in pg_class.
-- Original Message ---
From: Matthew T. O'Connor matthew@zeut.net
To: [EMAIL PROTECTED]
Ioannis Theoharis [EMAIL PROTECTED] writes:
select att0 from tc20 where att1<=9 AND att1>=0
plan:
Index Scan using inst_id_idx on tc20 (cost=0.00..161603.06
rows=106 width=1004) (actual time=41.21..101917.36 rows=100 loops=1)
Index Cond: ((att1 <= 9)
Right, once autovacuum is integrated, then I think vacuum and analyze
should update the autovacuum table; this way autovacuum won't redundantly
vacuum tables that were just vacuumed manually.
Jim Buttafuoco wrote:
But what happens if I go in and manually vacuum a table (either because I just
I agree 100%.
-- Original Message ---
From: Matthew T. O'Connor matthew@zeut.net
To: [EMAIL PROTECTED]
Cc: pgsql-hackers@postgresql.org
Sent: Mon, 07 Mar 2005 16:15:45 -0500
Subject: Re: [HACKERS] Recording vacuum/analyze/dump times
Right, once autovacuum is integrated, then I
Hi all,
running a 7.4.5 engine, it happened to me with another table
where a single vacuum full was not freeing enough pages;
here is the verbose vacuum full output. As you can see, only at
the end: truncated 8504 to 621 pages.
I use pg_autovacuum and it's not enough. I'll schedule
a nightly vacuum full again.
On Mon, 2005-03-07 at 09:39 -0500, Tom Lane wrote:
Mark Cave-Ayland [EMAIL PROTECTED] writes:
Wow, a 64-bit CRC does seem excessive, especially when going back to Zmodem
days where a 50-100k file seemed to be easily protected by a 32-bit CRC. I'm
sure there are some error rates somewhere
Hi all,
running a 7.4.5 engine, I'm facing this bad plan:
empdb=# explain analyze SELECT
name,url,descr,request_status,url_status,size_mb,estimated_start,request_time_stamp
empdb-# FROM v_sc_user_request
empdb-# WHERE
empdb-# login =
Simon Riggs [EMAIL PROTECTED] writes:
Well, we're using the CRC in 3 separate places...
(1) for xlog records
(2) for complete blocks copied to xlog
(3) for control files
For (1), records are so short that probably CRC16 would be sufficient
without increasing the error rate noticeably.
I
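For rough calibration (standard CRC theory, not a figure from this thread):
for corruption spans wider than the CRC itself, a well-chosen k-bit CRC lets
a random error pass undetected with probability about 2^-k, i.e. roughly
1.5e-5 for CRC-16, 2.3e-10 for CRC-32, and 5.4e-20 for CRC-64.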
Tom Lane [EMAIL PROTECTED] writes:
Simon Riggs [EMAIL PROTECTED] writes:
For (1), records are so short that probably CRC16 would be sufficient
without increasing the error rate noticeably.
The control files are so short that CRC16 would be plenty.
It's not really the size of the data
Folks,
Although the SQL:2003 command MERGE has not yet been implemented in
PostgreSQL, I'm guessing that there are best practices for how to
implement the MERGE functionality.
To recap, MERGE means (roughly) INSERT the tuple if no tuple matches
certain criteria, otherwise UPDATE using similar
The correct solution, as far as I can tell, is to acquire a LOCK on
the table IN SHARE MODE at the beginning of the transaction, but this
has (at least for many applications) unacceptable performance
characteristics. Accepting that there is a slight risk of a race
condition when *not* locking the
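A sketch of that locking approach (table and key are illustrative; SHARE ROW
EXCLUSIVE is used here instead of plain SHARE because it is self-exclusive,
so two concurrent mergers queue at the LOCK rather than deadlocking after
both have checked for the row):

    BEGIN;
    LOCK TABLE accounts IN SHARE ROW EXCLUSIVE MODE;

    UPDATE accounts SET balance = balance + 100 WHERE id = 42;

    -- If the UPDATE matched no row, insert one instead:
    INSERT INTO accounts (id, balance)
    SELECT 42, 100
    WHERE NOT EXISTS (SELECT 1 FROM accounts WHERE id = 42);
    COMMIT;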
On Tue, Mar 08, 2005 at 11:45:19AM +0800, Christopher Kings-Lynne wrote:
The correct solution, as far as I can tell, is to acquire a LOCK
on the table IN SHARE MODE at the beginning of the transaction, but
this has (at least for many applications) unacceptable performance
characteristics.
I'm sure this is something simple, but my Linux development machine
running Postgresql 7.4.7 has very different behavior than my FreeBSD
production machine running Postgresql 7.4.2. I've got the same table
definitions, but I do have different data in the databases.
On FreeBSD, indexes do what I
Luckily, PG 8 is available for this. Do you have a short example?
No, and I think it should be in the manual as an example.
You will need to write a loop that uses exception handling to detect
unique_violation.
Chris
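A minimal 8.0 plpgsql sketch of such a loop (the table db(key, data) and the
function name merge_db are illustrative):

    CREATE FUNCTION merge_db(k integer, v text) RETURNS void AS $$
    BEGIN
        LOOP
            -- Try the UPDATE first; FOUND says whether a row matched.
            UPDATE db SET data = v WHERE key = k;
            IF FOUND THEN
                RETURN;
            END IF;
            -- No row yet: try to INSERT.  If a concurrent session got
            -- there first, catch the unique_violation and retry the UPDATE.
            BEGIN
                INSERT INTO db (key, data) VALUES (k, v);
                RETURN;
            EXCEPTION WHEN unique_violation THEN
                -- loop around and try the UPDATE again
            END;
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;

Called as, e.g., SELECT merge_db(1, 'david'); each caller either updates,
inserts, or retries after a unique_violation.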
Gaetano,
Hi all,
running a 7.4.5 engine, I'm facing this bad plan:
Please take this to the PGSQL-PERFORMANCE mailing list; that's what that list
exists for.
Or IRC, where I know you are sometimes. But not -Hackers.
Thanks!
--
Josh Berkus
Aglio Database Solutions
San Francisco
On Sat, Mar 05, 2005 at 02:18:37AM -0600, Christopher Farley wrote:
I'm sure this is something simple, but my Linux development machine
running Postgresql 7.4.7 has very different behavior than my FreeBSD
production machine running Postgresql 7.4.2. I've got the same table
definitions, but I