Re: [PERFORM] FOREIGN KEYS vs PERFORMANCE

2006-04-11 Thread Tom Lane
"Jim C. Nasby" <[EMAIL PROTECTED]> writes: > On Wed, Apr 12, 2006 at 08:06:17AM +0900, Michael Glaesemann wrote: >> ... Anything you code in >> your application is more likely to contain bugs or miss corner cases >> that would allow referential integrity to be violated. PostgreSQL has >> been

Re: [PERFORM] Restore performance?

2006-04-11 Thread Christopher Kings-Lynne
Well, your pg_dump command lost your BLOBs since the plain text format
doesn't support them.

Well, no.. they are stored as BYTEA not Large Objects.. They are encoded in
ASCII in the pg_dump output.

As a side note: plain text dump format in 8.1 supports LOBs

Re: [PERFORM] FOREIGN KEYS vs PERFORMANCE

2006-04-11 Thread Jim C. Nasby
On Wed, Apr 12, 2006 at 08:06:17AM +0900, Michael Glaesemann wrote:
>
> On Apr 12, 2006, at 4:13 , Rodrigo Sakai wrote:
>
> > I think this is an old question, but I want to know if it really
> > is well worth to not create some foreign keys and deal with the
> > referential integrity at applica

Re: [PERFORM] FOREIGN KEYS vs PERFORMANCE

2006-04-11 Thread Michael Glaesemann
On Apr 12, 2006, at 4:13 , Rodrigo Sakai wrote:

I think this is an old question, but I want to know if it really is well
worth to not create some foreign keys and deal with the referential
integrity at application-level?

If I had to choose between one or the other, I'd leave all ref

Re: [PERFORM] freebsd/softupdates for data dir

2006-04-11 Thread Jim C. Nasby
On Thu, Apr 06, 2006 at 09:45:34AM -0400, Vivek Khera wrote:
>
> On Apr 5, 2006, at 6:07 PM, Jim Nasby wrote:
>
>> More importantly, it allows the system to come up and do fsck in
>> the background. If you've got a large database that's a pretty big
>> benefit.
>
> That's a UFS2 feature,

Re: [PERFORM] pgmemcache

2006-04-11 Thread Jim C. Nasby
On Tue, Apr 04, 2006 at 12:24:42AM -0700, C Storm wrote:
> I was wondering if anyone on the list has a successful installation of
> pgmemcache running that uses LISTEN/NOTIFY to signal a successfully
> completed transaction, i.e., to get around the fact that TRIGGERS are
> transaction unaware. O
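For illustration, a minimal sketch of the NOTIFY-at-commit pattern under
discussion; the table, trigger, and channel names below are invented, not
taken from the thread. Because NOTIFY messages are only delivered to
listeners after the transaction commits, the listening cache-maintenance
process never acts on work that is later rolled back.

-- Hypothetical trigger function: queue a notification for the listener.
CREATE OR REPLACE FUNCTION notify_cache_refresh() RETURNS trigger AS $$
BEGIN
    -- Queued inside the transaction, delivered only after a successful COMMIT.
    NOTIFY cache_refresh;
    RETURN NULL;   -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

-- Fire once per modifying statement on a hypothetical table.
CREATE TRIGGER items_cache_refresh
    AFTER INSERT OR UPDATE OR DELETE ON items
    FOR EACH STATEMENT
    EXECUTE PROCEDURE notify_cache_refresh();

-- The process that updates memcached runs a listening session:
LISTEN cache_refresh;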

Re: [PERFORM] Sequential scan instead of using index

2006-04-11 Thread Tom Lane
"Harry Hehl" <[EMAIL PROTECTED]> writes: > Nested Loop IN Join (cost=21432.32..951981.42 rows=204910 width=217) >Join Filter: ("outer".objectid = "inner".ref_oid) >-> Append (cost=0.00..8454.10 rows=204910 width=217) > -> Seq Scan on omfile (cost=0.00..8428.20 rows=204320 > w

Re: [PERFORM] Sequential scan instead of using index

2006-04-11 Thread Harry Hehl
Mark,

> If you can upgrade to 8.1.(3), then the planner can consider paths that
> use *both* the indexes on srcobj and dstobj (which would probably be the
> business!).

Yes, 8.1.3 resolved this issue. Thanks. However, I am still getting seq
scans instead of index scans for other queries. For example: select

Re: [PERFORM] Sequential scan instead of using index

2006-04-11 Thread markir
Quoting "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>: > Hmm - that first query needs to do a sort, so you might want to > experiment with > the sort_mem parameter Oops - I mean work_mem... ---(end of broadcast)--- TIP 5: don't forget to increase your f

Re: [PERFORM] Sequential scan instead of using index

2006-04-11 Thread markir
Quoting Harry Hehl <[EMAIL PROTECTED]>:
> Mark,
>
> (snippage) However I am still getting seq scans on indexes for other
> queries. For example:
>
> select * from omfile where ( objectid in ( select distinct(ref_oid)
> from ts ) );
>
> objectid & ref_oid are non-unique indexes
> omimagefile
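A hedged sketch of how one might investigate this, using the table and column
names from the quoted query (the DISTINCT inside IN is semantically redundant,
and an explicit join formulation is sometimes worth comparing):

-- Compare the plans of the original form and a join formulation:
EXPLAIN ANALYZE
SELECT * FROM omfile
 WHERE objectid IN (SELECT ref_oid FROM ts);

EXPLAIN ANALYZE
SELECT o.*
  FROM omfile o
  JOIN (SELECT DISTINCT ref_oid FROM ts) t ON t.ref_oid = o.objectid;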

Re: [PERFORM] Encouraging multi-table join order

2006-04-11 Thread Tom Lane
Dan Harris <[EMAIL PROTECTED]> writes:
> I wonder why the estimates were so far off the first time? This table
> has been ANALYZED regularly ever since creation.

Probably just that you need a bigger sample size for such a large table.
We've been arguing ever since 7.2 about what the default stat
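A minimal sketch of the kind of adjustment being hinted at, using the column
discussed later in the thread (eventactivity.incidentid); the target value
here is only an example:

-- Raise the per-column statistics target, then re-sample: ANALYZE reads
-- roughly 300 * target rows, so a larger target means a larger sample.
ALTER TABLE eventactivity ALTER COLUMN incidentid SET STATISTICS 250;
ANALYZE eventactivity (incidentid);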

Re: [PERFORM] Encouraging multi-table join order

2006-04-11 Thread Dan Harris
Tom Lane wrote:

What the stats entry is saying is that the most common entries occur about
75000 times apiece (0.0017 * 45e6), which is what's scaring the planner
here ;-). I think those frequencies are artificially high though. The
default statistics sample size is 3000 rows (300 * statist

Re: [PERFORM] Encouraging multi-table join order

2006-04-11 Thread Tom Lane
Dan Harris <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> What does the pg_stats entry for eventactivity.incidentid
>> contain?
> {P043190299,P051560740,P052581036,P052830218,P053100679,P053190889,P060370845,P042070391,P042690319,P043290117}
>
> |
> {0.0017,0.0017,0.0017,0.001
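The values quoted above come from the pg_stats system view; a query of roughly
this shape reproduces them:

-- Inspect the planner's most-common-values list and frequencies for the
-- column under discussion.
SELECT most_common_vals, most_common_freqs, n_distinct
  FROM pg_stats
 WHERE tablename = 'eventactivity'
   AND attname = 'incidentid';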

Re: [PERFORM] Stored Procedure Performance

2006-04-11 Thread Merlin Moncure
On 4/11/06, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> Merlin Moncure wrote:
> > pl/pgsql procedures are a very thin layer over the query engine.
> > Generally, they run about the same speed as SQL but you are not making
> > apples to apples comparison. One of the few but annoying limitations
> >

Re: [PERFORM] Encouraging multi-table join order

2006-04-11 Thread Dan Harris
Tom Lane wrote:

So it's estimating 5775 cost units per probe into eventactivity, which is
pretty high --- it must think that a lot of rows will be retrieved by the
index (way more than the 20 or so it thinks will get past the filter
condition). What does the pg_stats entry for eventactivity

Re: [PERFORM] FOREIGN KEYS vs PERFORMANCE

2006-04-11 Thread Scott Marlowe
On Tue, 2006-04-11 at 14:13, Rodrigo Sakai wrote:
> Hi,
>
> I think this is an old question, but I want to know if it really is
> well worth to not create some foreign keys and deal with the
> referential integrity at application-level?
>
> Specifically, the system we are developing is a s

[PERFORM] FOREIGN KEYS vs PERFORMANCE

2006-04-11 Thread Rodrigo Sakai
Hi,

I think this is an old question, but I want to know if it is really
worthwhile to not create some foreign keys and deal with the referential
integrity at application-level?

Specifically, the system we are developing is a server/client architecture
in which the server is the datab
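For illustration, a minimal sketch of the trade-off being asked about; the
tables and names below are invented, not from the thread. Declaring the
constraint lets the database enforce referential integrity instead of
application code:

-- Hypothetical parent/child pair with a declared foreign key.
CREATE TABLE customers (
    customer_id integer PRIMARY KEY,
    name        text NOT NULL
);

CREATE TABLE orders (
    order_id    integer PRIMARY KEY,
    customer_id integer NOT NULL
                REFERENCES customers (customer_id)  -- the foreign key
                ON DELETE RESTRICT,                 -- refuse deletes that would orphan orders
    placed_at   timestamp NOT NULL DEFAULT now()
);

-- With the constraint declared, this insert fails unless customer 42 exists,
-- with no application-side check required:
INSERT INTO orders (order_id, customer_id) VALUES (1, 42);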

Re: [PERFORM] Indexes with descending date columns

2006-04-11 Thread Markus Schaber
Hi, Bruce,

Bruce Momjian wrote:
>> Ahh. There's a hack to do that by defining a new opclass that reverses <
>> and >, and then doing ORDER BY project_id, id, date USING new_opclass.
>>
>> I think there's a TODO about this, but I'm not sure...
>
> Yes, and updated:
>
> * Allow the creation of

Re: [PERFORM] Stored Procedure Performance

2006-04-11 Thread Alvaro Herrera
Merlin Moncure wrote:
> On 4/11/06, Simon Dale <[EMAIL PROTECTED]> wrote:
> > I'm trying to evaluate PostgreSQL as a database that will have to store a
> > high volume of data and access that data frequently. One of the features on
> > our wish list is to be able to use stored procedures to access

Re: [PERFORM] Stored Procedure Performance

2006-04-11 Thread H.J. Sanders
Hello,

At my little machine (Pentium 4, 2.8 GHz, 256 MB RAM, SuSE Linux 9) I can
process about 10 records a minute using the following setup:

    begin work
    begin for processing
        if 10.000 records processed:
            commit work
            begin work
        end if
    end for

Re: [PERFORM] Stored Procedure Performance

2006-04-11 Thread Christopher Browne
[EMAIL PROTECTED] ("Simon Dale") wrote: > Event with the planning removed, the function still > performs > significantly slower than the raw SQL. Is that normal or am I doing something > wrong > with the creation or calling of the > function? I'd expect this, yes. You're doing something via "st

Re: [PERFORM] Takes too long to fetch the data from database

2006-04-11 Thread Tom Lane
Richard Huxton writes:
> soni de wrote:
>> NOTICE: QUERY PLAN:
>> Sort (cost=17.13..17.13 rows=1 width=16) (actual time=619140.18..619140.29 rows=288 loops=1)
>> -> Index Scan using lan_pkey on lan (cost=0.00..17.12 rows=1 width=16) (actual time=7564.44..619121.61 rows=288 loops=1)
>

Re: [PERFORM] Stored Procedure Performance

2006-04-11 Thread Merlin Moncure
On 4/11/06, Simon Dale <[EMAIL PROTECTED]> wrote:
> I'm trying to evaluate PostgreSQL as a database that will have to store a
> high volume of data and access that data frequently. One of the features on
> our wish list is to be able to use stored procedures to access the data and
> I was wondering

Re: [PERFORM] Takes too long to fetch the data from database

2006-04-11 Thread Merlin Moncure
> pdb=# explain analyze SELECT sdate, stime, rbts from lan WHERE (
> ( bname = 'pluto' ) AND ( cno = 17 ) AND ( pno = 1 ) AND ( ( sdate
> >= '2004-07-21' ) AND ( sdate <= '2004-07-21' ) ) ) ORDER BY sdate, stime
> ;

this query would benefit from an index on bname, cno, pno, sdate

create i
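A hedged guess at the full statement the truncated line above points toward,
built from the table and columns in the quoted query (the index name is
invented):

-- Equality columns first, then the range/sort column.
CREATE INDEX lan_bname_cno_pno_sdate_idx ON lan (bname, cno, pno, sdate);

-- Re-check the plan afterwards:
EXPLAIN ANALYZE
SELECT sdate, stime, rbts
  FROM lan
 WHERE bname = 'pluto' AND cno = 17 AND pno = 1
   AND sdate >= '2004-07-21' AND sdate <= '2004-07-21'
 ORDER BY sdate, stime;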

Re: [PERFORM] Restore performance?

2006-04-11 Thread Jesper Krogh
> Well, your pg_dump command lost your BLOBs since the plain text
> format doesn't support them.

Well, no.. they are stored as BYTEA not Large Objects.. They are encoded in
ASCII in the pg_dump output.

> But once you use the -Fc format on your dump and enable blob backups,
> you can speed up relo
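For reference, a custom-format dump and restore of the kind mentioned above
might look like this; the database and file names are placeholders, and -b
(include blobs) matters only for Large Objects, not for BYTEA columns:

pg_dump -Fc -b -f mydb.dump mydb
pg_restore -d mydb_restored mydb.dump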

Re: [PERFORM] Takes too long to fetch the data from database

2006-04-11 Thread Richard Huxton
soni de wrote:

I have flushed the database, so currently records in the "lan" table are:
665280, but records can be increased to more than 1GB and in that case it
takes more than 1 hour.

Below is explain analyze output taken from the table having 665280 records:

pdb=# explain analyze SELECT sdate, s

Re: [PERFORM] Stored Procedure Performance

2006-04-11 Thread Richard Huxton
Rajesh Kumar Mallah wrote:

On 4/11/06, Simon Dale <[EMAIL PROTECTED]> wrote:

I'm trying to evaluate PostgreSQL as a database that will have to store a
high volume of data and access that data frequently. One of the features on
our wish list is to be able to use stored procedures to access the d

Re: [PERFORM] Stored Procedure Performance

2006-04-11 Thread Rajesh Kumar Mallah
On 4/11/06, Simon Dale <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> I'm trying to evaluate PostgreSQL as a database that will have to store a
> high volume of data and access that data frequently. One of the features on
> our wish list is to be able to use stored procedures to access the data and

Re: [PERFORM] Stored Procedure Performance

2006-04-11 Thread hubert depesz lubaczewski
On 4/11/06, Simon Dale <[EMAIL PROTECTED]> wrote:

I'm trying to evaluate PostgreSQL as a database that will have to store a
high volume of data and access that data frequently. One of the features on
our wish list is to be able to use stored procedures to access the data and
I was wond

Re: [PERFORM] Takes too long to fetch the data from database

2006-04-11 Thread soni de
I have flushed the database, so currently records in the "lan" table are:
665280, but records can be increased to more than 1GB and in that case it
takes more than 1 hour.

Below is explain analyze output taken from the table having 665280 records:

pdb=# explain analyze SELECT sdate, stime, rbts fro

[PERFORM] Stored Procedure Performance

2006-04-11 Thread Simon Dale
Hi,

I'm trying to evaluate PostgreSQL as a database that will have to store a
high volume of data and access that data frequently. One of the features on
our wish list is to be able to use stored procedures to access the data and
I was wondering if it is usual for stored procedures to p
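As a hedged illustration of the comparison discussed in the replies above
(the schema and function names are invented, not from the thread), here is a
set-returning pl/pgsql function next to the raw query it wraps:

-- Invented example schema.
CREATE TABLE measurements (
    id      serial  PRIMARY KEY,
    sensor  integer NOT NULL,
    reading numeric NOT NULL
);

-- A thin pl/pgsql wrapper around a single query. Note that a pl/pgsql
-- set-returning function materializes its whole result before returning it,
-- which is one reason it can look slower than the equivalent plain SELECT.
CREATE OR REPLACE FUNCTION readings_for_sensor(p_sensor integer)
RETURNS SETOF measurements AS $$
DECLARE
    r measurements%ROWTYPE;
BEGIN
    FOR r IN SELECT * FROM measurements WHERE sensor = p_sensor LOOP
        RETURN NEXT r;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;

-- The raw query for comparison, and the function call:
SELECT * FROM measurements WHERE sensor = 1;
SELECT * FROM readings_for_sensor(1);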