Re: [PERFORM] Optimization inner join

2017-01-19 Thread Gustavo Rezende Montesino
On 19/01/2017 12:13, Tom Lane wrote: Gustavo Rezende Montesino writes: Being the client in question, I would like to make a little remark: What we thought could be optimized here at first is on the row estimate of the index scan; which could take null_frac

Re: [PERFORM] Optimization inner join

2017-01-19 Thread Phillip Couto
The picture is becoming clearer now. So to recap, the issue is in the plan selection not utilizing the null_frac statistic properly to skip what seems to be, in your case, 99% of the rows, which are NULL for the field the join is happening on and would be discarded anyway. For completeness do you

Re: [PERFORM] Optimization inner join

2017-01-19 Thread Tom Lane
Gustavo Rezende Montesino writes: > Being the client in question, I would like to make a little remark: What > we thought could be optimized here at first is on the row estimate of > the index scan; which could take null_frac into account. To put things > into

Re: [PERFORM] Optimization inner join

2017-01-19 Thread Phillip Couto
I apologize; my statement about NULL being used to join is incorrect, as both Vitalii and Gustavo have pointed out in their respective replies. - Phillip Couto > On Jan 19, 2017, at 08:30, Vitalii Tymchyshyn wrote: > > > Hi. > > In SQL "null == any value"

Re: [PERFORM] Optimization inner join

2017-01-19 Thread Vitalii Tymchyshyn
Hi. In SQL "null == any value" resolves to false, so the optimizer can safely skip nulls from either side, if any, for the inner join. Best regards, Vitalii Tymchyshyn NULL is still a value that may be paired with a NULL in a.a > > The only optimization I could see is if the a.a column has NOT NULL >
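Vitalii's point — an equality join condition is never true for a NULL key, so an inner join can safely drop NULLs from either side — is standard SQL behavior and easy to check. A small sketch using Python's sqlite3 as a stand-in engine (table and column names mirror the thread's example, not any real schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE a (a INTEGER)")
cur.execute("CREATE TABLE b (b INTEGER)")
cur.executemany("INSERT INTO a VALUES (?)", [(1,), (2,), (3,)])
cur.executemany("INSERT INTO b VALUES (?)", [(1,), (None,), (None,)])

# NULL = <anything> evaluates to unknown, never true, so rows of b
# with a NULL join key cannot appear in the inner-join result.
rows = cur.execute(
    "SELECT a.a, b.b FROM a INNER JOIN b ON b.b = a.a"
).fetchall()
print(rows)  # [(1, 1)] - only the non-NULL key matches
```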

Re: [PERFORM] Optimization inner join

2017-01-19 Thread Gustavo Rezende Montesino
Hello, On 19/01/2017 11:04, Clailson wrote: Hi Phillip. Not sure if it is all that common. Curious what if you put b.b IS NOT NULL in the WHERE statement? It's the question. In the company I work with, one of my clients asked me: "Why PostgreSQL does not remove rows with null in

Re: [PERFORM] Optimization inner join

2017-01-19 Thread Phillip Couto
Ah ok, that makes sense. I am curious if there is actually a performance benefit to doing that. In PostgreSQL, as per the execution plan you provided, the Merge Join joins both sets after they have been sorted. If they are sorted already then the NULLs will all be grouped at the beginning or end.

Re: [PERFORM] Optimization inner join

2017-01-19 Thread Clailson
Hi Phillip. The only optimization I could see is if the a.a column has NOT NULL defined while b.b does not have NOT NULL defined. a.a is the primary key on table a and b.b is the foreign key on table b. Table "public.a" | Column | Type | Modifiers

Re: [PERFORM] Optimization inner join

2017-01-19 Thread Phillip Couto
NULL is still a value that may be paired with a NULL in a.a The only optimization I could see is if the a.a column has NOT NULL defined while b.b does not have NOT NULL defined. Not sure if it is all that common. Curious what if you put b.b IS NOT NULL in the WHERE statement?

[PERFORM] Optimization inner join

2017-01-19 Thread Clailson
Hi, Is there something in the roadmap to optimize the inner join? I have this situation: Table b has 400 rows with null in the column b. explain analyze select * from a inner join b on (b.b = a.a); "Merge Join (cost=0.55..65.30 rows=599 width=16) (actual time=0.030..1.173 rows=599
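The effect under discussion — rows of b whose join key is NULL can never appear in the inner-join result, so filtering them out early changes only the work done, never the answer — can be sketched as follows. This uses Python's sqlite3 with made-up row counts loosely mirroring the 400-NULL report, not the poster's actual data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE a (a INTEGER PRIMARY KEY)")
cur.execute("CREATE TABLE b (b INTEGER)")
cur.executemany("INSERT INTO a VALUES (?)", [(i,) for i in range(1, 600)])
# ~400 rows of b carry a NULL join key, as in the report above.
cur.executemany("INSERT INTO b VALUES (?)",
                [(i,) for i in range(1, 200)] + [(None,)] * 400)

plain = cur.execute(
    "SELECT * FROM a JOIN b ON b.b = a.a ORDER BY a.a").fetchall()
filtered = cur.execute(
    "SELECT * FROM a JOIN b ON b.b = a.a "
    "WHERE b.b IS NOT NULL ORDER BY a.a").fetchall()

# The explicit IS NOT NULL filter removes no result rows.
assert plain == filtered
print(len(plain))  # 199
```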

[PERFORM] optimization join on random value

2015-05-03 Thread Anton Bushmelev
Hello gurus of postgres, is it possible to tune a query with a join on a random string? I know that it is not a real-life example, but I need it for tests. soe=# explain soe-# SELECT ADDRESS_ID, soe-# CUSTOMER_ID, soe-# DATE_CREATED, soe-# HOUSE_NO_OR_NAME, soe-#

Re: [PERFORM] optimization join on random value

2015-05-03 Thread Heikki Linnakangas
On 05/04/2015 12:23 AM, Anton Bushmelev wrote: Hello gurus of postgres, is it possible to tune a query with a join on a random string? I know that it is not a real-life example, but I need it for tests. soe=# explain soe-# SELECT ADDRESS_ID, soe-# CUSTOMER_ID, soe-#

[PERFORM] Optimization idea for long IN() lists

2014-08-08 Thread Josh Berkus
Folks, So one thing we tell users who have chronically long IN() lists is that they should create a temporary table and join against that instead. Other than not having the code, is there a reason why PostgreSQL shouldn't do something like this behind the scenes, automatically? -- Josh Berkus
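The workaround Josh describes — loading the IN() values into a temporary table and joining against it — returns the same rows as the literal list. A rough illustration via Python's sqlite3, with hypothetical table and key names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
cur.executemany("INSERT INTO t VALUES (?, ?)",
                [(i, f"row{i}") for i in range(1000)])

wanted = list(range(0, 1000, 7))  # stands in for a chronically long IN() list

# Form 1: the literal IN() list.
in_sql = ("SELECT id FROM t WHERE id IN (%s) ORDER BY id"
          % ",".join("?" * len(wanted)))
via_in = cur.execute(in_sql, wanted).fetchall()

# Form 2: a temporary table holding the values, joined against t.
cur.execute("CREATE TEMP TABLE keys (id INTEGER PRIMARY KEY)")
cur.executemany("INSERT INTO keys VALUES (?)", [(k,) for k in wanted])
via_join = cur.execute(
    "SELECT t.id FROM t JOIN keys USING (id) ORDER BY t.id").fetchall()

assert via_in == via_join  # identical result sets
print(len(via_join))  # 143
```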

Re: [PERFORM] Optimization idea for long IN() lists

2014-08-08 Thread Maxim Boguk
On Sat, Aug 9, 2014 at 5:15 AM, Josh Berkus j...@agliodbs.com wrote: Folks, So one thing we tell users who have chronically long IN() lists is that they should create a temporary table and join against that instead. Other than not having the code, is there a reason why PostgreSQL shouldn't

[PERFORM] Optimization required for multiple insertions in PostgreSQL

2011-11-03 Thread siva palanisamy
I basically have 3 tables. One being the core table and the other 2 depend on the 1st. I have the requirement to add up to 7 records in the tables. I do have constraints (primary/foreign keys, index, unique, etc.) set for the tables. I can't go for bulk import (using the COPY command) as there is no
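Since COPY is ruled out, the usual advice for many small inserts is to batch them into a single transaction with one parameterized statement per table, rather than committing each row. A minimal sketch of that pattern using Python's sqlite3 and invented table names (the poster's schema is not shown; the batch size is arbitrary):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE core (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.execute("""CREATE TABLE child (id INTEGER PRIMARY KEY,
                 core_id INTEGER NOT NULL REFERENCES core(id))""")

n = 10000  # arbitrary batch size for illustration
with conn:  # one transaction for the whole batch: one commit, not n commits
    conn.executemany("INSERT INTO core VALUES (?, ?)",
                     ((i, f"name{i}") for i in range(n)))
    conn.executemany("INSERT INTO child VALUES (?, ?)",
                     ((i, i) for i in range(n)))

print(conn.execute("SELECT count(*) FROM core").fetchone()[0])  # 10000
```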

Re: [PERFORM] Optimization required for multiple insertions in PostgreSQL

2011-11-03 Thread Kevin Grittner
siva palanisamy psiv...@gmail.com wrote: I basically have 3 tables. One being the core table and the other 2 depend on the 1st. I have the requirement to add upto 7 records in the tables. I do have constraints (primary foreign keys, index, unique etc) set for the tables. I can't go for

Re: [PERFORM] Optimization required for multiple insertions in PostgreSQL

2011-11-03 Thread Tomas Vondra
On 3 Listopad 2011, 16:52, siva palanisamy wrote: I basically have 3 tables. One being the core table and the other 2 depend on the 1st. I have the requirement to add upto 7 records in the tables. I do have constraints (primary foreign keys, index, unique etc) set for the tables. I can't

Re: [PERFORM] Optimization required for multiple insertions in PostgreSQL

2011-11-03 Thread Kevin Grittner
[Please keep the list copied.] siva palanisamy psiv...@gmail.com wrote: Could you please guide me on how to minimize time consumption? I have PostgreSQL 8.1.4; Linux OS. Well, the first thing to do is to use a supported version of PostgreSQL. More recent releases perform better, for starters.

[PERFORM] optimization

2011-02-28 Thread croolyc
Hi! Can you help me with performance optimization on my machine? I have 8 databases of ca. 1-2GB each. The processor is: processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 23 model name : Intel(R) Xeon(R) CPU E3110 @ 3.00GHz stepping : 10 cpu

Re: [PERFORM] optimization

2011-02-28 Thread Kevin Grittner
croolyc crooli...@gmail.com wrote: can you help me with performance optimization For overall tuning you could start here: http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server If, after some general tuning, you are having problems with slow queries, it is best if you pick one and

Re: [PERFORM] Optimization idea

2010-05-01 Thread Cédric Villemain
2010/4/28 Robert Haas robertmh...@gmail.com: On Mon, Apr 26, 2010 at 5:33 AM, Cédric Villemain cedric.villemain.deb...@gmail.com wrote: In the first query, the planner doesn't use the information of the 2,3,4. It just does a : I'll bet I'll have 2 rows in t1 (I think it should say 3, but it

Re: [PERFORM] Optimization idea

2010-05-01 Thread Cédric Villemain
2010/5/1 Cédric Villemain cedric.villemain.deb...@gmail.com: 2010/4/28 Robert Haas robertmh...@gmail.com: On Mon, Apr 26, 2010 at 5:33 AM, Cédric Villemain cedric.villemain.deb...@gmail.com wrote: In the first query, the planner doesn't use the information of the 2,3,4. It just does a : I'll

Re: [PERFORM] Optimization idea

2010-04-29 Thread Cédric Villemain
2010/4/29 Robert Haas robertmh...@gmail.com: On Wed, Apr 28, 2010 at 5:37 AM, Vlad Arkhipov arhi...@dc.baikal.ru wrote: Even if it will be done it does not solve the original issue. If I understood you right there is now no any decent way of speeding up the query select * from t2 join t1 on

Re: [PERFORM] Optimization idea

2010-04-28 Thread Vlad Arkhipov
2010/4/28 Robert Haas robertmh...@gmail.com: On Mon, Apr 26, 2010 at 5:33 AM, Cédric Villemain cedric.villemain.deb...@gmail.com wrote: In the first query, the planner doesn't use the information of the 2,3,4. It just does a : I'll bet I'll have 2 rows in t1 (I think it should say 3,

Re: [PERFORM] Optimization idea

2010-04-28 Thread Robert Haas
On Wed, Apr 28, 2010 at 5:37 AM, Vlad Arkhipov arhi...@dc.baikal.ru wrote: Even if it will be done it does not solve the original issue. If I understood you right there is now no any decent way of speeding up the query select * from t2 join t1 on t1.t = t2.t where t1.id = X; except of the

Re: [PERFORM] Optimization idea

2010-04-27 Thread Robert Haas
On Mon, Apr 26, 2010 at 5:33 AM, Cédric Villemain cedric.villemain.deb...@gmail.com wrote: In the first query, the planner doesn't use the information of the 2,3,4. It just does a : I'll bet I'll have 2 rows in t1 (I think it should say 3, but it doesn't) So it divide the estimated number of

Re: [PERFORM] Optimization idea

2010-04-26 Thread Cédric Villemain
2010/4/26 Vlad Arkhipov arhi...@dc.baikal.ru: On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov arhi...@dc.baikal.ru wrote: I don't think this is just an issue with statistics, because the same problem arises when I try executing a query like this: I'm not sure how you think this proves

Re: [PERFORM] Optimization idea

2010-04-25 Thread Vlad Arkhipov
On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov arhi...@dc.baikal.ru wrote: I don't think this is just an issue with statistics, because the same problem arises when I try executing a query like this: I'm not sure how you think this proves that it isn't a problem with statistics, but

Re: [PERFORM] Optimization idea

2010-04-23 Thread Robert Haas
On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov arhi...@dc.baikal.ru wrote: I don't think this is just an issue with statistics, because the same problem arises when I try executing a query like this: I'm not sure how you think this proves that it isn't a problem with statistics, but I think

Re: [PERFORM] Optimization idea

2010-04-23 Thread Cédric Villemain
2010/4/23 Robert Haas robertmh...@gmail.com: On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov arhi...@dc.baikal.ru wrote: I don't think this is just an issue with statistics, because the same problem arises when I try executing a query like this: I'm not sure how you think this proves that it

Re: [PERFORM] Optimization idea

2010-04-23 Thread Robert Haas
On Fri, Apr 23, 2010 at 9:09 AM, Cédric Villemain cedric.villemain.deb...@gmail.com wrote: 2010/4/23 Robert Haas robertmh...@gmail.com: On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov arhi...@dc.baikal.ru wrote: I don't think this is just an issue with statistics, because the same problem

Re: [PERFORM] Optimization idea

2010-04-23 Thread Kevin Grittner
Cédric Villemain cedric.villemain.deb...@gmail.com wrote: 2010/4/23 Robert Haas robertmh...@gmail.com: Since all your data is probably fully cached, at a first cut, I might try setting random_page_cost and seq_page_cost to 0.005 or so, and adjusting effective_cache_size to something

Re: [PERFORM] Optimization idea

2010-04-23 Thread Cédric Villemain
2010/4/23 Robert Haas robertmh...@gmail.com: On Fri, Apr 23, 2010 at 9:09 AM, Cédric Villemain cedric.villemain.deb...@gmail.com wrote: 2010/4/23 Robert Haas robertmh...@gmail.com: On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov arhi...@dc.baikal.ru wrote: I don't think this is just an

Re: [PERFORM] Optimization idea

2010-04-23 Thread Robert Haas
On Fri, Apr 23, 2010 at 3:22 PM, Cédric Villemain cedric.villemain.deb...@gmail.com wrote: 2010/4/23 Robert Haas robertmh...@gmail.com: On Fri, Apr 23, 2010 at 9:09 AM, Cédric Villemain cedric.villemain.deb...@gmail.com wrote: 2010/4/23 Robert Haas robertmh...@gmail.com: On Thu, Apr 22, 2010

Re: [PERFORM] Optimization idea

2010-04-23 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes: Hmm. We currently have a heuristic that we don't record a value as an MCV unless it's more frequent than the average frequency. When the number of MCVs is substantially smaller than the number of distinct values in the table this is probably a good

Re: [PERFORM] Optimization idea

2010-04-23 Thread Robert Haas
On Fri, Apr 23, 2010 at 6:53 PM, Tom Lane t...@sss.pgh.pa.us wrote: Robert Haas robertmh...@gmail.com writes: Hmm.  We currently have a heuristic that we don't record a value as an MCV unless it's more frequent than the average frequency.  When the number of MCVs is substantially smaller than

[PERFORM] Optimization idea

2010-04-22 Thread Vlad Arkhipov
Please do this small optimization if it is possible. It seems that the optimizer has all the information to create a fast plan but it does not do that. create temp table t1 (id bigint, t bigint); insert into t1 values (1, 1); insert into t1 values (2, 2); insert into t1 values (2, 3); insert

Re: [PERFORM] Optimization idea

2010-04-22 Thread Greg Smith
Vlad Arkhipov wrote: Please do this small optimization if it is possible. It seems that the optimizer has all the information to create a fast plan but it does not do that. This isn't strictly an optimization problem; it's an issue with statistics the optimizer has to work with, the ones

Re: [PERFORM] Optimization idea

2010-04-22 Thread Vlad Arkhipov
Greg Smith writes: Vlad Arkhipov wrote: Please do this small optimization if it is possible. It seems that the optimizer has all the information to create a fast plan but it does not do that. This isn't strictly an optimization problem; it's an issue with statistics the optimizer has to work

Re: [PERFORM] Optimization idea

2010-04-22 Thread Vlad Arkhipov
Greg Smith writes: I can't replicate your problem on the current development 9.0; all three plans come back with results quickly when I just tried it: Nested Loop (cost=0.00..50.76 rows=204 width=32) (actual time=0.049..0.959 rows=200 loops=1) -> Seq Scan on t1 (cost=0.00..1.06 rows=1

Re: [PERFORM] Optimization postgresql 8.1.4 FC 6 X64 ?

2007-03-24 Thread Heikki Linnakangas
amrit angsusingh wrote: I try to change my database server from the older one, i.e. 2 CPU Xeon 2.4 32-bit, 4Gb SDRAM, HDD SCSI RAID 5 and FC 3 ix86 with PG 7.4.7, to the newer one with 2 CPU Xeon 3.0 64-bit, 4Gb DDR RAM, SCSI RAID 5 and FC6 x64 with PG 8.1.4, and try to use rather the same parameters from the

Re: [PERFORM] Optimization postgresql 8.1.4 FC 6 X64 ?

2007-03-24 Thread Dave Cramer
I also think there have been changes in pgbench itself. Make sure you run the same pgbench on both servers. Dave On 24-Mar-07, at 6:44 AM, Heikki Linnakangas wrote: amrit angsusingh wrote: I try to change my database server from the older one ie. 2Cpu Xeon 2.4 32 bit 4Gb SDram Hdd SCSI RAID

[PERFORM] Optimization pg 8.14 and postgresql.conf

2007-03-23 Thread amrit angsusingh
I try to change my database server from the older one, i.e. 2 CPU Xeon 2.4 32-bit, 4Gb SDRAM, HDD SCSI RAID 5 and FC 3 ix86 with PG 7.4.7, to the newer one with 2 CPU Xeon 3.0 64-bit, 4Gb DDR RAM, SCSI RAID 5 and FC6 x64 with PG 8.1.4, and try to use rather the same parameters from the previous postgresql.conf :-

[PERFORM] Optimization postgresql 8.1.4 FC 6 X64 ?

2007-03-23 Thread amrit angsusingh
I try to change my database server from the older one, i.e. 2 CPU Xeon 2.4 32-bit, 4Gb SDRAM, HDD SCSI RAID 5 and FC 3 ix86 with PG 7.4.7, to the newer one with 2 CPU Xeon 3.0 64-bit, 4Gb DDR RAM, SCSI RAID 5 and FC6 x64 with PG 8.1.4, and try to use rather the same parameters from the previous postgresql.conf :-

Re: [PERFORM] Optimization of this SQL sentence

2006-10-18 Thread Ruben Rubio
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 so, imo alexander is correct: contacto varchar(255) Why do we have limits on this, for example? contacto varchar(255) 1) First of all, this is a web application. People tend to enter really strange things there, and a lot of rubbish. So, as

Re: [PERFORM] Optimization of this SQL sentence

2006-10-18 Thread Mario Weilguni
On Tuesday, 17 October 2006 at 17:50, Alexander Staubo wrote: On Oct 17, 2006, at 17:29, Mario Weilguni wrote: Enforcing length constraints with varchar(xyz) is good database design, not a bad one. Using text everywhere might be tempting because it works, but it's not a good idea.

Re: [PERFORM] Optimization of this SQL sentence

2006-10-18 Thread Mark Kirkwood
Mario Weilguni wrote: contacto varchar(255), fuente varchar(512), prefijopais varchar(10) Instead, use: contacto text, fuente text, prefijopais text See the PostgreSQL manual for an explanation of varchar vs. text. Enforcing length constraints with
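If a length cap is still wanted after switching to text, one common alternative raised in threads like this is an explicit CHECK constraint, which keeps the rule visible and changeable without an ALTER of the column type. A small sketch using the thread's column name and Python's sqlite3 as a stand-in engine (the 255 limit is just the example figure from the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# text plus an explicit CHECK, instead of encoding the limit in varchar(n)
conn.execute("""CREATE TABLE contactos (
    contacto TEXT CHECK (length(contacto) <= 255))""")

conn.execute("INSERT INTO contactos VALUES (?)", ("short enough",))
try:
    conn.execute("INSERT INTO contactos VALUES (?)", ("x" * 300,))
    rejected = False
except sqlite3.IntegrityError:
    rejected = True  # the over-length row is refused by the constraint
print(rejected)  # True
```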

Re: [PERFORM] Optimization of this SQL sentence

2006-10-18 Thread mark
On Wed, Oct 18, 2006 at 11:31:44AM +0200, Mario Weilguni wrote: It's not a bad idea. Usually I use postal codes with 25 chars, and never had any problem. With text, the limit would be ~1 GB. No matter how much testing in the application happens, the varchar(25) as last resort is a good idea.

Re: [PERFORM] Optimization of this SQL sentence

2006-10-18 Thread Jim C. Nasby
On Tue, Oct 17, 2006 at 12:51:19PM -0400, Merlin Moncure wrote: so, imo alexander is correct: contacto varchar(255) ...is a false constraint, why exactly 255? is that were the dart landed? BTW, if we get variable-length varlena headers at some point, then setting certain limits might make

Re: [PERFORM] Optimization of this SQL sentence (SOLVED)

2006-10-18 Thread Jim C. Nasby
On Tue, Oct 17, 2006 at 12:25:39PM +0200, Ruben Rubio wrote: First of all I have to say that I know the database is not ok. There were people before me who didn't do things right. I would like to normalize the database, but it takes too much time (there are hundreds of SQLs to change and

[PERFORM] Optimization of this SQL sentence

2006-10-17 Thread Ruben Rubio
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 This SQL sentence is very simple. I need to get better results. I have tried some possibilities and I didn't get good results. SELECT max(idcomment) FROM ficha vf INNER JOIN comment c ON (vf.idficha=c.idfile AND (idestado=3 OR idestado=4)) WHERE

Re: [PERFORM] Optimization of this SQL sentence

2006-10-17 Thread Gregory S. Williamson
- From: [EMAIL PROTECTED] on behalf of Ruben Rubio Sent: Tue 10/17/2006 2:05 AM To: pgsql-performance@postgresql.org Cc: Subject:[PERFORM] Optimization of this SQL sentence -BEGIN PGP SIGNED MESSAGE- Hash: SHA1 This SQL sentence is very simple. I need to get better results

Re: [PERFORM] Optimization of this SQL sentence

2006-10-17 Thread Ruben Rubio
-performance@postgresql.org Cc: Subject: [PERFORM] Optimization of this SQL sentence This SQL sentence is very simple. I need to get better results. I have tried some possibilities and I didn't get good results. SELECT max(idcomment) FROM ficha vf INNER JOIN comment c ON (vf.idficha

Re: [PERFORM] Optimization of this SQL sentence

2006-10-17 Thread Ruben Rubio
Rubio Sent: Tue 10/17/2006 2:05 AM To: pgsql-performance@postgresql.org Cc: Subject: [PERFORM] Optimization of this SQL sentence This SQL sentence is very simple. I need to get better results. I have tried some possibilities and I didn't get good results. SELECT max(idcomment

Re: [PERFORM] Optimization of this SQL sentence

2006-10-17 Thread A. Kretschmer
am Tue, dem 17.10.2006, um 11:33:18 +0200 mailte Ruben Rubio folgendes: SELECT max(idcomment) FROM ficha vf INNER JOIN comment c ON (vf.idficha=c.idfile AND (idestado=3 OR idestado=4)) WHERE idstatus=3 AND ctype=1 check for indexes on vf.idficha, c.idfile, idstatus and ctype.

Re: [PERFORM] Optimization of this SQL sentence

2006-10-17 Thread Alexander Staubo
On Oct 17, 2006, at 11:33 , Ruben Rubio wrote: CREATE TABLE comment ( idcomment int4 NOT NULL DEFAULT nextval('comment_idcomment_seq'::regclass), [snip 28 columns] CONSTRAINT comment_pkey PRIMARY KEY (idcomment) ) Ficha structure: No indexes in ficha Ficha rows: 17.850 CREATE TABLE

Re: [PERFORM] Optimization of this SQL sentence

2006-10-17 Thread Heikki Linnakangas
You could try rewriting the query like this: SELECT MAX(idcomment) FROM comment c WHERE idstatus=3 AND ctype=1 AND EXISTS (SELECT 1 FROM ficha vf WHERE idestado IN ('3', '4') AND vf.idficha = c.idfile); The planner can then try a backward scan on the comment_pkey index, which should be
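Heikki's EXISTS rewrite should return the same answer as the original join form (assuming idestado belongs to ficha, which the fragment implies). A quick equivalence check on toy data, using the thread's table and column names with Python's sqlite3 as a stand-in engine:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE ficha (idficha INTEGER, idestado INTEGER)")
cur.execute("""CREATE TABLE comment (idcomment INTEGER PRIMARY KEY,
               idfile INTEGER, idstatus INTEGER, ctype INTEGER)""")
cur.executemany("INSERT INTO ficha VALUES (?, ?)",
                [(1, 3), (2, 4), (3, 1)])
cur.executemany("INSERT INTO comment VALUES (?, ?, ?, ?)",
                [(10, 1, 3, 1), (20, 2, 3, 1), (30, 3, 3, 1), (40, 2, 2, 1)])

# Original shape: join ficha and comment, then take the max id.
join_form = cur.execute(
    """SELECT max(idcomment) FROM ficha vf
       JOIN comment c ON vf.idficha = c.idfile
                     AND vf.idestado IN (3, 4)
       WHERE c.idstatus = 3 AND c.ctype = 1""").fetchone()[0]

# Rewritten shape: scan comment, probe ficha with EXISTS.
exists_form = cur.execute(
    """SELECT max(idcomment) FROM comment c
       WHERE c.idstatus = 3 AND c.ctype = 1
         AND EXISTS (SELECT 1 FROM ficha vf
                     WHERE vf.idestado IN (3, 4)
                       AND vf.idficha = c.idfile)""").fetchone()[0]

assert join_form == exists_form
print(exists_form)  # 20
```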

Re: [PERFORM] Optimization of this SQL sentence (SOLVED)

2006-10-17 Thread Ruben Rubio
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 Hi to everyone, First of all I have to say that I know the database is not ok. There were people before me who didn't do things right. I would like to normalize the database, but it takes too much time (there are hundreds of SQLs to change and

Re: [PERFORM] Optimization of this SQL sentence

2006-10-17 Thread Craig A. James
These tables are particularly egregious examples of ignorant database design. You need to understand the relational model This email is a *particularly* egregious example of rudeness. You owe Mr. Staubo, and the Postgres community, an apology. There is absolutely no reason to insult

Re: [PERFORM] Optimization of this SQL sentence

2006-10-17 Thread Alexander Staubo
On Oct 17, 2006, at 17:10, Craig A. James wrote: These tables are particularly egregious examples of ignorant database design. You need to understand the relational model This email is a *particularly* egregious example of rudeness. You owe Mr. Staubo, and the Postgres community, an

Re: [PERFORM] Optimization of this SQL sentence

2006-10-17 Thread Alexander Staubo
On Oct 17, 2006, at 17:29, Mario Weilguni wrote: On Tuesday, 17 October 2006 at 11:52, Alexander Staubo wrote: Lastly, note that in PostgreSQL these length declarations are not necessary: contacto varchar(255), fuente varchar(512), prefijopais varchar(10) Enforcing length

Re: [PERFORM] Optimization of this SQL sentence

2006-10-17 Thread Mario Weilguni
On Tuesday, 17 October 2006 at 11:52, Alexander Staubo wrote: Lastly, note that in PostgreSQL these length declarations are not necessary: contacto varchar(255), fuente varchar(512), prefijopais varchar(10) Instead, use: contacto text, fuente text, prefijopais text

Re: [PERFORM] Optimization of this SQL sentence

2006-10-17 Thread Merlin Moncure
On 10/17/06, Mario Weilguni [EMAIL PROTECTED] wrote: On Tuesday, 17 October 2006 at 11:52, Alexander Staubo wrote: Lastly, note that in PostgreSQL these length declarations are not necessary: contacto varchar(255), fuente varchar(512), prefijopais varchar(10) Instead, use: contacto

Re: [PERFORM] Optimization of this SQL sentence

2006-10-17 Thread Chris Browne
[EMAIL PROTECTED] (Merlin Moncure) writes: On 10/17/06, Mario Weilguni [EMAIL PROTECTED] wrote: On Tuesday, 17 October 2006 at 11:52, Alexander Staubo wrote: Lastly, note that in PostgreSQL these length declarations are not necessary: contacto varchar(255), fuente varchar(512),

Re: [PERFORM] Optimization of this SQL sentence

2006-10-17 Thread Chris Browne
[EMAIL PROTECTED] (Alexander Staubo) writes: On Oct 17, 2006, at 17:29, Mario Weilguni wrote: On Tuesday, 17 October 2006 at 11:52, Alexander Staubo wrote: Lastly, note that in PostgreSQL these length declarations are not necessary: contacto varchar(255), fuente varchar(512),

Re: [PERFORM] Optimization of this SQL sentence

2006-10-17 Thread Shane Ambler
Chris Browne wrote: In the case of a zip code? Sure. US zip codes are integer values either 5 or 9 characters long. So your app will only work in the US? And only for US companies that only have US clients? Sorry had to dig at that ;-P -- Shane Ambler [EMAIL PROTECTED] Get Sheeky @

Re: [PERFORM] Optimization of this SQL sentence

2006-10-17 Thread Christopher Browne
The world rejoiced as [EMAIL PROTECTED] (Shane Ambler) wrote: Chris Browne wrote: In the case of a zip code? Sure. US zip codes are integer values either 5 or 9 characters long. So your app will only work in the US? And only for US companies that only have US clients? Sorry had to dig

Re: [PERFORM] Optimization of this SQL sentence

2006-10-17 Thread Shane Ambler
Christopher Browne wrote: The world rejoiced as [EMAIL PROTECTED] (Shane Ambler) wrote: Chris Browne wrote: In the case of a zip code? Sure. US zip codes are integer values either 5 or 9 characters long. So your app will only work in the US? And only for US companies that only have US

Re: [PERFORM] optimization downgrade perfomance?

2005-09-23 Thread Tom Lane
eVl [EMAIL PROTECTED] writes: When executing this SELECT (see SELECT.A above) it executes in about 700 ms, but when I want to wipe out all info about local traffic, with a query like this: SELECT * FROM ( SELECT.A ) a WHERE type = 'global'; It executes about 1 ms - more than 10

[PERFORM] optimization downgrade perfomance?

2005-09-22 Thread eVl
Hello! Got a DB with traffic statistics stored. And a SELECT statement which shows traffic volume per day, also divided by regions - local traffic and global. Thus the SELECT statement returns some (about 10-20) rows paired like this: ttype (text)| volume (int)| tdate

[PERFORM] Optimization while compiling

2004-12-30 Thread Vishal Kashyap @ [SaiHertz]
Dear all, What would be the best configure line that would suit optimization? As I understand, by eliminating unwanted modules I would make the DB lighter and faster. Let's say the modules needed are only the English module with LC_collate C module type. How could we eliminate the unwanted

Re: [PERFORM] optimization ideas for frequent, large(ish) updates

2004-02-15 Thread Jeff Trout
On Feb 14, 2004, at 9:02 PM, Marinos J. Yannikos wrote: Josh Berkus wrote: 800MB for sort mem? Are you sure you typed that correctly? You must be counting on not having a lot of concurrent queries. It sure will speed up index updating, though! 800MB is correct, yes... There are usually

Re: [PERFORM] optimization ideas for frequent, large(ish) updates

2004-02-15 Thread Marinos J. Yannikos
Jeff Trout wrote: Remember that it is going to allocate 800MB per sort. It is not "you can allocate up to 800MB, so if you need 1 meg, use one meg". Some queries may end up having a few sort steps. I didn't know that it always allocates the full amount of memory specified in the configuration

Re: [PERFORM] optimization ideas for frequent, large(ish) updates

2004-02-14 Thread Marinos J. Yannikos
Josh Berkus wrote: 800MB for sort mem? Are you sure you typed that correctly? You must be counting on not having a lot of concurrent queries. It sure will speed up index updating, though! 800MB is correct, yes... There are usually only 10-30 postgres processes active (imagine 5-10 people

Re: [PERFORM] Optimization

2003-07-29 Thread Shridhar Daithankar
On 28 Jul 2003 at 12:27, Josh Berkus wrote: Unless you're running PostgreSQL 7.1 or earlier, you should be VACUUMing every 10-15 minutes, not every 2-3 hours. Regular VACUUM does not lock your database. You will also want to increase your FSM_relations so that VACUUM is more

Re: [PERFORM] Optimization

2003-07-29 Thread Peter Childs
On Tue, 29 Jul 2003, Shridhar Daithankar wrote: On 28 Jul 2003 at 12:27, Josh Berkus wrote: Unless you're running PostgreSQL 7.1 or earlier, you should be VACUUMing every 10-15 minutes, not every 2-3 hours. Regular VACUUM does not lock your database. You will also want to increase

[PERFORM] Optimization

2003-07-28 Thread Justin Long
Greetings, I am trying to understand the various factors used by Postgres to optimize. I presently have a dual-866 Dell server with 1GB of memory. I've done the following: set /proc/sys/kernel/shmmax to 51200 shared_buffers = 32000 sort_mem = 32000 max_connections = 64 fsync = false Can

Re: [PERFORM] Optimization

2003-07-28 Thread Josh Berkus
Justin, I am trying to understand the various factors used by Postgres to optimize. I presently have a dual-866 Dell server with 1GB of memory. I've done the following: Please see the performance articles at: http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php -- -Josh Berkus

Re: [PERFORM] Optimization

2003-07-28 Thread Josh Berkus
Justin, I am trying to understand the various factors used by Postgres to optimize. I presently have a dual-866 Dell server with 1GB of memory. I've done the following: see: http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php which has articles on .conf files. (feel free to link