On 19/01/2017 12:13, Tom Lane wrote:
Gustavo Rezende Montesino writes:
Being the client in question, I would like to make a little remark: What
we thought could be optimized here at first is on the row estimate of
the index scan; which could take null_frac
The picture is becoming clearer now. So to recap: the issue is that plan
selection does not use the null_frac statistic to skip what appears to be, in
your case, the 99% of rows that are NULL in the join column and would be
discarded anyway.
For completeness do you
Gustavo Rezende Montesino writes:
> Being the client in question, I would like to make a little remark: What
> we thought could be optimized here at first is on the row estimate of
> the index scan; which could take null_frac into account. To put things
> into
I apologize; my statement about NULL being used to join was incorrect, as both
Vitalii and Gustavo pointed out in their respective replies.
-
Phillip Couto
> On Jan 19, 2017, at 08:30, Vitalii Tymchyshyn wrote:
> Hi.
>
> In SQL "null == any value"
Hi.
In SQL, "NULL = any value" resolves to unknown rather than true, so the
optimizer can safely skip NULLs from either side, if any, for an inner join.
Best regards, Vitalii Tymchyshyn
NULL is still a value that may be paired with a NULL in a.a
> The only optimization I could see is if the a.a column has NOT NULL
Hello,
On 19/01/2017 11:04, Clailson wrote:
Hi Phillip.
Not sure if it is all that common. Curious: what happens if you put b.b IS NOT
NULL in the WHERE clause?
That is exactly the question. In the company I work for, one of my clients
asked me: "Why does PostgreSQL not remove rows with null in
Ah, OK, that makes sense. I am curious whether there is actually a performance
benefit to doing that. In PostgreSQL, as per the execution plan you provided,
the Merge Join joins both sets after they have been sorted. If they are sorted
already, then the NULLs will all be grouped at the beginning or end.
Hi Phillip.
The only optimization I could see is if the a.a column has NOT NULL
defined while b.b does not have NOT NULL defined.
a.a is the primary key on table a and b.b is the foreign key on table b.
Table "public.a"
 Column | Type | Modifiers
--------+------+-----------
NULL is still a value that may be paired with a NULL in a.a
The only optimization I could see is if the a.a column has NOT NULL defined
while b.b does not have NOT NULL defined.
Not sure if it is all that common. Curious: what happens if you put b.b IS NOT
NULL in the WHERE clause?
Hi,
Is there something on the roadmap to optimize the inner join?
I have this situation above. Table b has 400 rows with null in column b.
explain analyze select * from a inner join b on (b.b = a.a);
"Merge Join (cost=0.55..65.30 rows=599 width=16) (actual time=0.030..1.173 rows=599
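A minimal sketch of the manual rewrite being discussed, using the table and column names from the example above. The predicate is redundant for an inner join's result, so adding it by hand only matters if the planner does not skip the NULL rows itself:

```sql
-- Rows where b.b IS NULL can never match in the inner join, so this
-- filter does not change the result set; it only gives the planner an
-- explicit reason to discard the NULL rows before the join.
EXPLAIN ANALYZE
SELECT *
FROM a
INNER JOIN b ON (b.b = a.a)
WHERE b.b IS NOT NULL;
```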
Hello gurus of Postgres, is it possible to tune a query with a join on a
random string?
I know that it is not a real-life example, but I need it for tests.
soe=# explain
soe-# SELECT ADDRESS_ID,
soe-# CUSTOMER_ID,
soe-# DATE_CREATED,
soe-# HOUSE_NO_OR_NAME,
soe-#
On 05/04/2015 12:23 AM, Anton Bushmelev wrote:
Hello gurus of Postgres, is it possible to tune a query with a join on a
random string?
I know that it is not a real-life example, but I need it for tests.
soe=# explain
soe-# SELECT ADDRESS_ID,
soe-# CUSTOMER_ID,
soe-#
Folks,
So one thing we tell users who have chronically long IN() lists is that
they should create a temporary table and join against that instead.
Other than not having the code, is there a reason why PostgreSQL
shouldn't do something like this behind the scenes, automatically?
--
Josh Berkus
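A sketch of the temp-table rewrite described above (table and column names are hypothetical, not from the thread):

```sql
-- Instead of a chronically long IN() list:
--   SELECT * FROM orders WHERE customer_id IN (17, 42, 99, ...);
-- load the values into a temporary table and join against it:
CREATE TEMP TABLE wanted (customer_id int PRIMARY KEY);
INSERT INTO wanted VALUES (17), (42), (99);  -- ... and so on
ANALYZE wanted;                              -- give the planner real row counts
SELECT o.*
FROM orders o
JOIN wanted w USING (customer_id);
```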
On Sat, Aug 9, 2014 at 5:15 AM, Josh Berkus j...@agliodbs.com wrote:
Folks,
So one thing we tell users who have chronically long IN() lists is that
they should create a temporary table and join against that instead.
Other than not having the code, is there a reason why PostgreSQL
shouldn't
I basically have 3 tables. One being the core table and the other 2 depend
on the 1st. I have the requirement to add up to 7 records in the tables.
I do have constraints (primary/foreign keys, index, unique, etc.) set for
the tables. I can't go for bulk import (using the COPY command) as there is no
siva palanisamy psiv...@gmail.com wrote:
I basically have 3 tables. One being the core table and the other
2 depend on the 1st. I have the requirement to add up to 7
records in the tables. I do have constraints (primary/foreign
keys, index, unique, etc.) set for the tables. I can't go for
On 3 November 2011, 16:52, siva palanisamy wrote:
I basically have 3 tables. One being the core table and the other 2 depend
on the 1st. I have the requirement to add up to 7 records in the
tables.
I do have constraints (primary/foreign keys, index, unique, etc.) set for
the tables. I can't
[Please keep the list copied.]
siva palanisamy psiv...@gmail.com wrote:
Could you pls guide me on how to minimize time consumption? I've
postgresql 8.1.4; Linux OS.
Well, the first thing to do is to use a supported version of
PostgreSQL. More recent releases perform better, for starters.
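The rest of the reply is truncated here. Beyond upgrading, one common way to speed up many small inserts when COPY is not an option (table and column names below are hypothetical) is to batch them into a single transaction, so each row doesn't pay its own commit cost:

```sql
BEGIN;
INSERT INTO core_table (id, label)    VALUES (1, 'first');
INSERT INTO child_a (core_id, detail) VALUES (1, 'detail-1');
INSERT INTO child_b (core_id, extra)  VALUES (1, 'extra-1');
-- ... remaining rows ...
COMMIT;  -- one fsync for the whole batch instead of one per row
```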
Hi!
can you help me with performance optimization
on my machine I have 8 databases with ca. 1-2GB
processor is:
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 23
model name : Intel(R) Xeon(R) CPU E3110 @ 3.00GHz
stepping: 10
cpu
croolyc crooli...@gmail.com wrote:
can you help me with performance optimization
For overall tuning you could start here:
http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
If, after some general tuning, you are having problems with slow
queries, it is best if you pick one and
2010/4/28 Robert Haas robertmh...@gmail.com:
On Mon, Apr 26, 2010 at 5:33 AM, Cédric Villemain
cedric.villemain.deb...@gmail.com wrote:
In the first query, the planner doesn't use the information of the 2, 3, 4.
It just does a: "I'll bet I'll have 2 rows in t1" (I think it should
say 3, but it
2010/5/1 Cédric Villemain cedric.villemain.deb...@gmail.com:
2010/4/28 Robert Haas robertmh...@gmail.com:
On Mon, Apr 26, 2010 at 5:33 AM, Cédric Villemain
cedric.villemain.deb...@gmail.com wrote:
In the first query, the planner doesn't use the information of the 2, 3, 4.
It just does a: "I'll
2010/4/29 Robert Haas robertmh...@gmail.com:
On Wed, Apr 28, 2010 at 5:37 AM, Vlad Arkhipov arhi...@dc.baikal.ru wrote:
Even if it will be done, it does not solve the original issue. If I
understood you right, there is currently no decent way of speeding up the query
select *
from t2
join t1 on
On Wed, Apr 28, 2010 at 5:37 AM, Vlad Arkhipov arhi...@dc.baikal.ru wrote:
Even if it will be done, it does not solve the original issue. If I
understood you right, there is currently no decent way of speeding up the query
select *
from t2
join t1 on t1.t = t2.t
where t1.id = X;
except of the
On Mon, Apr 26, 2010 at 5:33 AM, Cédric Villemain
cedric.villemain.deb...@gmail.com wrote:
In the first query, the planner doesn't use the information of the 2, 3, 4.
It just does a: "I'll bet I'll have 2 rows in t1" (I think it should
say 3, but it doesn't)
So it divides the estimated number of
2010/4/26 Vlad Arkhipov arhi...@dc.baikal.ru:
On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov arhi...@dc.baikal.ru
wrote:
I don't think this is just an issue with statistics, because the same
problem arises when I try executing a query like this:
I'm not sure how you think this proves
On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov arhi...@dc.baikal.ru wrote:
I don't think this is just an issue with statistics, because the same
problem arises when I try executing a query like this:
I'm not sure how you think this proves that it isn't a problem with
statistics, but
2010/4/23 Robert Haas robertmh...@gmail.com:
On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov arhi...@dc.baikal.ru wrote:
I don't think this is just an issue with statistics, because the same
problem arises when I try executing a query like this:
I'm not sure how you think this proves that it
On Fri, Apr 23, 2010 at 9:09 AM, Cédric Villemain
cedric.villemain.deb...@gmail.com wrote:
2010/4/23 Robert Haas robertmh...@gmail.com:
On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov arhi...@dc.baikal.ru wrote:
I don't think this is just an issue with statistics, because the same
problem
Cédric Villemain cedric.villemain.deb...@gmail.com wrote:
2010/4/23 Robert Haas robertmh...@gmail.com:
Since all your data is probably fully cached, at a first cut, I
might try setting random_page_cost and seq_page_cost to 0.005 or
so, and adjusting effective_cache_size to something
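As a sketch of the suggestion above for a fully cached data set (the effective_cache_size value below is a placeholder for the actual cache size on the machine):

```sql
SET random_page_cost = 0.005;      -- random and sequential page fetches cost
SET seq_page_cost = 0.005;         -- roughly the same when everything is in RAM
SET effective_cache_size = '1GB';  -- set to the real OS cache + shared_buffers
```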
2010/4/23 Robert Haas robertmh...@gmail.com:
On Fri, Apr 23, 2010 at 9:09 AM, Cédric Villemain
cedric.villemain.deb...@gmail.com wrote:
2010/4/23 Robert Haas robertmh...@gmail.com:
On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov arhi...@dc.baikal.ru
wrote:
I don't think this is just an
On Fri, Apr 23, 2010 at 3:22 PM, Cédric Villemain
cedric.villemain.deb...@gmail.com wrote:
2010/4/23 Robert Haas robertmh...@gmail.com:
On Fri, Apr 23, 2010 at 9:09 AM, Cédric Villemain
cedric.villemain.deb...@gmail.com wrote:
2010/4/23 Robert Haas robertmh...@gmail.com:
On Thu, Apr 22, 2010
Robert Haas robertmh...@gmail.com writes:
Hmm. We currently have a heuristic that we don't record a value as an
MCV unless it's more frequent than the average frequency. When the
number of MCVs is substantially smaller than the number of distinct
values in the table this is probably a good
On Fri, Apr 23, 2010 at 6:53 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
Hmm. We currently have a heuristic that we don't record a value as an
MCV unless it's more frequent than the average frequency. When the
number of MCVs is substantially smaller than
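The length of the MCV list is driven by the column's statistics target. A hedged sketch of raising it for the column from this thread and inspecting what the planner then knows:

```sql
-- Track up to 1000 most-common values for t1.t instead of the default:
ALTER TABLE t1 ALTER COLUMN t SET STATISTICS 1000;
ANALYZE t1;

-- Inspect the MCV list and frequencies the planner now has:
SELECT most_common_vals, most_common_freqs, n_distinct
FROM pg_stats
WHERE tablename = 't1' AND attname = 't';
```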
Please do this small optimization if it is possible. It seems that the
optimizer has all the information to create a fast plan, but it does not
do that.
create temp table t1 (id bigint, t bigint);
insert into t1 values (1, 1);
insert into t1 values (2, 2);
insert into t1 values (2, 3);
insert
Vlad Arkhipov wrote:
Please do this small optimization if it is possible. It seems that the
optimizer has all the information to create a fast plan, but it does
not do that.
This isn't strictly an optimization problem; it's an issue with
statistics the optimizer has to work with, the ones
Greg Smith writes:
Vlad Arkhipov wrote:
Please do this small optimization if it is possible. It seems that the
optimizer has all the information to create a fast plan, but it does
not do that.
This isn't strictly an optimization problem; it's an issue with
statistics the optimizer has to work
Greg Smith writes:
I can't replicate your problem on the current development 9.0; all
three plans come back with results quickly when I just tried it:
Nested Loop (cost=0.00..50.76 rows=204 width=32) (actual
time=0.049..0.959 rows=200 loops=1)
- Seq Scan on t1 (cost=0.00..1.06 rows=1
amrit angsusingh wrote:
I tried to change my database server from the older one, i.e. 2-CPU Xeon 2.4 32-bit,
4GB SDRAM, SCSI RAID 5 HDD, and FC3 ix86 with PG 7.4.7, to the newer one
with 2-CPU Xeon 3.0 64-bit, 4GB DDR RAM, SCSI RAID 5, and FC6 x64 with PG 8.1.4,
and tried to use roughly the same parameters from the
I also think there have been changes in pgbench itself.
Make sure you run the same pgbench on both servers.
Dave
On 24-Mar-07, at 6:44 AM, Heikki Linnakangas wrote:
amrit angsusingh wrote:
I tried to change my database server from the older one, i.e. 2-CPU
Xeon 2.4 32-bit,
4GB SDRAM, SCSI RAID
I tried to change my database server from the older one, i.e. 2-CPU Xeon 2.4 32-bit,
4GB SDRAM, SCSI RAID 5 HDD, and FC3 ix86 with PG 7.4.7, to the newer one
with 2-CPU Xeon 3.0 64-bit, 4GB DDR RAM, SCSI RAID 5, and FC6 x64 with PG 8.1.4,
and tried to use roughly the same parameters from the previous postgresql.conf:
so, imo alexander is correct:
contacto varchar(255)
Why do we have limits on this, for example?
contacto varchar(255)
1) First of all, this is a web application. People tend to enter really
strange things there, and a lot of rubbish. So, as
On Tuesday, 17 October 2006 17:50, Alexander Staubo wrote:
On Oct 17, 2006, at 17:29 , Mario Weilguni wrote:
Enforcing length constraints with varchar(xyz) is good database
design, not a
bad one. Using text everywhere might be tempting because it works,
but it's
not a good idea.
Mario Weilguni wrote:
  contacto varchar(255),
  fuente varchar(512),
  prefijopais varchar(10)
Instead, use:
  contacto text,
  fuente text,
  prefijopais text
See the PostgreSQL manual for an explanation of varchar vs. text.
Enforcing length constraints with
On Wed, Oct 18, 2006 at 11:31:44AM +0200, Mario Weilguni wrote:
It's not a bad idea. Usually I use postal codes with 25 chars, and I have never
had any problem. With text, the limit would be ~1 GB. No matter how much testing
happens in the application, the varchar(25) as a last resort is a good idea.
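If a length limit really is a business rule, one way to keep text while making the limit explicit and easy to change later. The table name below is hypothetical; the column names are borrowed from the schema quoted earlier in the thread:

```sql
CREATE TABLE contactos (
    contacto    text CHECK (char_length(contacto) <= 255),
    fuente      text,
    prefijopais text CHECK (char_length(prefijopais) <= 10)
);
```

In PostgreSQL, text and varchar(n) are stored identically; varchar(n) is just text plus a length check, so the CHECK constraint costs nothing extra and documents the intent.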
On Tue, Oct 17, 2006 at 12:51:19PM -0400, Merlin Moncure wrote:
so, imo alexander is correct:
contacto varchar(255)
...is a false constraint; why exactly 255? Is that where the dart landed?
BTW, if we get variable-length varlena headers at some point, then
setting certain limits might make
On Tue, Oct 17, 2006 at 12:25:39PM +0200, Ruben Rubio wrote:
First of all, I have to say that I know the database is not OK. The
people before me didn't do things right. I would like to
normalize the database, but it takes too much time (there are hundreds
of SQL statements to change and
This SQL sentence is very simple. I need to get better results. I have
tried some possibilities and I didn't get good results.
SELECT max(idcomment)
FROM ficha vf
INNER JOIN comment c ON (vf.idficha=c.idfile AND (idestado=3 OR
idestado=4))
WHERE
From: [EMAIL PROTECTED] on behalf of Ruben Rubio
Sent: Tue 10/17/2006 2:05 AM
To: pgsql-performance@postgresql.org
Subject: [PERFORM] Optimization of this SQL sentence
This SQL sentence is very simple. I need to get better results
On Tue, 17.10.2006 at 11:33:18 +0200, Ruben Rubio wrote the following:
SELECT max(idcomment)
FROM ficha vf
INNER JOIN comment c ON (vf.idficha=c.idfile AND (idestado=3 OR
idestado=4))
WHERE idstatus=3
AND ctype=1
check for indexes on vf.idficha, c.idfile, idstatus and ctype.
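A sketch of the indexes being suggested (index names are hypothetical; whether each one helps depends on the actual plan and data distribution):

```sql
CREATE INDEX comment_idfile_idx      ON comment (idfile);
CREATE INDEX comment_status_type_idx ON comment (idstatus, ctype);
CREATE INDEX ficha_idficha_idx       ON ficha (idficha);
```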
On Oct 17, 2006, at 11:33 , Ruben Rubio wrote:
CREATE TABLE comment
(
idcomment int4 NOT NULL DEFAULT
nextval('comment_idcomment_seq'::regclass),
[snip 28 columns]
CONSTRAINT comment_pkey PRIMARY KEY (idcomment)
)
Ficha structure:
No indexes in ficha
Ficha rows: 17.850
CREATE TABLE
You could try rewriting the query like this:
SELECT MAX(idcomment)
FROM comment c
WHERE idstatus=3 AND ctype=1
AND EXISTS (SELECT 1 FROM ficha vf WHERE idestado IN ('3', '4') AND
vf.idficha = c.idfile);
The planner can then try a backward scan on the comment_pkey index,
which should be
Hi to everyone,
First of all, I have to say that I know the database is not OK. The
people before me didn't do things right. I would like to
normalize the database, but it takes too much time (there are hundreds
of SQL statements to change and
These tables are particularly egregious examples of ignorant database
design. You need to understand the relational model
This email is a *particularly* egregious example of rudeness. You owe Mr.
Staubo, and the PostgreSQL community, an apology.
There is absolutely no reason to insult
On Oct 17, 2006, at 17:10 , Craig A. James wrote:
These tables are particularly egregious examples of ignorant
database design. You need to understand the relational model
This email is a *particularly* egregious example of rudeness. You
owe Mr. Staubo, and the PostgreSQL community, an
On Oct 17, 2006, at 17:29 , Mario Weilguni wrote:
On Tuesday, 17 October 2006 11:52, Alexander Staubo wrote:
Lastly, note that in PostgreSQL these length declarations are not
necessary:
contacto varchar(255),
fuente varchar(512),
prefijopais varchar(10)
Enforcing length
On Tuesday, 17 October 2006 11:52, Alexander Staubo wrote:
Lastly, note that in PostgreSQL these length declarations are not
necessary:
contacto varchar(255),
fuente varchar(512),
prefijopais varchar(10)
Instead, use:
contacto text,
fuente text,
prefijopais text
On 10/17/06, Mario Weilguni [EMAIL PROTECTED] wrote:
On Tuesday, 17 October 2006 11:52, Alexander Staubo wrote:
Lastly, note that in PostgreSQL these length declarations are not
necessary:
contacto varchar(255),
fuente varchar(512),
prefijopais varchar(10)
Instead, use:
contacto
[EMAIL PROTECTED] (Merlin Moncure) writes:
On 10/17/06, Mario Weilguni [EMAIL PROTECTED] wrote:
On Tuesday, 17 October 2006 11:52, Alexander Staubo wrote:
Lastly, note that in PostgreSQL these length declarations are not
necessary:
contacto varchar(255),
fuente varchar(512),
[EMAIL PROTECTED] (Alexander Staubo) writes:
On Oct 17, 2006, at 17:29 , Mario Weilguni wrote:
On Tuesday, 17 October 2006 11:52, Alexander Staubo wrote:
Lastly, note that in PostgreSQL these length declarations are not
necessary:
contacto varchar(255),
fuente varchar(512),
Chris Browne wrote:
In the case of a zip code? Sure. US zip codes are integer values
either 5 or 9 characters long.
So your app will only work in the US?
And only for US companies that only have US clients?
Sorry had to dig at that ;-P
--
Shane Ambler
[EMAIL PROTECTED]
Get Sheeky @
The world rejoiced as [EMAIL PROTECTED] (Shane Ambler) wrote:
Chris Browne wrote:
In the case of a zip code? Sure. US zip codes are integer values
either 5 or 9 characters long.
So your app will only work in the US?
And only for US companies that only have US clients?
Sorry had to dig
Christopher Browne wrote:
The world rejoiced as [EMAIL PROTECTED] (Shane Ambler) wrote:
Chris Browne wrote:
In the case of a zip code? Sure. US zip codes are integer values
either 5 or 9 characters long.
So your app will only work in the US?
And only for US companies that only have US
eVl [EMAIL PROTECTED] writes:
When executing this SELECT (see SELECT.A above) it executes in about
700 ms, but when I want to wipe out all info about local traffic, with a query
like this:
SELECT * FROM ( SELECT.A ) a WHERE type = 'global';
it executes in about 1 ms - more than 10
Hello!
I have a DB with traffic statistics stored, and a SELECT statement which
shows traffic volume per day, also divided by region: local traffic and
global.
The SELECT statement returns some (about 10-20) rows, paired
like this:
ttype (text)| volume (int)| tdate
Dear all,
What would be the best configure line that would suit optimization?
As I understand it, by eliminating unwanted modules I would make the DB lighter
and faster.
Let's say the only modules needed are the English module with LC_COLLATE set
to C.
How could we eliminate the unwanted
On Feb 14, 2004, at 9:02 PM, Marinos J. Yannikos wrote:
Josh Berkus wrote:
800MB for sort mem? Are you sure you typed that correctly? You
must be counting on not having a lot of concurrent queries. It sure
will speed up index updating, though!
800MB is correct, yes... There are usually
Jeff Trout wrote:
Remember that it is going to allocate 800MB per sort. It is not "you
can allocate up to 800MB, so if you need 1 meg, use one meg". Some
queries may end up having a few sort steps.
I didn't know that it always allocates the full amount of memory
specified in the configuration
Josh Berkus wrote:
800MB for sort mem? Are you sure you typed that correctly? You must be
counting on not having a lot of concurrent queries. It sure will speed up
index updating, though!
800MB is correct, yes... There are usually only 10-30 postgres processes
active (imagine 5-10 people
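As a sketch of the concern above: sort_mem (renamed work_mem in later releases) is a per-sort-operation limit, and each active backend may run several sorts at once, so the effective total can be many times this value. A far more conservative setting might look like:

```sql
-- Each sort step in each backend may use up to this much memory before
-- spilling to disk; with 10-30 active backends, keep it modest.
-- The value is in KB here, so this is 32MB rather than 800MB:
SET sort_mem = 32768;
```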
On 28 Jul 2003 at 12:27, Josh Berkus wrote:
Unless you're running PostgreSQL 7.1 or earlier, you should be VACUUMing every
10-15 minutes, not every 2-3 hours. Regular VACUUM does not lock your
database. You will also want to increase your FSM_relations so that VACUUM
is more
On Tue, 29 Jul 2003, Shridhar Daithankar wrote:
On 28 Jul 2003 at 12:27, Josh Berkus wrote:
Unless you're running PostgreSQL 7.1 or earlier, you should be VACUUMing every
10-15 minutes, not every 2-3 hours. Regular VACUUM does not lock your
database. You will also want to increase
Greetings,
I am trying to understand the various factors used
by Postgres to optimize. I presently have a dual-866 Dell server with 1GB of
memory. I've done the following:
set /proc/sys/kernel/shmmax to 51200
shared_buffers = 32000
sort_mem = 32000
max_connections = 64
fsync = false
Can
Justin,
I am trying to understand the various factors used by Postgres to optimize.
I presently have a dual-866 Dell server with 1GB of memory. I've done the
following:
Please see the performance articles at:
http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php
--
-Josh Berkus
Justin,
I am trying to understand the various factors used by Postgres to optimize.
I presently have a dual-866 Dell server with 1GB of memory. I've done the
following:
see: http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php
which has articles on .conf files.
(feel free to link