IF;
prev_r = r;
END LOOP;
The slow part of your query is probably the sorting - the first sort can be
accelerated by an index, but the second cannot (a CTE result cannot use one) -
you can try increasing work_mem.
Regards
Pavel
On Mon, Mar 11, 2013 at 11:03 AM, Pavel Stehule pavel.steh...@gmail.com
wrote:
Hello
you
2013/2/19 Bastiaan Olij basti...@basenlily.me:
Hi Andy,
I've tried that with the same result. One subquery works beautifully,
two subqueries with an OR and it starts to do a sequential scan...
try to rewrite the OR into two SELECTs joined by UNION ALL
Pavel
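For illustration, a minimal sketch of that rewrite (table and column names are hypothetical):

-- assumed original form:
--   SELECT * FROM orders o
--   WHERE o.id IN (SELECT order_id FROM web_sales)
--      OR o.id IN (SELECT order_id FROM shop_sales);
-- rewritten so each branch can use its own index:
SELECT * FROM orders o
WHERE o.id IN (SELECT order_id FROM web_sales)
UNION ALL
SELECT * FROM orders o
WHERE o.id IN (SELECT order_id FROM shop_sales)
  AND o.id NOT IN (SELECT order_id FROM web_sales);
-- the NOT IN guard avoids duplicate rows; it assumes order_id is never NULL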
Thanks,
Bastiaan Olij
On 19/02/13
Hello
you can try to wrap the search in an immutable function and use the following trick
http://postgres.cz/wiki/PostgreSQL_SQL_Tricks#Using_IMMUTABLE_functions_as_hints_for_the_optimizer
Regards
Pavel Stehule
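A minimal sketch of the linked trick (function name and constant are hypothetical): the planner evaluates an IMMUTABLE function at plan time, so it can behave like a literal.

CREATE OR REPLACE FUNCTION search_category()
RETURNS int AS $$
  SELECT 42;  -- hypothetical constant, folded at plan time
$$ LANGUAGE sql IMMUTABLE;

SELECT * FROM items WHERE category = search_category();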
2013/2/8 Stefan Keller sfkel...@gmail.com:
Hi,
I have problems with the performance
2013/1/10 Heikki Linnakangas hlinnakan...@vmware.com:
On 10.01.2013 20:45, Matheus de Oliveira wrote:
Inspired by Charles' thread and the work of Emmanuel [1], I have made some
experiments trying to create a trigger to make partitioning using C
language.
The first attempt was not good, I
Hello
Also, for bulk inserts, have you tried FOR EACH STATEMENT triggers instead
of FOR EACH ROW?
This would look like a lot of inserts and would not be fast in the
single-row-insert case, but it can give you a benefit for huge inserts.
It should look like
insert into quotes_2012_09_10 select * from
.
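The statement-level idea, sketched with a hypothetical staging table:

-- route the bulk load straight into the target partition instead of
-- firing a row trigger once per inserted row
INSERT INTO quotes_2012_09_10
SELECT * FROM quotes_staging
WHERE quote_time >= '2012-09-10' AND quote_time < '2012-09-11';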
Using plpgsql row triggers for partitioning is not a good idea - it is
just a workaround from my perspective, and we should solve the source of
the problem - missing native support.
Regards
Pavel Stehule
2012/12/28 Stephen Frost sfr...@snowman.net
Vitalii,
* Vitalii Tymchyshyn (tiv...@gmail.com
does it currently perform the same
as an if/elsif tree, or is it implemented to actually use a table lookup?
both IF and CASE have very similar implementations - a table lookup is not
used - there is no special path for searching constants
* Pavel Stehule (pavel.steh...@gmail.com) wrote:
please, look
Hello
2012/12/28 Luciano Ernesto da Silva luci...@cpd.ufrgs.br:
UNSUBSCRIBE
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On behalf of Jeff Janes
Sent: Friday, December 28, 2012 14:31
To: Scott Marlowe
Cc: Tom Lane; Charles
2012/12/27 Stephen Frost sfr...@snowman.net:
* Jeff Janes (jeff.ja...@gmail.com) wrote:
If the main goal is to make it faster, I'd rather see all of plpgsql get
faster, rather than just a special case of partitioning triggers. For
example, right now a CASE expression statement with 100
2012/12/27 Jeff Janes jeff.ja...@gmail.com:
On Monday, December 24, 2012, Charles Gomes wrote:
I think your performance bottleneck is almost certainly the dynamic
SQL. Using C to generate that dynamic SQL isn't going to help much,
because it is still
Hello
HashSetOp is a memory-expensive operation and can be problematic
when the statistics estimates are bad.
Try to rewrite this query as a JOIN
Regards
Pavel Stehule
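HashSetOp appears for set operations such as EXCEPT; a hedged sketch of the JOIN rewrite (table names hypothetical):

-- instead of: SELECT id FROM a EXCEPT SELECT id FROM b;
SELECT a.id
FROM a
LEFT JOIN b ON b.id = a.id
WHERE b.id IS NULL;  -- anti-join: rows of a with no match in b
-- add DISTINCT if a.id can repeat, since EXCEPT also removes duplicates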
2012/11/15 Antti Jokipii anttijoki...@gmail.com:
Hi
I tried to run a quite simple query. For some reason the query took lots
Hello
2012/11/8 Denis soc...@gmail.com:
Samuel Gendler wrote
On Thu, Nov 8, 2012 at 1:36 AM, Denis lt;
socsam@
gt; wrote:
P.S.
Not to start a holy war, but FYI: in a similar project where we used MySQL,
we now have about 6000 DBs and everything works like a charm.
You seem to have
Hello
2012/10/31 Mahavir Trivedi mahavir.triv...@gmail.com:
dear friends
I have an SQL file of size more than 1 GB.
When I execute it, after some time an "invalid memory alloc request size
100234023" error occurs.
What's the problem? I don't know.
there is a hard-coded limit for memory
this functionality. Packages are on our
ToDo list, but probably nobody is working on it and I don't expect it in the next
few years.
Regards
Pavel Stehule
Thanks
ANALYZE result of both queries?
Regards
Pavel Stehule
Any help?
Regards.
, Jul 28, 2012 at 9:07 AM, Pavel Stehule pavel.steh...@gmail.com
wrote:
Hello
I had the same problem with large numbers of tables - you can move
pg_stat_tmp to a tmpfs filesystem - that was the solution for us
Regards
Pavel
2012/7/28 David Barton d...@oneit.com.au:
Hi,
I am running postgres 9.1.4
2012/8/6 Magnus Hagander mag...@hagander.net:
On Mon, Aug 6, 2012 at 4:16 PM, Pavel Stehule pavel.steh...@gmail.com wrote:
2012/8/6 Magnus Hagander mag...@hagander.net:
That's not a good way of doing it, since you lose persistent storage.
Instead, you should set the stats_temp_directory parameter
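A sketch of that setup; the mount point and size are assumptions:

# postgresql.conf
stats_temp_directory = '/var/run/pg_stats_tmp'

# /etc/fstab - the stats file is usually only a few MB
tmpfs /var/run/pg_stats_tmp tmpfs size=64M,uid=postgres,gid=postgres 0 0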
Hello
I had the same problem with large numbers of tables - you can move
pg_stat_tmp to a tmpfs filesystem - that was the solution for us
Regards
Pavel
2012/7/28 David Barton d...@oneit.com.au:
Hi,
I am running postgres 9.1.4 on Ubuntu 12.04 and the stats collector is
generating very high IO usage
Hello
your merge join is too slow
maybe the item_common_pkey or item_common relations are bloated -
can you try REINDEX or VACUUM FULL?
you use random_page_cost = 1.0 - it can be the source of a bad plan
Regards
Pavel Stehule
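A hedged sketch of those checks, using the relation names from the plan:

REINDEX INDEX item_common_pkey;  -- rebuilds a possibly bloated index
VACUUM FULL item_common;         -- rewrites the table; takes an exclusive lock
SET random_page_cost = 4.0;      -- the default; 1.0 assumes fully cached data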
2012/7/25 Marcus Engene meng...@engene.se:
Hi,
Lacking index hints
2012/7/8 Nate Allan nal...@ancestry.com:
Thanks for your reply Tom.
I have a query which joins to a nested union and I'm getting a plan which
never returns. Here is the query simplified as much as possible:
select 'anything' as result
from Attribute as A1
Hello
2012/7/6 CSS c...@morefoo.com:
Hello,
Time for a broad question. I'm aware of some specific select queries that
will generate disk writes - for example, a sort operation when there's not
enough work_mem can cause PG to write out some temp tables (not the correct
terminology?).
2012/6/26 Marc Mamin m.ma...@intershop.de:
On 22/06/12 09:02, Maxim Boguk wrote:
Maybe I am completely wrong, but I always assumed that the access speed to an
array element in PostgreSQL should be close to constant time.
But in tests I found that the access speed degrades as O(N) of the array size.
2012/6/26 Marc Mamin m.ma...@intershop.de:
-Original Message-
From: Pavel Stehule [mailto:pavel.steh...@gmail.com]
2012/6/26 Marc Mamin m.ma...@intershop.de:
On 22/06/12 09:02, Maxim Boguk wrote:
Maybe I am completely wrong, but I always assumed that the access
speed
2012/6/26 Maxim Boguk maxim.bo...@gmail.com:
On Tue, Jun 26, 2012 at 6:04 PM, Pavel Stehule pavel.steh...@gmail.com
wrote:
2012/6/26 Marc Mamin m.ma...@intershop.de:
On 22/06/12 09:02, Maxim Boguk wrote:
Maybe I am completely wrong, but I always assumed that the access speed
2012/5/27 Ivan Voras ivo...@freebsd.org:
On 27 May 2012 05:28, Pavel Stehule pavel.steh...@gmail.com wrote:
Hello
2012/5/26 Ivan Voras ivo...@freebsd.org:
Hello,
I have a SQL function (which I've pasted below) and while testing its
code directly (outside a function), this is the normal
Hello
2012/5/26 Ivan Voras ivo...@freebsd.org:
Hello,
I have a SQL function (which I've pasted below) and while testing its
code directly (outside a function), this is the normal, default plan:
http://explain.depesz.com/s/vfP (67 ms)
and this is the plain with enable_seqscan turned off:
capability.
no, PostgreSQL doesn't support parallel processing of one query. You
can use some hardcore tricks and implement cooperative functions in C
- but this is hard work for a beginner. The simplest solution is
parallelism at the application level.
Regards
Pavel Stehule
Thanks, Venki
.
PostgreSQL also provides functions that return the start time of the
current statement, as well as the actual current time at the instant
the function is called. The complete list of non-SQL-standard time
functions is:
transaction_timestamp()
statement_timestamp()
Regards
Pavel Stehule
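A small runnable sketch of the difference between the two:

BEGIN;
SELECT transaction_timestamp(), statement_timestamp();  -- nearly equal here
SELECT pg_sleep(1);
SELECT transaction_timestamp(), statement_timestamp();  -- now about a second apart
COMMIT;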
2012/4/25 Venki
2012/3/26 Tomas Vondra t...@fuzzy.cz:
Hi all,
today I've noticed this link on HN: http://plasma.cs.umass.edu/emery/hoard
Seems like an interesting option for systems with a lot of CPUs that are
doing a lot of alloc operations. Right now I don't have a suitable system
to test it - anyone
2012/1/31 Carlo Stonebanks stonec.regis...@sympatico.ca:
Pavel, thank you very much for your explanation.
Is it possible to define under what conditions that sql procs will
outperform plpgsql ones, and vice-versa?
yes, a little bit :)
when inlining is possible, the SQL function will be faster
---
Function Scan on pg_catalog.generate_series (cost=0.00..10.00
rows=1000 width=4)
Output: generate_series.generate_series
Function Call: generate_series(1, 10) -- inlined query
(3 rows)
Regards
Pavel Stehule
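For reference, a minimal sketch of a function the planner can inline this way (the function name is hypothetical; the single-SELECT SQL body is the key):

CREATE FUNCTION first_n(int) RETURNS SETOF int AS $$
  SELECT generate_series(1, $1);
$$ LANGUAGE sql IMMUTABLE;

EXPLAIN VERBOSE SELECT * FROM first_n(10);  -- shows the inlined generate_series call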
-Original Message-
From: pgsql-performance-ow
and then there is a performance loss.
For example, this optimization is sometimes not possible when some
parameter is volatile
Regards
Pavel Stehule
performance issues with the fetch in the cursor?
Cursors are optimized to return a small subset of the result - if you plan
to read the complete result, then set
set cursor_tuple_fraction to 1.0;
this is a session config value; you can set it before the relevant cursor queries
Regards
Pavel Stehule
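A minimal session sketch (the query is hypothetical):

SET cursor_tuple_fraction TO 1.0;  -- plan cursors for reading the whole result
BEGIN;
DECLARE c CURSOR FOR SELECT * FROM big_table ORDER BY id;
FETCH 1000 FROM c;
CLOSE c;
COMMIT;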
Thanks.
Tony
probably
http://www.postgresql.org/docs/9.1/interactive/plpgsql-cursors.html
Regards
Pavel Stehule
-Andy
much of the query time is spent preparing the query and how much
time is spent executing it.
Thanks,
James
On Dec 27, 2011, at 1:38 AM, Pavel Stehule pavel.steh...@gmail.com wrote:
Hello
2011/12/23 Igor Schtein ischt...@gmail.com:
I'd like to find some measurements/figures of query
Pavel Stehule
information about the server-
-CentOS 5.6
-4-cores
-12GB ram
shared_buffers: 1 GB
temp_buffers = 100MB
work_mem : 30 MB
maintenance_mem: 512 MB
database_size: 1.5 GB
archive_mode is ON
vacuum/analyze (vacuum_scale_factor 0.1, analyze 0.05)
this behaviour is not related
measure the time the optimizer spends parsing and
planning for query execution?
You can use the execution time of an EXPLAIN statement
Regards
Pavel Stehule
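A sketch of that measurement in psql; EXPLAIN runs the parser and planner but not the executor, so its wall-clock time approximates the preparation cost:

\timing on
EXPLAIN SELECT * FROM some_table WHERE id = 1;  -- hypothetical query
-- the reported time covers parse + plan only; EXPLAIN ANALYZE adds execution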
Thank you,
James
must be transformed
from postgres format to perl format, and any result must be transformed
too. Perl and other languages don't use data types compatible with
Postgres.
Regards
Pavel Stehule
Thanks,
Carlo
or you can use hstore
Regards
Pavel Stehule
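A minimal hstore sketch (table and keys are hypothetical):

CREATE EXTENSION IF NOT EXISTS hstore;
CREATE TABLE products (id serial PRIMARY KEY, attrs hstore);
INSERT INTO products (attrs) VALUES ('color=>red, size=>L');
SELECT id, attrs -> 'color' AS color FROM products WHERE attrs ? 'size';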
--
Best regards
Aleksej Trofimov
] = in_input_nr then
return in_inputs[i][2];
end if;
END LOOP;
END IF;
return null;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
On 12/13/2011 04:02 PM, Pavel Stehule wrote:
Hello
do you know the FOREACH IN ARRAY statement in 9.1?
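A minimal sketch of that 9.1 construct; SLICE 1 yields one sub-array per iteration, avoiding repeated subscripting:

DO $$
DECLARE
  pair int[];
BEGIN
  FOREACH pair SLICE 1 IN ARRAY ARRAY[[1,10],[2,20],[3,30]]
  LOOP
    RAISE NOTICE 'input % -> value %', pair[1], pair[2];
  END LOOP;
END $$;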
is ~180msec.
* use a COPY statement
* use an explicit transaction
* if you can, disable triggers (and RI)
* if you cannot and you use RI, ensure there are indexes on the PK and FK
Regards
Pavel Stehule
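Those points combined into one hedged sketch (paths and table names hypothetical; disabling triggers skips RI checks, so only do it for pre-validated data):

BEGIN;
ALTER TABLE measurements DISABLE TRIGGER ALL;  -- only for trusted input
COPY measurements FROM '/tmp/measurements.csv' WITH (FORMAT csv);
ALTER TABLE measurements ENABLE TRIGGER ALL;
COMMIT;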
I believe I use the fastest index type (default).
So any idea to make postgres faster at higher numbers
and then the optimization can be more exact (but it repeats
plan generation)
http://www.postgresql.org/docs/8.4/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN
Regards
Pavel Stehule
2011/10/28 CS DBA cs_...@consistentstate.com:
Hi All ;
I have code that drops a table, re
Hello
no, there is no difference - you can check it via the EXPLAIN statement
Regards
Pavel Stehule
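The two notations in question, for reference; EXPLAIN shows the same plan for both (barring from_collapse_limit effects in very large queries):

-- explicit join notation:
SELECT * FROM a JOIN b ON a.id = b.id;
-- implicit join notation:
SELECT * FROM a, b WHERE a.id = b.id;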
2011/10/19 Gnanakumar gna...@zoniac.com:
Hi,
In PostgreSQL, is there any performance difference between queries written
using explicit join notation vs implicit join notation in complex
queries
Hello
please, send EXPLAIN ANALYZE output instead.
Regards
Pavel Stehule
2011/10/11 CS DBA cs_...@consistentstate.com:
Hi all ;
I'm trying to tune a difficult query.
I have 2 tables:
cust_acct (9million rows)
cust_orders (200,000 rows)
Here's the query:
SELECT
a.account_id
of buckets - the ideal is one.
* use some filter if possible
* use a LIMIT if possible
if you really have to process all rows and you need better reaction
time, try to use a cursor. It is optimized for fast first-row retrieval
Regards
Pavel Stehule
friend :)
Regards
Pavel Stehule
)
RETURNS double precision AS $$
SELECT $1 / $2 / $3 * 10::double precision;
$$ LANGUAGE sql;
Regards
Pavel Stehule
The query that takes 7.6 seconds, when I calculate the statistic from within
the query:
explain analyze
select
agg.primary_id,
avg(agg.a / agg.b / agg.c
the function in C?
only SQL and C have zero overhead - SQL because it uses inlining, and C
because it is just readable assembler.
I think the overhead of PL/pgSQL is minimal among the languages on your list.
Regards
Pavel
Anish
On Wed, Aug 17, 2011 at 11:27 AM, Pavel Stehule pavel.steh...@gmail.com
wrote
://www.postgresql.org/docs/8.3/static/sql-prepare.html
if you cannot use an outer transaction, and you can replay the
process if there are problems, use asynchronous commit
http://www.postgresql.org/docs/8.3/static/wal-async-commit.html
Regards
Pavel Stehule
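A hedged sketch of both suggestions together (table and values hypothetical):

PREPARE ins (int, text) AS INSERT INTO events VALUES ($1, $2);
SET synchronous_commit = off;  -- a crash may lose the newest commits,
                               -- so the load must be replayable
EXECUTE ins(1, 'start');
EXECUTE ins(2, 'stop');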
2011/7/7 sergio mayoral smayo
Hello
It is impossible to help you without more detailed info about your problem;
we have to see the execution plan, we have to see the slow query
Regards
Pavel Stehule
2011/7/9 Jonathan jonat...@kc8onw.net:
Does anyone have any suggestions for my problem? (I have to wonder if I'm
somehow just
;
Regards
Pavel Stehule
2011/7/9 Pavel Stehule pavel.steh...@gmail.com:
Hello
It is impossible to help you without more detailed info about your problem;
we have to see the execution plan, we have to see the slow query
Regards
Pavel Stehule
2011/7/9 Jonathan jonat...@kc8onw.net:
Does anyone
answer to get
It's hard to say where the problem is - PostgreSQL wraps the libxml2 library
for XML functionality, so the problem can be
a) inside libxml2
b) at the interface between libxml2 and PostgreSQL
c) in PostgreSQL memory management
can you send a profile?
Regards
Pavel Stehule
--
Julius Tuskenis
2011/6/20 Julius Tuskenis jul...@nsoft.lt:
Thank you, Pavel for your answer
2011.06.20 09:51, Pavel Stehule rašė:
can you send a profile?
Excuse me, but what do you mean by 'profile'? I've sent the content of
pg_settings in the first post. Please be more specific, as I am more
2011/6/20 Pavel Stehule pavel.steh...@gmail.com:
2011/6/20 Julius Tuskenis jul...@nsoft.lt:
Thank you, Pavel for your answer
2011.06.20 09:51, Pavel Stehule rašė:
can you send a profile?
Excuse me, but what do you mean by 'profile'? I've sent the content of
pg_settings in the first post
;
end;
if you use a FOR statement, there may be a problem with the
implicit cursor it uses - try setting the GUC cursor_tuple_fraction to 1.0.
Regards
Pavel Stehule
--
Anthony Shipman | Life is the interval
anthony.ship...@symstream.com | between pay days.
Hello
2011/6/8 anthony.ship...@symstream.com:
On Wednesday 08 June 2011 18:39, Pavel Stehule wrote:
if you use a FOR statement, there may be a problem with the
implicit cursor it uses - try setting the GUC cursor_tuple_fraction to 1.0.
Alas this is mammoth replicator, equivalent to PG 8.3
Hello
what are your settings for
random_page_cost, seq_page_cost and work_mem?
Regards
Pavel Stehule
2011/6/8 Tony Capobianco tcapobia...@prospectiv.com:
Here's the explain analyze:
pg_dw=# explain analyze CREATE TABLE ecr_opens with (FILLFACTOR=100)
as
select o.emailcampaignid, count
--
768MB
(1 row)
it is ok.
Pavel
On Wed, 2011-06-08 at 18:27 +0200, Pavel Stehule wrote:
Hello
what are your settings for
random_page_cost, seq_page_cost and work_mem?
Regards
Pavel Stehule
2011/6/8 Tony Capobianco tcapobia...@prospectiv.com:
Here's the explain analyze:
pg_dw
Hello
did you run an ANALYZE statement on table tdiag? The statistics are
absolutely off.
Regards
Pavel Stehule
2011/6/7 anthony.ship...@symstream.com:
Version: PostgreSQL 8.3.5 (mammoth replicator)
Schema:
CREATE TABLE tdiag (
diag_id integer DEFAULT nextval('diag_id_seq
, and sure enough, nothing really changed when I
increased it.
probably not
PL/pgSQL is just not C, and you cannot do heavy string or array operations in it.
Regards
Pavel Stehule
SELECT *
FROM (SELECT * FROM tab1 LIMIT n) s1
UNION ALL
SELECT *
FROM (SELECT * FROM tab2 LIMIT n) s2
LIMIT n
Regards
Pavel Stehule
Thanks,
Dave
Hello
please, can you attach the values of shared_buffers and work_mem from the config
file?
Windows is very sensitive to memory settings. There must be a lot of memory
left just for MS Windows.
Regards
Pavel Stehule
2011/4/20 Allen Sooredoo allen_soore...@carrefour.com
Hi,
we are facing a performance
($0))
Total runtime: 485.638 ms
(9 rows)
On Mar 21, 2011, at 1:54 AM, Pavel Stehule wrote:
Hello
I think the HashAggregate runs out of memory - you can try to increase
work_mem.
There are better queries for counting duplicates than a cross join
Regards
Pavel Stehule
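A session-level sketch; the value is an assumption, tune it to your RAM:

SET work_mem = '256MB';  -- applies per sort/hash node, so raise it
                         -- per session rather than globally
-- then re-run the EXPLAIN ANALYZE and check whether the HashAggregate fits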
2011/3/21 Adam
a result of EXPLAIN ANALYZE SELECT ..., please
The reasons can be different - fewer seq scans, indexes
Regards
Pavel Stehule
./configure --prefix=/usr/local/pgsql84 --with-openssl --with-perl
CentOS release 5.4 (Final)
psql (PostgreSQL) 8.4.1
prompt2=# select count(*) from nodes;
count
Hello
I think the HashAggregate runs out of memory - you can try to increase
work_mem.
There are better queries for counting duplicates than a cross join
Regards
Pavel Stehule
2011/3/21 Adam Tistler atist...@gmail.com:
logicops2=# explain analyze select count(*) from nodes where node_id = any
Hello
for example, queries with a LIMIT clause can be significantly faster with a
nested loop. But you don't need to disable nested loops globally.
You can wrap your query in SQL functions and disable nested loops just
for those functions.
Regards
Pavel Stehule
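A sketch of that per-function setting (function, table, and column names are hypothetical):

CREATE FUNCTION top_items(int) RETURNS SETOF items AS $$
  SELECT * FROM items ORDER BY created_at DESC LIMIT $1;
$$ LANGUAGE sql STABLE
SET enable_nestloop = off;  -- the setting applies only while the function runs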
2011/3/18 Anssi Kääriäinen
more memory - it should be timestamp
Regards
Pavel Stehule
2011/2/24 Gnanakumar gna...@zoniac.com:
Hi,
We're using PostgreSQL v8.2.3 on RHEL5.
I'm developing a PostgreSQL plpgsql function for one of our application
report. When I try to run the function multiple times (even twice or
thrice
. Is
there any performance issue we would face, and which one will
cause the performance issue?
yes, there is. The planner cannot work well with foreign keys stored in an array.
Regards
Pavel Stehule
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/performance
if you need to calculate a numerically expensive task, then you need to
use Perl, maybe Python or C. If you need to glue together embedded SQL, then
PL/pgSQL is a good tool.
Regards
Pavel Stehule
p.s. Once I had to fix a very slow statistical analysis. 99% of the time
was spent in a bubble sort implemented in PL/pgSQL
-plantuner-enable-PostgreSQL-planner-hints-td1924794.html
Regards
Pavel Stehule
--
Mladen Gogala Sr. Oracle DBA
1500 Broadway
New York, NY 10036
(212) 329-5251
http://www.vmsinfo.com The Leader in Integrated Media Intelligence Solutions
- a common talk.
Regards
Pavel Stehule
--
Mladen Gogala Sr. Oracle DBA
1500 Broadway
New York, NY 10036
(212) 329-5251
http://www.vmsinfo.com The Leader in Integrated Media Intelligence Solutions
asinteger(integer)
RETURNS integer AS
'oeudfpg.dll', 'AsInteger'
LANGUAGE c VOLATILE
COST 1;
are you sure your function needs the VOLATILE flag?
Regards
Pavel Stehule
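If the result depends only on the argument, declaring the function IMMUTABLE lets the planner fold calls to constants and use them with indexes; a sketch based on the definition above:

CREATE OR REPLACE FUNCTION asinteger(integer)
RETURNS integer AS
'oeudfpg.dll', 'AsInteger'
LANGUAGE c IMMUTABLE
COST 1;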
Why SeqScan???
this query is a simple sample to show the SLOW seq scan plan
I have a real query where I don't know when
(cost=0.00..450.25 rows=1 width=8)
(4 rows)
regards
Pavel Stehule
-Kevin
2011/1/17 Tom Lane t...@sss.pgh.pa.us:
Pavel Stehule pavel.steh...@gmail.com writes:
it should work without a functional index - but I'm not sure about the effectiveness
As long as the function is VOLATILE, the planner can't use any
intelligent query plan. Merge or hash join both require at least
Hello
you can emulate it now.
a) try to write a simple stored procedure that wraps your query
b) use the fast-path call API to call this procedure
c) use some pooling tool for pooling and persisting sessions
Regards
Pavel Stehule
2010/12/21 Michael Ben-Nes mich...@epoch.co.il:
Hi,
Just
2010/12/21 Michael Ben-Nes mich...@epoch.co.il:
Hi Pavel,
Thanks for your quick answer. Can you please elaborate a bit more on the
points below.
On Tue, Dec 21, 2010 at 1:31 PM, Pavel Stehule pavel.steh...@gmail.com
wrote:
Hello
you can emulate it now.
a) try to write a simple stored
IN (SELECT .. FROM WHERE some = C2 UNION ALL SELECT C1)
Regards
Pavel Stehule
regards, tom lane
- and then it is necessary to use
stored procedures.
Regards
Pavel Stehule
Best Regards,
Divakar
From: Alex Goncharov alex-goncha...@comcast.net
To: Divakar Singh dpsma...@yahoo.com
Cc: alex-goncha...@comcast.net; pgsql-performance@postgresql.org
Sent: Thu
Pavel Stehule
2) Dataset
name,pages,tuples,pg_size_pretty
pivotbad;1870;93496;15 MB
pivotgood;5025;251212;39 MB
3) EXPLAIN (ANALYZE ON, BUFFERS ON)
Hash Join (cost=16212.30..52586.43 rows=92869 width=17) (actual
time=25814.222..32296.765 rows=3163 loops=1)
Hash Cond: (((pb.id)::text
not sure if the work_mem is a factor):
it has a slightly different meaning. work_mem is just a limit, so
memory usage must never be greater than work_mem. If it would be, PG
increases the number of batches - storing data in blocks on disk. Higher
work_mem ~ fewer batches. So the ideal is 1 batch.
Regards
Pavel Stehule
Hello,
there could be a problem with the statistics; they are out of touch with reality.
Please try to use the IS DISTINCT FROM operator now - maybe the statistics will
be better. Next, try to increase work_mem. The hash join is
atypically slow on your machine.
Regards
Pavel Stehule
2010/11/17 Humair Mohammed huma
Hello
my opinion:
@1 can be faster for access to the last items with an index
@2 can be more efficient with respect to data file length allocation
@1 or @2 - it depends on the number of prices per product. For a small
number (less than 100) I am strongly for @2 (if speed is important).
Personally, I prefer @2.
Pavel
2010/11/16 Louis-David Mitterrand vindex+lists-pgsql-performa...@apartia.org:
On Tue, Nov 16, 2010 at 12:03:29PM +0100, Pavel Stehule wrote:
Hello
my opinion:
@1 can be faster for access to the last items with an index
@2 can be more efficient with respect to data file length allocation
Hi Pavel,
What
the problem can be in the ugly predicate
coalesce(t1.response,'ISNULL')
coalesce(t2.response,'ISNULL')
try using the IS DISTINCT FROM operator
... AND t1.response IS DISTINCT FROM t2.response
Regards
Pavel Stehule
p.s. don't use coalesce in a WHERE clause if you can avoid it.
,'ISNULL')
What gives?
I think the problem must be in the ugly predicate
coalesce(t1.response,'ISNULL')
coalesce(t2.response,'ISNULL')
try using the IS DISTINCT FROM operator
... AND t1.response IS DISTINCT FROM t2.response
Regards
Pavel Stehule
p.s. don't use coalesce in a WHERE clause
2010/11/12 Jon Nelson jnelson+pg...@jamponi.net:
On Thu, Nov 11, 2010 at 10:38 PM, Pavel Stehule pavel.steh...@gmail.com
wrote:
2010/11/12 Jon Nelson jnelson+pg...@jamponi.net:
On Thu, Nov 11, 2010 at 10:26 PM, Pavel Stehule pavel.steh...@gmail.com
wrote:
Hello
look at EXPLAIN ANALYZE
Hello
look at the EXPLAIN ANALYZE command. Probably your statistics are out of date, and
then the planner can be confused. The EXPLAIN ANALYZE statement will show it.
Regards
Pavel Stehule
2010/11/12 Jon Nelson jnelson+pg...@jamponi.net:
On Fri, Nov 5, 2010 at 7:26 PM, Jon Nelson jnelson+pg...@jamponi.net wrote:
I
2010/11/12 Jon Nelson jnelson+pg...@jamponi.net:
On Thu, Nov 11, 2010 at 10:26 PM, Pavel Stehule pavel.steh...@gmail.com
wrote:
Hello
look at the EXPLAIN ANALYZE command. Probably your statistics are out of date, and
then the planner can be confused. The EXPLAIN ANALYZE statement will show it.
As I noted earlier
Hello
do you use the VACUUM statement?
Regards
Pavel Stehule
2010/11/8 shaiju.ck shaiju...@gmail.com:
Hi, I have a table employee with 33 columns. The table has 200 records now.
Select * from employee takes 15 seconds to fetch the data!!! This seems to
be very slow. But when I say select id
statistics on columns. The estimate is totally off.
Regards
Pavel Stehule
. There
is no data casting, there is no overhead from communication, and there
is no overhead from context switches.
Regards
Pavel Stehule
It takes half the time compared to the consecutive insert using libpq.
In the backend, does it use COPY or prepared statement? or something else?
Best Regards,
Divakar
?
no - it's just an EAV table over very large data :(
Regards
Pavel Stehule
If yes, and if I were you, I would try to rewrite this query to
something like:
select
  timestamp,
  sum(case when keyid = 1 then value else 0 end) as Raumsolltemperatur,
  ...
from
  log
group by
  timestamp;
Assuming you
2010/8/30 Gerhard Wiesinger li...@wiesinger.com:
On Mon, 30 Aug 2010, Pavel Stehule wrote:
Hello
2010/8/30 Andreas Kretschmer akretsch...@spamfence.net:
Gerhard Wiesinger li...@wiesinger.com wrote:
I know that the data model is key/value pairs but it worked well in 8.3.
I need
Hello
I'm not understanding why it is sorting on disk if it would fit within
a work_mem segment - by a fairly wide margin. Is there something else
I can do to get that sort to happen in memory?
The planner works with estimates. So there is some probability that the
planner expected a larger
a full table scan is not necessary.
regards
Pavel Stehule
There are about 150,000 rows in the table songs.
Thank you for your reply.
Best regards.
Marek Fiala
are in the tables pg_stat_user_tables and pg_stat_user_indexes.
But CPU consumption, for example, you can never see there - PostgreSQL uses
slightly different methods.
Regards
Pavel Stehule
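A sketch of reading those counters:

SELECT relname, seq_scan, idx_scan, n_tup_ins, n_tup_upd
FROM pg_stat_user_tables
ORDER BY seq_scan DESC
LIMIT 10;  -- tables most often read by sequential scans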
maybe you are searching for something like http://pgfouine.projects.postgresql.org/
. Somewhere the optimizer
prefers a hash join (available for sets smaller than work_mem), but it tries to
store too much data in the hash tables and the system starts to swap :(.
Regards
Pavel Stehule
kind regards
Armin
For completeness, the explain output with hashjoin off:
# explain analyze SELECT
(~250 GB) for ca. 1600 seconds
and the sort will result in a disk merge writing ca. 200 GB of data to the
local disk (ca. 180,000 temp files)
can you try to check the EXPLAIN output with SET enable_hashjoin TO off; ?
Regards
Pavel Stehule
explain SELECT DISTINCT t4.objid
FROM fscsubfile t4
on PostgreSQL now, when memcached exists.
Much more important is smarter cache control than we have now -
maybe with priorities for some tables and some operations
(applications) - sometimes we don't need to use the cache for extra-large
scans.
Regards
Pavel Stehule
Yep. And it's quite possible
2010/6/24 Joshua D. Drake j...@commandprompt.com:
On Thu, 2010-06-24 at 21:14 +0200, Pavel Stehule wrote:
2010/6/24 Josh Berkus j...@agliodbs.com:
And I'm also planning to implement unlogged tables, which have the
same contents for all sessions but are not WAL-logged (and are
truncated