hello
maybe
http://www.gtsm.com/oscon2003/toc.html
http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html
bye
Pavel
On Thu, 11 Dec 2003, sandra ruiz wrote:
Hi list,
I need to know if there is anything like Oracle's hints in
Postgres... otherwise I wish to find a way to force a
Hello
This is normal PostgreSQL behavior. Use
SELECT id FROM tabulka ORDER BY id DESC LIMIT 1;
regards
Pavel
On Mon, 22 Dec 2003, Ryszard Lach wrote:
Hi.
I have a table with 24k records and a btree index on column 'id'. Is it
normal that 'select max(id)' or 'select count(id)' causes a
Hello
try prepared statements, PQexecPrepared
http://developer.postgresql.org/docs/postgres/libpq-exec.html
Regards
Pavel Stehule
On Thu, 22 Jan 2004, pginfo wrote:
Hi,
thanks for the answer.
It is very interesting, because I have read many times that if I write the trigger
in C it will work
Hello,
If you have an index on id, then you can use
SELECT id FROM tabulka ORDER BY id DESC LIMIT 1;
See FAQ item 4.8.
Regards
Pavel Stehule
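The same tip appears in several of these threads, so a consolidated sketch may help (the table and column names `tabulka`/`id` come from the reply above; the version notes are general PostgreSQL facts, not from the thread):

```sql
-- Pre-8.1 workaround: walk the btree backwards and stop after one row.
SELECT id FROM tabulka ORDER BY id DESC LIMIT 1;

-- Since PostgreSQL 8.1 the planner rewrites plain min()/max() into the
-- same index scan automatically:
SELECT max(id) FROM tabulka;

-- count(id) is different: it still has to visit every row
-- (or, since 9.2, every index entry via an index-only scan).
SELECT count(id) FROM tabulka;
```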
On Wed, 11 Feb 2004, David Teran wrote:
Hi
we have a table with about 4 million rows. One column has an int value,
there is a btree index on it. We
      |       |        |          |                  |    stddev_pop
------+-------+--------+----------+------------------+-------------------
1000 | 0.418 | 20.792 | 0.619168 | 0.81550718804297 | 0.815099332459549
(1 row)
Time: 33568,818 ms
It's true, stddev_samp(TRUNCATE) stddev_samp(DROP)
Regards
Pavel Stehule
---(end
Hello
did you vacuum?
It is good practice to vacuum a table after removing a larger number of rows.
Regards
Pavel Stehule
2007/7/22, Jozsef Szalay [EMAIL PROTECTED]:
I'm having this very disturbing problem. I got a table with about 100,000
rows in it. Our software deletes the majority
have to up it.
Regards
Pavel Stehule
, July 25, 2007 1:12 PM
To: Jozsef Szalay
Cc: Pavel Stehule; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Simple select hangs while CPU close to 100%
In response to Jozsef Szalay [EMAIL PROTECTED]:
Hi Pavel,
Yes I did vacuum. In fact the only way to fix this problem is
executing
why not
select id,
min(the_date) as min_date,
max(the_date) as max_date
from my_table group by id;
Since 8.0 (or was it earlier?) this will use an index, should a reasonable one
exist.
Without any limits, a seq scan is optimal.
Regards
Pavel Stehule
;
SELECT * FROM c(8692);
Regards
Pavel Stehule
On 02/11/2007, Jens-Wolfhard Schicke [EMAIL PROTECTED] wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Hello,
I have the following query:
explain analyze
SELECT
(cast((
SELECT cast(row(1, o.id, NULL, NULL, NULL, NULL
-83.htm
Regards
Pavel Stehule
Hello
this is a known problem with prepared statements. A prepared statement's
plan is built without knowledge of any parameter values, so it may not be optimal.
Try using a dynamic query with the EXECUTE ... INTO statement.
Regards
Pavel Stehule
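A minimal sketch of the suggested rewrite (the table `t`, column `col`, and function name are hypothetical, not from the thread):

```sql
-- Building the query string at run time means the planner sees the actual
-- parameter value on every call, unlike a generic prepared-statement plan.
CREATE OR REPLACE FUNCTION count_matching(p_val text)
RETURNS bigint AS $$
DECLARE
  result bigint;
BEGIN
  EXECUTE 'SELECT count(*) FROM t WHERE col = ' || quote_literal(p_val)
    INTO result;
  RETURN result;
END;
$$ LANGUAGE plpgsql;
```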
On 10/12/2007, Piotr Gasidło [EMAIL PROTECTED] wrote:
Hello,
I've created
cache or somewhere
there.
Regards
Pavel Stehule
On 21/12/2007, Merlin Moncure [EMAIL PROTECTED] wrote:
On Dec 21, 2007 3:18 AM, Pavel Stehule [EMAIL PROTECTED] wrote:
I have a similar patch and it works. There are two issues:
* we are missing a column in pg_proc for the state (not all procedures are
obfuscated); I solved it for plpgsql by using
Hello
I tested it and it is true. In my configuration (1 GB RAM, Fedora 8),
PostgreSQL is fastest with 32MB shared buffers :(. The difference is about 5%
compared to 256MB.
Regards
Pavel Stehule
On 26/12/2007, Guillaume Smet [EMAIL PROTECTED] wrote:
On Dec 26, 2007 12:21 PM, Simon Riggs [EMAIL PROTECTED] wrote
?
there can be two issues:
a) some trigger activity for DEFERRED constraints
b) slow writes to WAL
http://www.westnet.com/~gsmith/content/postgresql/
In normal cases COMMIT is a really fast operation.
Regards
Pavel Stehule
Peter Childs
is off is probably that the planner cannot make
any useful estimate about those COALESCE expressions. Try rewriting
them in the simpler forms
(at_type = 1 or at_type is null) AND
(at_language = 0 or at_language is null)
What about putting this topic into the FAQ?
Regards
Pavel Stehule
Hello
synchronous_commit = off works well for a specific load;
try pgbench with only one connection ~ it is an analogy for a database import or
some administrative work.
Regards
Pavel Stehule
On 21/01/2008, Hannes Dorbath [EMAIL PROTECTED] wrote:
I might completely misunderstand this feature. Shouldn't
methods with 2 8k writes:
(o_dsync unavailable)
open o_sync, write 0.340511
write, fdatasync 0.182257
write, fsync,0.177968
any ideas are welcome
Regards
Pavel Stehule
On 03/02/2008, Scott Marlowe [EMAIL PROTECTED] wrote:
On Feb 3, 2008 10:02 AM, Pavel Stehule [EMAIL PROTECTED] wrote:
Hello
I am testing one server and I found strange behave of 8.2.6. My
configuration is:
Note that with a scaling factor that's the number of clients, your
test isn't
hello
Are you sure you are not causing trigger recursion?
Regards
Pavel Stehule
2008/7/14 Praveen [EMAIL PROTECTED]:
Hi all,
Please find the procedure and trigger which take more time when we try to
update a value in a table through the procedure.
CREATE OR REPLACE FUNCTION procname1(args
your server has
problems with memory or the controller.
regards
Pavel Stehule
2008/8/6 Marcin Citowicki [EMAIL PROTECTED]:
Hello,
I forgot to add - all those 'out of memory' errors happen when the backup db is
trying to create an index. Every 'CREATE INDEX' operation is followed by an
'out of memory' error.
c;
http://www.postgresql.org/docs/8.2/static/sql-fetch.html
maybe
select * from (select * from mytable where b = '21' offset 0) where a
is null limit 15
regards
Pavel Stehule
As you can see the estimated cost was 3.68: a long way from the true value.
Doing 'set enable_seqscan=false
2008/9/2 Guillaume Cottenceau [EMAIL PROTECTED]:
Pavel Stehule pavel.stehule 'at' gmail.com writes:
Hello
2008/9/1 David West [EMAIL PROTECTED]:
Thanks for your suggestion but the result is the same.
Here is the explain analyse output from different queries.
Select * from my_table where
)) x$$ language sql rows 432;
CREATE FUNCTION
postgres=# explain select * from fooo();
QUERY PLAN
--------------------------------------------------------
Function Scan on fooo (cost=0.00..112.32 rows=432 width=4)
(1 row)
postgres=#
regards
Pavel Stehule
--
GJ
Hello
without source code we cannot help
regards
Pavel Stehule
2009/4/27 Wojtek f...@twine.pl:
Hi,
I'm having serious performance problems with my two plpgsql functions (lots
of calculations inside, including distance (lat,long) using earthdistance
etc).
My problem is: both functions
Hello
maybe this is the wrong tip, but functions like de* could be slow. What is
the time of the query without calling these functions?
Pavel Stehule
2009/7/27 Thomas Zaksek zak...@ptt.uni-due.de:
Hi,
subject is the following type of query needed in a function to select data:
SELECT ' 13.04.2009 12:00
suitable indexing type ??
you can use hashing functions
http://www.postgres.cz/index.php/PostgreSQL_SQL_Tricks#Using_hash_functions_for_ensuring_uniqueness_of_texts
regards
Pavel Stehule
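The linked trick can be sketched as follows (the table `docs` and column `body` are hypothetical). A btree index entry must fit in a single page, hence the "maximum size is 8191" error in the thread subject; indexing a fixed-size hash of the text sidesteps the limit:

```sql
-- Enforce uniqueness of arbitrarily long texts via their md5 hash.
CREATE UNIQUE INDEX docs_body_md5_idx ON docs (md5(body));

-- Lookups must use the same expression so the index can be used:
SELECT * FROM docs WHERE md5(body) = md5('some long text ...');
```

Note the theoretical possibility of hash collisions: two different texts with the same md5 would be rejected as duplicates.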
--
View this message in context:
http://www.nabble.com/Index-row-requires-9324-bytes-maximum-size-is-8191
generate_subscripts($1,1) g1(i),
generate_subscripts($1,2) g2(j);
$$ language sql immutable;
postgres=# select * from unnest2(array[[1,2],[3,4]]);
unnest2
---------
1
2
3
4
(4 rows)
regards
Pavel Stehule
--
Sent via pgsql-performance mailing list (pgsql-performance
2009/10/14 Thom Brown thombr...@gmail.com:
2009/10/14 Scott Marlowe scott.marl...@gmail.com:
If what you're trying to do is emulate a real world app which randomly
grabs rows, then you want to setup something ahead of time that has a
pseudo random order and not rely on using anything like
data automatically
http://www.postgresql.org/docs/8.4/interactive/storage-toast.html
Regards
Pavel Stehule
Thanks,
-- Shaul
is slow.
regards
Pavel Stehule
regards, tom lane
2009/11/15 Oleg Bartunov o...@sai.msu.su:
On Sun, 15 Nov 2009, Pavel Stehule wrote:
2009/11/15 Oleg Bartunov o...@sai.msu.su:
Yes, as stated, the original author uses a Polish ispell dictionary.
The Ispell dictionary is slow to load the first time. In real life it should be
no problem.
It is a problem.
(in plpgsql) uses a new plan for
every call (generated with knowledge of the real parameters) - so it is a
solution for you.
http://www.postgresql.org/docs/8.4/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN
Regards
Pavel Stehule
Thanks in Advance
Ram
'SELECT * FROM tab WHERE a = $1' USING _a LOOP
RAISE NOTICE '%', r.a;
END LOOP;
END;
$$ LANGUAGE plpgsql;
regards
Pavel Stehule
Regards,
Ram
for lock.
see
http://old.nabble.com/Query-is-slow-when-executing-in-procedure-td26490782.html
Regards
Pavel Stehule
Is my observation correct? If yes, then what is the solution for this? If not,
where/how do I find the exact cause of the above problem?
- in the bgwriter configuration.
Migration to 8.3 could help, maybe.
http://old.nabble.com/Checkpoint-tuning-on-8.2.4-td17685494.html
Regards
Pavel Stehule
Regards,
Vishal Gupta - 9910991635
- Original Message -
From: Pavel Stehule pavel.steh...@gmail.com
To: Vishal Gupta
Hello
look on http://blog.endpoint.com/2008/12/why-is-my-function-slow.html
Regards
Pavel Stehule
2010/2/2 Mridula Mahadevan mmahade...@stratify.com:
Hi,
I am running a bunch of queries within a function, creating some temp
tables and populating them. When the data exceeds say, 100k
2010/2/10 Bryce Nesbitt bry...@obviously.com:
Or, if you want to actually read that query plan, try:
http://explain.depesz.com/s/qYq
hello,
check your work_mem settings. The hash join is very slow in your case.
Pavel
2010/6/23 Bruce Momjian br...@momjian.us:
Tom Lane wrote:
Dimitri Fontaine dfonta...@hi-media.com writes:
Josh Berkus j...@agliodbs.com writes:
a) Eliminate WAL logging entirely
If we elimiate WAL logging, that means a reinstall is required for even
a postmaster crash, which is a new
on PostgreSQL now, when memcached exists.
Much more important is smarter cache control than we have now -
maybe with priorities for some tables and some operations
(applications) - sometimes we don't need to use the cache for extra-large
scans.
Regards
Pavel Stehule
Yep. And it's quite possible
2010/6/24 Joshua D. Drake j...@commandprompt.com:
On Thu, 2010-06-24 at 21:14 +0200, Pavel Stehule wrote:
2010/6/24 Josh Berkus j...@agliodbs.com:
And I'm also planning to implement unlogged tables, which have the
same contents for all sessions but are not WAL-logged (and are
truncated
2010/6/24 A.M. age...@themactionfaction.com:
On Jun 24, 2010, at 4:01 PM, Pavel Stehule wrote:
2010/6/24 Joshua D. Drake j...@commandprompt.com:
On Thu, 2010-06-24 at 21:14 +0200, Pavel Stehule wrote:
2010/6/24 Josh Berkus j...@agliodbs.com:
And I'm also planning to implement unlogged
2010/6/24 Josh Berkus j...@agliodbs.com:
this is similar to MySQL's memory tables. Personally, I don't see any
practical sense in doing the same work in PostgreSQL now, when memcached exists.
Thing is, if you only have one table (say, a sessions table) which you
don't want logged, you don't necessarily
(~250 GB) for ca. 1600 seconds
and the sort will result in a disk merge deploying ca. 200 GB of data to the
local disk (ca. 180.000 tmp-files)
can you show the explain output with set enable_hashjoin = off; ?
Regards
Pavel Stehule
explain SELECT DISTINCT t4.objid
FROM fscsubfile t4
Somewhere the optimizer
prefers a hash join (available for sets smaller than work_mem), but tries to
store too much data in the hash tables, and the system starts to swap :(.
Regards
Pavel Stehule
kind regards
Armin
For completeness, the explain output with hashjoin off:
# explain analyze SELECT
are in the tables pg_stat_user_tables and pg_stat_user_indexes.
But CPU consumption, for example, you can never see there - PostgreSQL uses
slightly different methods.
Regards
Pavel Stehule
maybe you are searching for something like http://pgfouine.projects.postgresql.org/
a full table scan is not necessary there.
regards
Pavel Stehule
In table songs are about 150.000 rows.
Thank you for your reply.
Best regards.
Marek Fiala
Hello
I'm not understanding why it is sorting on disk if it would fit within
a work_mem segment - by a fairly wide margin. Is there something else
I can do to get that sort to happen in memory?
The planner works with estimates. So there is some probability that the
planner expected a larger
?
no - it's just an EAV table over very large data :(
Regards
Pavel Stehule
If yes, and if I were you, I would try to rewrite this query to
something like:
select
timestamp,
sum (case when keyid = 1 then value else 0 end) as Raumsolltemperatur,
...
from
log
group by
timestamp;
Assuming you
2010/8/30 Gerhard Wiesinger li...@wiesinger.com:
On Mon, 30 Aug 2010, Pavel Stehule wrote:
Hello
2010/8/30 Andreas Kretschmer akretsch...@spamfence.net:
Gerhard Wiesinger li...@wiesinger.com wrote:
I know that the data model is key/value pairs but it worked well in 8.3.
I need
statistics on the columns. The estimates are totally off.
Regards
Pavel Stehule
There is no data casting, there is no overhead from communication, and there
is no overhead from context switching.
Regards
Pavel Stehule
It takes half the time compared to the consecutive insert using libpq.
In the backend, does it use COPY or prepared statement? or something else?
Best Regards,
Divakar
Hello
do you use a VACUUM statement?
Regards
Pavel Stehule
2010/11/8 shaiju.ck shaiju...@gmail.com:
Hi, I have a table employee with 33 columns. The table have 200 records now.
Select * from employee takes 15 seconds to fetch the data!!! Which seems to
be very slow. But when I say select id
Hello
look at the EXPLAIN ANALYZE command. Probably your statistics are out of date,
and then the planner can be confused. The EXPLAIN ANALYZE statement will show it.
Regards
Pavel Stehule
2010/11/12 Jon Nelson jnelson+pg...@jamponi.net:
On Fri, Nov 5, 2010 at 7:26 PM, Jon Nelson jnelson+pg...@jamponi.net wrote:
I
2010/11/12 Jon Nelson jnelson+pg...@jamponi.net:
On Thu, Nov 11, 2010 at 10:26 PM, Pavel Stehule pavel.steh...@gmail.com
wrote:
Hello
look at the EXPLAIN ANALYZE command. Probably your statistics are out of date,
and then the planner can be confused. The EXPLAIN ANALYZE statement will show it.
As I noted earlier
2010/11/12 Jon Nelson jnelson+pg...@jamponi.net:
On Thu, Nov 11, 2010 at 10:38 PM, Pavel Stehule pavel.steh...@gmail.com
wrote:
2010/11/12 Jon Nelson jnelson+pg...@jamponi.net:
On Thu, Nov 11, 2010 at 10:26 PM, Pavel Stehule pavel.steh...@gmail.com
wrote:
Hello
look on EXPLAIN ANALYZE
,'ISNULL')
What gives?
I think the problem may be in the ugly predicate
coalesce(t1.response,'ISNULL')
coalesce(t2.response,'ISNULL')
try using the IS DISTINCT FROM operator
... AND t1.response IS DISTINCT FROM t2.response
Regards
Pavel Stehule
p.s. don't use coalesce in a WHERE clause
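A sketch of the rewrite (`t1`/`t2` and `response` follow the predicate quoted above; the join condition is hypothetical):

```sql
-- IS DISTINCT FROM is a null-safe comparison: two NULLs compare as
-- "not distinct", and NULL vs. a value as "distinct", so no coalesce
-- placeholder is needed and the planner sees the bare columns.
SELECT t1.id
  FROM t1
  JOIN t2 ON t1.id = t2.id
 WHERE t1.response IS DISTINCT FROM t2.response;
```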
Hello
my opinion:
@1 can be faster for access to the last items with an index
@2 can be more effective in data file length allocation
@1 or @2 - it depends on the number of prices per product. For a small
number (less than 100) I am strongly for @2 (if speed is important).
Personally, I prefer @2.
Pavel
2010/11/16 Louis-David Mitterrand vindex+lists-pgsql-performa...@apartia.org:
On Tue, Nov 16, 2010 at 12:03:29PM +0100, Pavel Stehule wrote:
Hello
my opinion:
@1 can be faster for access to last items with index
@2 can be more effective about data files length allocation
Hi Pavel,
What
problem can be in the ugly predicate
coalesce(t1.response,'ISNULL')
coalesce(t2.response,'ISNULL')
try using the IS DISTINCT FROM operator
... AND t1.response IS DISTINCT FROM t2.response
Regards
Pavel Stehule
p.s. don't use coalesce in a WHERE clause if it is possible.
Hello,
there is probably a problem in the statistics; they are out of touch with reality.
Please try using the IS DISTINCT FROM operator now - maybe the statistics will
be better. Next, try to increase work_mem. The hash join is
atypically slow on your machine.
Regards
Pavel Stehule
2010/11/17 Humair Mohammed huma
not sure if the work_mem is a factor):
it has a slightly different meaning. work_mem is just a limit, so
memory usage must never be greater than work_mem. If it would be, pg
increases the number of batches - storing data in blocks on disk. Higher
work_mem ~ fewer batches. So the ideal is 1 batch.
Regards
Pavel Stehule
2) Dataset
name,pages,tuples,pg_size_pretty
pivotbad;1870;93496;15 MB
pivotgood;5025;251212;39 MB
3) EXPLAIN (ANALYZE ON, BUFFERS ON)
Hash Join (cost=16212.30..52586.43 rows=92869 width=17) (actual
time=25814.222..32296.765 rows=3163 loops=1)
Hash Cond: (((pb.id)::text
IN (SELECT .. FROM WHERE some = C2 UNION ALL SELECT C1)
Regards
Pavel Stehule
regards, tom lane
- and then it is necessary to use
stored procedures.
Regards
Pavel Stehule
Best Regards,
Divakar
From: Alex Goncharov alex-goncha...@comcast.net
To: Divakar Singh dpsma...@yahoo.com
Cc: alex-goncha...@comcast.net; pgsql-performance@postgresql.org
Sent: Thu
Hello
you can emulate it now.
a) write a simple stored procedure in which you wrap your query
b) use the FAST CALL API to call this procedure
c) use some pooling tool for pooling and persisting sessions
Regards
Pavel Stehule
2010/12/21 Michael Ben-Nes mich...@epoch.co.il:
Hi,
Just
2010/12/21 Michael Ben-Nes mich...@epoch.co.il:
Hi Pavel,
Thanks for your quick answer. Can you please elaborate a bit more about the
points below.
On Tue, Dec 21, 2010 at 1:31 PM, Pavel Stehule pavel.steh...@gmail.com
wrote:
Hello
you can emulate it now.
a) try to do a simple stored
asinteger(integer)
RETURNS integer AS
'oeudfpg.dll', 'AsInteger'
LANGUAGE c VOLATILE
COST 1;
are you sure your function needs the VOLATILE flag?
Regards
Pavel Stehule
Why SeqScan???
this query is a simple sample to show the SLOW seq scan plan.
I have a real query where I don't know when
(cost=0.00..450.25 rows=1 width=8)
(4 rows)
regards
Pavel Stehule
-Kevin
2011/1/17 Tom Lane t...@sss.pgh.pa.us:
Pavel Stehule pavel.steh...@gmail.com writes:
it should work without a functional index - but I am not sure about the efficiency
As long as the function is VOLATILE, the planner can't use any
intelligent query plan. Merge or hash join both require at least
-plantuner-enable-PostgreSQL-planner-hints-td1924794.html
Regards
Pavel Stehule
--
Mladen Gogala Sr. Oracle DBA
1500 Broadway
New York, NY 10036
(212) 329-5251
http://www.vmsinfo.com The Leader in Integrated Media Intelligence Solutions
- a common talk.
Regards
Pavel Stehule
if you need an expensive numeric calculation, then you need to
use Perl, maybe Python or C. If you need to glue embedded SQL together, then
PL/pgSQL is a good tool.
Regards
Pavel Stehule
p.s. Once I had to fix a very slow statistical analysis. 99% of the time
was spent in a bubble sort implemented in PL/pgSQL.
is there any issue we would face in performance-related things; which one will
cause the performance issue?
Yes, there is. The planner cannot work well with foreign keys stored in an array.
Regards
Pavel Stehule
more memory - it should be timestamp
Regards
Pavel Stehule
2011/2/24 Gnanakumar gna...@zoniac.com:
Hi,
We're using PostgreSQL v8.2.3 on RHEL5.
I'm developing a PostgreSQL plpgsql function for one of our application
report. When I try to run the function multiple times (even twice or
thrice
Hello
for example, queries with a LIMIT clause can be significantly faster with a
nested loop. But you don't need to disable nested loops globally.
You can wrap your query in an SQL function and disable nested loops just
for these functions.
Regards
Pavel Stehule
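Since 8.3 a GUC can be attached to a function definition, which is one way to realize this suggestion (the function, table, and column names are made up for illustration):

```sql
-- enable_nestloop is switched off only while this function executes,
-- and is restored automatically afterwards.
CREATE OR REPLACE FUNCTION heavy_report(date)
RETURNS bigint AS $$
  SELECT count(*) FROM orders WHERE order_date = $1;
$$ LANGUAGE sql
   SET enable_nestloop = off;
```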
2011/3/18 Anssi Kääriäinen
a result of EXPLAIN ANALYZE SELECT ..., please.
The reasons can be different - fewer seq scans, indexes
Regards
Pavel Stehule
./configure --prefix=/usr/local/pgsql84 --with-openssl --with-perl
CentOS release 5.4 (Final)
psql (PostgreSQL) 8.4.1
prompt2=# select count(*) from nodes;
count
Hello
I think the HashAggregate goes out of memory - you can try to increase
work_mem.
There are better queries for counting duplicates than a cross join.
Regards
Pavel Stehule
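A sketch of both suggestions (`nodes`/`node_id` are taken from the query quoted in this thread; the work_mem value is an arbitrary example):

```sql
-- More memory for the hash table, for this session only:
SET work_mem = '64MB';

-- Counting duplicates with one grouped scan instead of a cross join:
SELECT node_id, count(*) AS occurrences
  FROM nodes
 GROUP BY node_id
HAVING count(*) > 1;
```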
2011/3/21 Adam Tistler atist...@gmail.com:
logicops2=# explain analyze select count(*) from nodes where node_id = any
($0))
Total runtime: 485.638 ms
(9 rows)
On Mar 21, 2011, at 1:54 AM, Pavel Stehule wrote:
Hello
I think the HashAggregate goes out of memory - you can try to increase
work_mem.
There are better queries for counting duplicates than a cross join
Regards
Pavel Stehule
2011/3/21 Adam
Hello
please, can you attach the values of shared_buffers and work_mem from the config
file?
Windows is very sensitive to memory settings. There must be a lot of memory
left just for MS Windows.
Regards
Pavel Stehule
2011/4/20 Allen Sooredoo allen_soore...@carrefour.com
Hi,
we are facing a performance
SELECT *
FROM (SELECT * FROM tab1 LIMIT n) s1
UNION ALL
SELECT *
FROM (SELECT * FROM tab2 LIMIT n) s2
LIMIT n
Regards
Pavel Stehule
Thanks,
Dave
, and sure enough, nothing really changed when I
increased it.
probably not.
PL/pgSQL is just not C, and you cannot do heavy string or array operations in it.
Regards
Pavel Stehule
Hello
did you run an ANALYZE statement on the table tdiag? The statistics are
absolutely out of date.
Regards
Pavel Stehule
2011/6/7 anthony.ship...@symstream.com:
Version: PostgreSQL 8.3.5 (mammoth replicator)
Schema:
CREATE TABLE tdiag (
diag_id integer DEFAULT nextval('diag_id_seq
;
end;
if you use a FOR statement, the problem may be in the use of an
implicit cursor - try setting the GUC cursor_tuple_fraction to 1.0.
Regards
Pavel Stehule
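For context, a sketch of the suggested setting (the semantics described in the comment are the documented defaults, not from the thread):

```sql
-- Default is 0.1: cursor plans are optimized to deliver the first 10% of
-- rows quickly. Setting 1.0 makes the implicit FOR-loop cursor get the
-- same plan as a plain query that fetches everything.
SET cursor_tuple_fraction = 1.0;
```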
--
Anthony Shipman | Life is the interval
anthony.ship...@symstream.com | between pay days.
Hello
2011/6/8 anthony.ship...@symstream.com:
On Wednesday 08 June 2011 18:39, Pavel Stehule wrote:
if you use a FOR statement, the problem may be in the use of an
implicit cursor - try setting the GUC cursor_tuple_fraction to 1.0.
Alas this is mammoth replicator, equivalent to PG 8.3
Hello
what are your settings for
random_page_cost, seq_page_cost and work_mem?
Regards
Pavel Stehule
2011/6/8 Tony Capobianco tcapobia...@prospectiv.com:
Here's the explain analyze:
pg_dw=# explain analyze CREATE TABLE ecr_opens with (FILLFACTOR=100)
as
select o.emailcampaignid, count
--
768MB
(1 row)
it is ok.
Pavel
On Wed, 2011-06-08 at 18:27 +0200, Pavel Stehule wrote:
Hello
what are your settings for
random_page_cost, seq_page_cost and work_mem?
Regards
Pavel Stehule
2011/6/8 Tony Capobianco tcapobia...@prospectiv.com:
Here's the explain analyze:
pg_dw
answer to get
It's hard to say where the problem is - PostgreSQL wraps the libxml2 library
for xml functionality, so the problem can be
a) inside libxml2
b) on the interface between libxml2 and PostgreSQL
c) in PostgreSQL memory management
can you send a profile?
Regards
Pavel Stehule
--
Julius Tuskenis
2011/6/20 Julius Tuskenis jul...@nsoft.lt:
Thank you, Pavel for your answer
2011.06.20 09:51, Pavel Stehule rašė:
can you send a profile?
Excuse me, but what do you mean by saying profile? I've sent content of
pg_settings in the first post. Please be more specific as I am more
2011/6/20 Pavel Stehule pavel.steh...@gmail.com:
2011/6/20 Julius Tuskenis jul...@nsoft.lt:
Thank you, Pavel for your answer
2011.06.20 09:51, Pavel Stehule rašė:
can you send a profile?
Excuse me, but what do you mean by saying profile? I've sent content of
pg_settings in the first post
Hello
It is impossible to help you without more detailed info about your problem;
we have to see an execution plan, we have to see the slow query.
Regards
Pavel Stehule
2011/7/9 Jonathan jonat...@kc8onw.net:
Does anyone have any suggestions for my problem? (I have to wonder if I'm
somehow just
;
Regards
Pavel Stehule
2011/7/9 Pavel Stehule pavel.steh...@gmail.com:
Hello
It is impossible to help you without more detailed info about your problem;
we have to see an execution plan, we have to see the slow query.
Regards
Pavel Stehule
2011/7/9 Jonathan jonat...@kc8onw.net:
Does anyone
://www.postgresql.org/docs/8.3/static/sql-prepare.html
if you cannot use an outer transaction, and you can replay the
process if there are problems, use asynchronous commit
http://www.postgresql.org/docs/8.3/static/wal-async-commit.html
Regards
Pavel Stehule
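A sketch of the suggested fallback, matching the linked 8.3 documentation:

```sql
-- Commits return before their WAL is flushed to disk. A crash can lose
-- the last few commits, but (unlike fsync = off) cannot corrupt the
-- database. Can be set per session or even per transaction.
SET synchronous_commit = off;
```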
2011/7/7 sergio mayoral smayo
)
RETURNS double precision AS $$
SELECT $1/$2/$3* 10::double precision;
$$ LANGUAGE sql;
Regards
Pavel Stehule
The query that takes 7.6 seconds, when I calculate the statistic from within
the query:
explain analyze
select
agg.primary_id,
avg(agg.a / agg.b / agg.c
the function in C?
only SQL and C have zero overhead - SQL because it uses inlining, and C is
just readable assembler.
I think the overhead of PL/pgSQL is minimal among the languages on your list.
Regards
Pavel
Anish
On Wed, Aug 17, 2011 at 11:27 AM, Pavel Stehule pavel.steh...@gmail.com
wrote
Hello
please, send EXPLAIN ANALYZE output instead.
Regards
Pavel Stehule
2011/10/11 CS DBA cs_...@consistentstate.com:
Hi all ;
I'm trying to tune a difficult query.
I have 2 tables:
cust_acct (9million rows)
cust_orders (200,000 rows)
Here's the query:
SELECT
a.account_id
of buckets - the ideal is one.
* use some filter if possible
* use a limit if possible
If you really have to process all rows and you need a better reaction
time, try to use a cursor. It is optimized for a fast first row.
Regards
Pavel Stehule
friend :)
Regards
Pavel Stehule
Hello
no, there is no difference - you can check it via the EXPLAIN statement
Regards
Pavel Stehule
2011/10/19 Gnanakumar gna...@zoniac.com:
Hi,
In PostgreSQL, is there any performance difference between queries written
using explicit join notation vs implicit join notation in complex
queries