Benjamin Arai [EMAIL PROTECTED] wrote:
Hi,
Will simple queries such as SELECT * FROM blah_table WHERE tag='x'; work any
faster by putting them into a stored procedure?
IMHO no, why do you think so? You can use PREPARE instead, if you have many
selects like this.
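A minimal sketch of the PREPARE approach, reusing the table from the question (the statement name is invented):

```sql
-- Parse and plan once per session...
PREPARE get_by_tag (text) AS
    SELECT * FROM blah_table WHERE tag = $1;

-- ...then execute repeatedly with different values:
EXECUTE get_by_tag('x');
EXECUTE get_by_tag('y');
```

This saves the parse/plan overhead on every call; a prepared statement lasts until the session ends or DEALLOCATE is issued.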
HTH, Andreas
--
Kevin Grittner [EMAIL PROTECTED] wrote:
Attached is a simplified example of a performance problem we have seen,
Odd. Can you tell us your PG-Version?
Andreas
--
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.
Merlin Moncure [EMAIL PROTECTED] wrote:
On 3/15/06, Kevin Grittner [EMAIL PROTECTED] wrote:
Attached is a simplified example of a performance problem we have seen,
with a workaround and a suggestion for enhancement (hence both the
performance and hackers lists).
Hi Kevin. In
Bealach-na Bo [EMAIL PROTECTED] wrote:
The node table is tiny (2500 records). What I'm pulling my hair out
over is that ANY query, even something as simple as select count(*)
from job_log, takes on the order of tens of minutes to complete. Just
now I'm trying to run an explain analyze on the
Tom Lane [EMAIL PROTECTED] wrote:
Eric Lam [EMAIL PROTECTED] writes:
what is the quickest way of dumping a DB and restoring it? I have done a
pg_dump -D database | split --line-bytes 1546m part
Don't use -D if you want fast restore ...
hehe, yes ;-)
Joost Kraaijeveld [EMAIL PROTECTED] wrote:
Because JBoss is largely responsible for the SQL queries that are sent
to the back-end, I would like to see the queries that are actually
received by PostgreSQL (insert, select, update and delete), together
with the number of times they are called,
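As a hedged sketch (not from the original reply, which is not preserved here), two common ways to see the statements PostgreSQL actually receives; the database name is hypothetical:

```sql
-- Log every statement the server receives (written to the server log):
ALTER DATABASE mydb SET log_statement = 'all';

-- On 9.1+ the pg_stat_statements extension keeps per-query call counts:
CREATE EXTENSION pg_stat_statements;  -- requires shared_preload_libraries
SELECT query, calls
FROM pg_stat_statements
ORDER BY calls DESC
LIMIT 20;
```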
Jens Schipkowski [EMAIL PROTECTED] wrote:
Hello!
In our JAVA application we do multiple inserts to a table by data from a
Hash Map. Due to poor database access implementation - done by another
company (we got the job to enhance the software) - we cannot use prepared
statements. (We
[EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
Hello, I would like to know whether not specifying a maximum size for a
character varying field decreases performance (perhaps storage size, or
something else?)
No problem because of the TOAST-technology:
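A small illustration of the point (hypothetical table): varchar without a length modifier behaves like text, and large values are moved out-of-line by TOAST automatically.

```sql
CREATE TABLE t (a varchar, b varchar(50), c text);

-- Large values in a and c are compressed/TOASTed the same way;
-- the only extra work for b is the length check on input:
INSERT INTO t VALUES (repeat('x', 100000), 'short', repeat('y', 100000));
```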
Steven Flatt [EMAIL PROTECTED] wrote:
For example, on a toy table with two columns, I noticed about a 20% increase
when bulking together 1000 tuples in one INSERT statement as opposed to doing
1000 individual INSERTS. Would this be the same for 1? 10? Does it
depend on the width of
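For illustration (hypothetical table and columns), the two styles being compared:

```sql
-- 1000 individual statements: 1000 parses, plans and round trips
INSERT INTO foo (a, b) VALUES (1, 'x');
INSERT INTO foo (a, b) VALUES (2, 'y');
-- ... 998 more ...

-- one multi-row statement: a single parse, plan and round trip
INSERT INTO foo (a, b) VALUES (1, 'x'), (2, 'y'), (3, 'z');  -- etc.
```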
Hi,
Okay, I know, not really a recent version:
PostgreSQL 8.1.4 on i386-pc-linux-gnu, compiled by GCC cc (GCC) 3.3.5 (Debian
1:3.3.5-13)
I have a fresh ANALYZED table with some indexes.
scholl=*# set enable_bitmapscan=1;
SET
scholl=*# explain analyse select sum(flaeche) from bde_meldungen
Tom Lane [EMAIL PROTECTED] wrote:
Thank you and Alex for the response.
PostgreSQL 8.1.4 on i386-pc-linux-gnu, compiled by GCC cc (GCC) 3.3.5
(Debian 1:3.3.5-13)
You need a newer one.
I know ;-)
This is simply a stupid choice on the part of choose_bitmap_and() ---
it's adding on
Ben [EMAIL PROTECTED] wrote:
If I have this:
create table foo (bar int primary key);
...then in my ideal world, Postgres would be able to use that index on bar
to help me with this:
select bar from foo order by bar desc limit 20;
But in my experience, PG8.2 is doing a full table
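A sketch of what can be checked here: a btree index is scannable in both directions, so the primary key index should be able to satisfy the ORDER BY ... DESC LIMIT without a sort.

```sql
-- Check what the planner does:
EXPLAIN SELECT bar FROM foo ORDER BY bar DESC LIMIT 20;

-- If it still seq-scans and sorts, an explicitly descending
-- index is an option (available from 8.3 on):
CREATE INDEX foo_bar_desc ON foo (bar DESC);
```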
Markus Schaber [EMAIL PROTECTED] wrote:
is there any way to get both results in a single query,
eventually through stored procedure?
The retrieved [count(*),A]; [count(*),B] data couldn't fit
on a single table, of course.
The main goal would be to get multiple results while
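One way to get several counts from a single query (table and column names are invented, since the original ones aren't shown):

```sql
-- one row per group:
SELECT tag, count(*) FROM items WHERE tag IN ('A', 'B') GROUP BY tag;

-- or both counts as columns of a single row (CASE works on any version;
-- 9.4+ can use count(*) FILTER (WHERE ...)):
SELECT count(CASE WHEN tag = 'A' THEN 1 END) AS count_a,
       count(CASE WHEN tag = 'B' THEN 1 END) AS count_b
FROM items;
```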
Roberts, Jon [EMAIL PROTECTED] wrote:
Is it possible yet in PostgreSQL to hide the source code of functions from
users based on role membership? I would like to avoid converting the code
to C to secure the source code and I don't want it obfuscated either.
Some days ago I saw a
Giorgio Valoti [EMAIL PROTECTED] wrote:
Hi all,
maybe it's a naive question but I was wondering if there is any
difference, from a performance point of view, between a view and a
function performing the same task, something like:
CREATE VIEW foo AS …;
CREATE FUNCTION getfoo(int)
Tom Lane [EMAIL PROTECTED] wrote:
Giorgio Valoti [EMAIL PROTECTED] writes:
maybe it's a naive question but I was wondering if there is any
difference, from a performance point of view, between a view and a
function performing the same task,
Yes. Usually the view will win.
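Roughly why: a view is expanded inline into the calling query and planned as a whole, while a plpgsql function is a black box to the planner. A sketch with hypothetical objects:

```sql
CREATE VIEW foo AS SELECT * FROM big_table WHERE active;

-- The outer condition is merged into the view's query, so an index
-- on big_table.id can be used:
EXPLAIN SELECT * FROM foo WHERE id = 42;
```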
petchimuthu lingam [EMAIL PROTECTED] wrote:
Hi friends,
I am using postgresql 8.1, I have shared_buffers = 5, now I execute the
query, it takes 18 seconds to do a sequential scan; when I reduced it to 5000,
it takes 10 seconds. Why?
Wild guess: the second time the data are in the
Nikhil G. Daddikar [EMAIL PROTECTED] wrote:
Hello,
I have been searching on the net on how to tune and monitor performance
of my postgresql server but not met with success. A lot of information
is vague and more often than not the answer is "it depends". Can anyone
of you refer me a
Andrus [EMAIL PROTECTED] wrote:
Index is not used for
is null
condition:
create index makse_dokumnr_idx on makse(dokumnr);
explain select
sum( summa)
from MAKSE
where dokumnr is null
Aggregate (cost=131927.95..131927.96 rows=1 width=10)
->  Seq Scan on makse
Tom Lane [EMAIL PROTECTED] wrote:
Andrus [EMAIL PROTECTED] writes:
Index is not used for
is null
How to fix ?
Update to something newer than 8.1 (specifically, you'll need 8.3).
Right. For my example in the other mail:
test=*# create index idx_foo on foo(i);
CREATE INDEX
test=*#
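On releases before 8.3, where a plain btree could not serve an IS NULL condition, a partial index was a common workaround; a sketch against the table from the question (untested on 8.1):

```sql
-- Only the NULL rows end up in this (presumably small) index:
CREATE INDEX makse_dokumnr_null_idx
    ON makse (dokumnr) WHERE dokumnr IS NULL;

-- The partial-index predicate matches the query's WHERE clause:
SELECT sum(summa) FROM makse WHERE dokumnr IS NULL;
```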
Andrus [EMAIL PROTECTED] wrote:
There are indexes on rid(dokumnr) and dok(dokumnr) and dokumnr is int.
Instead of using single key index, 8.1.4 scans over whole rid table.
Sometimes idtelluued can contain more than single row so replacing join
with equality is not possible.
How to fix
Kynn Jones [EMAIL PROTECTED] wrote:
Hi. I have a longish collection of SQL statements stored in a file that I run
periodically via cron. Running this script takes a bit too long, even for a
cron job, and I would like to streamline it.
Is there a way to tell Postgres to print out, after
Jörg Kiegeland kiegel...@ikv.de wrote:
Hello,
I created a multicolumn index on the columns c_1,..,c_n .
If I do use only a true subset of these columns in a SQL query, is the
index still efficient?
Or is it better to create another multicolumn index defined on this subset?
Create
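The usual rule of thumb, sketched with hypothetical names: an index on (c_1, ..., c_n) helps queries that constrain a leading prefix of those columns.

```sql
CREATE INDEX idx_multi ON tab (c_1, c_2, c_3);

SELECT * FROM tab WHERE c_1 = 1 AND c_2 = 2;  -- leading prefix: efficient
SELECT * FROM tab WHERE c_3 = 3;              -- no leading column: generally not
```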
sathiya psql sathiya.p...@gmail.com wrote:
can somebody give me ideas on how to confirm what the issue is that makes
the query execution consume so much time?
Run
EXPLAIN ANALYSE on your query on both machines and compare the output, or
show the output here.
Andreas
--
Carlo Stonebanks stonec.regis...@sympatico.ca wrote:
Interestingly, the total index size is 148GB, twice that of the table,
which may be an indication of where the performance bottleneck is.
Maybe a sign of massive index bloat?
Andreas
--
Hi,
version: 8.4.2
I have a table called values:
test=*# \d values
      Table "public.values"
 Column |  Type   | Modifiers
--------+---------+-----------
 id     | integer |
 value  | real    |
Indexes:
    "idx_id" btree (id)
The table contains 10 random rows and is analysed.
And I have 2
Tom Lane t...@sss.pgh.pa.us wrote:
Andreas Kretschmer akretsch...@spamfence.net writes:
No question, this is a silly query, but the problem is the 2nd query: it
is obviously not possible for the planner to put the where-condition
into the subquery.
Well, yeah: it might change
Tom Lane t...@sss.pgh.pa.us wrote:
Andreas Kretschmer akretsch...@spamfence.net writes:
Tom Lane t...@sss.pgh.pa.us wrote:
I see no bug here. Your second query asks for a much more complicated
computation, it's not surprising it takes longer.
But sorry, I disagree. It is the same
Andreas Kretschmer akretsch...@spamfence.net wrote:
they are being done over all rows. In this particular example you
happen to get the same result, but that's just because avg(foo) over
partition by foo is a dumb example --- it will necessarily just yield
identically foo. In more
Gerhard Wiesinger li...@wiesinger.com wrote:
I know that the data model is key/value pairs but it worked well in 8.3.
I need this flexibility.
Any ideas?
If I understand the query correctly it's a pivot table, right?
If so, and if I were you, I would try to rewrite this query, to
Trenta sis trenta@gmail.com wrote:
Hi,
I have a Linux Server (Debian) with Postgres 8.3 and I have problems with a
massive update, about 40 updates/inserts.
Updates or Inserts?
If I execute about 10 it seems all ok, but when I execute 40, I have
the same problem with
Marc Mamin m.ma...@intershop.de wrote:
Hello,
UNION will remove all duplicates, so the result additionally needs to be
sorted.
Right; to avoid the sort and unique steps you can use UNION ALL
Anyway, for performance issues, you should always start investigation with
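The difference, spelled out with hypothetical tables:

```sql
-- UNION removes duplicates, which needs a sort or hash step:
SELECT id FROM t1 UNION SELECT id FROM t2;

-- UNION ALL just concatenates the two results: no sort, duplicates kept:
SELECT id FROM t1 UNION ALL SELECT id FROM t2;
```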
croolyc crooli...@gmail.com wrote:
Hi!
can you help me with performance optimization?
On my machine I have 8 databases with ca. 1-2GB
Performance optimization depends on the workload...
Is that a dedicated server, only for PostgreSQL? I assume so.
memory: 8GB (4*2GB ecc ram)
Okay, as a
Adarsh Sharma adarsh.sha...@orkash.com wrote:
Dear all,
Today I got to run a query internally from my application by more than
10 connections.
But the query performed very badly. The data sizes of the tables are:
pdc_uima=# select pg_size_pretty(pg_total_relation_size('clause2'));
Ogden li...@darkstatic.com wrote:
I have been wrestling with the configuration of the dedicated Postges 9.0.3
server at work and granted, there's more activity on the production server,
but
the same queries take twice as long on the beefier server than my mac at home.
I have pasted what I
Daniel Cristian Cruz danielcrist...@gmail.com wrote:
Hi all,
I'm trying to figure out some common slow queries running on the server, by
analyzing the slow queries log.
I found debug_print_parse, debug_print_rewritten, debug_print_plan, which are
too verbose and log all queries.
Mike Blackwell mike.blackw...@rrd.com wrote:
We have a set of large tables. One of the columns is a status indicator
(active / archived). The queries against these tables almost always include
the status, so partitioning on that seems to make sense from a logical
standpoint,
Steve Horn st...@stevehorn.cc wrote:
Execute the function: select * from geocode_carrier_route_by_geocode('xyz');
This query takes 500 milliseconds to run. My question of course is why?
Wild guess:
The planner doesn't know the actual value of the input-parameter, so the
planner doesn't use
Tom Lane t...@sss.pgh.pa.us wrote:
Andreas Kretschmer akretsch...@spamfence.net writes:
You can check the plan with the auto_explain - Extension, and you can
force the planner to create a plan based on the actual input-value by
using dynamic SQL (EXECUTE 'your query string' inside
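A sketch of the dynamic-SQL variant (8.4+ syntax; table and column names are invented): EXECUTE plans the statement with the actual value, so the planner can use the column statistics for it.

```sql
CREATE FUNCTION get_rows(p_val text) RETURNS SETOF some_table AS $$
BEGIN
    -- re-planned per call, with p_val visible to the planner:
    RETURN QUERY EXECUTE
        'SELECT * FROM some_table WHERE col = $1' USING p_val;
END;
$$ LANGUAGE plpgsql;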
Aryan Ariel Rodriguez Chalas wimo...@yahoo.com wrote:
Hello,
I'm working with an application that connects to a remote server database
using libpq library over internet, but making a simple query is really slow
even though I've done PostgreSQL tuning and the table is indexed, so I want
Craig James cja...@emolecules.com wrote:
On 11/21/2012 08:05 AM, Heikki Linnakangas wrote:
Rather than telling the planner what to do or not to do, I'd much rather
have hints that give the planner more information about the tables and
quals involved in the query. A typical
Rogerio Pereira rogerio.pere...@riosoft.com.br wrote:
Hi,
I need help about the subject "query cache" in Postgres.
How is it possible to have SQL statements cached in Postgres?
I did some tests and I cannot get it working, even with improved tuning
parameters in postgresql.conf.
No, there isn't
Hengky Liwandouw hengkyliwand...@gmail.com wrote:
But the problem is : when i change the where clause to :
where jualid is not null or returjualid is not null
and extract(year from tanggal)='2013')
Try to create this index:
create index xxx on public.tbltransaksi((extract(year from
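The suggested index is cut off above; following the query's WHERE clause it was presumably along these lines (the completion is my guess):

```sql
CREATE INDEX xxx ON public.tbltransaksi ((extract(year FROM tanggal)));
-- matches predicates of the form: extract(year FROM tanggal) = 2013
```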
Torsten Förtsch torsten.foert...@gmx.net wrote:
I'd try 2 things:
1) set work_mem to ~100MB. You don't have to do that globally in
postgresql.conf. You can set it for the current session only.
set work_mem to '100MB';
Then run your query.
2) change the common table expression to a
Hengky Liwandouw hengkyliwand...@gmail.com wrote:
For Mat: what command can I use to show how big the tables are, in MB?
http://andreas.scherbaum.la/blog/archives/282-table-size,-database-size.html
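The linked post may differ; a commonly used query for per-table sizes:

```sql
SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 20;
```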
Andreas
--
Huang, Suya suya.hu...@au.experian.com wrote:
Hi,
I've got a table with a GIN index on an integer[] column. While doing a query
with filter criteria on that column, it's not using the index at all, it still
does a full table scan. Wondering why?
Try to add an index on the
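For context (hypothetical names): a GIN index on integer[] is used for the array operators such as @> and &&, not for plain equality or ANY(...) forms.

```sql
CREATE INDEX idx_tags ON tbl USING gin (tags);

SELECT * FROM tbl WHERE tags @> ARRAY[42];   -- can use the GIN index
SELECT * FROM tbl WHERE 42 = ANY (tags);     -- typically cannot
```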
Tom Lane t...@sss.pgh.pa.us wrote:
What PG version is this? What non-default planner parameter settings are
you using? (Don't say none, because I can see you've got enable_seqscan
turned off.)
LOL, right ;-)
Andreas
--
Robert Kaye r...@musicbrainz.org wrote:
However, I am glad to report that our problems are fixed and that our server
is
back to humming along nicely.
What we changed:
1. As it was pointed out here, max_connections of 500 was in fact insanely
high, especially in light of using
Robert Kaye r...@musicbrainz.org wrote:
Hi!
We at MusicBrainz have been having trouble with our Postgres install for the
past few days. I’ve collected all the relevant information here:
http://blog.musicbrainz.org/2015/03/15/postgres-troubles/
If anyone could provide tips,
> Rick Otten wrote on 11 December 2015 at 21:40:
> I do not know why if I blast a new index creation on the 20 or so children
> all at once some of them fail, but then if I go back and do a few at a time
> they all work. It has happened to me 3
> Rick Otten wrote on 11 December 2015 at 23:09:
> The query performance hit for sequence scanning isn't all that terrible,
> but I'd rather understand and get rid of the issue if I can, now, before I
> run into it again in a situation where it is
Matteo Grolla wrote:
>
> ---Questions
>
> 1) Can you explain me the big difference between the result in A for table
> alf_node_properties: 17GB and the result in B: ~6GB ?
>
> 2) Can you explain me the difference between the result in B: ~6GB and
Gunnar Nick Bluth wrote:
> On 15.12.2015 at 09:05, Mathieu VINCENT wrote:
> > Hello,
> >
> > No one to help me understand this bad row estimation?
>
> Well,
>
> on a rather beefy machine, I'm getting quite a different plan:
>
> Rick Otten wrote on 12 December 2015 at 01:55:
> Why does it index scan when I use where, but not when I do a join?
difficult to say/guess because of anonymized names and not knowing the real
query. This one? http://explain.depesz.com/s/1es ?
All
Rick Otten wrote:
> I'm using PostgreSQL 9.5 Beta 2.
>
> I am working with a partitioned table set.
>
> The first thing I noticed, when creating indexes on the 20 or so partitions,
> was that if I create them too fast they don't all succeed. I have to do a few
> at a
> Tom McLoughlin wrote on 6 January 2016 at 09:08:
> As you can see below it's a big query, and I didn't want to overwhelm
> everyone with the schema, so let me know what bits you might need to help!
>
> Any help improving the performance will be
Tom McLoughlin wrote:
> Thank you very much for your help.
>
> It's difficult for me to run explain analyse for the query given because it
> takes so long. However, the query below has a similar structure but has less
> data to process.
Seems okay, but it's better to
…dedicated tablespace for indexes, is this an SSD? You can try
to reduce random_page_cost from the default 4 to maybe 2 (depends on the
hardware). This would lower the estimated cost of the index scan and
favour the index scan.
Regards, Andreas Kretschmer
--
Andreas Kretschmer
http://www.2ndQuadrant.com
…if I were you, I would switch to PG 9.5, or wait for 9.6 and parallel
execution of aggregates.
Regards, Andreas Kretschmer
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
On 23 November 2016 23:15:25 MEZ, Carmen Mardiros wrote:
>
>various combinations)? How can I investigate what's limiting postgres
>from
>doing so?
Why fsync=off?
Please run the queries with EXPLAIN ANALYSE and show us the output.
--
This message was sent from my
u ...
you can use the same trigger function for more than one table, but a
TRIGGER is per table: you have to define the trigger for every table.
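A sketch (all names invented): one trigger function, one CREATE TRIGGER per table.

```sql
CREATE FUNCTION stamp() RETURNS trigger AS $$
BEGIN
    NEW.updated_at := now();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- the same function, attached to each table separately:
CREATE TRIGGER t1_stamp BEFORE UPDATE ON t1
    FOR EACH ROW EXECUTE PROCEDURE stamp();
CREATE TRIGGER t2_stamp BEFORE UPDATE ON t2
    FOR EACH ROW EXECUTE PROCEDURE stamp();
```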
Regards, Andreas Kretschmer
imeout
you can increase authentication_timeout, default is 1 minute. How many
clients do you have, and do you really need 1000 connections? If yes,
please consider a connection pooler like pgbouncer.
Regards, Andreas Kretschmer
99% idle, the only thing is that there was a
> pg_dump running on one of the DB, can this cause this error, also i have
no.
Regards, Andreas Kretschmer
d that the query returns 100x
> faster.
please show us explain analyse with/without index on master.
Regards, Andreas Kretschmer
On 15.04.2017 at 10:11, Hans Braxmeier wrote:
Hello,
restoring a dump with Ubuntu 16.04 using the default postgresql.conf
configuration (autovacuum on) takes several hours, with Ubuntu 14.04
only 20 minutes. Turning autovacuum off in Ubuntu 16.04 makes
restoring much faster, with 14.04
On 10.07.2017 at 16:03, Charles Nadeau wrote:
random_page_cost | 22
why such a high value for random_page_cost?
Regards, Andreas
--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com
On 30 June 2017 20:14:33 GMT+01:00, Daviramos Roussenq Fortunato
wrote:
>Hi List,
>
>I have a server where a simple SQL query is taking a long time to return
>the results. The server settings are as follows:
>
>Debian GNU/Linux 7 (wheezy)
>CPU: Intel(R) Xeon(R) CPU
pages FROM pg_class WHERE relname
>=
>'mytable';
>pg_relation_filepath=base/24576/205166
>relpages=30449
>--16ms
>
On 2 July 2017 02:26:01 GMT+01:00, Daviramos Roussenq Fortunato
wrote:
>I am using pgAdmin for SQL test.
>
>
Are you using real hardware or is it virtual? Does the query take the same time
without EXPLAIN ANALYSE? Can you try it with psql (THE command line interface)?
On 26.05.2017 at 14:31, Dinesh Chandra 12108 wrote:
Hi Thomas,
Thanks for your reply.
Yes, the query is absolutely same which I posted.
Please suggest if something need to change in query.
As Per your comment...
The query you posted includes these two join conditions:
On 25.05.2017 at 07:13, Daulat Ram wrote:
Hi team,
We are getting very slow response of this query.
SELECT date_trunc('day', insert_time),workflow.project.project_name,
workflow.tool_performance.project_id,workflow.tool_performance.user_id,workflow.tool_performance.step_id,count(*),
Hi,
there is a similar question from dinesh.chan...@cyient.com, but it is
not exactly the same query.
[PERFORM] Query is running very slow.., some hours ago.
Regards, Andreas
On 07.06.2017 at 13:33, Dinesh Chandra 12108 wrote:
Dear Expert,
Is there any way to rollback table data in PostgreSQL?
if you are looking for something similar to flashback in Oracle, the
answer is no.
Regards, Andreas
On 09.06.2017 at 15:04, Frits Jalvingh wrote:
Hi all,
I am trying to improve the runtime of a big data warehouse
application. One significant bottleneck found was insert performance,
so I am investigating ways of getting Postgresql to insert data faster.
* use COPY instead of Insert, it
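The COPY point, sketched (paths and table names invented):

```sql
-- server-side bulk load, much faster than row-by-row INSERT:
COPY target_table (a, b, c) FROM '/path/to/data.csv' WITH (FORMAT csv);

-- from the client side, psql's \copy reads a local file instead:
-- \copy target_table FROM 'data.csv' CSV
```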
On 15 June 2017 16:53:44 MESZ, "l...@laurent-hasson.com" wrote:
>Hello all,
>
>I have a query with many joins, something like:
>
>Select c1, c2, c3, sum(c5)
> From V1
> Join V2 on ...
> Left join V3 on ...
> Left join T4 on ...
> Join T5 on
On 01.10.2017 at 14:41, Mariel Cherkassky wrote:
Hi,
I need to use the max function in my query. I had very bad performance
when I used the max :
SELECT Ma.User_Id,
COUNT(*) COUNT
FROM Manuim Ma
WHERE Ma.Bb_Open_Date =