On Mon, May 27, 2013 at 9:02 AM, John Mudd johnbm...@gmail.com wrote:
Postgres 9.1.2 on Ubuntu 12.04
Any reason why a select by primary key would be slower than a select that
includes an ORDER BY? I was really hoping using the primary key would give
me a boost.
I stopped the server and
-Original Message-
From: pgsql-performance-ow...@postgresql.org [mailto:pgsql-performance-
ow...@postgresql.org] On Behalf Of Merlin Moncure
Sent: Thursday, May 30, 2013 11:14 AM
To: John Mudd
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Slow SELECT by primary key
I flushed the caches in an attempt to get meaningful results. I've seen
complaints about previous posts that don't include clearing the caches.
I agree this tends to be artificial in another direction. I will strive to
come up with a more realistic test environment next time. Maybe performing
many
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Slow SELECT by primary key? Postgres 9.1.2
John,
And can you please include BUFFERS in the EXPLAIN ANALYZE?
Regards,
Roman Konoval
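(For readers following along: on 9.1 the buffer statistics Roman is asking for can be requested with the option syntax, using the table and column names from this thread.)

```sql
-- BUFFERS (available since 9.0) reports shared-buffer hits vs. actual
-- reads, which distinguishes a cold-cache plan from a slow plan:
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM test_select WHERE key1 = 50;
```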
Postgres 9.1.2 on Ubuntu 12.04
Any reason why a select by primary key would be slower than a select that
includes an ORDER BY? I was really hoping using the primary key would give
me a boost.
I stopped the server and cleared the O/S cache using "sync; echo 3 >
/proc/sys/vm/drop_caches" between the
On May 27, 2013, at 6:02 PM, John Mudd johnbm...@gmail.com wrote:
Postgres 9.1.2 on Ubuntu 12.04
Any reason why a select by primary key would be slower than a select that
includes an ORDER BY? I was really hoping using the primary key would give me
a boost.
You created my_key after
Thanks, that's easy enough to test. Didn't seem to help though.
test=# REINDEX index test_select_pkey;
REINDEX
test=# VACUUM ANALYZE test_select ;
VACUUM
(stopped postgres; reset O/S cache; started postgres)
test=# explain analyze SELECT * FROM test_select WHERE key1 = 50 ORDER
BY key1,
On May 27, 2013, at 6:35 PM, John Mudd johnbm...@gmail.com wrote:
Thanks, that's easy enough to test. Didn't seem to help though.
Ok. And if you CLUSTER tables USING PK?
Thanks again.
Well, I have two problems with using the CLUSTER option. It's only
temporary: depending on how much free space is reserved per page, any
updates eventually require re-running CLUSTER. And my primary concern is
that it arbitrarily gives an unfair advantage to the primary key SELECT.
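(For reference, the one-off reordering under discussion looks like this, with the index name taken from earlier in the thread; CLUSTER rewrites the table under an exclusive lock, and the physical ordering decays as rows are updated.)

```sql
CLUSTER test_select USING test_select_pkey;
-- refresh planner statistics (including the correlation estimate):
ANALYZE test_select;
```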
Hi,
I am using PostgreSQL 9.0.1 with PostGIS 1.5 on FreeBSD 7.0. I have at
least one table on which SELECTs turn terribly slow from time to time.
This has happened at least three times, also on version 8.4.
The table has only ~1400 rows. A count(*) takes more than 70 seconds.
Other tables are fast as usual.
Martin Boese boese...@gmx.de wrote:
The table has only ~1400 rows. A count(*) takes more than 70
seconds. Other tables are fast as usual.
When this happens I can also see my system's disks are suffering.
'systat -vm' shows 100% disk load at ~4MB/sec data rates.
A simple VACUUM does
Hi,
=== Problem ===
I have a db-table data_measurand with about 60,000,000 (60 million)
rows, and the following query takes about 20-30 seconds (with psql):
mydb=# select count(*) from data_measurand;
count
--
60846187
(1 row)
=== Question ===
- What can i do to improve the
On Thu, 14 Jan 2010, tom wrote:
I have a db-table data_measurand with about 60,000,000 (60 million)
rows, and the following query takes about 20-30 seconds (with psql):
mydb=# select count(*) from data_measurand;
count
--
60846187
(1 row)
Sounds pretty reasonable to me. Looking at your
In response to tom :
Hi,
=== Problem ===
I have a db-table data_measurand with about 60,000,000 (60 million)
rows, and the following query takes about 20-30 seconds (with psql):
mydb=# select count(*) from data_measurand;
count
--
60846187
(1 row)
=== Question ===
Matthew Wakeling matt...@flymine.org wrote:
This is an FAQ.
I just added it to the wiki FAQ page:
http://wiki.postgresql.org/wiki/FAQ#Why_is_.22SELECT_count.28.2A.29_FROM_bigtable.3B.22_slow.3F
-Kevin
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
Kevin Grittner wrote:
Matthew Wakeling matt...@flymine.org wrote:
This is an FAQ.
I just added it to the wiki FAQ page:
http://wiki.postgresql.org/wiki/FAQ#Why_is_.22SELECT_count.28.2A.29_FROM_bigtable.3B.22_slow.3F
Maybe you could add a short note why an estimation like from the
pg_class table is usually enough.
Ivan Voras ivo...@freebsd.org wrote:
Maybe you could add a short note why an estimation like from the
pg_class table is usually enough.
OK. Will do.
-Kevin
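(The estimate mentioned above can be read straight from the catalog; reltuples is maintained by VACUUM/ANALYZE/autovacuum, so it is approximate but essentially free, using the table name from this thread.)

```sql
SELECT reltuples::bigint AS estimated_rows
FROM pg_class
WHERE relname = 'data_measurand';
```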
Kevin Grittner wrote:
Matthew Wakeling matt...@flymine.org wrote:
This is an FAQ.
I just added it to the wiki FAQ page:
http://wiki.postgresql.org/wiki/FAQ#Why_is_.22SELECT_count.28.2A.29_FROM_bigtable.3B.22_slow.3F
The content was already there, just not linked into the main FAQ yet:
http://wiki.postgresql.org/wiki/Slow_Counting
Greg Smith g...@2ndquadrant.com wrote:
The content was already there, just not linked into the main FAQ
yet:
http://wiki.postgresql.org/wiki/Slow_Counting
For a question asked this frequently, it should probably be in the
FAQ. I'll add a link from there to the more thorough write-up.
Greg Smith g...@2ndquadrant.com wrote:
There's a whole list of FAQs that are documented on the wiki but
not in the main FAQ yet leftover from before the main FAQ was
hosted there. You can see them all at
http://wiki.postgresql.org/wiki/Frequently_Asked_Questions
I just haven't had time
astro77 astro_co...@yahoo.com writes:
Kevin Grittner wrote:
I would try to minimize how many XML values it had to read, parse, and
search. The best approach that comes to mind would be to use tsearch2
techniques (with a GIN or GiST index on the tsvector) to identify
which rows contain
CREATE INDEX CONCURRENTLY idx_serializedxml
ON object (serialized_object ASC NULLS LAST);
yields the error:
ERROR: data type xml has no default operator class for access method btree
The same error occurs when I try to use the other access methods as well.
On Thu, Sep 3, 2009 at 4:06 PM,
Thanks Kevin. I thought about using tsearch2 but I need to be able to select
exact values on other numerical queries and cannot use contains queries.
It's got to be fast so I cannot have lots of records returned and have to do
secondary processing on the xml for the records which contain the
As a follow-up, when I try to create the index like this...
CREATE INDEX CONCURRENTLY idx_object_nodeid2
ON object
USING btree (
    xpath('/a:root/a:Identification/b:ObjectId/text()', serialized_object,
          ARRAY[ARRAY['a',
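(A complete version of that statement would have to index a scalar expression rather than the bare xml value; one possible shape is sketched below. The namespace URIs are purely hypothetical stand-ins for the truncated array.)

```sql
-- xpath() returns xml[]; take the first element and cast to text so that
-- btree has a comparable, indexable type. Namespace URIs are made up.
CREATE INDEX CONCURRENTLY idx_object_nodeid2
ON object (
    (((xpath('/a:root/a:Identification/b:ObjectId/text()',
             serialized_object,
             ARRAY[ARRAY['a', 'http://example.com/ns/a'],
                   ARRAY['b', 'http://example.com/ns/b']]))[1])::text)
);
```

Note that queries must repeat the same expression verbatim for the planner to use such an index.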
astro77 wrote:
Thanks Kevin. I thought about using tsearch2 but I need to be able to select
exact values on other numerical queries and cannot use contains queries.
You might be able to make use of a custom parser for tsearch2 that creates
something like a single word for xml fragments like
I've got a table with 2.5 million records, set up with an XML field that I
would like to search on. The XML values are serialized objects from my
application which are too complex to break out into separate tables. I'm
trying to run a query similar to this:
SELECT serialized_object as outVal
On Wed, Sep 2, 2009 at 11:04 AM, astro77astro_co...@yahoo.com wrote:
I've got a table set up with an XML field that I would like to search on with
2.5 million records. The xml are serialized objects from my application
which are too complex to break out into separate tables. I'm trying to run
I was receiving an error that an XML field does not support the various
indexes available in postgresql. Is there an example of how to do this
properly?
On Thu, Sep 3, 2009 at 4:06 PM, astro77astro_co...@yahoo.com wrote:
I was receiving an error that an XML field does not support the various
indexes available in postgresql.
Please post what happens when you try.
Is there an example of how to do this
properly?
Not sure.
...Robert
2009/5/7 David Brain dbr...@bandwidth.com:
Hi,
Hi,
Some answers in-line:
Has there been a performance *change*, or are you just concerned about a
query which doesn't seem to use enough disc bandwidth?
Performance has degraded noticeably over the past few days.
Certainly random access
Hi,
Some context, we have a _lot_ of data, 1TB, mostly in 1 'table' -
the 'datatable' in the example below although in order to improve
performance this table is partitioned (by date range) into a number of
partition tables. Each partition contains up to 20GB of data (tens of
millions of rows),
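(As a sketch of the setup being described: in the 8.x era, range partitioning was done with inheritance plus CHECK constraints. Table and column names below follow the thread; the date range is illustrative.)

```sql
CREATE TABLE datatable_200905 (
    CHECK (datapointdate >= DATE '2009-05-01'
       AND datapointdate <  DATE '2009-06-01')
) INHERITS (datatable);

-- The planner only skips non-matching partitions when
-- constraint exclusion is enabled:
SET constraint_exclusion = on;
```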
On Thu, 7 May 2009, David Brain wrote:
This has been working reasonably well, however in the last few days
I've been seeing extremely slow performance on what are essentially
fairly simple 'index hitting' selects on this data. From the host
side I see that the postgres query process is mostly
Hi,
Interesting, for one index on one partition:
idx_scan: 329
idx_tup_fetch: 8905730
So maybe a reindex would help?
David.
Nested Loop Left Join  (cost=0.00..6462463.96 rows=1894 width=110)
  ->  Append  (cost=0.00..6453365.66 rows=1894 width=118)
        ->  Seq Scan on datatable sum  (cost=0.00..10.75 rows=1 width=118)
              Filter: ((datapointdate >= '2009-04-01 00:00:00'::timestamp without time zone)
On Thu, 7 May 2009, David Brain wrote:
Certainly random access like this index scan can be extremely slow. 2-4 MB/s
is quite reasonable if you're fetching one 8kB block per disc seek - no more
than 200 per second.
We have read ahead set pretty aggressively high as the SAN seems to
'like' this,
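(Matthew's figure can be sanity-checked with a quick back-of-envelope calculation; the 200 seeks/second rate is an assumption for a single spindle, not a measured value from this thread.)

```python
# One 8 kB heap block fetched per random seek: throughput is bounded
# by the seek rate, not by the disk's sequential bandwidth.
seeks_per_second = 200        # rough figure for one 7200 rpm disk
block_size = 8 * 1024         # PostgreSQL's default block size, in bytes

throughput_mb_s = seeks_per_second * block_size / (1024 * 1024)
print(f"{throughput_mb_s:.2f} MB/s")  # roughly 1.6 MB/s per spindle
```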
Hi Markus,
Thank you very much for the information.
I was able to make it fast by correcting the indices; I created indexes on
the frequently filtered fields.
It now runs at 0.05 seconds on average, much faster than the 0.86 seconds before.
I will also upgrade to 8.1.5.
Once again, thank you very much. It helped me a lot.
Hi!
I'm new to PostgreSQL.
My current project uses PostgreSQL 7.3.4.
The problem is like this:
I have a table with 94 fields, and a select with only one result set in only one client consumes about 0.86 seconds.
The client executes three 'select' statements to perform the task, which consumes 2.58 seconds.
* AMIR FRANCO D. JOVEN [EMAIL PROTECTED] [061115 12:44]:
Hi!
Im new to PostgreSQL.
My current project uses PostgreSQL 7.3.4.
Ancient. Upgrade it, especially if it's a new database.
the problem is like this:
I have a table with 94 fields and a select with only one resultset in only
AMIR FRANCO D. JOVEN wrote:
Hi!
Im new to PostgreSQL.
My current project uses PostgreSQL 7.3.4.
Upgrading your version of PostgreSQL to 8.1 will give you significant
benefits to performance.
the problem is like this:
I have a table with 94 fields and a select with only one resultset in
Williamson
DBA
GlobeXplorer LLC
-Original Message-
From: [EMAIL PROTECTED] on behalf of Russell Smith
Sent: Wed 11/15/2006 5:31 AM
To: AMIR FRANCO D. JOVEN
Cc: pgsql-performance@postgresql.org
Subject:Re: [PERFORM] Slow SELECT on three or more clients
AMIR FRANCO D
Hi, Amir,
AMIR FRANCO D. JOVEN wrote:
My current project uses PostgreSQL 7.3.4.
By all means, please upgrade.
The newest 7.3 series version is 7.3.16, which fixes lots of critical
bugs, and can be used as a drop-in replacement for 7.3.4 (see Release
Notes at
Having trouble with one table (see time to count records below!).
Fairly new to postgres so any help much appreciated.
It only contains 9,106 records - as you can see from:
select count(id) from project
count
9106
1 row(s)
Total
Paul Langard [EMAIL PROTECTED] writes:
Having trouble with one table (see time to count records below!).
Fairly new to postgres so any help much appreciated.
It only contains 9,106 records - as you can see from:
select count(id) from project
count
9106
1 row(s)
Total runtime:
Paul,
Paul Langard wrote:
Having trouble with one table (see time to count records below!).
Fairly new to postgres so any help much appreciated.
It only contains 9,106 records - as you can see from:
select count(id) from project
count
9106
1 row(s)
Total runtime: 45,778.813 ms
snip
... the
Paul Langard [EMAIL PROTECTED] writes:
select count(id) from project
count
9106
1 row(s)
Total runtime: 45,778.813 ms
Yipes. The only explanation I can think of is tremendous table bloat.
What do you get from vacuum verbose project --- in particular, how
many pages in the table?
... the
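(Tom's suggestion can be run as below; a relpages count far beyond what ~9,100 rows should occupy would confirm bloat. The catalog query is an equivalent way to see the page count the planner uses.)

```sql
VACUUM VERBOSE project;   -- reports pages and dead-row counts per table

-- or inspect the planner's view directly:
SELECT relpages, reltuples FROM pg_class WHERE relname = 'project';
```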
Does that mean reindex is not needed
for PG version 7.4?
In what kinds of situations under PG 7.4
is reindex worthwhile?
Thanks,
Here is doc from 7.3:
PostgreSQL is unable to reuse B-tree index pages in
certain cases. The problem is that if indexed rows are
deleted, those index pages can only
Vivek Khera wrote:
JB == Josh Berkus [EMAIL PROTECTED] writes:
JB Actually, what OS's can't use all idle ram for kernel cache? I
JB should note that in my performance docs
FreeBSD. Limited by the value of sysctl vfs.hibufspace from what I
understand. This value is set at boot based on
Medora Schauer [EMAIL PROTECTED] writes:
Merge Join  (cost=0.00..287726.10 rows=100221 width=58)
            (actual time=61.60..5975.63 rows=100425 loops=1)
  Merge Cond: (("outer".shot_line_num = "inner".shot_line_num) AND
               ("outer".shotpoint = "inner".shotpoint))
  ->  Index Scan using hsot_record_idx on
JB == Josh Berkus [EMAIL PROTECTED] writes:
JB Medora,
Increasing effective_cache_size to 10000 did it.
JB That would be 78MB RAM. If you have more than that available, you can
JB increase it further. Ideally, it should be about 2/3 to 3/4 of available
JB RAM.
Assuming your OS will use
I'm using pg 7.3.4 to do a select involving a join on 2 tables.
The query is taking 15 secs which seems extreme to me considering
the indices that exist on the two tables. EXPLAIN ANALYZE shows
that the indices aren't being used. I've done VACUUM ANALYZE on the
db with no change in
Medora,
So why did were the indices not used before when they yield a better plan?
Your .conf settings, most likely. I'd lower your random_page_cost and raise
your effective_cache_size.
--
-Josh Berkus
Aglio Database Solutions
San Francisco
Medora,
Increasing effective_cache_size to 10000 did it.
That would be 78MB RAM. If you have more than that available, you can
increase it further. Ideally, it should be about 2/3 to 3/4 of available
RAM.
The query now
takes 4 secs. I left random_page_cost at the default value of 4.
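(In the 7.3/7.4-era configuration format, effective_cache_size is expressed in 8 kB disk pages, so the values discussed in this thread would look roughly like the following; the random_page_cost value reflects Josh's earlier advice to lower it from the default, and the exact number is illustrative.)

```sql
-- postgresql.conf: effective_cache_size is in 8 kB disk pages.
-- 10000 pages * 8 kB/page = ~78 MB
-- effective_cache_size = 10000
-- random_page_cost = 2      -- lowered from the default of 4
```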