>> To: John Mudd
>> Cc: pgsql-performance@postgresql.org
>> Subject: Re: [PERFORM] Slow SELECT by primary key? Postgres 9.1.2
>>
>> On Mon, May 27, 2013 at 9:02 AM, John Mudd wrote:
>> > Postgres 9.1.2 on Ubuntu 12.04
>> >
>> > Any reason why a select by prima
I flushed the caches in an attempt to get meaningful results. I've seen
complaints about previous posts that don't include clearing the caches.
I agree this tends to be artificial in the other direction. I will strive to
come up with a more realistic test environment next time. Maybe performing
many ra
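For reference, a typical way to reset both PostgreSQL's buffers and the OS page cache on Linux before such a test is the sketch below (the service name is an assumption for Ubuntu; drop_caches is Linux-specific and requires root):

```shell
# Stop PostgreSQL so its shared_buffers are emptied as well
sudo service postgresql stop

# Flush dirty pages to disk, then drop the kernel page cache,
# dentries and inodes (Linux-only; needs root)
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches

sudo service postgresql start
```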
> -Original Message-
> From: pgsql-performance-ow...@postgresql.org [mailto:pgsql-performance-
> ow...@postgresql.org] On Behalf Of Merlin Moncure
> Sent: Thursday, May 30, 2013 11:14 AM
> To: John Mudd
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] S
On Mon, May 27, 2013 at 9:02 AM, John Mudd wrote:
> Postgres 9.1.2 on Ubuntu 12.04
>
> Any reason why a select by primary key would be slower than a select that
> includes an ORDER BY? I was really hoping using the primary key would give
> me a boost.
>
> I stopped the server and cleared the O/S c
On Mon, May 27, 2013 at 11:02 AM, John Mudd wrote:
> Postgres 9.1.2 on Ubuntu 12.04
>
> Any reason why a select by primary key would be slower than a select that
> includes an ORDER BY? I was really hoping using the primary key would give
> me a boost.
>
> I stopped the server and cleared the O/S
John,
And can you please include BUFFERS with ANALYZE?
Regards,
Roman Konoval
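What Roman is asking for looks like this (the table name comes from earlier in the thread; the predicate is illustrative):

```sql
-- BUFFERS adds per-node counts of shared blocks hit (found in cache)
-- vs. read (fetched from the OS/disk), which shows where the I/O goes.
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM test_select WHERE key1 >= 50 ORDER BY key1 LIMIT 1;
```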
On Tue, May 28, 2013 at 9:48 AM, Evgeniy Shishkin wrote:
>
> On 28.05.2013, at 2:17, John Mudd wrote:
>
> Thanks again.
>
> Well, I have two problems with using the CLUSTER option. It's only
> temporary since an
On 28.05.2013, at 2:17, John Mudd wrote:
> Thanks again.
>
> Well, I have two problems with using the CLUSTER option. It's only temporary
> since any updates, depending how much free space is reserved per page,
> requires re-running the CLUSTER. And my primary concern is that it
> arbitra
Thanks again.
Well, I have two problems with using the CLUSTER option. It's only
temporary, since any updates, depending on how much free space is reserved
per page, require re-running CLUSTER. And my primary concern is that it
arbitrarily gives an unfair advantage to the primary key SELECT. Still
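For reference, the operation under discussion is a one-shot physical reorder (table and index names taken from the thread):

```sql
-- Rewrites test_select in test_select_pkey order, taking an
-- ACCESS EXCLUSIVE lock for the duration. The ordering degrades
-- as rows are updated, so the command must be re-run periodically.
CLUSTER test_select USING test_select_pkey;
ANALYZE test_select;  -- refresh statistics after the rewrite
```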
On May 27, 2013, at 6:35 PM, John Mudd wrote:
> Thanks, that's easy enough to test. Didn't seem to help though.
>
Ok. And if you CLUSTER tables USING PK?
>
> test=# REINDEX index test_select_pkey;
> REINDEX
> test=# VACUUM ANALYZE test_select ;
> VACUUM
>
>
> (stopped postgres; reset O/S c
Thanks, that's easy enough to test. Didn't seem to help though.
test=# REINDEX index test_select_pkey;
REINDEX
test=# VACUUM ANALYZE test_select ;
VACUUM
(stopped postgres; reset O/S cache; started postgres)
test=# explain analyze SELECT * FROM test_select WHERE key1 >= 50 ORDER
BY key1, k
On May 27, 2013, at 6:02 PM, John Mudd wrote:
> Postgres 9.1.2 on Ubuntu 12.04
>
> Any reason why a select by primary key would be slower than a select that
> includes an ORDER BY? I was really hoping using the primary key would give me
> a boost.
>
You created my_key after data loading, a
Martin Boese wrote:
> The table has only ~1400 rows. A count(*) takes more than 70
> seconds. Other tables are fast as usual.
>
> When this happens I can also see my system's disks are suffering.
> 'systat -vm' shows 100% disk load at ~4MB/sec data rates.
>
> A simple VACUUM does *not* fix it
Kevin Grittner wrote:
> Greg Smith wrote:
>> There's a whole list of FAQs that are documented on the wiki but
>> not in the main FAQ yet, leftover from before the main FAQ was
>> hosted there. You can see them all at
>> http://wiki.postgresql.org/wiki/Frequently_Asked_Questions
>>
>> I just haven't had time
Greg Smith wrote:
> There's a whole list of FAQs that are documented on the wiki but
> not in the main FAQ yet leftover from before the main FAQ was
> hosted there. You can see them all at
> http://wiki.postgresql.org/wiki/Frequently_Asked_Questions
>
> I just haven't had time to merge those a
Kevin Grittner wrote:
> Greg Smith wrote:
>> The content was already there, just not linked into the main FAQ
>> yet:
>> http://wiki.postgresql.org/wiki/Slow_Counting
>
> For a question asked this frequently, it should probably be in the
> FAQ. I'll add a link from there to the more thorough writ
Greg Smith wrote:
> The content was already there, just not linked into the main FAQ
> yet:
> http://wiki.postgresql.org/wiki/Slow_Counting
For a question asked this frequently, it should probably be in the
FAQ. I'll add a link from there to the more thorough write-up.
-Kevin
--
Sent via
Kevin Grittner wrote:
> Matthew Wakeling wrote:
>> This is an FAQ.
>
> I just added it to the wiki FAQ page:
> http://wiki.postgresql.org/wiki/FAQ#Why_is_.22SELECT_count.28.2A.29_FROM_bigtable.3B.22_slow.3F

The content was already there, just not linked into the main FAQ yet:
http://wiki
Ivan Voras wrote:
> Maybe you could add a short note why an estimation like from the
> pg_class table is usually enough.
OK. Will do.
-Kevin
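The estimate Ivan refers to reads the planner's statistics instead of scanning the table; a minimal sketch ('bigtable' is a placeholder name):

```sql
-- reltuples is maintained by VACUUM and ANALYZE. It is approximate,
-- but it returns instantly, with no scan of the table.
SELECT reltuples::bigint AS estimated_rows
FROM pg_class
WHERE relname = 'bigtable';
```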
Kevin Grittner wrote:
> Matthew Wakeling wrote:
>> This is an FAQ.
>
> I just added it to the wiki FAQ page:
> http://wiki.postgresql.org/wiki/FAQ#Why_is_.22SELECT_count.28.2A.29_FROM_bigtable.3B.22_slow.3F

Maybe you could add a short note why an estimation like from the
pg_class table is usually
Matthew Wakeling wrote:
> This is an FAQ.
I just added it to the wiki FAQ page:
http://wiki.postgresql.org/wiki/FAQ#Why_is_.22SELECT_count.28.2A.29_FROM_bigtable.3B.22_slow.3F
-Kevin
In response to tom :
> Hi,
>
> === Problem ===
>
> I have a db-table "data_measurand" with about 60,000,000 (60 million)
> rows and the following query takes about 20-30 seconds (with psql):
>
> mydb=# select count(*) from data_measurand;
>   count
> ----------
>  60846187
> (1 row)
>
>
> ==
On Thu, 14 Jan 2010, tom wrote:
> I have a db-table "data_measurand" with about 60,000,000 (60 million)
> rows and the following query takes about 20-30 seconds (with psql):
>
> mydb=# select count(*) from data_measurand;
>   count
> ----------
>  60846187
> (1 row)

Sounds pretty reasonable to me. Looking at you
astro77 writes:
> Kevin Grittner wrote:
>> I would try to minimize how many XML values it had to read, parse, and
>> search. The best approach that comes to mind would be to use tsearch2
>> techniques (with a GIN or GiST index on the tsvector) to identify
>> which rows contain 'fdc3da1f-060f-4c34
astro77 wrote:
> Thanks Kevin. I thought about using tsearch2 but I need to be able to select
> exact values on other numerical queries and cannot use "contains" queries.
You might be able to make use of a custom parser for tsearch2 that creates
something like a single "word" for xml fragments lik
As a follow-up, when I try to create the index like this...
CREATE INDEX concurrently
idx_object_nodeid2
ON
object
USING
btree(
xpath('/a:root/a:Identification/b:ObjectId/text()', serialized_object,
ARRAY
[
ARRAY['a', 'http://schemas.datacontract.org/20
Thanks Kevin. I thought about using tsearch2 but I need to be able to select
exact values on other numerical queries and cannot use "contains" queries.
It's got to be fast so I cannot have lots of records returned and have to do
secondary processing on the xml for the records which contain the exa
CREATE INDEX CONCURRENTLY idx_serializedxml
ON "object" (serialized_object ASC NULLS LAST);
yields the error:
ERROR: data type xml has no default operator class for access method "btree"
The same error occurs when I try to use the other access methods as well.
On Thu, Sep 3, 2009 at 4:06 PM
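One workaround for the missing btree operator class, sketched here with illustrative names and a namespace-free XPath rather than astro77's actual schema: index a text-typed expression extracted from the XML, and repeat the identical expression in queries.

```sql
-- btree can index text, so extract the searched value as text.
-- xpath() returns xml[], hence the [1] subscript and the cast.
CREATE INDEX CONCURRENTLY idx_object_objectid
ON "object" ((((xpath('/root/ObjectId/text()', serialized_object))[1])::text));

-- The query must use the same expression to hit the index:
-- SELECT * FROM "object"
-- WHERE ((xpath('/root/ObjectId/text()', serialized_object))[1])::text = '...';
```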
On Thu, Sep 3, 2009 at 4:06 PM, astro77 wrote:
> I was receiving an error that an XML field does not support the various
> indexes available in postgresql.
Please post what happens when you try.
> Is there an example of how to do this
> properly?
Not sure.
...Robert
I was receiving an error that an XML field does not support the various
indexes available in postgresql. Is there an example of how to do this
properly?
Robert Haas wrote:
>
> On Wed, Sep 2, 2009 at 11:04 AM, astro77 wrote:
>>
>> I've got a table set up with an XML field that I would like to se
On Wed, Sep 2, 2009 at 11:04 AM, astro77 wrote:
>
> I've got a table set up with an XML field that I would like to search on with
> 2.5 million records. The xml are serialized objects from my application
> which are too complex to break out into separate tables. I'm trying to run a
> query similar
astro77 wrote:
> I've got a table set up with an XML field that I would like to search on
> with 2.5 million records. The xml are serialized objects from my
> application which are too complex to break out into separate tables. I'm
> trying to run a query similar to this:
>
> SELECT s
2009/5/7 David Brain :
> Hi,
Hi,
>
> Some answers in-line:
>
>>
>> Has there been a performance *change*, or are you just concerned about a
>> query which doesn't seem to use "enough" disc bandwidth?
>
> Performance has degraded noticeably over the past few days.
>
>> Certainly random access like t
On Thu, May 7, 2009 at 11:19 AM, Matthew Wakeling wrote:
> On Thu, 7 May 2009, David Brain wrote:
>
>> Certainly random access like this index scan can be extremely slow. 2-4
>>> MB/s
>>> is quite reasonable if you're fetching one 8kB block per disc seek - no
>>> more
>>> than 200 per second.
>>>
On Thu, 7 May 2009, David Brain wrote:
Certainly random access like this index scan can be extremely slow. 2-4 MB/s
is quite reasonable if you're fetching one 8kB block per disc seek - no more
than 200 per second.
We have read ahead set pretty aggressively high as the SAN seems to
'like' this,
>
> Nested Loop Left Join (cost=0.00..6462463.96 rows=1894 width=110)
> -> Append (cost=0.00..6453365.66 rows=1894 width=118)
> -> Seq Scan on datatable sum (cost=0.00..10.75 rows=1 width=118)
> Filter: ((datapointdate >= '2009-04-01
> 00:00:00'::timestamp without time
Hi,
Some answers in-line:
>
> Has there been a performance *change*, or are you just concerned about a
> query which doesn't seem to use "enough" disc bandwidth?
Performance has degraded noticeably over the past few days.
> Certainly random access like this index scan can be extremely slow. 2-4
On Thu, 7 May 2009, David Brain wrote:
> This has been working reasonably well, however in the last few days
> I've been seeing extremely slow performance on what are essentially
> fairly simple 'index hitting' selects on this data. From the host
> side I see that the postgres query process is mostly in
Hi,
Interesting, for one index on one partition:
idx_scan: 329
idx_tup_fetch: 8905730
So maybe a reindex would help?
David.
On Thu, May 7, 2009 at 10:26 AM, Scott Mead
wrote:
> On Thu, May 7, 2009 at 10:14 AM, David Brain wrote:
>>
>> Hi,
>>
>> Some context, we have a _lot_ of data, > 1TB, m
On Thu, May 7, 2009 at 10:14 AM, David Brain wrote:
> Hi,
>
> Some context, we have a _lot_ of data, > 1TB, mostly in 1 'table' -
> the 'datatable' in the example below although in order to improve
> performance this table is partitioned (by date range) into a number of
> partition tables. Each
Hi Markus,
Thank you very much for the information.
I was able to make it fast by correcting the indices; I created indexes
on the frequently filtered fields.
Now it runs at 0.05 seconds on average, much faster than the 0.86 before.
I will also upgrade to 8.1.5.
Once again, thank you very much. It helped me a
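Schematically, the fix Amir describes is ordinary index creation on the columns that appear in WHERE clauses (table and column names below are placeholders, not from his schema):

```sql
-- A multicolumn index helps when the leading column matches the
-- most selective / most frequent filter condition.
CREATE INDEX idx_orders_customer_date ON orders (customer_id, order_date);
ANALYZE orders;  -- let the planner see fresh statistics
```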
Hi, Amir,
AMIR FRANCO D. JOVEN wrote:
> My current project uses PostgreSQL 7.3.4.
By all means, please upgrade.
The newest 7.3 series version is 7.3.16, which fixes lots of critical
bugs, and can be used as a drop-in replacement for 7.3.4 (see Release
Notes at http://www.postgresql.org/docs/7.3
On 11/15/06, AMIR FRANCO D. JOVEN <[EMAIL PROTECTED]> wrote:
> Hi!
>
> I'm new to PostgreSQL.
>
> My current project uses PostgreSQL 7.3.4.
>
> the problem is like this:
>
> I have a table with 94 fields and a select with only one resultset in only
> one client consumes about 0.86 seconds.
> The client execute
Greg Williamson
DBA
GlobeXplorer LLC
-Original Message-
From: [EMAIL PROTECTED] on behalf of Russell Smith
Sent: Wed 11/15/2006 5:31 AM
To: AMIR FRANCO D. JOVEN
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Slow SELECT on three or more clients
AMIR FRAN
AMIR FRANCO D. JOVEN wrote:
> Hi!
>
> I'm new to PostgreSQL.
>
> My current project uses PostgreSQL 7.3.4.

Upgrading your version of PostgreSQL to 8.1 will give you significant
benefits to performance.

> the problem is like this:
>
> I have a table with 94 fields and a select with only one resultset in
> onl
* AMIR FRANCO D. JOVEN <[EMAIL PROTECTED]> [061115 12:44]:
> Hi!
>
> I'm new to PostgreSQL.
>
> My current project uses PostgreSQL 7.3.4.
Ancient. Upgrade it, especially if it's a new database.
>
> the problem is like this:
>
> I have a table with 94 fields and a select with only one resultset
Does that mean reindex is not needed for PG version 7.4?
In what kinds of situations under PG 7.4 is reindex worthwhile?
Thanks,
Here is doc from 7.3:
PostgreSQL is unable to reuse B-tree index pages in
certain cases. The problem is that if indexed rows are
deleted, those index pages can only be
Paul Langard <[EMAIL PROTECTED]> writes:
> select count(id) from project
> count
> 9106
> 1 row(s)
> Total runtime: 45,778.813 ms
Yipes. The only explanation I can think of is tremendous table bloat.
What do you get from "vacuum verbose project" --- in particular, how
many pages in the table?
>
Paul,
Paul Langard wrote:
> Having trouble with one table (see time to count records below!).
> Fairly new to postgres so any help much appreciated.
>
> It only contains 9,106 records - as you can see from:
>
> select count(id) from project
>
> count
> 9106
> 1 row(s)
> Total runtime: 45,778.813 ms

... the database i
... the database i
Paul Langard <[EMAIL PROTECTED]> writes:
> Having trouble with one table (see time to count records below!).
>
> Fairly new to postgres so any help much appreciated.
>
> It only contains 9,106 records - as you can see from:
>
>
> select count(id) from project
>
> count
> 9106
> 1 row(s)
> Total ru
Vivek Khera wrote:
"JB" == Josh Berkus <[EMAIL PROTECTED]> writes:
JB> Actually, what OS's can't use all idle ram for kernel cache? I
JB> should note that in my performance docs
FreeBSD. Limited by the value of "sysctl vfs.hibufspace" from what I
understand. This value is set at boot based
> "JB" == Josh Berkus <[EMAIL PROTECTED]> writes:
JB> Vivek,
>> Assuming your OS will use that much RAM for the cache... the whole
>> world's not Linux :-)
JB> It's not? Darn!
:-)
JB> Actually, what OS's can't use all idle ram for kernel cache? I
JB> should note that in my performance do
Vivek,
> Assuming your OS will use that much RAM for the cache... the whole
> world's not Linux :-)
It's not? Darn!
Actually, what OS's can't use all idle ram for kernel cache? I should note
that in my performance docs
--
Josh Berkus
Aglio Database Solutions
San Francisco
"Medora Schauer" <[EMAIL PROTECTED]> writes:
> Merge Join (cost=0.00..287726.10 rows=100221 width=58) (actual time=61.60..5975.63
> rows=100425 loops=1)
>Merge Cond: (("outer".shot_line_num = "inner".shot_line_num) AND
> ("outer".shotpoint = "inner".shotpoint))
>-> Index Scan using hs
> "JB" == Josh Berkus <[EMAIL PROTECTED]> writes:
JB> Medora,
>> Increasing effective_cache_size to 10000 did it.
JB> That would be 78MB RAM. If you have more than that available, you can
JB> increase it further. Ideally, it should be about 2/3 to 3/4 of available
JB> RAM.
Assuming you
Medora,
> Increasing effective_cache_size to 10000 did it.
That would be 78MB RAM. If you have more than that available, you can
increase it further. Ideally, it should be about 2/3 to 3/4 of available
RAM.
>The query now
> takes 4 secs. I left random_page_cost at the default value of 4.
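In postgresql.conf terms, and keeping in mind that on 7.3 effective_cache_size is expressed in 8 kB disk pages, Josh's guideline would look something like this sketch (the numbers are examples, not a recommendation for any particular machine):

```
# postgresql.conf -- PostgreSQL 7.3-era settings, sizes in 8 kB pages
effective_cache_size = 65536   # 65536 * 8 kB = 512 MB, ~2/3 of a 768 MB box
random_page_cost = 2           # below the default 4 when data is mostly cached
```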
Josh,
> > So why did were the indices not used before when they yield
> a better plan?
>
> Your .conf settings, most likely. I'd lower your
> random_page_cost and raise
> your effective_cache_size.
Increasing effective_cache_size to 10000 did it. The query now
takes 4 secs. I left random_page_cost at the default value of 4.
Medora,
> So why did were the indices not used before when they yield a better plan?
Your .conf settings, most likely. I'd lower your random_page_cost and raise
your effective_cache_size.
--
-Josh Berkus
Aglio Database Solutions
San Francisco
>
> Medora,
>
> > I'm using pg 7.3.4 to do a select involving a join on 2 tables.
> > The query is taking 15 secs which seems extreme to me considering
> > the indices that exist on the two tables. EXPLAIN ANALYZE shows
> > that the indices aren't being used. I've done VACUUM ANALYZE on the
Medora,
> I'm using pg 7.3.4 to do a select involving a join on 2 tables.
> The query is taking 15 secs which seems extreme to me considering
> the indices that exist on the two tables. EXPLAIN ANALYZE shows
> that the indices aren't being used. I've done VACUUM ANALYZE on the
> db with no chang