On Sat, Feb 14, 2009 at 5:25 PM, Alexander Staubo wrote:
>
> Output from "explain analyze":
>
> Limit  (cost=0.00..973.63 rows=4 width=48) (actual time=61.554..4039.704 rows=1 loops=1)
>   ->  Nested Loop  (cost=0.00..70101.65 rows=288 width=48) (actual time=61.552..4039.700 rows=1 loops=1)
>
This dinky little query takes about 4 seconds to run:
select event_occurrences.*
from event_occurrences
join section_items on section_items.subject_id = event_occurrences.event_id
  and section_items.subject_type = 'Event'
  and section_items.sandbox_id = 16399
where event_occurrences.
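One thing that often helps a nested-loop plan like the one above (a sketch only; it assumes no equivalent indexes already exist, and the index names are made up) is a composite index covering the section_items filter and join columns, plus an index on the event_occurrences join key:

  -- Sketch: column choice follows the filter/join conditions shown above.
  CREATE INDEX section_items_sandbox_subject_idx
      ON section_items (sandbox_id, subject_type, subject_id);
  CREATE INDEX event_occurrences_event_id_idx
      ON event_occurrences (event_id);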
On Sat, 2009-02-14 at 15:02 -0500, Tom Lane wrote:
> Octavio Alvarez writes:
> > The result, on the above view: ~80ms. Fair enough. But if I apply a
> > condition:
> > SELECT * FROM ___pgnui_relation_tree.raw_relation_tree WHERE
> > parent_schema <> child_schema;
> > it takes ~2 seconds (!) to complete.
On Sat, Feb 14, 2009 at 8:23 PM, Tom Lane wrote:
> Alexander Staubo writes:
>> Wow -- I set this to 10GB (5GB for shared buffers + another 5GB for
>> cache), and today's average write frequency went from 20MB/sec to just
>> 1MB/sec. The documentation suggests that effective_cache_size is only
>>
Octavio Alvarez writes:
> The result, on the above view: ~80ms. Fair enough. But if I apply a
> condition:
> SELECT * FROM ___pgnui_relation_tree.raw_relation_tree WHERE
> parent_schema <> child_schema;
> it takes ~2 seconds (!) to complete.
I'm not sure I'm seeing the exact same case as you, but
Alexander Staubo writes:
> On Fri, Feb 13, 2009 at 6:35 PM, Kevin Grittner wrote:
>> You should definitely set effective_cache_size.
> Wow -- I set this to 10GB (5GB for shared buffers + another 5GB for
> cache), and today's average write frequency went from 20MB/sec to just
> 1MB/sec. The documentation suggests that effective_cache_size is o
On Sat, Feb 14, 2009 at 9:49 AM, Craig Ringer wrote:
> Is there any chance you had pg_xlog stored separately on your old database,
> and I/O for it wasn't being recorded?
No, the database files have always been on a single volume.
Alexander.
On Fri, Feb 13, 2009 at 6:35 PM, Kevin Grittner wrote:
> You should definitely set effective_cache_size.
Wow -- I set this to 10GB (5GB for shared buffers + another 5GB for
cache), and today's average write frequency went from 20MB/sec to just
1MB/sec. The documentation suggests that effective_ca
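For reference, a minimal postgresql.conf sketch of the settings being discussed, using the figures quoted above (values are illustrative only). Note that effective_cache_size is merely an estimate the planner consults; it allocates no memory and performs no I/O by itself:

  shared_buffers       = 5GB    # actual shared memory allocation
  effective_cache_size = 10GB   # planner's estimate of shared buffers + OS cache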
I'm aware you already know that information_schema is slow [1] [2], so I
just want to document another case and the tests I did.
I'm using the following view to check what tables depend on what other
tables.
CREATE VIEW raw_relation_tree AS
SELECT
    tc_p.table_catalog AS parent_catalog,
    tc
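The usual workaround when an information_schema view is too slow is to query pg_catalog directly. A rough sketch of a foreign-key dependency query along the same lines (written against pg_constraint; it assumes the view above is meant to list which tables reference which):

  SELECT np.nspname AS parent_schema,
         cp.relname AS parent_table,
         nc.nspname AS child_schema,
         cc.relname AS child_table
  FROM pg_constraint con
  JOIN pg_class cc     ON cc.oid = con.conrelid      -- referencing (child) table
  JOIN pg_namespace nc ON nc.oid = cc.relnamespace
  JOIN pg_class cp     ON cp.oid = con.confrelid     -- referenced (parent) table
  JOIN pg_namespace np ON np.oid = cp.relnamespace
  WHERE con.contype = 'f';

A filter like parent_schema <> child_schema then applies to plain catalog columns rather than the casted columns of the information_schema views.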
I did some performance tests (retrieving data from the DB to a .NET
application using the Npgsql driver) and found that for command type ==
StoredProcedure it is better to call the Prepare() function first. But for
command type == Text (a direct query), performance is better if I do not
call Prepare(). W
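If Npgsql's Prepare() maps onto a server-side prepared statement (which is my understanding of the driver), the behaviour can be approximated in plain SQL; the statement name and query below are made up for illustration:

  PREPARE fetch_occurrences(int) AS
      SELECT * FROM event_occurrences WHERE event_id = $1;
  EXECUTE fetch_occurrences(42);   -- plan built once, reused per EXECUTE
  DEALLOCATE fetch_occurrences;

A prepared statement is planned once without knowledge of the actual parameter values, which is one plausible reason a plain Text query can sometimes run faster unprepared.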
It's nice to know about the evolution of autovacuum, and I understand that
the suggestion/requirement of "autovacuum at lean hours only"
was defeating the whole idea.
regds
--rajesh kumar mallah.
On Fri, Feb 13, 2009 at 11:07 PM, Chris Browne wrote:
> mallah.raj...@gmail.com (Rajesh Kumar Mallah) writes:
mallah.raj...@gmail.com (Rajesh Kumar Mallah) writes:
> why is it not a good idea to give end users control over when they
> want to run it ?
It's not a particularly good idea to give end users things that they
are likely then to *immediately* use to shoot themselves in the foot.
Turning off vacu
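Rather than switching vacuuming off or confining it to lean hours, the usual alternative is to throttle it with the cost-based settings; a postgresql.conf sketch with example values only (the right numbers depend on the I/O headroom available):

  autovacuum_vacuum_cost_delay = 20ms   # sleep between chunks of vacuum work
  autovacuum_vacuum_cost_limit = 200    # amount of work allowed before each sleep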
Alexander Staubo wrote:
> On Fri, Feb 13, 2009 at 12:53 PM, Alexander Staubo wrote:
>> The upgrade was done with dump/restore using "pg_dump -Fc". The old
>> database lived on a SAN volume, whereas the new database lives on a
>> local disk volume.
> I need to correct myself: The Munin graphs were never se
Matthew Wakeling wrote:
> In fact, it is recommended that two different types of drives are used.
> That way, if there's a mass failure of a whole batch of drives from one
> particular vendor, you don't lose all your data.
Don't think this is just paranoia, either. I've had it happen to me
SEVERAL