Tomas Vondra wrote:
Stats about access to the data (index/seq scans, cache hit ratio etc.)
are stored in pg_stat_* and pg_statio_* catalogs, and are updated after
running each query. AFAIK it's not a synchronous process, but when a
backend finishes a query, it sends the stats to the postmaster (a
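The counters described here can be read directly from those views; a quick look, assuming a live psql session (the view and column names are standard, the queries themselves are just illustrative):

```sql
-- Per-table access counters: sequential vs. index scans
SELECT relname, seq_scan, seq_tup_read, idx_scan, idx_tup_fetch
  FROM pg_stat_user_tables
 ORDER BY seq_scan DESC;

-- Rough per-table cache hit ratio from the I/O counters
SELECT relname,
       heap_blks_hit::float8
         / NULLIF(heap_blks_hit + heap_blks_read, 0) AS hit_ratio
  FROM pg_statio_user_tables;
```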
drvillo wrote:
-given the configuration attached (which is basically a vanilla one) and the
number of buffers written at each execution, are these execution times
normal or above average?
Given the configuration attached, most of them are normal. One problem
may be that your vanilla confi
> Sounds like a reasonable starting point. You could certainly fiddle
> around a bit - especially with shared_buffers - to see if some other
> setting works better, but that should be in the ballpark.
I tend to set it a bit higher on EC2 to discourage the VM from
overcommitting memory I need. S
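For reference, the relevant knobs live in postgresql.conf; the sizes below are only illustrative placeholders, not recommendations for any particular instance:

```
# postgresql.conf (illustrative values only)
shared_buffers = 2GB          # a common starting point is ~25% of RAM
effective_cache_size = 6GB    # planner's estimate of OS + PG caching
```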
On Thu, May 12, 2011 at 8:53 AM, Prodan, Andrei
wrote:
>
> @Jeff: thank you for the clear plan interpretation - but I'm afraid I
> don't really understand the second bit:
> 1) I provided the GOOD plan, so we already know what postgres thinks,
> right? (Later edit: guess not. Doesn't work)
> 2) The
I've got a stored proc that constructs some aggregation queries as strings
and then executes them. I'd like to be able to increase work_mem before
running those queries. If I set a new value for work_mem within the stored
proc prior to executing my query string, will that actually have an impact
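One common pattern, sketched below with hypothetical names (`run_aggregates`, `some_fact`): `SET LOCAL` inside the function changes `work_mem` for the remainder of the enclosing transaction, so dynamically built queries executed afterwards do see the new value:

```sql
CREATE OR REPLACE FUNCTION run_aggregates() RETURNS bigint AS $$
DECLARE
    n bigint;
BEGIN
    SET LOCAL work_mem = '256MB';   -- reverts when the transaction ends
    -- the dynamic aggregation query; table name is a stand-in
    EXECUTE 'SELECT count(*) FROM some_fact' INTO n;
    RETURN n;
END;
$$ LANGUAGE plpgsql;
```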
> It says the sequential scan has a cost that's way too high, and I'm
> presuming that's why it's choosing the extremely slow plan over the much
> faster plan.
Well, not exactly. It's giving you that cost because you disabled
seqscan, which actually just bumps the cost really high:
postgres=# cre
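The effect is easy to reproduce in a scratch database (table name hypothetical); disabling a plan node type does not remove it, it just adds a huge constant to its estimated cost:

```sql
CREATE TABLE t AS SELECT g AS id FROM generate_series(1, 100000) g;
SET enable_seqscan = off;
EXPLAIN SELECT * FROM t;
-- the seqscan is still chosen (there is no other way to read t),
-- but its estimated cost now starts around 10000000000.00
```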
On 12.5.2011 22:03, Willy-Bas Loos wrote:
> Then, are the index scans counted in a memory variable and written at
> analyze time?
No, I believe Raghu mixed up two things: stats used by the planner, and
stats about access to the data (how many tuples were read using an
index, etc.)
Stats for the
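The two kinds of statistics live in different places; a minimal way to look at each (the table name `some_table` is a placeholder):

```sql
-- Planner statistics, maintained by ANALYZE / autovacuum:
SELECT attname, n_distinct, null_frac
  FROM pg_stats
 WHERE tablename = 'some_table';

-- Access statistics, maintained by the stats collector as queries run:
SELECT indexrelname, idx_scan
  FROM pg_stat_user_indexes
 WHERE relname = 'some_table';
```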
On Thu, May 12, 2011 at 9:09 PM, Willy-Bas Loos wrote:
> Hi,
>
> We have some indexes that don't seem to be used at all.
> I'd like to know since when they have not been used.
> That is, the time when postgres started counting to reach the number that
> is in pg_stat_user_indexes.idx_scan
>
> Is
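There is no per-index timestamp, but two related checks are easy (a sketch; the counters accumulate since the last stats reset, which may be as old as the server itself):

```sql
-- Indexes with no recorded scans since the counters were last reset:
SELECT schemaname, relname, indexrelname, idx_scan
  FROM pg_stat_user_indexes
 WHERE idx_scan = 0;

-- A lower bound on how long the counters have been accumulating:
SELECT pg_postmaster_start_time();
```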
On 05/11/2011 09:38 AM, Robert Haas wrote:
However, if I disable seqscan (set enable_seqscan=false), I get the
following plan:
QUERY PLAN
Hash Join  (cost=10001298843.53..290002337961.71 rows=8643757 width=506)
  Hash Cond: (f.id = objects.id)
  ->  Append  (cost=100
Greg Smith writes:
> You're saying to watch out for (3); I think that's not usually the case,
> but that's a fair thing to warn about. Even in that case, though, it
> may still be worth dropping the index. Year-end processes are not
> usually very sensitive to whether they take a little or a
Tomas Vondra wrote:
BTW it's really really tricky to remove indexes once they're created.
What if the index is created for a single batch process that runs once a
year to close the fiscal year etc?
True in theory. Reports that are executing something big at the end of
the year fall into th
Then, are the index scans counted in a memory variable and written at
analyze time?
On Thu, May 12, 2011 at 8:22 PM, raghu ram wrote:
>
> "Analyze" activity will update the statistics of each catalog table.
> --Raghu Ram
>
>
--
"Patriotism is the conviction that your country is superior to al
Thank you for all the leads.
I've increased stats to 1200 on everything obvious (external_id,
attr_name, attr_value, party_id), and ran ANALYZE, but it didn't help at
all - any other ideas of what else could be going wrong ?
We'll disable preparation, but the thing is it works brilliantly 90% of
t
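For reference, raising the per-column statistics target as described looks like this (table and column names are taken from the message and may not match the real schema):

```sql
ALTER TABLE party_attr ALTER COLUMN attr_name  SET STATISTICS 1200;
ALTER TABLE party_attr ALTER COLUMN attr_value SET STATISTICS 1200;
ANALYZE party_attr;
```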
On 05/12/2011 11:07 AM, Tom Lane wrote:
I find it odd that replacing the subquery with a temp table helps,
though, because (unless you stuck in an ANALYZE you didn't mention)
it would have no stats at all about the number of groups in the temp
table.
I did have an analyze initially for exactly
Shaun Thomas writes:
> On 05/12/2011 09:51 AM, Tom Lane wrote:
>> It does. I was a bit surprised that Shaun apparently got a plan that
>> didn't include a materialize step, because when I test a similar query
>> here, I get:
> Remember when I said "old version" that prevented us from using CTEs?
>
> This is a perfect example of a place where you could push some work out of
> the application and into the database. You can consolidate your 1 to 101
> queries into a single query. If you use:
>
> WHERE rte_nm='SH71' AND rm >= 206 AND rm <= 306 ORDER BY abs(rm - 256), rm
> - 256 DESC LIMIT 1
>
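Fleshed out, the suggested single query might look like this (table and column names beyond the quoted fragment are hypothetical):

```sql
-- Nearest reference marker to rm 256 on SH71, preferring the higher on ties
SELECT rm
  FROM reference_markers
 WHERE rte_nm = 'SH71'
   AND rm BETWEEN 206 AND 306
 ORDER BY abs(rm - 256), rm - 256 DESC
 LIMIT 1;
```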
Everyone,
Just wanted to say thanks for your help with my performance question. You
have given me plenty of things to investigate. Further, I think the problem
is almost certainly with my app, so I need to do more work there!
I really like the idea of just loading everything in memory and then du
On 12.5.2011 17:39, Willy-Bas Loos wrote:
> Hi,
>
> We have some indexes that don't seem to be used at all.
> I'd like to know since when they have not been used.
> That is, the time when postgres started counting to reach the number
> that is in pg_stat_user_indexes.idx_scan
>
> Is there a
On Wed, May 11, 2011 at 9:17 PM, Aren Cambre wrote:
> *2. Not all TxDPS reference markers correspond to TxDOT reference markers.*
>
> Now, if I've matched a route, I have to find the reference marker.
>
> The TxDOT database is pretty good but not 100% complete, so some TxDPS
> tickets' reference mark
On 05/12/2011 09:51 AM, Tom Lane wrote:
> It does. I was a bit surprised that Shaun apparently got a plan that
> didn't include a materialize step, because when I test a similar query
> here, I get:
Remember when I said "old version" that prevented us from using CTEs?
We're still on 8.2 (basical
Hi,
We have some indexes that don't seem to be used at all.
I'd like to know since when they have not been used.
That is, the time when postgres started counting to reach the number that is
in pg_stat_user_indexes.idx_scan
Is there a way to retrieve that from the database ?
Cheers,
WBL
--
"Pa
Josh Berkus writes:
> On 5/11/11 3:04 PM, Shaun Thomas wrote:
>> The original query, with our very large tables, ran for over *two hours*
>> thanks to a nested loop iterating over the subquery. My replacement ran
>> in roughly 30 seconds. If we were using a newer version of PG, we could
>> have us
On 05/12/2011 03:30 AM, Michael Graham wrote:
Do you happen to produce slides for these lunch n learns or are they
more informal than that? I guess you can work out where I'm going with
this ;)
Oh of course. I use rst2s5 for my stuff, so I have the slideshow and
also generate a PDF complete
On Wed, 2011-05-11 at 17:04 -0500, Shaun Thomas wrote:
> We hold regular Lunch'n'Learns for our developers to teach them the
> good/bad of what they're doing, and that helps significantly. Even
> hours later, I see them using the techniques I showed them. The one
> I'm presenting soon is entitled
On 12.05.11 06:18, Aren Cambre wrote:
> Using one thread, the app can do about 111 rows per second, and it's
> only exercising 1.5 of 8 CPU cores while doing this. 12,000,000 rows /
> 111 rows per second ~= 30 hours.
I don't know how I missed that. You ARE maxing out one
http://sourceforge.net/mailarchive/forum.php?forum_name=osdldbt-general&max_rows=25&style=nested&viewmonth=201104
- Sethu
On Thu, May 12, 2011 at 5:22 AM, Robert Haas wrote:
> On Tue, Apr 12, 2011 at 3:51 AM, Sethu Prasad
> wrote:
> > Anyone lucky to have dbt5 run for PostgreSQL 9.0.3?!
> >
>