Nothing?
No ideas?
Did I forget to include some useful bit?
On Fri, Mar 4, 2011 at 8:22 PM, Claudio Freire wrote:
> Hello, first post to this list.
>
> I have this query that ran in milliseconds in postgres 8.3.7 (usually 50,
> 100ms), and now it takes a full 10 minutes to complete.
This is postgresql 9.0.3:
Query:
select
sum(stat_responses) * 100.0 / sum(stat_invites) as stat_response_rate,
sum(real_responses) * 100.0 / sum(real_invites) as real_response_rate
from (
select
ms.invites as stat_invites,
(select count(*)
On Wed, Mar 16, 2011 at 3:34 PM, Shaun Thomas wrote:
> If not, it seems like a valid configurable. We set our random_page_cost to
> 1.5 once the DB was backed by NVRAM. I could see that somehow influencing
> precedence of a backwards index scan. But even then, SSDs and their ilk
> react more like
On Thu, Mar 17, 2011 at 1:42 PM, J Sisson wrote:
> On Thu, Mar 17, 2011 at 10:13 AM, Jeff wrote:
>> hey folks,
>>
>> Running into some odd performance issues between a few of our db boxes.
>
> We've noticed similar results both in OLTP and data warehousing conditions
> here.
>
> Opteron machines
On Fri, Mar 18, 2011 at 7:52 AM, Vitalii Tymchyshyn wrote:
> 18.03.11 09:15, Anssi Kääriäinen wrote:
> Hello.
>
> If your queries work on a single attribute, you can try adding partial indexes
> for different attributes. Note that in this case parameterized statements
> may prevent index usage
On Fri, Mar 18, 2011 at 3:19 AM, Jesper Krogh wrote:
> * Dependent on your workload of course, you're typically not
> bottlenecked by the amount of cpu-cores, so strive for fewer
> faster cores.
It depends on your workload again, but faster memory is even more
important than faster math.
So go
On Wed, Mar 23, 2011 at 2:12 PM, Josh Berkus wrote:
> Folks,
>
>...
> It really seems like we should be able to detect an obvious high-risk
> situation like this one. Or maybe we're just being too optimistic about
> discarding subplans?
Why not let the GEQO learn from past mistakes?
If some
On Wed, Mar 23, 2011 at 5:29 PM, Josh Berkus wrote:
> On 3/23/11 10:35 AM, Claudio Freire wrote:
>> * consider plan bailout: execute a tempting plan, if it takes too
>> long or its effective cost raises well above the expected cost, bail
>> to a safer plan
>
> Tha
On Wed, Mar 23, 2011 at 6:00 PM, Tom Lane wrote:
> Claudio Freire writes:
>> In my head, safer = better worst-case performance.
>
> If the planner starts operating on the basis of worst case rather than
> expected-case performance, the complaints will be far more numerous tha
On Thu, Mar 24, 2011 at 5:30 PM, Nathan Boley wrote:
> Another approach, that hasn't been suggested yet, is some Bayesian
> update method. There, rather than calculating a specific parameter
> value ( like ndistinct ), you try to store the entire distribution and
> choose the plan that minimizes c
On Tue, Mar 29, 2011 at 7:16 PM, Samuel Gendler
wrote:
> Is there any performance benefit to revamping the workload such that it issues
> a single:
> insert into (...) select ... UNION select ... UNION select
> as opposed to 3 separate "insert into () select ..." statements.
I wouldn't expect any
I actually implemented a statistical system for measuring these kinds
of correlations.
It's complex, but it might be adaptable to pgsql. Furthermore, one of
the latest projects of mine was to replace the purely statistical
approach with SVCs.
Too bad I won't be able to devote any time to that project.
On Tue, Apr 12, 2011 at 7:20 AM, Dieter Rehbein
wrote:
> Hi everybody,
>
> I have a performance-problem with a query using a LIMIT. There are other
> threads regarding performance issues with LIMIT, but I didn't find useful
> hints for our problem and it might
> be interesting for other postgres
On Tue, Apr 12, 2011 at 10:59 AM, Dieter Rehbein
wrote:
> I just executed a VACUUM ANALYZE and now everything performs well. hm,
> strange.
That probably means you need more statistics - try increasing the
newsfeed column's statistics target, e.g.:
ALTER TABLE newsfeed_item ALTER COLUMN newsfeed SET STATISTICS 1000; -- value illustrative
On Tue, Apr 12, 2011 at 6:40 PM, Kevin Grittner
wrote:
>
> Well, that pretty much clinches it. Your RAM access tops out at 16
> processors. It appears that your processors are spending most of
> their time waiting for and contending for the RAM bus.
It tops, but it doesn't drop.
I'd propose th
On Wed, Apr 13, 2011 at 4:32 PM, Kevin Grittner
wrote:
> If you model the costing to reflect the reality on your server, good
> plans will be chosen.
Wouldn't it be "better" to derive those costs from actual performance
data measured at runtime?
Say, pg could measure random/seq page cost, *per tablespace*.
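(Manually, those are already settable per tablespace since 9.0; a sketch,
with a hypothetical tablespace name:

  ALTER TABLESPACE ssd_space SET (random_page_cost = 1.5, seq_page_cost = 1.0);
)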
On Wed, Apr 13, 2011 at 10:16 PM, Tomas Vondra wrote:
> You mean Support Vector Classifiers? Interesting idea, although I don't
> see how to apply that to query planning, especially with non-numeric
> inputs. Could you share more details on that statistical system and how
> do you think it could b
On Wed, Apr 13, 2011 at 11:52 PM, Tom Lane wrote:
> Getting numbers that mean much of anything is a slow, expensive
> process. You really don't want the database trying to do that for you.
> Once you've got them, you *really* don't want the database
> editorializing on them.
>
So it hasn't even
On Thu, Apr 14, 2011 at 12:19 AM, Tomas Vondra wrote:
>
> Another issue is that when measuring multiple values (processing of
> different requests), the decisions may be contradictory so it really
> can't be fully automatic.
>
I don't think it's so dependent on workload. It's dependent on
access patterns.
On Thu, Apr 14, 2011 at 1:26 AM, Tomas Vondra wrote:
> Workload A: Touches just a very small portion of the database, so the
> 'active' part actually fits into the memory. In this case the cache hit
> ratio can easily be close to 99%.
>
> Workload B: Touches large portion of the database, so it hi
On Thu, Apr 14, 2011 at 10:05 PM, Scott Carey wrote:
> Huge Pages helps caches.
> Dual-Pivot quicksort is more cache friendly and is _always_ equal to or
> faster than traditional quicksort (it's a provably improved algorithm).
If you want a cache-friendly sorting algorithm, you need mergesort.
I
On Fri, Apr 15, 2011 at 12:42 AM, Scott Carey wrote:
> I do know that dual-pivot quicksort provably causes fewer swaps (but the
> same # of compares) as the usual single-pivot quicksort. And swaps are a
> lot slower than you would expect due to the effects on processor caches.
> Therefore it migh
On Mon, Apr 18, 2011 at 7:14 AM, Phoenix Kiula wrote:
> # REINDEX INDEX new_idx_userid;
> server closed the connection unexpectedly
> This probably means the server terminated abnormally
> before or while processing the request.
> The connection to the server was lost. Attempting res
On Mon, Apr 18, 2011 at 8:39 AM, Claudio Freire wrote:
> Aside from a bug, that's the only reason I can think for a pg backend
> to bail out like that. Well, the connection could have been cut off by
> other means (ie: someone tripped on the cable or something), but lets
> n
On Tue, Apr 19, 2011 at 11:07 AM, Robert Klemme
wrote:
> I find that slightly contradictory: either you do care about the
> values then your business requirements dictate the aggregate function.
> If you only want to pick any value actually in the table but do not
> care about which one (e.g. MIN
On Thu, Apr 21, 2011 at 4:05 AM, Brendan Jurd wrote:
>
> "IMMUTABLE indicates that the function cannot modify the database and
> always returns the same result when given the same argument values"
>
> Emphasis on "always". If the result of the function, given the same
> argument values, can be di
On Thu, Apr 21, 2011 at 9:30 AM, Claudio Freire wrote:
> On Thu, Apr 21, 2011 at 4:05 AM, Brendan Jurd wrote:
>>
>> "IMMUTABLE indicates that the function cannot modify the database and
>> always returns the same result when given the same argument values"
>
On Thu, Apr 21, 2011 at 2:48 PM, Stephen Frost wrote:
>
> There's probably something else that's trying to grab all the memory and
> then tries to use it and PG ends up getting nailed because the kernel
> over-attributes memory to it. You should be looking for that other
> process..
Not only tha
On Thu, Apr 21, 2011 at 2:53 PM, Claudio Freire wrote:
> On Thu, Apr 21, 2011 at 2:48 PM, Stephen Frost wrote:
>>
>> There's probably something else that's trying to grab all the memory and
>> then tries to use it and PG ends up getting nailed because the kernel
On Thu, Apr 21, 2011 at 5:50 PM, Tory M Blue wrote:
> # - Checkpoints -
> checkpoint_segments = 100
> max_connections = 300
> shared_buffers = 2500MB # min 128kB or max_connections*16kB
> max_prepared_transactions = 0
> work_mem = 100MB
> maintenance_work_mem = 128MB
> fsync = on
That's an
On Thu, Apr 21, 2011 at 6:15 PM, Tory M Blue wrote:
> While I don't mind the occasional slap of reality. This configuration
> has run for 4+ years. It's possible that, as with many other components,
> each fedora release is worse than the prior.
I'd say you've been lucky.
You must be running overnight
On Fri, Apr 22, 2011 at 6:45 PM, Cédric Villemain
wrote:
> Are you sure it is a PAE kernel ? You look limited to 4GB.
If my memory/knowledge serves me right, PAE doesn't remove that limit.
PAE allows more processes, and they can use more memory together, but
one process alone has to live within a 32-bit address space.
On Tue, Apr 26, 2011 at 7:30 AM, Robert Haas wrote:
> On Apr 14, 2011, at 2:49 AM, Claudio Freire wrote:
>> This particular factor is not about an abstract and opaque "Workload"
>> the server can't know about. It's about cache hit rate, and the server
>>
On Wed, Apr 27, 2011 at 3:04 AM, Merlin Moncure wrote:
> The very first thing to check is effective_cache_size and to set it to
> a reasonable value.
>
The problem there, I think, is that the planner is doing a full join,
instead of a semi-join.
On Wed, Apr 27, 2011 at 9:22 AM, Claudio Freire wrote:
> The problem there, I think, is that the planner is doing a full join,
> instead of a semi-join.
Or, rather, computing cost as if it was a full join. I'm not sure why.
On Wed, Apr 27, 2011 at 10:27 PM, Robert Haas wrote:
>
> What if the user is using an SSD or ramdisk?
>
> Admittedly, in many cases, we could probably get somewhat useful
> numbers this way. But I think it would be pretty expensive.
> gettimeofday() is one of the reasons why running EXPLAIN ANALY
As for the question in the title, no, if the server starts, shared
buffers should not be the reason for a subsequent crash.
In Debian, it is common that the maximum allowed shared memory setting
on your kernel will prevent a server from even starting, but I guess
that's not your problem (because it does start).
On Fri, Apr 29, 2011 at 11:37 PM, Greg Smith wrote:
> Anyway, how to collect this data is a separate problem from what should be
> done with it in the optimizer. I don't actually care about the collection
> part very much; there are a bunch of approaches with various trade-offs.
> Deciding how t
On Sun, May 1, 2011 at 12:23 PM, Mark wrote:
> Now the problem.
> When I try ANALYZE it shows:
That's a regular explain... can you post an EXPLAIN ANALYZE?
Hash joins are very inefficient if they require big temporary files.
I usually work around that by disabling hash joins for the problematic queries:
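Something along these lines, session-local so nothing else is affected
(a sketch):

  SET enable_hashjoin = off;
  -- run the offending query; the planner falls back to merge/nested-loop joins
  RESET enable_hashjoin;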
On Mon, May 2, 2011 at 10:54 PM, Mark wrote:
> but the result has been worse than before. By the way, is there a possibility
> to create a better query with the same effect?
> I have tried more queries, but this one has the best performance yet.
Well, this seems to be the worst part:
(SELECT
On Mon, May 9, 2011 at 10:32 PM, Chris Hoover wrote:
> So, does anyone have any suggestions/experiences in benchmarking storage
> when the storage is smaller then 2x memory?
Try writing a small python script (or C program) to mmap a large chunk
of memory with MAP_LOCKED; this will keep it in RAM
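A minimal sketch of such a script, getting the MAP_LOCKED effect via
mlock(2) through ctypes (size and libc path are assumptions of mine; it
needs a big enough ulimit -l, or root):

  import ctypes, mmap

  SIZE = 8 * 1024**3  # illustrative: pin 8GB so the OS can't use it for page cache
  libc = ctypes.CDLL("libc.so.6", use_errno=True)
  buf = mmap.mmap(-1, SIZE)                 # anonymous private mapping
  anchor = ctypes.c_char.from_buffer(buf)   # stable address of the mapping
  if libc.mlock(ctypes.c_void_p(ctypes.addressof(anchor)),
                ctypes.c_size_t(SIZE)) != 0:
      raise OSError(ctypes.get_errno(), "mlock failed; raise ulimit -l or run as root")
  input("memory locked; run the benchmark, then press Enter to release")
  del anchor
  buf.close()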
On Thu, May 26, 2011 at 5:36 PM, Merlin Moncure wrote:
> Point being: cranking buffers
> may have been the bee's knees with, say, the 8.2 buffer manager, but
> present and future improvements may have render that change moot or
> even counter productive.
I suggest you read the docs on how shared
On Thu, May 26, 2011 at 6:02 PM, Merlin Moncure wrote:
> The point is what we can prove, because going through the
> motions of doing that is useful.
Exactly, and whatever you can "prove" will be workload-dependent.
So you can't prove anything "generally", since no single setting is
best for all.
On Fri, May 27, 2011 at 9:24 PM, Maciek Sakrejda wrote:
> Another +1. While I understand that this is not simple, many users
> will not look outside of standard docs, especially when first
> evaluating PostgreSQL. Merlin is right that the current wording does
> not really mention a down side to cr
On Thu, Jun 2, 2011 at 4:57 PM, Kevin Grittner
wrote:
> And the
> planner does take the size of work_mem and the expected data set
> into consideration when estimating the cost of the hash join.
And shouldn't it?
Grossly speaking, when hash joins go to disk, they perform very poorly.
Maybe the pl
On Wed, Jun 8, 2011 at 7:08 AM, wrote:
> What seems odd to me is that the only difference between the two is the limit
> clause
Why would that seem odd?
Of course optimally executing a plan with limit is a lot different
than one without.
Just... why are you sorting by diag_id?
I believe you w
-- Forwarded message --
From: Claudio Freire
Date: Wed, Jun 8, 2011 at 11:57 PM
Subject: Re: [PERFORM] poor performance when recreating constraints on
large tables
To: Samuel Gendler
On Wed, Jun 8, 2011 at 9:57 PM, Samuel Gendler
wrote:
> Sure, but if it is a query that
On Fri, Jun 10, 2011 at 1:22 PM, wrote:
>> If I had set the primary key to (diag_id, create_time) would simple
>> queries on
>> diag_id still work well i.e.
>> select * from tdiag where diag_id = 1234;
>
> Yes. IIRC the performance penalty for using a non-leading column of an index
> is negligible.
Something like this[0] ?
[0] http://archives.postgresql.org/pgsql-hackers/2007-04/msg01037.php
On Mon, Jun 20, 2011 at 9:57 AM, Didik Prasetyo
wrote:
> hi friend, I have a problem with database performance in postgres: how do
> I find slow queries in postgres?
> I want to kill or stop a query to make postg
On Thu, Jun 23, 2011 at 5:05 PM, Harry Mantheakis
wrote:
> TOP, FREE and VMSTAT utilities indicate that only about half of the 6GB of
> memory is being used, so I have no reason to believe that the server is
> struggling.
You have a hinky idea of server load.
Mind you, there are lots of ways in
On Fri, Jun 24, 2011 at 1:19 PM, Harry Mantheakis
wrote:
>
>> there are lots of ways in which it could be struggling...
>
> I have been monitoring the server with IOSTAT -d and IOSTAT -c and I cannot
> see anything alarming.
If iostat doesn't show disk load, either iostat doesn't work well
(which
On Fri, Jul 8, 2011 at 12:48 PM, Dean Rasheed wrote:
> Yes, it's the same issue that affects deferrable PK and FK
> constraints, but even non-deferrable FKs use AFTER ROW triggers that
> suffer from this problem. These triggers don't show up in a "\d" from
> psql, but they are there (try select *
On Fri, Sep 9, 2011 at 5:38 PM, Kevin Grittner
wrote:
> This is getting back to that issue of using only enough processes at
> one time to keep all the bottleneck resources fully utilized. Some
> people tend to assume that if they throw a few more concurrent
> processes into the mix, it'll all ge
On Fri, Sep 9, 2011 at 3:16 PM, Kevin Grittner
wrote:
> Add together the shared_buffers setting and whatever the OS tells
> you is used for cache under your normal load. It's usually 75% of
> RAM or higher. (NOTE: This doesn't cause any allocation of RAM; it's
> a hint to the cost calculations.)
On Sun, Sep 11, 2011 at 3:59 PM, Igor Chudov wrote:
> Well, right now, my server has twelve 7,200 RPM 2TB hard drives in a RAID-6
> configuration.
> They are managed by a 3WARE 9750 RAID CARD.
>
> I would say that I am not very concerned with linear relationship of read
> speed to disk speed. If t
On Sun, Sep 11, 2011 at 4:16 PM, Andy Colson wrote:
> Upgrading to major versions of PG may or may not be painful. (mysql
> sometimes works seamlessly between versions, it appears brilliant. But I
> have had problems with an update, and when it goes bad, you don't have a lot
> of options). In th
On Sun, Sep 11, 2011 at 4:21 PM, Igor Chudov wrote:
> Quantitatively, what would you say is the write speed difference between
> RAID 10 and RAID 6?
https://support.nstein.com/blog/archives/73
There you can see a comparison with 4 drives, and raid 10 is twice as fast.
Since raid 5/6 doesn't scal
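Back-of-the-envelope, under assumptions of mine (small random writes,
~150 IOPS per 7200rpm drive): RAID 10 costs 2 physical I/Os per logical
write (one per mirror side), while RAID 6 needs a read-modify-write of
the data block plus two parities, roughly 6 I/Os. Twelve drives would
then give about 12*150/2 = 900 random-write IOPS in RAID 10 versus
12*150/6 = 300 in RAID 6.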
On Mon, Sep 12, 2011 at 1:16 AM, Igor Chudov wrote:
> I could, say, set work_mem to 30 GB? (64 bit linux)
I don't think you'd want that. Remember, work_mem is the amount of
memory *per sort*.
A query can request several times that much memory: once per sort it
needs to perform.
You can set it r
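A safer pattern (a sketch): keep the global value modest and raise it
only in the session that needs it:

  SET work_mem = '1GB';  -- this session only; value illustrative
  -- run the one big reporting query here
  RESET work_mem;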
On Thu, Sep 15, 2011 at 5:00 PM, Merlin Moncure wrote:
>
> HM, what if you junked the current hash indexam, and just implemented
> a wrapper over btree so that the 'hash index' was just short hand for
> hashing the value into a standard index?
I'm doing this (only by hand, indexing on hash(blah))
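A sketch of the by-hand version, using the built-in hashtext() (table
and column names hypothetical; the equality recheck on the original
column is needed because hashes collide):

  CREATE INDEX foo_blah_hash ON foo (hashtext(blah));
  SELECT * FROM foo
  WHERE hashtext(blah) = hashtext('some long value')
    AND blah = 'some long value';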
On Fri, Sep 16, 2011 at 12:38 AM, Tom Lane wrote:
> I'm not entirely following this eagerness to junk that AM, anyway.
> We've put a lot of sweat into it over the years, in the hopes that
> it would eventually be good for something. It's on the edge of
> being good for something now, and there's
On Fri, Sep 16, 2011 at 3:00 AM, Merlin Moncure wrote:
>
> c:\Program Files\PostgreSQL\9.0\data>dir/s | grep 16525
> 09/15/2011 07:46 PM 224,641,024 16525
>
> c:\Program Files\PostgreSQL\9.0\data>dir/s | grep 16526
> 09/15/2011 07:49 PM 268,451,840 16526
That's not surprising at all
It's not an issue for me (it's not really impacting performance), but
since it was odd I thought I might ask.
I have this supermegaquery:
SELECT
t.date AS status_date, lu.id AS memberid, lu.username AS
username, u.url AS url, ub.url_pattern AS urlpattern, lu.email AS
email,
lu.birth
2011/9/19 Ondrej Ivanič :
> BTW, Does Postgres use Bloom filter anywhere?
I saw patches for at least in-memory bloom filters (for hash joins)
Not sure they're committed. I think so.
On Mon, Sep 19, 2011 at 12:54 PM, Vitalii Tymchyshyn wrote:
> 19.09.11 18:19, Robert Klemme wrote:
>>
>> I still haven't seen a solution to locking when a hash table needs
>> resizing. All hashing algorithms I can think of at the moment would
>> require a lock on the whole beast during the
On Mon, Sep 19, 2011 at 3:43 PM, Merlin Moncure wrote:
> To make the test into i/o bound, I change the setrandom from 10 to
> 1000; this produced some unexpected results. The hash index is
> pulling about double the tps (~80 vs ~ 40) over the hybrid version.
> Well, unless my methodology i
2011/9/21 Hellmuth Vargas :
> SAN
> VMWare
> CentOS 6-64bits
> PostgreSQL 9-64bits
> I have the machine virtualized to also serve other services, so my
> question is: are the parameters and considerations laid out in the
> article applicable to the virtualized setup, especially regarding
2011/9/29 Numael Vacca Duran :
>
> Hello!!
>
> I have a serious problem: I'm working with postgresql 8.2 and tomcat
> 5.5.20 on a machine running Centos 5.3
1- 8.2 is quite old
2- Much more data is needed. The queries themselves, plus an EXPLAIN and
EXPLAIN ANALYZE of those queries, would help to know
On Tue, Oct 11, 2011 at 12:02 AM, Samuel Gendler
wrote:
> The original question doesn't actually say that performance has gone down,
> only that cpu utilization has gone up. Presumably, with lots more RAM, it is
> blocking on I/O a lot less, so it isn't necessarily surprising that CPU
> utilizatio
On Tue, Oct 11, 2011 at 3:02 PM, alexandre - aldeia digital
wrote:
> 2) Change all memory chips to new others, instead of maintain the old (16
> GB) + new (32 GB).
Of course, mixing disables double/triple/whatuple channel, and makes
your memory subsystem correspondingly slower.
By a lot.
On Tue, Oct 11, 2011 at 5:02 PM, alexandre - aldeia digital
wrote:
> The initial change (adding more memory) was made by a Dell technician,
> who told us that he used the same specification in memory chips.
> But, you know how "it works"... ;)
Yeah, but different size == different specs
On Tue, Oct 11, 2011 at 5:33 PM, Arjen van der Meijden
wrote:
> That really depends on the chipset/server. The current intel E56xx-chips
> (and previous E55xx) basically just expect groups of 3 modules per
> processor, but it doesn't really matter whether that's 3x2+3x4 or 6x4 in
> terms of perfor
On Tue, Oct 11, 2011 at 5:16 PM, Carlo Stonebanks
wrote:
> Question 2) Regardless of the answer to Question 1 - if another_id is not
> guaranteed to be unique, whereas pkey_id is – is there any value to changing
> the order of declaration (more generally, is there a performance impact for
> column or
On Mon, Oct 24, 2011 at 6:31 PM, Merlin Moncure wrote:
>> 2. Multiple RAID'ed SSDs sounds like (vast) overkill for your workload. A
>> single SSD should be sufficient (will get you several thousand TPS on
>> pgbench for your DB size).
>
> Also, raid controllers interfere with TRIM.
What about redundancy?
On Mon, Oct 24, 2011 at 11:37 PM, David Boreham wrote:
>> What about redundancy?
>>
>> How do you swap an about-to-die SSD?
>>
>> Software RAID-1?
>
> The approach we take is that we use 710 series devices which have predicted
> reliability similar to all the other components in the machine, there
On Thu, Oct 27, 2011 at 4:42 AM, Gregg Jaskiewicz wrote:
> What does 'select * from pg_stat_activity' say, more precisely - the
> "waiting" column.
That column tells you whether that particular process is waiting to be
granted some kind of database-level lock.
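On a 9.x-era catalog that's roughly (a sketch):

  SELECT procpid, usename, waiting, current_query
  FROM pg_stat_activity
  WHERE waiting;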
On Fri, Oct 28, 2011 at 12:28 PM, Marcus Engene wrote:
> Hi list,
>
> Every now and then I have write peaks which causes annoying delay on my
> website. No particular reason it seems, just that laws of probability
> dictates that there will be peaks every now and then.
>
> Anyway, thinking of ways
On Mon, Oct 31, 2011 at 2:08 PM, Robert Haas wrote:
>> Multicolumn indices on (c1, c2, ..., cn) can only be used on where
>> clauses involving c1..ck with k <= n
> I don't think that's true. I believe it can be used for a query that
> only touches, say, c2. It's just extremely inefficient.
Does post
On Mon, Oct 31, 2011 at 3:24 PM, Robert Haas wrote:
> Sure it does:
>
> rhaas=# create table baz (a bool, b int, c text, primary key (a, b));
> NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index
> "baz_pkey" for table "baz"
> CREATE TABLE
> rhaas=# insert into baz select true, g,
> ran
On Wed, Nov 2, 2011 at 12:13 PM, Robert Haas wrote:
> I wonder if we need to rethink, though. We've gotten a number of
> reports of problems that were caused by single-use CTEs not being
> equivalent - in terms of performance - to a non-CTE formulation of the
> same idea. It seems necessary for
Hi list,
I've been experiencing a weird performance issue lately.
I have a very simple (and usually very fast) query:
SELECT track_logs.id
FROM track_logs
WHERE track_logs.track_id = AND track_logs.track_status_id =
1 AND track_logs.date >= now() - interval '1 hours'
FOR UPDATE
Whose pla
On Thu, Nov 3, 2011 at 2:51 PM, Claudio Freire wrote:
> What other information should I provide?
Forgot all the usual details:
Server is postgresql 9.0.3, running in linux, quite loaded (load
average ~7), WAL on raid 1 2 spindles, data on raid 10 4 spindles, 16G
RAM.
Could it be h
On Thu, Nov 3, 2011 at 3:45 PM, Tom Lane wrote:
> Claudio Freire writes:
>> But I cannot figure out which transaction it would be. There *are*, in
>> fact, connections in '<IDLE> in transaction' state, which makes me think
>> those would be the culprit. But for the life of me, I cannot make
>
On Thu, Nov 3, 2011 at 4:29 PM, Claudio Freire wrote:
> Next time I find it blocking, I will check pg_locks directly and post
> the output.
Here it is, two instances of the query, while blocked:
select * from pg_locks where pid = 22636;
locktype| database | relation | page |
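A more direct way to see who blocks whom is to join ungranted locks
against granted ones; a sketch:

  SELECT w.pid AS waiter, h.pid AS holder, w.locktype, w.relation::regclass
  FROM pg_locks w
  JOIN pg_locks h ON h.granted
   AND h.locktype = w.locktype
   AND h.database IS NOT DISTINCT FROM w.database
   AND h.relation IS NOT DISTINCT FROM w.relation
   AND h.transactionid IS NOT DISTINCT FROM w.transactionid
  WHERE NOT w.granted;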
On Thu, Nov 3, 2011 at 8:45 PM, Tom Lane wrote:
> But before pursuing that idea, probably first you should
> back up and confirm whether the process is actually waiting, or running,
> or just really slow due to CPU contention. It might be useful to see
> what strace has to say about it.
Thanks
On Fri, Nov 4, 2011 at 1:26 PM, Kevin Grittner
wrote:
> As already pointed out, SELECT FOR UPDATE will require a disk write
> of the tuple(s) read. If these are glutting, increasing
> shared_buffers would tend to make things worse.
I thought shared_buffers improved write caching.
We do tend to w
On Fri, Nov 4, 2011 at 2:07 PM, Kevin Grittner
wrote:
> Before anything else, you might want to make sure you've spread your
> checkpoint activity as much as possible by setting
> checkpoint_completion_target = 0.9.
We have
shared_buffers = 2G
bgwriter_delay = 1000ms
effective_io_concurrency=8
s
On Fri, Nov 4, 2011 at 3:26 PM, Shaun Thomas wrote:
> On 11/04/2011 12:22 PM, Claudio Freire wrote:
>
>> bgwriter_delay = 1000ms
>> wal_writer_delay=2000ms
>> commit_delay=1
>
> !?
>snip
> "Setting commit_delay can only help when there are many concu
On Fri, Nov 4, 2011 at 3:54 PM, Robert Haas wrote:
> On Fri, Nov 4, 2011 at 2:45 PM, Claudio Freire wrote:
>> I don't think 1 second can be such a big difference for the bgwriter,
>> but I might be wrong.
>
> Well, the default value is 200 ms. And I've never bef
On Fri, Nov 4, 2011 at 4:07 PM, Claudio Freire wrote:
>> Here again, you've set it to ten times the default value. That
>> doesn't seem like a good idea. I would start with the default and
>> tune down.
>
> Already did that. Waiting to see how it turns out.
On Thu, Nov 17, 2011 at 11:17 AM, Aidan Van Dyk wrote:
> But remember, you're doing all that in a single query. So your disk
> subsystem might even be able to perform even more *througput* if it
> was given many more concurrent request. A big raid10 is really good
> at handling multiple concurre
On Thu, Nov 17, 2011 at 2:10 PM, Jon Nelson wrote:
> What sorts of things should I be looking at to improve the performance
> of this query? Is my interpretation of that log line totally off base?
You'll have to post some more details.
Like a query and an explain/explain analyze.
Memory consumpt
On Thu, Nov 17, 2011 at 10:07 PM, Greg Matthews
wrote:
> if (smoothed_alloc <= (float) recent_alloc)
>     smoothed_alloc = recent_alloc;
> else if (smoothed_alloc >= 0.1)
>     smoothed_alloc += ((float) recent_alloc - smoothed_alloc) /
>
On Wed, Nov 23, 2011 at 7:24 PM, Gary Warner wrote:
> See that "Seq Scan on link_url"? We can't figure out why that is there! We
> should be scanning for a matching "urlid" and we have an index on "urlid"?
>
> When this is happening in a "two table" version of this problem, we can get
> tempor
On Sat, Dec 24, 2011 at 4:22 PM, Michael Smolsky wrote:
> work_mem = 128MB (tried 257MB, didn't change anything)
This is probably your problem.
Without an EXPLAIN output, I cannot be sure, but 'work_mem' is not the
total amount of memory a query can use; it's the amount of memory it
can use for each individual sort or hash operation.
On Tue, Dec 27, 2011 at 1:00 PM, Scott Marlowe wrote:
> He can lower it for just that query but honestly, even on a machine
> with much more memory I'd never set it as high as he has it. On a
> busy machine with 128G RAM the max I ever had it set to was 16M, and
> that was high enough I kept a cl
On Wed, Dec 28, 2011 at 3:02 PM, Matteo Beccati wrote:
> The query eventually completed in more than 18h. For comparison a normal
> run doesn't take more than 1m for that specific step.
>
> Do you think that bad stats and suboptimal plan alone could explain such
> a behaviour?
Did you get the exp
On Thu, Dec 29, 2011 at 11:33 AM, Aleksej Trofimov
wrote:
> Postgres Streaming replication is WAL based replication, so using this type
> of replication you will have absolutely identical database servers, which is
> the best choice for HA and scaling reads. Also this choice does not practically
> affect
On Wed, Jan 25, 2012 at 6:18 AM, sridhar bamandlapally
wrote:
> I just want to illustrate an idea that may make it possible to bring
> parallel processing to PostgreSQL at the SQL-query level
>
> The PARALLEL option in Oracle really gives a great improvement in
> performance; the multi-thread concept has great possibi
On Wed, Jan 25, 2012 at 5:16 PM, Merlin Moncure wrote:
> On Wed, Jan 25, 2012 at 7:43 AM, Claudio Freire
> wrote:
>> I know squat about how to implement this, but I've been considering
>> picking the low hanging fruit on that tree and patching up PG to try
>> t
On Fri, Jan 27, 2012 at 4:56 PM, Heikki Linnakangas
wrote:
> I don't think there's anything particular in postgres that would make it a
> poor choice on a small system, as far as CPU usage is concerned anyway. But
> inserting rows in a database is certainly slower than, say, writing them
> into a