Nothing?
No ideas?
Did I forget to include some useful bit?
On Fri, Mar 4, 2011 at 8:22 PM, Claudio Freire klaussfre...@gmail.com wrote:
Hello, first post to this list.
I have this query that ran in milliseconds in postgres 8.3.7 (usually
50-100 ms), and now it takes a full 10 minutes.
This is postgresql 9.0.3:
Query:
select
sum(stat_responses) * 100.0 / sum(stat_invites) as stat_response_rate,
sum(real_responses) * 100.0 / sum(real_invites) as real_response_rate
from (
select
ms.invites as stat_invites,
(select count(*)
On Wed, Mar 16, 2011 at 3:34 PM, Shaun Thomas stho...@peak6.com wrote:
If not, it seems like a valid configurable. We set our random_page_cost to
1.5 once the DB was backed by NVRAM. I could see that somehow influencing
precedence of a backwards index scan. But even then, SSDs and their ilk
On Thu, Mar 17, 2011 at 1:42 PM, J Sisson sisso...@gmail.com wrote:
On Thu, Mar 17, 2011 at 10:13 AM, Jeff thres...@torgo.978.org wrote:
hey folks,
Running into some odd performance issues between a few of our db boxes.
We've noticed similar results both in OLTP and data warehousing
On Fri, Mar 18, 2011 at 7:52 AM, Vitalii Tymchyshyn tiv...@gmail.com wrote:
On 18.03.11 09:15, Anssi Kääriäinen wrote:
Hello.
If your queries work on single attribute, you can try adding partial indexes
for different attributes. Note that in this case parameterized statements
may prevent
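The per-attribute partial index idea above can be sketched outside SQL as well. Below is a hypothetical Python model (names and structure are mine, not from the thread) showing why such an index stays small: it only holds rows matching its predicate, and it is only valid for lookups whose conditions imply that predicate.

```python
# Hypothetical sketch (not from the thread): a partial index only stores
# entries for rows that satisfy its predicate, keeping it small.
class PartialIndex:
    def __init__(self, predicate, key):
        self.predicate = predicate  # rows must satisfy this to be indexed
        self.key = key              # the indexed expression
        self.entries = {}           # key value -> list of row ids

    def insert(self, row_id, row):
        if self.predicate(row):     # rows failing the predicate are skipped
            self.entries.setdefault(self.key(row), []).append(row_id)

    def lookup(self, key_value):
        # only valid for queries whose WHERE clause implies the predicate
        return self.entries.get(key_value, [])
```

In SQL terms this mirrors `CREATE INDEX ... ON t (col) WHERE predicate`; the planner can only use such an index when the query's WHERE clause provably implies the index predicate, which is why parameterized statements (parameter values unknown at plan time) may prevent its use.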
On Fri, Mar 18, 2011 at 3:19 AM, Jesper Krogh jes...@krogh.cc wrote:
* Depending on your workload, of course; you're typically not
bottlenecked by the number of CPU cores, so strive for fewer,
faster cores.
Depending on your workload again, but faster memory is even more
important than faster
On Wed, Mar 23, 2011 at 2:12 PM, Josh Berkus j...@agliodbs.com wrote:
Folks,
...
It really seems like we should be able to detect an obvious high-risk
situation like this one. Or maybe we're just being too optimistic about
discarding subplans?
Why not let the GEQO learn from past
On Wed, Mar 23, 2011 at 5:29 PM, Josh Berkus j...@agliodbs.com wrote:
On 3/23/11 10:35 AM, Claudio Freire wrote:
* consider plan bailout: execute a tempting plan, and if it takes too
long or its effective cost rises well above the expected cost, bail
to a safer plan
That would actually solve
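The bailout idea can be sketched abstractly (an illustrative model of the proposal, not anything that exists in the planner): run the preferred plan while tracking actual cost against the estimate, and switch to the safer plan once the overrun exceeds a safety factor.

```python
# Illustrative sketch of "plan bailout" (not actual PostgreSQL behaviour):
# execute the tempting plan, but abandon it for a safer plan once its
# observed cost overruns the planner's estimate by a safety factor.
def run_with_bailout(primary_step_costs, fallback, estimated_cost, factor=10.0):
    spent = 0.0
    for cost in primary_step_costs:   # per-step actual costs of the tempting plan
        spent += cost
        if spent > factor * estimated_cost:
            return fallback()         # bail to the safer plan
    return "primary completed"
```

The hard part, which the thread goes on to discuss, is that work already done by the abandoned plan may not be reusable, so bailing has a real cost of its own.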
On Wed, Mar 23, 2011 at 6:00 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Claudio Freire klaussfre...@gmail.com writes:
In my head, safer = better worst-case performance.
If the planner starts operating on the basis of worst case rather than
expected-case performance, the complaints will be far
On Thu, Mar 24, 2011 at 5:30 PM, Nathan Boley npbo...@gmail.com wrote:
Another approach, that hasn't been suggested yet, is some Bayesian
update method. There, rather than calculating a specific parameter
value ( like ndistinct ), you try to store the entire distribution and
choose the plan
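A toy version of the distribution idea (my own illustration, not Nathan's actual proposal): instead of planning against a point estimate of ndistinct, weight each candidate plan's cost across the stored distribution and pick the lowest expected cost.

```python
# Toy illustration (assumed interface, not from the thread): choose a plan
# by expected cost over a distribution of ndistinct, not a point estimate.
def expected_cost(cost_fn, posterior):
    # posterior: list of (ndistinct_value, probability) pairs summing to 1
    return sum(p * cost_fn(v) for v, p in posterior)

def choose_plan(plans, posterior):
    # plans: dict of plan name -> cost as a function of ndistinct
    return min(plans, key=lambda name: expected_cost(plans[name], posterior))
```

The interesting behaviour is that a plan that looks best at the single most likely ndistinct value can lose once the tail of the distribution is priced in.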
On Tue, Mar 29, 2011 at 7:16 PM, Samuel Gendler
sgend...@ideasculptor.com wrote:
Is there any performance benefit to revamping the workload such that it issues
a single:
insert into (...) select ... UNION select ... UNION select
as opposed to 3 separate insert into () select ... statements.
I
I actually implemented a statistical system for measuring these kinds
of correlations.
It's complex, but it might be adaptable to pgsql. Furthermore, one of
the latest projects of mine was to replace the purely statistical
approach with SVCs.
Too bad I won't be able to devote any time to that
On Tue, Apr 12, 2011 at 7:20 AM, Dieter Rehbein
dieter.rehb...@skiline.cc wrote:
Hi everybody,
I have a performance-problem with a query using a LIMIT. There are other
threads rergading performance issues with LIMIT, but I didn't find useful
hints for our problem and it might
be
On Tue, Apr 12, 2011 at 10:59 AM, Dieter Rehbein
dieter.rehb...@skiline.cc wrote:
I just executed a VACUUM ANALYZE and now everything performs well. hm,
strange.
That probably means you need more statistics - try increasing the
newsfeed's statistics target count.
ALTER TABLE newsfeed_item
On Tue, Apr 12, 2011 at 6:40 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Well, that pretty much clinches it. Your RAM access tops out at 16
processors. It appears that your processors are spending most of
their time waiting for and contending for the RAM bus.
It tops, but it
On Wed, Apr 13, 2011 at 4:32 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
If you model the costing to reflect the reality on your server, good
plans will be chosen.
Wouldn't it be better to derive those costs from actual performance
data measured at runtime?
Say, pg could measure
On Wed, Apr 13, 2011 at 10:16 PM, Tomas Vondra t...@fuzzy.cz wrote:
You mean Support Vector Classifiers? Interesting idea, although I don't
see how to apply that to query planning, especially with non-numeric
inputs. Could you share more details on that statistical system and how
do you think
On Wed, Apr 13, 2011 at 11:52 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Getting numbers that mean much of anything is a slow, expensive
process. You really don't want the database trying to do that for you.
Once you've got them, you *really* don't want the database
editorializing on them.
So
On Thu, Apr 14, 2011 at 12:19 AM, Tomas Vondra t...@fuzzy.cz wrote:
Another issue is that when measuring multiple values (processing of
different requests), the decisions may be contradictory so it really
can't be fully automatic.
I don't think it's so dependent on workload. It's
On Thu, Apr 14, 2011 at 1:26 AM, Tomas Vondra t...@fuzzy.cz wrote:
Workload A: Touches just a very small portion of the database, so the
'active' part actually fits into memory. In this case the cache hit
ratio can easily be close to 99%.
Workload B: Touches large portion of the database,
On Thu, Apr 14, 2011 at 10:05 PM, Scott Carey sc...@richrelevance.com wrote:
Huge Pages helps caches.
Dual-pivot quicksort is more cache friendly and is _always_ equal to or
faster than traditional quicksort (it's a provably improved algorithm).
If you want a cache-friendly sorting algorithm,
On Fri, Apr 15, 2011 at 12:42 AM, Scott Carey sc...@richrelevance.com wrote:
I do know that dual-pivot quicksort provably causes fewer swaps (but the
same # of compares) than the usual single-pivot quicksort. And swaps are a
lot slower than you would expect due to the effects on processor
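For reference, a minimal sketch of the Yaroslavskiy-style dual-pivot partitioning under discussion (illustrative Python, variable names mine; whether it is *always* faster is exactly what the thread is debating). Each pass splits the array into three regions around two pivots p <= q.

```python
# Yaroslavskiy-style dual-pivot quicksort: partition into three regions
# (< p, between p and q, > q) per pass, then recurse on each region.
def dual_pivot_quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    if a[lo] > a[hi]:
        a[lo], a[hi] = a[hi], a[lo]
    p, q = a[lo], a[hi]       # the two pivots, p <= q
    i = lo + 1                # elements < p end up left of i
    k = lo + 1                # scanning index
    j = hi - 1                # elements > q end up right of j
    while k <= j:
        if a[k] < p:
            a[k], a[i] = a[i], a[k]
            i += 1
        elif a[k] > q:
            while a[j] > q and k < j:
                j -= 1
            a[k], a[j] = a[j], a[k]
            j -= 1
            if a[k] < p:
                a[k], a[i] = a[i], a[k]
                i += 1
        k += 1
    i -= 1
    j += 1
    a[lo], a[i] = a[i], a[lo]  # move pivots into their final slots
    a[hi], a[j] = a[j], a[hi]
    dual_pivot_quicksort(a, lo, i - 1)
    dual_pivot_quicksort(a, i + 1, j - 1)
    dual_pivot_quicksort(a, j + 1, hi)
    return a
```

The three-way split is what drives the fewer-swaps claim: elements already between the two pivots are never moved during the pass.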
On Mon, Apr 18, 2011 at 7:14 AM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
# REINDEX INDEX new_idx_userid;
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was
On Mon, Apr 18, 2011 at 8:39 AM, Claudio Freire klaussfre...@gmail.com wrote:
Aside from a bug, that's the only reason I can think for a pg backend
to bail out like that. Well, the connection could have been cut off by
other means (ie: someone tripped on the cable or something), but lets
On Tue, Apr 19, 2011 at 11:07 AM, Robert Klemme
shortcut...@googlemail.com wrote:
I find that slightly contradictory: either you do care about the
values then your business requirements dictate the aggregate function.
If you only want to pick any value actually in the table but do not
care
On Thu, Apr 21, 2011 at 4:05 AM, Brendan Jurd dire...@gmail.com wrote:
IMMUTABLE indicates that the function cannot modify the database and
always returns the same result when given the same argument values
Emphasis on always. If the result of the function, given the same
argument values,
On Thu, Apr 21, 2011 at 9:30 AM, Claudio Freire klaussfre...@gmail.com wrote:
On Thu, Apr 21, 2011 at 4:05 AM, Brendan Jurd dire...@gmail.com wrote:
IMMUTABLE indicates that the function cannot modify the database and
always returns the same result when given the same argument values
On Thu, Apr 21, 2011 at 2:48 PM, Stephen Frost sfr...@snowman.net wrote:
There's probably something else that's trying to grab all the memory and
then tries to use it and PG ends up getting nailed because the kernel
over-attributes memory to it. You should be looking for that other
process..
On Thu, Apr 21, 2011 at 2:53 PM, Claudio Freire klaussfre...@gmail.com wrote:
On Thu, Apr 21, 2011 at 2:48 PM, Stephen Frost sfr...@snowman.net wrote:
There's probably something else that's trying to grab all the memory and
then tries to use it and PG ends up getting nailed because the kernel
On Thu, Apr 21, 2011 at 5:50 PM, Tory M Blue tmb...@gmail.com wrote:
# - Checkpoints -
checkpoint_segments = 100
max_connections = 300
shared_buffers = 2500MB # min 128kB or max_connections*16kB
max_prepared_transactions = 0
work_mem = 100MB
maintenance_work_mem = 128MB
fsync = on
On Thu, Apr 21, 2011 at 6:15 PM, Tory M Blue tmb...@gmail.com wrote:
While I don't mind the occasional slap of reality, this configuration
has run for 4+ years. It's possible that, as with many other components,
each Fedora release is worse than the prior.
I'd say you've been lucky.
You must be
On Fri, Apr 22, 2011 at 6:45 PM, Cédric Villemain
cedric.villemain.deb...@gmail.com wrote:
Are you sure it is a PAE kernel ? You look limited to 4GB.
If my memory/knowledge serves me right, PAE doesn't remove that limit.
PAE allows more processes, and they can use more memory together, but
one
On Wed, Apr 27, 2011 at 10:27 PM, Robert Haas robertmh...@gmail.com wrote:
What if the user is using an SSD or ramdisk?
Admittedly, in many cases, we could probably get somewhat useful
numbers this way. But I think it would be pretty expensive.
gettimeofday() is one of the reasons why
As for the question in the title, no, if the server starts, shared
buffers should not be the reason for a subsequent crash.
In debian, it is common that the maximum allowed shared memory setting
on your kernel will prevent a server from even starting, but I guess
that's not your problem (because
On Fri, Apr 29, 2011 at 11:37 PM, Greg Smith g...@2ndquadrant.com wrote:
Anyway, how to collect this data is a separate problem from what should be
done with it in the optimizer. I don't actually care about the collection
part very much; there are a bunch of approaches with various trade-offs.
On Mon, May 2, 2011 at 10:54 PM, Mark marek.bal...@seznam.cz wrote:
but the result has been worse than before. By the way, is there a possibility
to create a better query with the same effect?
I have tried more queries, but this one has the best performance yet.
Well, this seems to be the worst part:
On Mon, May 9, 2011 at 10:32 PM, Chris Hoover revo...@gmail.com wrote:
So, does anyone have any suggestions/experiences in benchmarking storage
when the storage is smaller than 2x memory?
Try writing a small python script (or C program) to mmap a large chunk
of memory, with MAP_LOCKED, this
On Thu, May 26, 2011 at 5:36 PM, Merlin Moncure mmonc...@gmail.com wrote:
Point being: cranking buffers
may have been the bee's knees with, say, the 8.2 buffer manager, but
present and future improvements may have rendered that change moot or
even counterproductive.
I suggest you read the docs
On Thu, May 26, 2011 at 6:02 PM, Merlin Moncure mmonc...@gmail.com wrote:
The point is what we can prove, because going through the
motions of doing that is useful.
Exactly, and whatever you can prove will be workload-dependant.
So you can't prove anything generally, since no single setting is
On Fri, May 27, 2011 at 9:24 PM, Maciek Sakrejda msakre...@truviso.com wrote:
Another +1. While I understand that this is not simple, many users
will not look outside of standard docs, especially when first
evaluating PostgreSQL. Merlin is right that the current wording does
not really mention
On Wed, Jun 8, 2011 at 7:08 AM, anthony.ship...@symstream.com wrote:
What seems odd to me is that the only difference between the two is the limit
clause
Why would that seem odd?
Of course optimally executing a plan with limit is a lot different
than one without.
Just... why are you sorting
Something like this[0] ?
[0] http://archives.postgresql.org/pgsql-hackers/2007-04/msg01037.php
On Mon, Jun 20, 2011 at 9:57 AM, Didik Prasetyo
prasetyodidi...@yahoo.com wrote:
Hi friend, I have a problem with database performance in Postgres. How can
I find slow queries in Postgres?
I want to kill or
On Thu, Jun 23, 2011 at 5:05 PM, Harry Mantheakis
harry.manthea...@riskcontrollimited.com wrote:
TOP, FREE and VMSTAT utilities indicate that only about half of the 6GB of
memory is being used, so I have no reason to believe that the server is
struggling.
You have a hinky idea of server load.
On Fri, Jun 24, 2011 at 1:19 PM, Harry Mantheakis
harry.manthea...@riskcontrollimited.com wrote:
there are lots of ways in which it could be struggling...
I have been monitoring the server with IOSTAT -d and IOSTAT -c and I cannot
see anything alarming.
If iostat doesn't show disk load,
On Fri, Jul 8, 2011 at 12:48 PM, Dean Rasheed dean.a.rash...@gmail.com wrote:
Yes, it's the same issue that affects deferrable PK and FK
constraints, but even non-deferrable FKs use AFTER ROW triggers that
suffer from this problem. These triggers don't show up in a \d from
psql, but they are
On Fri, Sep 9, 2011 at 5:38 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
This is getting back to that issue of using only enough processes at
one time to keep all the bottleneck resources fully utilized. Some
people tend to assume that if they throw a few more concurrent
processes
On Fri, Sep 9, 2011 at 3:16 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Add together the shared_buffers setting and whatever the OS tells
you is used for cache under your normal load. It's usually 75% of
RAM or higher. (NOTE: This doesn't cause any allocation of RAM; it's
a hint to
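The sizing rule quoted above reduces to simple arithmetic; a sketch (my own framing of the advice, not Kevin's exact words):

```python
# Sketch of the sizing rule above. Note effective_cache_size is a planner
# hint, not an allocation: it tells the planner how much of the database
# is likely cached somewhere (PostgreSQL shared_buffers + OS page cache).
def effective_cache_size_gb(shared_buffers_gb, os_cache_gb):
    return shared_buffers_gb + os_cache_gb

# A common shortcut when the OS cache figure isn't handy: assume 75% of RAM.
def effective_cache_size_estimate_gb(total_ram_gb, fraction=0.75):
    return total_ram_gb * fraction
```

For example, 2 GB of shared_buffers plus 10 GB of observed OS cache gives an effective_cache_size of 12 GB, matching the 75% rule on a 16 GB box.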
On Sun, Sep 11, 2011 at 3:59 PM, Igor Chudov ichu...@gmail.com wrote:
Well, right now, my server has twelve 7,200 RPM 2TB hard drives in a RAID-6
configuration.
They are managed by a 3WARE 9750 RAID CARD.
I would say that I am not very concerned with linear relationship of read
speed to disk
On Sun, Sep 11, 2011 at 4:16 PM, Andy Colson a...@squeakycode.net wrote:
Upgrading to major versions of PG may or may not be painful. (mysql
sometimes works seamlessly between versions, it appears brilliant. But I
have had problems with an update, and when it goes bad, you don't have a lot
of
On Sun, Sep 11, 2011 at 4:21 PM, Igor Chudov ichu...@gmail.com wrote:
Quantitatively, what would you say is the write speed difference between
RAID 10 and RAID 6?
https://support.nstein.com/blog/archives/73
There you can see a comparison with 4 drives, and raid 10 is twice as fast.
Since raid
On Mon, Sep 12, 2011 at 1:16 AM, Igor Chudov ichu...@gmail.com wrote:
I could, say, set work_mem to 30 GB? (64 bit linux)
I don't think you'd want that. Remember, work_mem is the amount of
memory *per sort*.
Queries can request several times that much memory, once per sort they
need to perform.
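The multiplication that reply warns about can be made explicit (a back-of-the-envelope sketch; the per-query sort count is workload-dependent):

```python
# Back-of-the-envelope sketch of the warning above: work_mem is granted
# per sort/hash operation, per backend, so worst-case memory use
# multiplies out rather than being a single per-server cap.
def worst_case_work_mem_mb(work_mem_mb, sort_nodes_per_query, concurrent_queries):
    return work_mem_mb * sort_nodes_per_query * concurrent_queries
```

With work_mem set to 30 GB, even 2 sorts in each of 10 concurrent queries could, in the worst case, ask for 600 GB.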
On Thu, Sep 15, 2011 at 5:00 PM, Merlin Moncure mmonc...@gmail.com wrote:
HM, what if you junked the current hash indexam, and just implemented
a wrapper over btree so that the 'hash index' was just short hand for
hashing the value into a standard index?
I'm doing this (only by hand, indexing
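The "hash index as shorthand for a btree over the hash" idea, including the by-hand variant mentioned here, can be sketched like this (illustrative Python, names mine; in SQL it's roughly an expression index on a hash of the column plus a recheck of the real value):

```python
# Illustrative sketch: keep a sorted (btree-like) index over a fixed-size
# hash of the value, and recheck the real value on lookup because hashes
# can collide. Entries stay small no matter how long the values are.
import bisect
import zlib

class HashedIndex:
    def __init__(self):
        self.keys = []                       # sorted list of (hash, row_id)

    def insert(self, value, row_id):
        h = zlib.crc32(value.encode())       # fixed-size 32-bit key
        bisect.insort(self.keys, (h, row_id))

    def lookup(self, value, rows):
        h = zlib.crc32(value.encode())
        i = bisect.bisect_left(self.keys, (h, -1))
        out = []
        while i < len(self.keys) and self.keys[i][0] == h:
            rid = self.keys[i][1]
            if rows[rid] == value:           # recheck: hashes can collide
                out.append(rid)
            i += 1
        return out
```

The appeal is exactly what the thread describes: you get ordinary btree machinery (WAL, concurrency, uniqueness-by-recheck) while only ever indexing a small fixed-size key.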
On Fri, Sep 16, 2011 at 12:38 AM, Tom Lane t...@sss.pgh.pa.us wrote:
I'm not entirely following this eagerness to junk that AM, anyway.
We've put a lot of sweat into it over the years, in the hopes that
it would eventually be good for something. It's on the edge of
being good for something
On Fri, Sep 16, 2011 at 3:00 AM, Merlin Moncure mmonc...@gmail.com wrote:
c:\Program Files\PostgreSQL\9.0\data> dir /s | grep 16525
09/15/2011 07:46 PM 224,641,024 16525
c:\Program Files\PostgreSQL\9.0\data> dir /s | grep 16526
09/15/2011 07:49 PM 268,451,840 16526
That's not
It's not an issue for me (it's not really impacting performance), but
since it was odd I thought I might ask.
I have this supermegaquery:
SELECT
t.date AS status_date, lu.id AS memberid, lu.username AS
username, u.url AS url, ub.url_pattern AS urlpattern, lu.email AS
email,
2011/9/19 Ondrej Ivanič ondrej.iva...@gmail.com:
BTW, Does Postgres use Bloom filter anywhere?
I saw patches for at least in-memory bloom filters (for hash joins)
Not sure they're committed. I think so.
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
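For background on the structure being asked about, a minimal illustrative sketch (my own Python, not PostgreSQL's implementation): a Bloom filter answers "possibly present" or "definitely absent" using k hash-selected bits, which is what makes it attractive for skipping hash-join probes.

```python
# Illustrative Bloom filter sketch (not PostgreSQL code): k hash functions
# each set one bit per inserted item; a membership test can give false
# positives but never false negatives.
import hashlib

class BloomFilter:
    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.k = num_hashes
        self.bits = 0                        # an int used as a bit array

    def _positions(self, item):
        for i in range(self.k):              # derive k positions per item
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        # True means "possibly present"; False means "definitely absent"
        return all(self.bits >> pos & 1 for pos in self._positions(item))
```

In a hash join, probing such a filter built from the inner side lets the executor discard most non-matching outer rows without touching the hash table at all.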
On Mon, Sep 19, 2011 at 12:54 PM, Vitalii Tymchyshyn tiv...@gmail.com wrote:
On 19.09.11 18:19, Robert Klemme wrote:
I still haven't seen a solution to locking when a hash table needs
resizing. All hashing algorithms I can think of at the moment would
require a lock on the whole beast
On Mon, Sep 19, 2011 at 3:43 PM, Merlin Moncure mmonc...@gmail.com wrote:
To make the test into i/o bound, I change the setrandom from 10 to
1000; this produced some unexpected results. The hash index is
pulling about double the tps (~80 vs ~ 40) over the hybrid version.
Well, unless
2011/9/21 Hellmuth Vargas hiv...@gmail.com:
SAN
VMWare
CentOS 6-64bits
PostgreSQL 9-64bits
I have the machine virtualized in order to host other services as well,
so my question is: are the parameters and considerations laid out in the
article applicable to the virtualized setup, especially
2011/9/29 Numael Vacca Duran numae...@hotmail.com:
Hello!!
I have a serious problem: I am working with postgresql 8.2 and tomcat
5.5.20 on a machine running Centos 5.3
1- 8.2 is quite old
2- Much more data is needed: the queries themselves, and an EXPLAIN and
EXPLAIN ANALYZE of those queries,
On Tue, Oct 11, 2011 at 12:02 AM, Samuel Gendler
sgend...@ideasculptor.com wrote:
The original question doesn't actually say that performance has gone down,
only that cpu utilization has gone up. Presumably, with lots more RAM, it is
blocking on I/O a lot less, so it isn't necessarily
On Tue, Oct 11, 2011 at 3:02 PM, alexandre - aldeia digital
adald...@gmail.com wrote:
2) Change all memory chips to new others, instead of maintain the old (16
GB) + new (32 GB).
Of course, mixing disables double/triple/whatuple channel, and makes
your memory subsystem correspondingly slower.
On Tue, Oct 11, 2011 at 5:02 PM, alexandre - aldeia digital
adald...@gmail.com wrote:
The initial change (adding more memory) was made by a Dell technician,
who told us that he used memory chips of the same specification.
But, you know how it works... ;)
Yeah, but different size ==
On Tue, Oct 11, 2011 at 5:33 PM, Arjen van der Meijden
acmmail...@tweakers.net wrote:
That really depends on the chipset/server. The current intel E56xx-chips
(and previous E55xx) basically just expect groups of 3 modules per
processor, but it doesn't really matter whether that's 3x2+3x4 or 6x4
On Tue, Oct 11, 2011 at 5:16 PM, Carlo Stonebanks
stonec.regis...@sympatico.ca wrote:
Question 2) Regardless of the answer to Question 1 - if another_id is not
guaranteed to be unique, whereas pkey_id is - is there any value to changing
the order of declaration (more generally, is there a
On Mon, Oct 24, 2011 at 6:31 PM, Merlin Moncure mmonc...@gmail.com wrote:
2. Multiple RAID'ed SSDs sounds like (vast) overkill for your workload. A
single SSD should be sufficient (will get you several thousand TPS on
pgbench for your DB size).
Also, raid controllers interfere with TRIM.
On Mon, Oct 24, 2011 at 11:37 PM, David Boreham david_l...@boreham.org wrote:
What about redundancy?
How do you swap an about-to-die SSD?
Software RAID-1?
The approach we take is that we use 710 series devices which have predicted
reliability similar to all the other components in the
On Thu, Oct 27, 2011 at 4:42 AM, Gregg Jaskiewicz gryz...@gmail.com wrote:
What does 'select * from pg_stat_activity' say, more precisely - the
waiting column.
Whether that particular process is waiting for it to be granted some
kind of database-level lock.
On Fri, Oct 28, 2011 at 12:28 PM, Marcus Engene meng...@engene.se wrote:
Hi list,
Every now and then I have write peaks which causes annoying delay on my
website. No particular reason it seems, just that laws of probability
dictates that there will be peaks every now and then.
Anyway,
On Mon, Oct 31, 2011 at 2:08 PM, Robert Haas robertmh...@gmail.com wrote:
Multicolumn indices on (c1, c2, ..., cn) can only be used on where
clauses involving c1..ck with k <= n.
I don't think that's true. I believe it can be used for a query that
only touches, say, c2. It's just extremely
On Mon, Oct 31, 2011 at 3:24 PM, Robert Haas robertmh...@gmail.com wrote:
Sure it does:
rhaas=# create table baz (a bool, b int, c text, primary key (a, b));
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index
baz_pkey for table baz
CREATE TABLE
rhaas=# insert into baz select
On Wed, Nov 2, 2011 at 12:13 PM, Robert Haas robertmh...@gmail.com wrote:
I wonder if we need to rethink, though. We've gotten a number of
reports of problems that were caused by single-use CTEs not being
equivalent - in terms of performance - to a non-CTE formulation of the
same idea. It
Hi list,
I've been experiencing a weird performance issue lately.
I have a very simple (and usually very fast) query:
SELECT track_logs.id
FROM track_logs
WHERE track_logs.track_id = <some id> AND track_logs.track_status_id =
1 AND track_logs.date >= now() - interval '1 hour'
FOR UPDATE
On Thu, Nov 3, 2011 at 2:51 PM, Claudio Freire klaussfre...@gmail.com wrote:
What other information should I provide?
Forgot all the usual details:
Server is postgresql 9.0.3, running in linux, quite loaded (load
average ~7), WAL on raid 1 2 spindles, data on raid 10 4 spindles, 16G
RAM.
Could
On Thu, Nov 3, 2011 at 3:45 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Claudio Freire klaussfre...@gmail.com writes:
But I cannot figure out which transaction it would be. There *are*, in
fact, connections in idle in transaction state, which makes me think
those would be the culprit
On Thu, Nov 3, 2011 at 4:29 PM, Claudio Freire klaussfre...@gmail.com wrote:
Next time I find it blocking, I will check pg_locks directly and post
the output.
Here it is, two instances of the query, while blocked:
select * from pg_locks where pid = 22636;
locktype| database | relation
On Thu, Nov 3, 2011 at 8:45 PM, Tom Lane t...@sss.pgh.pa.us wrote:
But before pursuing that idea, probably first you should
back up and confirm whether the process is actually waiting, or running,
or just really slow due to CPU contention. It might be useful to see
what strace has to say
On Fri, Nov 4, 2011 at 1:26 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
As already pointed out, SELECT FOR UPDATE will require a disk write
of the tuple(s) read. If these are glutting, increasing
shared_buffers would tend to make things worse.
I thought shared_buffers improved write
On Fri, Nov 4, 2011 at 2:07 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Before anything else, you might want to make sure you've spread your
checkpoint activity as much as possible by setting
checkpoint_completion_target = 0.9.
We have
shared_buffers = 2G
bgwriter_delay = 1000ms
On Fri, Nov 4, 2011 at 3:26 PM, Shaun Thomas stho...@peak6.com wrote:
On 11/04/2011 12:22 PM, Claudio Freire wrote:
bgwriter_delay = 1000ms
wal_writer_delay=2000ms
commit_delay=1
!?
snip
Setting commit_delay can only help when there are many concurrently
committing transactions
On Fri, Nov 4, 2011 at 3:54 PM, Robert Haas robertmh...@gmail.com wrote:
On Fri, Nov 4, 2011 at 2:45 PM, Claudio Freire klaussfre...@gmail.com wrote:
I don't think 1 second can be such a big difference for the bgwriter,
but I might be wrong.
Well, the default value is 200 ms. And I've never
On Fri, Nov 4, 2011 at 4:07 PM, Claudio Freire klaussfre...@gmail.com wrote:
Here again, you've set it to ten times the default value. That
doesn't seem like a good idea. I would start with the default and
tune down.
Already did that. Waiting to see how it turns out.
Nope, still happening
On Thu, Nov 17, 2011 at 11:17 AM, Aidan Van Dyk ai...@highrise.ca wrote:
But remember, you're doing all that in a single query. So your disk
subsystem might even be able to perform even more *througput* if it
was given many more concurrent request. A big raid10 is really good
at handling
On Thu, Nov 17, 2011 at 2:10 PM, Jon Nelson jnelson+pg...@jamponi.net wrote:
What sorts of things should I be looking at to improve the performance
of this query? Is my interpretation of that log line totally off base?
You'll have to post some more details.
Like a query and an explain/explain
On Thu, Nov 17, 2011 at 10:07 PM, Greg Matthews
gregory.a.matth...@nasa.gov wrote:
if (smoothed_alloc <= (float) recent_alloc)
    smoothed_alloc = recent_alloc;
else if (smoothed_alloc >= 0.1)
    smoothed_alloc += ((float) recent_alloc - smoothed_alloc)
On Wed, Nov 23, 2011 at 7:24 PM, Gary Warner g...@cis.uab.edu wrote:
See that Seq Scan on link_url? We can't figure out why that is there! We
should be scanning for a matching urlid and we have an index on urlid?
When this is happening in a two table version of this problem, we can get
On Sat, Dec 24, 2011 at 4:22 PM, Michael Smolsky sitr...@email.com wrote:
work_mem = 128MB (tried 257MB, didn't change anything)
This is probably your problem.
Without an EXPLAIN output, I cannot be sure, but 'work_mem' is not the
total amount of memory a query can use, it's the amount of
On Tue, Dec 27, 2011 at 1:00 PM, Scott Marlowe scott.marl...@gmail.com wrote:
He can lower it for just that query but honestly, even on a machine
with much more memory I'd never set it as high as he has it. On a
busy machine with 128G RAM the max I ever had it set to was 16M, and
that was
On Wed, Dec 28, 2011 at 3:02 PM, Matteo Beccati p...@beccati.com wrote:
The query eventually completed in more than 18h. For comparison a normal
run doesn't take more than 1m for that specific step.
Do you think that bad stats and suboptimal plan alone could explain such
a behaviour?
Did you
On Thu, Dec 29, 2011 at 11:33 AM, Aleksej Trofimov
aleksej.trofi...@ruptela.lt wrote:
Postgres streaming replication is WAL-based replication, so using this type
of replication you will have absolutely identical database servers, which is
the best choice for HA and for scaling reads. Also this choice is
On Wed, Jan 25, 2012 at 6:18 AM, sridhar bamandlapally
sridhar@gmail.com wrote:
I just want to illustrate an idea that may make it possible to bring up
parallel processing in PostgreSQL at the SQL-query level.
The PARALLEL option in Oracle really gives a great improvement in
performance; the multi-thread concept
On Wed, Jan 25, 2012 at 5:16 PM, Merlin Moncure mmonc...@gmail.com wrote:
On Wed, Jan 25, 2012 at 7:43 AM, Claudio Freire klaussfre...@gmail.com
wrote:
I know squat about how to implement this, but I've been considering
picking the low hanging fruit on that tree and patching up PG to try
On Fri, Jan 27, 2012 at 4:56 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
I don't think there's anything particular in postgres that would make it a
poor choice on a small system, as far as CPU usage is concerned anyway. But
inserting rows in a database is certainly slower
On Sat, Jan 28, 2012 at 2:21 PM, Jayashankar K B
jayashankar...@lnties.com wrote:
./configure
CC=/opt/freescale/usr/local/gcc-4.4.54-eglibc-2.10.54/m68k-linux/bin/m68k-linux-gnu-gcc
CFLAGS='-fmessage-length=0 -fpack-struct -mcpu=54418 -msoft-float'
--host=i686-pc-linux-gnu
On Sun, Jan 29, 2012 at 7:48 PM, Ron Arts ron.a...@gmail.com wrote:
Hi list,
I am running PostgreSQL 8.1 (CentOS 5.7) on a VM on a single XCP (Xenserver)
host.
This is a HP server with 8GB, Dual Quad Core, and 2 SATA in RAID-1.
The problem is: it's running very slow compared to running it
On Mon, Jan 30, 2012 at 6:27 AM, Saurabh saurabh@gmail.com wrote:
Hi all,
I am using Postgresql database for our project and doing some
performance testing. We need to insert millions of record with indexed
columns. We have 5 columns in table. I created index on integer only
then
On Mon, Jan 30, 2012 at 2:46 PM, Saurabh saurabh@gmail.com wrote:
max_connections = 100
shared_buffers = 32MB
wal_buffers = 1024KB
checkpoint_segments = 3
That's a default config isn't it?
You'd do well to try and optimize it for your system. The defaults are
really, reeallly
On Mon, Jan 30, 2012 at 4:13 PM, Alessandro Gagliardi
alessan...@path.com wrote:
So, here's the query:
SELECT private, COUNT(block_id) FROM blocks WHERE created > 'yesterday' AND
shared IS FALSE GROUP BY private
What confuses me is that though this is a largish table (millions of rows)
with
On Mon, Jan 30, 2012 at 5:35 PM, Alessandro Gagliardi
alessan...@path.com wrote:
To answer your (non-)question about Heroku, it's a cloud service, so I don't
host PostgreSQL myself. I'm not sure how much I can mess with things like
GUC since I don't even have access to the postgres database on
On Mon, Jan 30, 2012 at 5:55 PM, Alessandro Gagliardi
alessan...@path.com wrote:
Hm. Well, it looks like setting enable_seqscan=false is session specific, so
it seems like I can use it with this query alone; but it sounds like even if
that works, it's a bad practice. (Is that true?)
Yep
My