On Thu, 2006-02-16 at 12:35 +0100, Steinar H. Gunderson wrote:
glibc-2.3.5/stdlib/qsort.c:
/* Order size using quicksort. This implementation incorporates
four optimizations discussed in Sedgewick:
I can't see any references to merge sort in there at all.
stdlib/qsort.c defines
On Wed, 2006-02-15 at 18:28 -0500, Tom Lane wrote:
It seems clear that our qsort.c is doing a pretty awful job of picking
qsort pivots, while glibc is mostly managing not to make that mistake.
I haven't looked at the glibc code yet to see what they are doing
differently.
glibc qsort is
On Fri, 2006-01-20 at 18:14 +0900, James Russell wrote:
I am looking to speed up performance, and since each page executes a
static set of queries where only the parameters change, I was hoping
to take advantage of stored procedures since I read that PostgreSQL
caches the execution plans
On Fri, 2006-01-13 at 15:10 -0500, Michael Stone wrote:
OIDs seem to be on their way out, and most of the time you can get a
more helpful result by using a serial primary key anyway, but I wonder
if there's any extension to INSERT to help identify what unique id a
newly-inserted key will get?
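One common pattern at the time (a sketch, assuming a serial column whose backing sequence follows the default tablename_colname_seq naming; the table and column names here are invented):

```sql
-- Hypothetical table with a serial primary key
CREATE TABLE parts (id serial PRIMARY KEY, name text);

INSERT INTO parts (name) VALUES ('widget');
-- currval() returns the value this session just obtained from the
-- sequence, so it is safe under concurrent inserts by other sessions
SELECT currval('parts_id_seq');
```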
On Mon, 2005-12-05 at 09:42 +0200, Howard Oblowitz wrote:
I am trying to run a query that selects 26 million rows from a
table with 68 byte rows.
When run on the Server via psql the following error occurs:
calloc : Cannot allocate memory
That's precisely what I'd expect: the backend will
On Mon, 2005-07-11 at 19:07 +0100, Enrico Weigelt wrote:
I've got a similar problem: I have to match different datatypes,
ie. bigint vs. integer vs. oid.
Of course I tried to use casted index (aka ON (foo::oid)), but
it didn't work.
Don't include the cast in the index definition, include
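If the truncated advice follows the usual pattern, it amounts to indexing the plain column and casting the comparison value in the query instead (a sketch; the table and column names are invented):

```sql
-- Index the column as-is, with no cast in the index definition
CREATE INDEX foo_parent_idx ON foo (parent);

-- Cast the *value* in the query so its type matches the column
SELECT * FROM foo WHERE parent = 42::oid;
```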
On Sun, 2005-10-23 at 21:36 -0700, Josh Berkus wrote:
SELECT id INTO v_check
FROM some_table ORDER BY id LIMIT 1;
IF id > 0 THEN
... that says pretty clearly to code maintainers that I'm only interested in
finding out whether there's any rows in the table, while making sure I use
On Fri, 2005-10-21 at 07:34 -0500, Martin Nickel wrote:
Let's say I do the same thing in Postgres. I'm likely to have my very
fastest performance for the first few queries until memory gets filled up.
No, you're not: if a query doesn't hit the cache (both the OS cache and
the Postgres
On Mon, 2005-09-26 at 12:54 -0500, Announce wrote:
Is there a performance benefit to using int2 (instead of int4) in cases
where I know I will be well within its numeric range?
int2 uses slightly less storage space (2 bytes rather than 4). Depending
on alignment and padding requirements, as
Pryscila B Guttoski wrote:
On my master course, I'm studying the PostgreSQL's optimizer.
I don't know if anyone on this list has participated in the development
of PostgreSQL's optimizer, but maybe someone can help me on this
question.
pgsql-hackers might be more appropriate.
Cristian Prieto wrote:
Anyway, do you know where could I get more info and theory about
database optimizer plan? (in general)
Personally I like this survey paper on query optimization:
http://citeseer.csail.mit.edu/371707.html
The paper also cites a lot of other papers that cover
Jignesh Shah wrote:
Now the question is why there are so many calls to MemoryContextSwitchTo
in a single SELECT query command? Can it be minimized?
I agree with Tom -- if profiling indicates that MemoryContextSwitchTo()
is the bottleneck, I would be suspicious that your profiling setup is
Jim C. Nasby wrote:
Actually, from what I've read 4.2BSD actually took priority into account
when scheduling I/O.
FWIW, you can set I/O priority in recent versions of the Linux kernel
using ionice, which is part of RML's schedutils package (which was
recently merged into util-linux).
-Neil
Gnanavel S wrote:
reindex the tables separately.
Reindexing should not affect this problem, anyway.
-Neil
---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining
Tom Arthurs wrote:
I just pushed 8.0.3 to production on Sunday, and haven't had time to
really monitor it under load, so I can't tell if it's helped the context
switch problem yet or not.
8.0 is unlikely to make a significant difference -- by current sources
I meant the current CVS HEAD
Tom Arthurs wrote:
Yes, shared buffers in postgres are not used for caching
Shared buffers in Postgres _are_ used for caching, they just form a
secondary cache on top of the kernel's IO cache. Postgres does IO
through the filesystem, which is then cached by the kernel. Increasing
Mark Rinaudo wrote:
I'm running the Redhat Version of Postgresql which came pre-installed
with Redhat ES. It's version number is 7.3.10-1. I'm not sure what
options it was compiled with. Is there a way for me to tell?
`pg_config --configure` in recent releases.
Should I just compile my own
Mark Stosberg wrote:
I've used PQA to analyze my queries and am happy overall with how they are
running. About 55% of the query time is going to variations of the pet
searching query, which seems like where it should be going. The query is
frequent and complex. It has already been combed over for
On Sun, 2005-05-29 at 16:17 -0400, Eric Lauzon wrote:
So OID can be beneficial on static tables
OIDs aren't beneficial on static tables; unless you have unusual
requirements[1], there is no benefit to having OIDs on user-created
tables (see the default_with_oids GUC var, which will default to
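The per-table and per-session forms being referred to (syntax as of the 8.0 era; the table definition is invented for illustration):

```sql
-- Explicitly create a table without OIDs
CREATE TABLE events (id serial PRIMARY KEY, payload text) WITHOUT OIDS;

-- Or change the default for tables created later in this session
SET default_with_oids = false;
```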
mark durrant wrote:
PostgreSQL Machine:
Aggregate (cost=140122.56..140122.56 rows=1 width=0)
(actual time=24516.000..24516.000 rows=1 loops=1)
-> Index Scan using day on mtable
(cost=0.00..140035.06 rows=35000 width=0) (actual
time=47.000..21841.000 rows=1166025 loops=1)
Index Cond:
Tom Lane wrote:
Performance?
I'll run some benchmarks tomorrow, as it's rather late in my time zone.
If anyone wants to post some benchmark results, they are welcome to.
I disagree completely with the idea of forcing this behavior for all
datatypes. It could only be sensible for fairly wide
Josh Berkus wrote:
Don't hold your breath. MySQL, to judge by their first clustering
implementation, has a *long* way to go before they have anything usable.
Oh? What's wrong with MySQL's clustering implementation?
-Neil
Joshua D. Drake wrote:
Neil Conway wrote:
Oh? What's wrong with MySQL's clustering implementation?
Ram only tables :)
Sure, but that hardly makes it not usable. Considering the price of
RAM these days, having enough RAM to hold the database (distributed over
the entire cluster) is perfectly
Ying Lu wrote:
For a simple = query, which provides better performance: a hash index
or a B-tree index?
I don't think we've found a case in which the hash index code
outperforms B+-tree indexes, even for =. The hash index code also has
a number of additional
Christopher Petrilli wrote:
This being the case, is there ever ANY reason for someone to use it?
Well, someone might fix it up at some point in the future. I don't think
there's anything fundamentally wrong with hash indexes, it is just that
the current implementation is a bit lacking.
If not,
Jim C. Nasby wrote:
Having indexes that people shouldn't be using does add confusion for
users, and presents the opportunity for foot-shooting.
Emitting a warning/notice on hash-index creation is something I've
suggested in the past -- that would be fine with me.
Even if there is some kind of
Jim C. Nasby wrote:
No, hash joins and hash indexes are unrelated.
I know they are now, but does that have to be the case?
I mean, the algorithms are fundamentally unrelated. They share a bit of
code such as the hash functions themselves, but they are really solving
two different problems (disk
Tom Lane wrote:
On the other hand, once you reach the target index page, a hash index
has no better method than linear scan through all the page's index
entries to find the actually wanted key(s)
I wonder if it would be possible to store the keys in a hash bucket in
sorted order, provided that
Tom Lane wrote:
I have a gut reaction against that: it makes hash indexes fundamentally
subservient to btrees.
I wouldn't say subservient -- if there is no ordering defined for the
index key, we just do a linear scan.
However: what about storing the things in hashcode order? Ordering uint32s
Keith Worthington wrote:
-> Seq Scan on tbl_current (cost=0.00..1775.57 rows=76457
width=31) (actual time=22.870..25.024 rows=605 loops=1)
This rowcount is way off -- have you run ANALYZE recently?
-Neil
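The fix being suggested is simply to refresh the planner's statistics for that table:

```sql
-- Update planner statistics so row estimates match reality;
-- re-run EXPLAIN ANALYZE afterwards to confirm the estimate improves
ANALYZE tbl_current;
```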
Tom Lane wrote:
Not too many releases ago, there were several columns in pg_proc that
were intended to support estimation of the runtime cost and number of
result rows of set-returning functions. I believe in fact that these
were the remains of Joe Hellerstein's thesis on expensive-function
Tom Lane wrote:
The larger point is that writing an estimator for an SRF is frequently a
task about as difficult as writing the SRF itself
True, although I think this doesn't necessarily kill the idea. If
writing an estimator for a given SRF is too difficult, the user is no
worse off than they
Adam Palmblad wrote:
can I actually look at the call tree that occurs when my function is
being executed or will I be limited to viewing calls to functions in
the postmaster binary?
You're the one with the gprof data, you tell us :)
It wouldn't surprise me if gprof didn't get profiling data for
Magnus Hagander wrote:
You can *never* get above 80 without using write cache, regardless of
your OS, if you have a single disk.
Why? Even with, say, a 15K RPM disk? Or the ability to fsync() multiple
concurrently-committing transactions at once?
-Neil
On Sat, 2005-02-05 at 14:42 -0500, Tom Lane wrote:
Marinos Yannikos [EMAIL PROTECTED] writes:
Some more things I tried:
You might try the attached patch (which I just applied to HEAD).
It cuts down the number of acquisitions of the BufMgrLock by merging
adjacent bufmgr calls during a GIST
On Fri, 2004-11-26 at 14:37 +1300, Andrew McMillan wrote:
In PostgreSQL the UPDATE will result
internally in a new record being written, with the old record being
marked as deleted. That old record won't be re-used until after a
VACUUM has run, and this means that the on-disk tables will have
Josh Berkus wrote:
I was under the impression that work_mem would be used for the index if there
was an index for the RI lookup. Wrong?
Yes -- work_mem is not used for doing index scans, whether for RI
lookups or otherwise.
-Neil
On Fri, 2004-11-05 at 06:20, Steinar H. Gunderson wrote:
You mean, like, open(filename, O_DIRECT)? :-)
This disables readahead (at least on Linux), which is certainly not what we
want: for the very case where we don't want to keep the data in cache
for a while (sequential scans, VACUUM), we also want
On Thu, 2004-11-04 at 23:29, Pierre-Frédéric Caillaud wrote:
There is also the fact that syncing after every transaction could be
changed to syncing every N transactions (N fixed or depending on the data
size written by the transactions) which would be more efficient than the
On Fri, 2004-11-05 at 02:47, Chris Browne wrote:
Another thing that would be valuable would be to have some way to say:
Read this data; don't bother throwing other data out of the cache
to stuff this in.
This is similar, although not exactly the same thing:
On Mon, 2004-11-01 at 11:01, Josh Berkus wrote:
Gist indexes take a long time to create as compared
to normal indexes is there any way to speed them up ?
(for example by modifying sort_mem or something temporarily )
More sort_mem will indeed help.
How so? sort_mem improves index
On Mon, 2004-10-25 at 17:17, Curt Sampson wrote:
When you select all the columns, you're going to force it to go to the
table. If you select only the indexed column, it ought to be able to use
just the index, and never read the table at all.
Perhaps in other database systems, but not in
Matt Clark wrote:
I'm thinking along the lines of an FS that's aware of PG's strategies and
requirements and therefore optimised to make those activities as efficient
as possible - possibly even being aware of PG's disk layout and treating
files differently on that basis.
As someone else noted,
On Fri, 2004-10-15 at 04:38, Igor Maciel Macaubas wrote:
I have around 100 tables, and divided them in 14 different schemas,
and then adapted my application to use schemas as well.
I noticed that the query / insert / update times get pretty much
faster than when I was using the old
On Thu, 2004-10-14 at 04:57, Mark Wong wrote:
I have some DBT-3 (decision support) results using Gavin's original
futex patch fix.
I sent an initial description of the futex patch to the mailing lists
last week, but it never appeared (from talking to Marc I believe it
exceeded the size limit
On Thu, 2004-10-07 at 08:26, Paul Ramsey wrote:
The shared_buffers are shared (go figure) :). It is all one pool shared
by all connections.
Yeah, I thought this was pretty clear. Doug, can you elaborate on where
you saw the misleading docs?
The sort_mem and vacuum_mem are *per-connection*
On Tue, 2004-09-28 at 08:42, Gaetano Mendola wrote:
Now I'm reading an article, written by the same author that inspired the magic 300
in analyze.c, about Self-tuning Histograms. If this is implemented, I understand
we can get rid of VACUUM ANALYZE for keeping the statistics up to date.
Have
On Thu, 2004-09-23 at 05:59, Tom Lane wrote:
I think this would allow the problems of cached plans to bite
applications that were previously not subject to them :-(.
An app that wants plan re-use can use PREPARE to identify the
queries that are going to be re-executed.
I agree; if you want to
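The PREPARE approach Tom describes, sketched with an invented table and parameter:

```sql
-- Plan the query once under a name...
PREPARE get_order (integer) AS
    SELECT * FROM orders WHERE order_id = $1;

-- ...then re-execute the cached plan with different parameters
EXECUTE get_order(42);
EXECUTE get_order(43);
```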
On Mon, 2004-09-20 at 17:57, Guy Thornley wrote:
According to the manpage, O_DIRECT implies O_SYNC:
File I/O is done directly to/from user space buffers. The I/O is
synchronous, i.e., at the completion of the read(2) or write(2)
system call, data is guaranteed to
Tom Lane wrote:
Markus Schaber [EMAIL PROTECTED] writes:
So, now my question is, why does the query optimizer not recognize that
it can throw away those non-unique Sort/Unique passes?
Because the issue doesn't come up often enough to justify expending
cycles to check for it.
How many cycles are we
Rosser Schwarz wrote:
PostgreSQL uses the operating system's disk cache.
... in addition to its own buffer cache, which is stored in shared
memory. You're correct though, in that the best practice is to keep the
PostgreSQL cache small and give more memory to the operating system's
disk cache.
Christopher Browne wrote:
One of our sysadmins did all the configuring OS stuff part; I don't
recall offhand if there was a need to twiddle something in order to
get it to have great gobs of shared memory.
FWIW, the section on configuring kernel resources under various
Unixen[1] doesn't have any
Eugeny Balakhonov wrote:
I tried to run a simple query:
select * from files_t where parent =
Use this instead:
select * from files_t where parent = '';
(parent = ::int8 would work as well.)
PostgreSQL (< 7.5) won't consider using an indexscan when the predicate
involves an integer
On Fri, 2004-05-14 at 17:08, Jaime Casanova wrote:
Is there any difference in performance if I use smallint in place of integer?
Assuming you steer clear of planner deficiencies, smallint should be
slightly faster (since it consumes less disk space), but the performance
difference should be very
On Wed, 2004-05-12 at 05:02, Shridhar Daithankar wrote:
I agree. For shared buffers start with 5000 and increase in batches of 1000. Or
set it to a high value and check with ipcs for maximum shared memory usage. If
shared memory usage peaks at 100MB, you don't need more than say 120MB of
On Mon, 2004-04-05 at 11:36, Josh Berkus wrote:
Unfortunately, these days only Tom and Neil seem to be seriously working on
the query planner (beg pardon in advance if I've missed someone)
Actually, Tom is the only person actively working on the planner --
while I hope to contribute to it in
Andrew Sullivan wrote:
Intended, no. Expected, yes. This topic has had the best
Postgres minds work on it, and so far nobody's come up with a
solution.
Actually, this has already been fixed in CVS HEAD (as I mentioned in
this thread yesterday). To wit:
nconway=# create table t1 (a int8);
Steven Butler wrote:
I've recently converted a database to use bigint for the indices. Suddenly
simple queries like
select * from new_test_result where parent_id = 2
are doing full table scans instead of using the index.
This is fixed in CVS HEAD. In the mean time, you can enclose the
integer
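The workaround being described (needed before the CVS HEAD fix, when the planner would not match a plain integer literal against an int8 index) is to quote or cast the constant:

```sql
-- Either form lets the planner use the index on the int8 column:
SELECT * FROM new_test_result WHERE parent_id = '2';
SELECT * FROM new_test_result WHERE parent_id = 2::int8;
```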
Mike Nolan wrote:
Is there a way to copy a table INCLUDING the check constraints? If not,
then that information is lost, unlike varchar(n).
pg_dump -t should work fine, unless I'm misunderstanding you.
-Neil
Josh Berkus wrote:
Hmmm. I was told that it was this way for 7.4 as well; that's why it's in
the docs that way.
No such statement is made in the docs AFAIK: they merely say "If
nonzero, turn on WAL-related debugging output."
I invented a new #ifdef symbol when making this change in CVS HEAD, so
Simon Riggs wrote:
Josh Berkus wrote
Simon Riggs wrote
Please set WAL_DEBUG to 1 so we can see a bit more info: thanks.
I'm pretty sure that WAL_DEBUG requires a compile-time option.
I'm surprised, but you are right, the manual does SAY this requires a
compile time option; it is unfortunately not
Shridhar Daithankar [EMAIL PROTECTED] writes:
Right now, adding a NOWAIT clause to SELECT FOR UPDATE is being
hotly debated on HACKERS. If you think your application
deployment is months away and you can try CVS HEAD, you can expect
some action on it in the coming few days.
You can also try
Loeke [EMAIL PROTECTED] writes:
do views exist physically as a separate table, or are they generated
on the fly whenever they are queried?
Views are implemented by rewriting queries into the appropriate query
on the view's base tables.
scott.marlowe [EMAIL PROTECTED] writes:
Yes, previously run query should be faster, if it fits in kernel
cache.
Or the PostgreSQL buffer cache.
Plus, the design of Postgresql is such that it would have to do a
LOT of cache checking to see if there were any updates to the
underlying data
Harald Fuchs [EMAIL PROTECTED] writes:
Does anyone know how to access the affected values for
statement-level triggers? I mean what the old and new
pseudo-records are for row-level triggers.
Yeah, I didn't get around to implementing that. If anyone wants this
feature, I'd encourage them to
Sean P. Thomas [EMAIL PROTECTED] writes:
1. Is there any performance difference between declaring a primary or
foreign key as a column or table constraint? From the documentation,
which way is faster and/or scales better:
CREATE TABLE distributors (
did integer,
name
David Shadovitz [EMAIL PROTECTED] writes:
I'm running PG 7.2.2 on RH Linux 8.0.
Note that this version of PostgreSQL is quite old.
I'd like to know why VACUUM ANALYZE table is extremely slow (hours) for
certain tables.
Is there another concurrent transaction that has modified the table
but
Tom Lane [EMAIL PROTECTED] writes:
I don't believe anyone has proposed removing the facility
altogether. There's a big difference between making the default
behavior be not to have OIDs and removing the ability to have OIDs.
Right, that's what I had meant to say. Sorry for the inaccuracy.
Mark Kirkwood [EMAIL PROTECTED] writes:
Note : The Pgbench runs were conducted using -s 10 and -t 1000 -c
1-64, 2 - 3 runs of each setup were performed (averaged figures
shown).
FYI, the pgbench docs state:
NOTE: scaling factor should be at least as large as the largest
number of
LIANHE SHAO [EMAIL PROTECTED] writes:
Hello, I use php as front-end to query our database. When I use
System Monitor to check the usage of cpu and memory, I noticed that
the CPU very easily gets up to 100%. Is that normal? If not, could
someone point out a possible reason?
You haven't given us
Steve Wampler [EMAIL PROTECTED] writes:
PG: 7.2.3 (RedHat 8.0)
You're using PG 7.2.3 with the PG 7.1 JDBC driver; FWIW, upgrading to
newer software is highly recommended.
The two sites were performing at comparable speeds until a few days
ago, when we deleted several million records from
Josh Berkus [EMAIL PROTECTED] writes:
1) to keep it working, you will probably need to run ANALYZE more
often than you have been;
I'm not sure why this would be the case -- can you elaborate?
4) Currently, pg_dump does *not* back up statistics settings.
Yes, it does.
-Neil
LIANHE SHAO [EMAIL PROTECTED] writes:
We will have a very large database to store microarray data (may
exceed 80-100 GB some day). Now we have 1 GB RAM, a 2 GHz Pentium 4,
1 CPU, and enough hard disk.
Could anybody tell me that our hardware is an issue or not?
IMHO the size of the DB is less
Stefan Champailler [EMAIL PROTECTED] writes:
So here's my trouble : some DELETE statement take up to 1 minute to
complete (but not always, sometimes it's fast, sometimes it's that
slow). Here's a typical one : DELETE FROM response_bool WHERE
response_id = '125' The response_bool table has no
Torsten Schulz [EMAIL PROTECTED] writes:
Our Server:
Dual-CPU with 1.2 GHz
1.5 GB RAM
What kind of I/O subsystem is in this machine? This is an x86 machine,
right?
Does anyone have an idea what's the best configuration for that server?
It is difficult to say until you provide some information on
[EMAIL PROTECTED] writes:
But it was not this bad in 7.3 as far as I understand.
No, I believe this behavior is present in any recent release of
PostgreSQL.
-Neil
Tom Lane [EMAIL PROTECTED] writes:
(I believe the previous discussion also agreed that we wanted to
postpone the freezing of now(), which currently also happens at
BEGIN rather than the first command after BEGIN.)
That doesn't make sense to me: from a user's perspective, the start
of the
Suchandra Thapa [EMAIL PROTECTED] writes:
I was thinking using about using a raid 1+0 array to hold the
database but since I can use different array types, would it be
better to use 1+0 for the wal logs and a raid 5 for the database?
It has been recommended on this list that getting a RAID
Patrick Hatcher [EMAIL PROTECTED] writes:
Do you have an index on ts.bytes? Josh had suggested this and after I put
it on my summed fields, I saw a speed increase.
What's the reasoning behind this? ISTM that sum() should never use an
index, nor would it benefit from using one.
-Neil
Marc G. Fournier [EMAIL PROTECTED] writes:
-> Index Scan using tl_month on traffic_logs ts (cost=0.00..30763.02 rows=8213
width=16) (actual time=0.29..5562.25 rows=462198 loops=1)
Index Cond: (month_trunc(runtime) = '2003-10-01 00:00:00'::timestamp without
time zone)
Interesting that
[EMAIL PROTECTED] writes:
The \timing psql command gives different times for the same query executed
repeatedly.
That's probably because executing the query repeatedly results in
different execution times, as one would expect. \timing returns the
exact query response time, nevertheless.
-Neil
On Tue, 2003-11-04 at 09:49, [EMAIL PROTECTED] wrote:
How do we measure the response time in postgresql?
In addition to EXPLAIN ANALYZE, the log_min_duration_statement
configuration variable and the \timing psql command might also be
useful.
-Neil
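Putting the tools Neil lists together in one sketch (the 500 ms threshold is just an example value):

```sql
-- In psql: report the elapsed time of each statement
\timing

-- Show the actual execution time and per-node timings of one query
EXPLAIN ANALYZE SELECT count(*) FROM mytable;

-- In postgresql.conf: log any statement slower than 500 ms
-- log_min_duration_statement = 500
```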
On Fri, 2003-10-31 at 11:37, Greg Stark wrote:
My understanding is that the case where HT hurts is precisely your case. When
you have two real processors with HT the kernel will sometimes schedule two
jobs on the two virtual processors on the same real processor leaving the two
virtual
On Sat, 2003-10-25 at 13:49, Reece Hart wrote:
Having to explicitly cast criterion is very non-intuitive. Moreover,
it seems quite straightforward that PostgreSQL might incorporate casts
This is a well-known issue with the query optimizer -- search the
mailing list archives for lots more
On Mon, 2003-10-27 at 10:15, Tarhon-Onu Victor wrote:
select count(*) from items where channel
5000; will never use any of the current indexes because none matches
your WHERE clause (channel appears now only in multicolumn indexes).
No -- a multi-column index can be used to answer queries
On Sun, 2003-10-26 at 22:49, Greg Stark wrote:
What version of postgres is this?. In 7.4 (and maybe 7.3?) count() uses an
int8 to store its count so it's not limited to 4 billion records.
Unfortunately int8 is somewhat inefficient as it has to be dynamically
allocated repeatedly.
Uh, what?
On Fri, 2003-10-24 at 20:11, Allen Landsidel wrote:
However, I do the same thing with the reindex, so I'll definitely be taking
it out there, as that one does lock.. although I would think the worst this
would do would be making the index unavailable and forcing a seq scan..
is that not
On Mon, 2003-10-27 at 13:52, Tom Lane wrote:
Greg is correct. int8 is a pass-by-reference datatype and so every
aggregate state-transition function cycle requires at least one palloc
(to return the function result).
Interesting. Is there a reason why int8 is pass-by-reference? (ISTM that
On Mon, 2003-10-13 at 15:43, David Griffiths wrote:
Here are part of the contents of my sysctl.conf file (note that I've
played with values as low as 60 with no difference)
kernel.shmmax=14
kernel.shmall=14
This is only a system-wide limit -- it either allows the shared
On Wed, 2003-10-08 at 21:44, Bruce Momjian wrote:
Agreed. Do we set them all to -O2, then remove it from the ones we
don't get successful reports on?
I took the time to compile CVS tip with a few different machines from
HP's TestDrive program, to see if there were any regressions using the
new
On Wed, 2003-10-08 at 10:48, Andrew Sullivan wrote:
My worry about this test is that it gives us precious little
knowledge about concurrent connection slowness, which is where I find
the most significant problems.
As Jeff points out, the second set of results is for 20 concurrent
connections.
On Wed, 2003-10-08 at 11:46, Jeff wrote:
Yeah - like I expected it was able to generate much better code for
_bt_checkkeys, which was the #1 function in gcc on both Sun and Linux.
And as you can see, suncc was just able to generate much nicer code.
What CFLAGS does configure pick for gcc? From
On Wed, 2003-10-08 at 14:05, Josh Berkus wrote:
Hmmm ... both, I think. The Install Docs should have:
Here are the top # things you will want to adjust in your PostgreSQL.conf:
1) Shared_buffers link
2) Sort_mem link
3) effective_cache_size link
4) random_page_cost link
5) Fsync link
On Wed, 2003-10-08 at 15:38, Jeff wrote:
Huh. I could have sworn Tom did something like that.
Perhaps I am thinking of something else.
You had to enable some magic GUC.
Perhaps you're thinking of the new GUC var join_collapse_limit, which is
related, but doesn't affect the reordering of outer
On Wed, 2003-10-08 at 14:31, Bruce Momjian wrote:
Well, this is really embarrassing. I can't imagine why we would not set
at least -O on all platforms.
ISTM the most legitimate reason for not enabling compiler
optimizations on a given compiler/OS/architecture combination is that it
might cause
On Sat, 2003-10-04 at 11:22, Andrew Sullivan wrote:
Also, a vacuum pretty much destroys your shared buffers, so you have
to be aware of that trade-off too.
True, although there is no reason that this necessarily needs to be the
case (at least, as far as the PostgreSQL shared buffer goes). As has
On Sun, 2003-10-05 at 19:43, Tom Lane wrote:
This would be relatively easy to fix as far as our own buffering is
concerned, but the thing that's needed to make it really useful is
to prevent caching of seqscan-read pages in the kernel disk buffers.
True.
I don't know any portable way to do
On Fri, 2003-10-03 at 17:47, Rob Nagler wrote:
They don't deadlock normally,
only with reindex and vacuum did I see this behavior.
If you can provide a reproducible example of a deadlock induced by
REINDEX + VACUUM, that would be interesting.
(FWIW, I remember noticing a potential deadlock in
On Fri, 2003-10-03 at 17:34, Christopher Browne wrote:
Not surprising either. While the reindex takes place, updates to that
table have to be deferred.
Right, but that's no reason not to let SELECTs proceed, for example.
(Whether that would actually be *useful* is another question...)
-Neil
On Wed, 2003-10-01 at 13:45, Dimitri Nagiev wrote:
template1=# explain analyze select * from mytable where
mydate='2003-09-01';
QUERY PLAN