Mem: 71M Active, 23M Inact, 72M Wired, 436K Cache, 48M Buf, 208M Free
Swap: 368M Total, 2852K Used, 366M Free
Am I right that I can figure on using 384M (total RAM) - 72M
(wired) - 48M (buf) = 264M for PostgreSQL?
Hence, if I set effective_cache_size to 24M (3072 8K pages)
musings also more than welcome. Comments upon my sanity will be
referred to my doctor.
If the best price/performance option is a second hand 32-cpu Alpha
running VMS I'd be happy to go that way ;-)
Many thanks for reading this far.
Matt
You probably, more than anything, should look at some kind of
superfast, external storage array
Yeah, I think that's going to be a given. Low end EMC FibreChannel
boxes can do around 20,000 IOs/sec, which is probably close to good
enough.
You mentioned using multiple RAID controllers as a
Don't know how cheap they are.
I have an app that does large batch updates. I found that if I dropped
the indexes, did the updates and recreated the indexes, it was faster
than doing the updates while the indexes were intact.
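For the archives, the pattern is roughly this (a sketch; the table and
index names are made up):

  BEGIN;
  DROP INDEX orders_customer_idx;
  UPDATE orders SET status = 1 WHERE batch_id = 42;
  -- rebuilding the index once over the whole table is cheaper than
  -- maintaining it row-by-row during a huge update
  CREATE INDEX orders_customer_idx ON orders (customer_id);
  COMMIT;
  ANALYZE orders;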
Yeah, unfortunately it's not batch work, but real time financial
-
the box just has to get bigger!
I doubt if the suggestions I've made are going to get you 10x, but they
may get you 2x, and then you only need the hardware to do 5x.
It all helps :-) A few percent here, a few percent there, pretty soon
you're talking serious improvements...
Thanks
Matt
Are you *sure* about that? 3K updates/inserts per second translates
to 10,800,000 per hour. That, my friend, is a WHOLE HECK OF A LOT!
Yup, I know!
During the 1 hour surge, will SELECTs at 10 minutes after the
hour depend on INSERTs at 5 minutes after the hour?
Yes, they do. It's a
Josh, the disks in the new system should be substantially faster than
the old. Both are Ultra160 SCSI RAID 5 arrays, but the new system has
15k RPM disks, as opposed to the 10k RPM disks in the old system.
Spindle speed does not correlate with 'throughput' in any easy way. What
controllers
It seems that if I know the type and frequency of the queries a
database will be seeing, I could split the database by hand over
multiple disks and get better performance than I would with a RAID array
with similar hardware.
Unlikely, but possible if you had radically different hardware for
Squid also takes away the work of doing SSL (presuming you're running it
on a different machine). Unfortunately it doesn't support HTTP/1.1, which
means that most generated pages (those that don't set Content-Length) end
up forcing squid to close and then reopen the connection to the web
server.
Just how big do you expect your DB to grow? For a 1GB disk-space
database, I'd probably just splurge for an SSD hooked up either via
SCSI or FibreChannel. Heck, up to about 5GB or so it is not that
expensive (about $25k), and adding another 5GB should set you back
probably another $20k. I
Ok.. I would be surprised if you needed much more actual CPU power. I
suspect they're mostly idle waiting on data -- especially with a Quad
Xeon (shared memory bus is it not?).
In reality the CPUs get pegged: about 65% PG and 35% system. But I agree that memory
throughput and latency is an
Actually, referring down to later parts of this thread, why can't this
optimisation be performed internally for built-in types? I understand the
issue with aggregates over user-defined types, but surely optimising max()
for int4, text, etc is safe and easy?
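FWIW the standard workaround, until that happens, is to spell the
aggregate in a form the planner can use an index for (assuming an index
exists on the column):

  -- seq scan over the whole table:
  SELECT max(id) FROM bigtable;
  -- equivalent result, but walks the index backwards from the top:
  SELECT id FROM bigtable ORDER BY id DESC LIMIT 1;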
Sorry, missed the bit about
Matt Clark [EMAIL PROTECTED] writes:
Actually, referring down to later parts of this thread, why can't this
optimisation be performed internally for built-in types? I understand the
issue with aggregates over user-defined types, but surely optimising
max() for int4, text, etc is safe
the machine will be dealing with lots of inserts, basically as many as we can
throw at it
If you mean lots of _transactions_ with few inserts per transaction you should get a
RAID controller w/ battery backed write-back
cache. Nothing else will improve your write performance by nearly as
...
#effective_cache_size = 1000    # typically 8KB each
That's horribly wrong. It's telling PG that your OS is only likely to cache
8MB of the DB in RAM. If you've got 1GB of memory it should be between
64000 and 96000
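The units are 8KB pages, so the arithmetic is simple; you can also test a
value per-session with SET before touching postgresql.conf:

  -- 96000 pages * 8KB = 750MB of assumed OS cache
  SET effective_cache_size = 96000;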
of 3 indexes per table affect that?
Cheers
Matt
Postscript: I may have answered my own question while writing this mail.
Under the current stress test load about 10% of the key tables' tuples are
updated between sequential vacuum-analyzes, so the received wisdom on
intervals suggests '0' in my
the DB server is under no more
apparent load. I can only assume some kind of locking issue. Is that fair?
M
-Original Message-
From: scott.marlowe [EMAIL PROTECTED]
Sent: 17 September 2003 20:55
To: Matt Clark
Cc: [EMAIL PROTECTED]
Subject:
2) Are you sure that ANALYZE is needed? Vacuum is required whenever lots
of rows are updated, but analyze is needed only when the *distribution*
of values changes significantly.
You are right. I have a related qn in this thread about random vs. monotonic
values in indexed fields.
3) using
to something sensibly large?
You could also try decreasing cpu_index_tuple_cost and cpu_tuple_cost.
These will affect all your queries though, so what you gain on one might
be lost on another.
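For experimenting, both can be set per-session (the defaults are 0.01 and
0.005 respectively; the halved values below are just illustrative):

  SET cpu_tuple_cost = 0.005;
  SET cpu_index_tuple_cost = 0.0025;
  -- then re-run EXPLAIN ANALYZE on the problem query and compare plans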
Matt
latest-and-greatest performance for a single channel.
HTH
Matt
-Original Message-
From: Richard Jones [EMAIL PROTECTED]
Sent: 27 September 2003 18:25
To: [EMAIL PROTECTED]
Subject: [PERFORM] advice on raid controller
Hi, I'm on the verge
to be slower than
dump restore.
Matt
cost
to going ahead and vacuuming as often as you feel like it. Go crazy, and
speed up your DB!
OK, that's on a quad CPU box with goodish IO, so maybe there are issues on
very slow boxen, but in a heavy-update environment the advantages seem to
easily wipe out the costs.
Matt
p.s. Sorry to sound
_not_ to raise it then I'm all ears!
Matt
--
-Josh Berkus
Aglio Database Solutions
San Francisco
Also, if you find that you need to run VACUUM FULL often, then you need
to raise your max_fsm_pages.
Yes and no. If it's run often enough then the number of tracked pages
shouldn't need to be raised, but then again...
Oops, sorry, didn't pay attention and missed the mention of FULL. My
On Sat, Oct 04, 2003 at 12:29:55AM +0100, Matt Clark wrote:
My real world experience on a *very* heavily updated OLTP type DB,
following advice from this list (thanks guys!), is that there is
essentially zero cost to going ahead and vacuuming as often as you feel
like it. Go crazy
might as well do raw disk access from the
get-go. Portability vs. Performance, the age old quandary. FWIW I and many
others stand back in pure amazement at the sheer _quality_ of PostgreSQL.
Rgds,
Matt
I ended up going back to a default postgresql.conf and reapplying the
various tunings one-by-one. Turns out that while setting fsync = false
had little effect on the slow IDE box, it had a drastic effect on this
faster SCSI box and performance is quite acceptable now (aside from the
expected
.
There is no way I know of to get indexes preferentially cached over data
though.
Matt
-Original Message-
From: Nick Fankhauser [EMAIL PROTECTED]
Sent: 17 December 2003 19:57
To: [EMAIL PROTECTED] Org
Subject: [PERFORM] Adding RAM: seeking advice
have write caching turned on, which is good for speed, but not
crash-safe.
matt
will most
likely beat the Sun by some distance, although
what the Sun lacks in CPU power it may make up a bit in memory bandwidth/latency.
Matt
-Original Message-
From: Subbiah, Stalin [EMAIL PROTECTED]
Sent: 23 March 2004 18:41
To: 'Andrew Sullivan
Now if these vendors could somehow eliminate downtime due to human error
we'd be talking *serious* reliability.
You mean making the OS smart enough to know when clearing the arp
cache is a bonehead operation, or just making the hardware smart
enough to realise that the keyswitch really
54 14 32
2 1 0 23380 577680 105920 2145140 0 0 0 32 156 117549 50 16 35
HTH
Matt
-Original Message-
From: Tom Lane [EMAIL PROTECTED]
Sent: 20 April 2004 01:02
To: [EMAIL PROTECTED]
Cc: Joe Conway; scott.marlowe; Bruce Momjian
How about iSCSI? This is exactly what it's for - presenting a bunch of
remote SCSI hardware as if it were local.
There are several reference implementations on SourceForge from Intel,
Cisco and others.
I've never tried it myself, but I would if I had the need. And let's face
it there are some
This is normal. My personal workstation has been up for 16
days, and it shows 65 megs used for swap. The linux kernel
looks for things that haven't been accessed in quite a while
and tosses them into swap to free up the memory for other uses.
This isn't PostgreSQL's fault, or anything
And this is exactly where the pgpool advantage lies. Especially with the
TPC-W, Apache is serving a mix of PHP (or whatever CGI technique is used)
and static content like images. Since the 200+ Apache kids serve any of
that content at random and the emulated browsers very much
It is likely that you are missing an index on one of those foreign
key'd items.
I don't think that is too likely as a foreign key reference
must be a unique key which would have an index.
I think you must be thinking of primary keys, not foreign keys. All
one-to-many relationships
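To make that concrete (a sketch with invented tables): the PRIMARY KEY on
the parent gets an index automatically, but the referencing column on the
child side does not:

  CREATE TABLE parent (id integer PRIMARY KEY);
  CREATE TABLE child (parent_id integer REFERENCES parent (id));
  -- without this, FK checks and joins seq-scan the child table:
  CREATE INDEX child_parent_id_idx ON child (parent_id);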
I've looked at PREPARE, but apparently it only lasts
per-session - that's worthless in our case (web based
service, one connection per data-requiring connection).
That's a non-sequitur. Most 'normal' high volume web apps have persistent
DB connections, one per http server process. Are you
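With one persistent connection per httpd child, a session-level prepared
statement is prepared once and reused for the life of the process (a
sketch; the names are invented):

  PREPARE get_user (integer) AS
    SELECT name, email FROM users WHERE id = $1;
  -- every later request on the same connection just does:
  EXECUTE get_user(42);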
As for vendor support for Opteron, that sure looks like a
trainwreck... If you're going through IBM, then they won't want to
respond to any issues if you're not running a bog-standard RHAS/RHES
release from Red Hat. And that, on Opteron, is preposterous, because
there's plenty of the bits of
In the MySQL manual it says that MySQL performs best with Linux 2.4 with
ReiserFS on x86. Can anyone official, or in the know, give similar
information regarding PostgreSQL?
I'm neither official, nor in the know, but I do have a spare moment! I
can tell you that any *NIX variant on any modern
trainwreck... If you're going through IBM, then they won't want to
respond to any issues if you're not running a
bog-standard RHAS/RHES
release from Red Hat.
... To be fair, we keep on actually running into things that
_can't_ be backported, like fibrechannel drivers that were
SELECT cmp.WELL_INDEX, cmp.COMPOUND, con.CONCENTRATION
FROM SCR_WELL_COMPOUND cmp, SCR_WELL_CONCENTRATION con
WHERE cmp.BARCODE=con.BARCODE
AND cmp.WELL_INDEX=con.WELL_INDEX
AND cmp.MAT_ID=con.MAT_ID
AND cmp.MAT_ID = 3
Basically you set a default in seconds for the HTML results to be
cached, and then have triggers set that force the cache to regenerate
(whenever CRUD happens to the content, for example).
Can't speak for Perl/Python/Ruby/.Net/Java, but Cache_Lite sure made a
believer out of me!
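The trigger side might look something like this (purely a sketch; the
content table and page name are invented, and the cache layer still has
to compare stamps against its stored copies):

  CREATE TABLE cache_stamp (page text PRIMARY KEY,
                            stamp timestamptz NOT NULL DEFAULT now());
  CREATE FUNCTION touch_cache_stamp() RETURNS trigger AS $$
  BEGIN
    UPDATE cache_stamp SET stamp = now() WHERE page = 'homepage';
    RETURN NULL;  -- return value is ignored for statement-level AFTER triggers
  END $$ LANGUAGE plpgsql;
  CREATE TRIGGER content_changed
    AFTER INSERT OR UPDATE OR DELETE ON content
    FOR EACH STATEMENT EXECUTE PROCEDURE touch_cache_stamp();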
Nice to have
More to the point though, I think this is a feature that really really
should be in the DB, because then it's trivial for people to use.
How does putting it into PGPool make it any less trivial for people to
use?
The answers are at
Any competently written application where caching results would be a
suitable performance boost can already implement application or
middleware caching fairly easily, and increase performance much more
than putting result caching into the database would.
I guess the performance increase is
OK, that'd work too... the point is if you're re-connecting
all the time it doesn't really matter what else you do for
performance.
Yeah, although there is the chap who was asking questions on the list
recently who had some very long-running code on his app servers, so was best
off closing
Hello, I've thought it would be nice to index certain aspects of my
apache log files for analysis. I've used several different techniques
and have something usable now, but I'd like to tweak it one step
further.
My first performance optimization was to change the logformat into a
CSV format. I
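For anyone trying the same thing, the load step is then just COPY (a
sketch, assuming PostgreSQL 8.0+ for the CSV option; the path and column
list are invented to match the log format):

  CREATE TABLE apache_log (ts timestamptz, ip inet, method text,
                           url text, status integer, bytes bigint);
  COPY apache_log FROM '/var/log/apache/access.csv' WITH CSV;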
On Tue, 19 Oct 2004 15:49:45 -0400, Jeremy Dunn [EMAIL PROTECTED] wrote:
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
Matt Nuzum
Sent: Tuesday, October 19, 2004 3:35 PM
To: pgsql-performance
Subject: [PERFORM] Speeding up this function
You are asking the wrong question. The best OS is the OS you (and/or
the customer) knows and can administer competently. The real
performance differences between unices are so small as to be ignorable
in this context. The context switching bug is not OS-dependent, but
varies in severity
How would I turn that off? In the kernel config? Not too familiar with
that. I have a 2 proc xeon with 4 gigs of mem on the way for postgres,
so I hope HT isn't a problem. If HT is turned off, does it just not
use the other half of the processor? Or does the processor just work
as one unit?
The real
performance differences between unices are so small as to be ignorable
in this context.
Well, at least the difference between Linux and BSD. There are
substantial tradeoffs should you choose to use Solaris or UnixWare.
Yes, quite right, I should have said
OT
Hyperthreading is actually an excellent architectural feature that
can give significant performance gains when implemented well and used
for an appropriate workload under a decently HT aware OS.
IMO, typical RDBMS streams are not an obviously appropriate workload,
Intel didn't implement it
performance for the WAL?
* optimised scattered writes for checkpointing?
* Knowledge that FSYNC is being used for preserving ordering a lot of the
time, rather than requiring actual writes to disk (so long as the writes
eventually happen in order...)?
Matt
Matt Clark
Ymogen Ltd
P: 0845 130 4531
W
I don't have iostat on that machine, but vmstat shows a lot of writes to
the drives, and the runnable processes are more than 1:
6 1 0 3617652 292936 2791928 0 0 0 52430 1347 4681 25 19 20 37
Assuming that's the output of 'vmstat 1' and not some other delay,
50MB/second of
and certainly anyone who's been around a computer more than a week or
two knows which direction in and out are customarily seen from.
regards, tom lane
Apparently not whoever wrote the man page that everyone copied ;-)
Interesting. I checked this on several machines. They actually
The best way to get all the stuff needed by a query into
RAM is to run the query. Is it more that you want to 'pin' the data in RAM
so it doesn't get overwritten by other
queries?
-Original Message-
From: [EMAIL PROTECTED] On Behalf Of
I have a dual processor system that can support over 150 concurrent
connections handling normal traffic and load. Now suppose I set up
Apache to spawn all of its children instantly, what will
...
This will spawn 150 children in short order, and as this takes
Doctor, it hurts
Apache::DBI overall works better for what I require, even if it is not a
pool per se. Now if pgpool supported variable rate pooling like Apache
does with its children, it might help to even things out. That
and you'd still get the spike if you have to start the webserver and
Case in point: A first time visitor hits your home page. A
dynamic page is generated (in about 1 second) and served
(taking 2 more seconds) which contains links to 20 additional
The gain from an accelerator is actually even more than that, as it takes
essentially zero seconds for Apache to
Correct: 75% of all hits are on a script that can take anywhere from a
few seconds to half an hour to complete. The script essentially
auto-flushes to the browser so they get new information as it arrives,
creating the illusion of on-demand generation.
This is more like a
1- You have a query that runs for half an hour and you spoon feed
the results to the client ?
(argh)
2- Your script looks for new data every few seconds, sends a
packet, then sleeps, and loops ?
If it's 2 I have a readymade solution for you, just ask.
I'm guessing (2) - PG
These are CGI scripts at the lowest level, nothing more and nothing
less. While I could probably embed a small webserver directly into
the perl scripts and run that as a daemon, it would take away the
portability that the scripts currently offer.
If they're CGI *scripts* then they just use
In your webpage include an iframe with a Javascript to refresh it
every five seconds. The iframe fetches a page from the server which
brings in the new data in form of generated JavaScript which writes
in the parent window. Thus, you get a very short request every 5
seconds to fetch
- ITEM table will grow, grow, grow (sold items are not deleted)
WHERE PRODUCT.SECTION_USED_FK IS NOT NULL AND ITEM.STATUS=1 and
(ITEM.KIND=2 or ITEM.KIND=3)
Partial index on item.status ?
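i.e. something along these lines - the indexed column is a guess, the
point is the predicate matching the query above:

  CREATE INDEX item_live_idx ON item (kind)
    WHERE status = 1 AND (kind = 2 OR kind = 3);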
All 3 plans have crappy estimates.
Run ANALYZE in production, then send another explain analyze (as an
attachment please, to avoid linewrap).
Er, no other possible answer except Rod's :-)
Javascript is too powerful to turn on for any random web page. It is only
essential for web pages because people write their web pages to only
work with javascript.
Hmm... I respectfully disagree. It is so powerful that it is impossible
to ignore when implementing a sophisticated app. And it
A note though : you'll have to turn off HTTP persistent
connections in your server (not in your proxy) or youre back to
square one.
I hadn't considered that. On the client side it would seem to be up to
the client whether to use a persistent connection or not. If it does,
then yeah, a
Pierre-Frédéric Caillaud wrote:
check this marvelous piece of 5 minutes of work:
http://boutiquenumerique.com/test/iframe_feed.html
it made me smile :-)
(apologies for bad French)
M
For some reason it's a requirement that partial wildcard
searches are done on this field, such as SELECT ... WHERE
field LIKE 'A%'
I thought an interesting way to do this would be to simply
create partial indexes for each letter on that field, and it
works when the query matches the
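e.g. two of the twenty-six (a sketch; the table name is invented):

  CREATE INDEX t_field_a ON t (field) WHERE field LIKE 'A%';
  CREATE INDEX t_field_b ON t (field) WHERE field LIKE 'B%';
  -- the planner picks t_field_a for WHERE field LIKE 'A%' because
  -- the query predicate matches the index predicate exactly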
To me, these three queries seem identical... why doesn't the first one
(simplest to understand and write) go the same speed as the third one?
All I'm trying to do is get statistics for one day (in this case,
today) summarized. Table has ~25M rows. I'm using postgres 7.3.? on
rh linux 7.3 (note
With that many rows, and a normal index on the field,
postgres figures the best option for say I% is not an index
scan, but a sequential scan on the table, with a filter --
quite obviously this is slow as heck, and yes, I've run
analyze several times and in fact have the vacuum analyze
Am I right to assume that writeback is both fastest and at
the same time as safe to use as ordered? Maybe any of you
did some benchmarks?
It should be fastest because it is the least overhead, and safe because
postgres does its own write-order guaranteeing through fsync(). You
should also
Another man working to the bitter end this Christmas!
There could be many reasons, but maybe first you should look at the amount
of RAM available? If the tables fit in RAM on the production server but not
on the dev server, then that will easily defeat the improvement due to using
the native DB
vmstat 10. (ignore the first line)
Keep an eye on the pi and po parameters. (kilobytes paged in and out)
HTH,
Matt
--
Matt Casters [EMAIL PROTECTED]
i-Bridge bvba, http://www.kettle.be
Fonteinstraat 70, 9400 Okegem, Belgium
Phone +32 (0) 486/97.29.37
This page may be of use:
http://www.serverworldmagazine.com/monthly/2003/02/solaris.shtml
From personal experience, for god's sake don't think Solaris' VM/swap
implementation is easy - it's damn good, but it ain't easy!
Matt
Kevin Schroeder wrote:
I think it's probably just reserving them. I
documents immediately, are there any fine manuals to read on data
warehouse performance tuning on PostgreSQL?
Thanks in advance for any help you may have, I'll do my best to keep
pgsql-performance up to date
on the results.
Best regards,
Matt
--
Matt Casters [EMAIL PROTECTED]
i-Bridge bvba, http
level to get maximum performance out of the system.
Mmmm. This is going to be a tough one to crack. Perhaps it will be
possible to get some extra juice out of placing the indexes on the
smaller disks (150G) and the data on the bigger ones?
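If the new box ends up on PostgreSQL 8.0 or later, tablespaces would do
that cleanly (a sketch; the paths and names are invented):

  CREATE TABLESPACE fastdisks LOCATION '/disks/150g/pgidx';
  CREATE INDEX fact_date_idx ON fact_table (sale_date)
    TABLESPACE fastdisks;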
Thanks!
Matt
-Original Message-
From:
Joshua,
Actually that's a great idea!
I'll have to check if Solaris wants to play ball though.
We'll have to see as we don't have the new disks yet, ETA is next week.
Cheers,
Matt
-Original Message-
From: Joshua D. Drake [mailto:[EMAIL PROTECTED]
Sent: Thursday 20 January
my best to keep
pgsql-performance up to date on the results.
Best
regards,
Matt
___
Matt Casters
i-Bridge bvba, http://www.kettle.be
Fonteinstraat 70, 9400 OKEGEM, Belgium
Tel. 054/25.01.37
GSM 0486/97.29.37
fine as long as the optimiser supports it to prune correctly.
Cheers,
Matt
--
Matt Casters [EMAIL PROTECTED]
i-Bridge bvba, http://www.kettle.be
Fonteinstraat 70, 9400 Okegem, Belgium
Phone +32 (0) 486/97.29.37
Presumably it can't _ever_ know without being explicitly told, because
even for a plain SELECT there might be triggers involved that update
tables, or it might be a select of a stored proc, etc. So in the
general case, you can't assume that a select doesn't cause an update,
and you can't be
. Real partitioning will take care of correct partition
selection.
This IS bad news. It would mean a serious change in the ETL.
I think I can solve the other problems, but I don't know about this one...
Regards,
Matt
that pgpool couldn't make a good guess in the majority
of cases!
M
Joshua D. Drake wrote:
Matt Clark wrote:
Presumably it can't _ever_ know without being explicitly told,
because even for a plain SELECT there might be triggers involved that
update tables, or it might be a select of a stored
, the observed performance is not what I
would hope for.
Regards,
Matt Olson
Ocean Consulting
http://www.oceanconsulting.com/
_rough_
benchmarking. This is a production app, so I can't get too much in the way
of the daily batches.
--
Matt Olson
Ocean Consulting
http://www.oceanconsulting.com/
On Tuesday 10 May 2005 11:13 am, Greg Stark wrote:
Matt Olson writes:
I've done other things that make sense, like using
up, a data warehouse, the observed performance is not what I
would hope for.
Regards,
Matt Olson
Ocean Consulting
http://www.oceanconsulting.com/
from
writing to data/base
but the strange load pattern remains. (system is ~30% of the overall
load, vs 3% before)
So, my question is, what happened, and how can I get it back to the same
load pattern 7.4.6 had, and the same pattern I had for 4 hours before it
went crazy?
Matt
My company is purchasing a Sunfire x4500 to run our most I/O-bound databases,
and I'd like to get some advice on configuration and tuning. We're currently
looking at:
- Solaris 10 + zfs + RAID Z
- CentOS 4 + xfs + RAID 10
- CentOS 4 + ext3 + RAID 10
but we're open to other suggestions.
From
dispatches only 1 logical I/O request at a time.
Dimitri [EMAIL PROTECTED] 03/23/07 2:28 AM
On Friday 23 March 2007 03:20, Matt Smiley wrote:
My company is purchasing a Sunfire x4500 to run our most I/O-bound
databases, and I'd like to get some advice on configuration and tuning.
We're currently
Hi Dimitri,
First of all, thanks again for the great feedback!
Yes, my I/O load is mostly read operations. There are some bulk writes done in
the background periodically throughout the day, but these are not as
time-sensitive. I'll have to do some testing to find the best balance of read
Hi David,
Thanks for your feedback! I'm rather a newbie at this, and I do appreciate the
critique.
First, let me correct myself: the formulas for the risk of losing data
when you lose 2 and 3 disks shouldn't have included the first term (g/n).
I'll give the corrected formulas and tables
will be created.
Considering a single table would grow to 10mil+ rows at max, and this
machine will sustain about 25Mbps of insert/update/delete traffic 24/7,
365 days a year, will I be saving much by partitioning data like that?
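By "like that" I mean the usual inheritance scheme, roughly (a sketch;
the column names are invented, and constraint_exclusion has to be on):

  CREATE TABLE events (created date NOT NULL, payload text);
  CREATE TABLE events_2008_01
    (CHECK (created >= '2008-01-01' AND created < '2008-02-01'))
    INHERITS (events);
  -- with constraint_exclusion enabled, the planner skips children
  -- whose CHECK constraint rules out the query's date range:
  SELECT count(*) FROM events WHERE created = '2008-01-15';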
--
-Matt
http://twiki.spimageworks.com/twiki/bin/view/Software/CueDevelopment
I'm trying to fine tune this query to return in a reasonable amount of time
and am having difficulties getting the query to run the way I'd like. I
have a couple of semi-related entities that are stored in individual tables,
say, A and B. There is then a view created that pulls together the
width=517)
Index Cond: (c.id = outer.listing_fid)
On Thu, Apr 3, 2008 at 7:19 PM, Tom Lane [EMAIL PROTECTED] wrote:
Matt Klinker [EMAIL PROTECTED] writes:
I knew I'd forget something! I've tried this on both 8.2 and 8.3 with the
same results.
Then you're going to have to provide
)
On Thu, Apr 3, 2008 at 11:49 PM, Tom Lane [EMAIL PROTECTED] wrote:
Matt Klinker [EMAIL PROTECTED] writes:
Sorry for not including this extra bit originally. Below is the explain
detail from both the query to the view that takes longer and then the
query
directly to the single table
:
Matt Klinker [EMAIL PROTECTED] writes:
--Joined View:
CREATE OR REPLACE VIEW directory_listing AS
SELECT school.id, school.name, school.description, 119075291 AS
listing_type_fid
FROM school
UNION ALL
SELECT company.id, company.name, company.description, 119074833
Hi David,
Early in this thread, Pavel suggested:
you should partial index
create index foo(b) on mytable where a is null;
Rather, you might try the opposite partial index (where a is NOT null) as a
replacement for the original unqualified index on column A. This new index
will be ignored
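Concretely, keeping Pavel's made-up names:

  -- Pavel's index, covering the rows where a IS NULL:
  CREATE INDEX foo ON mytable (b) WHERE a IS NULL;
  -- and the opposite partial index, replacing the plain index on a:
  CREATE INDEX mytable_a_idx ON mytable (a) WHERE a IS NOT NULL;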
Tom Lane [EMAIL PROTECTED] writes:
Matt Smiley [EMAIL PROTECTED] writes:
So an Index Scan is always going to have a higher cost estimate than
an equivalent Seq Scan returning the same result rows (unless
random_page_cost is 1). That's why I think the planner is always
Tom Lane [EMAIL PROTECTED] writes:
I'm not sure offhand whether the existing correlation stats would be of use
for
it, or whether we'd have to get ANALYZE to gather additional data.
Please forgive the tangent, but would it be practical to add support for
gathering statistics on an arbitrary
(2 rows)
Note that unless you run this query as a superuser (e.g. postgres), the
columns from pg_stat_activity will only be visible for sessions that belong to
you. To rollback this example prepared transaction, you'd type:
ROLLBACK PREPARED 'my_prepared_transaction1';
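To see which prepared transactions are pending in the first place, query
the pg_prepared_xacts system view:

  SELECT gid, prepared, owner, database FROM pg_prepared_xacts;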
Hope this helps!
Matt
Alvaro Herrera wrote:
Move the old clog files back where they were, and run VACUUM FREEZE in
all your databases. That should clean up all the old pg_clog files, if
you're really that desperate.
Has anyone actually seen a CLOG file get removed under 8.2 or 8.3? How about
8.1?
I'm probably