Bruno Wolff III <[EMAIL PROTECTED]> wrote:
>
> This really should have been asked on pgsql-performance and would probably
> get a better response there..
>
> On Sun, Nov 26, 2006 at 16:35:52 +,
> Michael Simms <[EMAIL PROTECTED]> wrote:
> > PostgreSQL version: 8.1.4
> > Operating system: L
sible for the most
common queries, then force the developers to use "set work_mem to x"
to adjust it for big queries.
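The per-query override is a one-liner; the value and query here are illustrative, not recommendations (the unit suffix is 8.2-style; older releases take the value as an integer in kB):

```sql
-- Raise work_mem for this session only; SET LOCAL inside a transaction
-- scopes it to that transaction instead
SET work_mem = '64MB';
-- ... run the big sort/hash query ...
RESET work_mem;   -- fall back to the postgresql.conf value
```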
--
Bill Moran
Collaborative Fusion Inc.
[EMAIL PROTECTED]
Phone: 412-422-3463x4023
---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
       choose an index scan if your joining column's datatypes do not
       match
ase can fit in 100M, that 900M
might be better used as work_mem, or something else.
I haven't been able to find anything regarding how much of the
shared buffer space PostgreSQL is actually using, as opposed to
simply allocating.
--
Bill Moran
> not running a system optimized from the ground up for its HW do not see.
http://www.potentialtech.com/wmoran/source.php
You get an idea of how old these tests are by the fact that the latest
and greatest was FreeBSD 4.9 at the time, but I suppose it may still
be relevant.
--
Bill Moran
tsuraan <[EMAIL PROTECTED]> wrote:
>
> I'm writing a webmail-type application that is meant to be used in a
> corporate environment. The core of my system is a Postgres database
> that is used as a message header cache. The two (relevant) tables
> being used are pasted into the end of this messag
In response to Tom Lane <[EMAIL PROTECTED]>:
> Bill Moran <[EMAIL PROTECTED]> writes:
> > Does the creation of a temp file trigger any logging?
>
> No; but it wouldn't be hard to add some if you wanted. I'd do it at
> deletion, not creation, so you co
In response to Stephen Frost <[EMAIL PROTECTED]>:
> * Bill Moran ([EMAIL PROTECTED]) wrote:
> > What I'm fuzzy on is how to discretely know when I'm overflowing
> > work_mem? Obviously, if work_mem is exhausted by a particular
> > query, temp files will be
that the docs on this are very good. If the
amount of data that changes between runs of autovacuum is greater
than max_fsm_pages, then vacuum will be unable to reclaim all the
space.
--
Bill Moran
improvements in both the usage of shared memory, and the
optimization of count(*) since 7.4, so the first suggestion I have is to
upgrade your installation.
--
Bill Moran
"Campbell, Lance" <[EMAIL PROTECTED]> wrote:
>
> I have been researching how to improve my overall performance of
> postgres. I am a little confused on the reasoning for how work_mem is
> used in the postgresql.conf file. The way I understand the
> documentation is you define with work_mem how mu
h exists, but
> when I do the queries they don't appear.
[snip]
Just a guess, but perhaps your index is damaged. Have you tried
REINDEXing?
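If the index is indeed damaged, a rebuild is cheap to try (names here are placeholders):

```sql
REINDEX INDEX my_index;   -- rebuild one suspect index
REINDEX TABLE my_table;   -- or every index on the table; takes an exclusive lock
```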
--
Bill Moran
would appear to be enough space to do the entire sort in memory.
The second is with work_mem set to 128k.
More interesting is that that actual runtime doesn't differ by nearly
that much: 3100ms vs 2200ms. (I've realized that my setting for
random_page_cost is too damn high for this hardwar
ally did more tests,
you should provide that information, otherwise, people will criticize your
testing strategy instead of looking at the problem.
The other side to this is that you haven't shown enough information about
your alleged problem to even start to investigate it.
--
Bill Moran
t the case, and large shared_buffers improve performance.
I've yet to do any in-depth testing on this, though.
--
Bill Moran
ou consent to this
policy and agree that it overrides any disclaimers or policies that may
exist elsewhere."
I have no idea if that's legally binding or not, but I've talked to a few
associates who have some experience in law, and they all argue that email
disclaimers
> protected by law, and any opinions expressed are those of the
> > individualsender. Internet e-mail guarantees neither the
> > confidentiality nor the proper receipt of the message sent.
> > If the addressee of this
setup, my only suggestion would be to double-verify
that your postgresql.conf settings are correct on the 64 bit system.
--
Bill Moran
In response to David Brain <[EMAIL PROTECTED]>:
>
> Thanks for the response.
> Bill Moran wrote:
> > In response to David Brain <[EMAIL PROTECTED]>:
> >> I recently migrated one of our large (multi-hundred GB) dbs from an
> >> Intel 32bit platform (D
In response to David Brain <[EMAIL PROTECTED]>:
> Bill Moran wrote:
>
> >
> > Install the pg_bufferstats contrib module and take a look at how shared
> > memory is being used. I like to use MRTG to graph shared buffer usage
> > over time, but you can just do
field foo.id in Schema 1 superfluous? For example, wouldn't the
> referencing from bar to foo really be done "behind the scenes" through some
> hidden field (oid?) instead of through the frobnitz text field? Which of
> the two schemas would give better performance?
--
launching
the DB.
> Can anyone else confirm this? I don't know if this is a windows-only
> issue, but I don't know of a way to check fragmentation in unix.
I can confirm that it's only a Windows problem. No UNIX filesystem
that I'm aware of suffers from fragmentation
is and prevent it from becoming
a performance issue.
> There's a tool for Linux in the e2fsprogs package called filefrag that
> shows the fragmentation of a file, but I've never used it myself.
Interesting. However, the existence of a tool does not particularly
indicate the _need_ f
ght also need bumped.
Figure out which postgresql.conf your system is using and get it dialed
in for your hardware. You can make all the indexes you want, but if
you've told Postgres that it only has 8M of RAM to work with, performance
is going to suck. I don't see hardware specs on th
ple.
Personally, I wouldn't be opposed to more automagic stuff, just as long as
I have the option to disable it. There are some cases where I still
disable autovac.
> In closing, I am not bashing PG! I love it and swear by it. These comments
> are
> purely from an advocacy perspective. I'd love to see PG user base continue
> to grow.
I expect that part of the problem is "who's going to do it?"
--
Bill Moran
ion about which OS to use ;). I would stick with the long supported
> versions of Linux, thus CentOS 5, Debian 4, Ubuntu Dapper.
There used to be a prominent site that recommended FreeBSD for Postgres.
Don't know if that's still recommended or not -- but bringing it up is
likely to st
ason I can see for vacuum full/reindex is if you _can_.
For example, if there is a period that you know the database will be
unused that it sufficiently long that you know these operations can
complete. Keep in mind that both reindex and vacuum full create performance
problems while they are
act?
>
> # alter database test set lc_collate = 'C';
> ERROR: parameter "lc_collate" cannot be changed
How would that command handle UTF data that could not be converted to C?
--
Bill Moran
want take a backseat to your
rendering process, why not just nice the initial PostgreSQL daemon? All
children will inherit the nice value, and there's no chance of priority
inversion because all the PostgreSQL backends are running at the same
priority.
Just a thought.
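A sketch of the idea; the pg_ctl path and user below are examples only. The inheritance itself is easy to demonstrate with an ordinary shell child:

```shell
# Children inherit the parent's nice value -- the reason nice-ing the initial
# postmaster covers every backend it later forks.
# Demonstrated with a plain shell child (prints 10 when starting from 0):
nice -n 10 sh -c 'nice'

# Applied to PostgreSQL (illustrative path/user):
#   nice -n 10 su -l pgsql -c "pg_ctl start -D /usr/local/pgsql/data"
```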
--
my $uq = $dbh2->prepare(sprintf("SELECT db.name, accession,
> version, is_current from feature_dbxref fd, dbxref dx, db where fd.feature_id
> = %d and fd.dbxref_id = dx.dbxref_id and dx.db_id = db.db_id and db.name =
> '%s'",$pr{feature_id},$uds{$uh{$rec}{stflag}}
ters control).
> > > >
> > > >That actually has nothing at all to do with pg_stats or pg_statistics.
> > > >Those deal with statistics about the data in the database, and not
> > about
> > > >statistics from the engine (which is what the pg_stat*
Dave Cramer <[EMAIL PROTECTED]> wrote:
> Since PITR has to enable archiving does this not increase the amount
> of disk I/O required ?
It does increase the required amount of I/O.
--
Bill Moran
http://www.potentialtech.com
.
>
> Would that scenario work, or I am simply too naive considering it?
It won't work, unfortunately. The on-disk representation of the data is
different between ia32 and amd64.
--
Bill Moran
it
done any faster, but there's just not enough information in your
post to know for sure.
Post some explains of some problem queries. Let us know more about
your IO load. Give us some snapshots of top under load. Find out
how large the database is. Provide the output of vacuum verbose.
to shared_buffers (only a few 100 meg?).
As a result, PostgreSQL is constantly asking the OS for disk blocks that it
doesn't have cached, but the OS has those disk blocks cached in RAM.
If my guess is right, you'll probably see improved performance by allocating
more share
In response to "Chris Hoover" <[EMAIL PROTECTED]>:
> On 6/8/07, Bill Moran <[EMAIL PROTECTED]> wrote:
> >
> > In response to "Chris Hoover" <[EMAIL PROTECTED]>:
> >
> > > I need some help. I have started taking snapshots of perf
l backend processes will allocate more memory above shared_buffers
for processing individual queries. See work_mem.
--
Bill Moran
o see how well autovacuum is keeping up. There's
no problem with running manual vacuum and autovacuum together, and you'll
be able to gather _some_ information about how well autovacuum is
keeping up.
--
Bill Moran
uring initial testing of
our DB) I would run vacuum verbose once a day to watch sizes and what
not. After a while, I'd switch to once a week, then probably settle on
once a month to ensure nothing ever gets out of hand. Put it in a cron
job and have the output mailed.
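A crontab sketch of that setup for the postgres account (address and schedule are examples only; cron mails whatever the job prints):

```
MAILTO=dba@example.com
# monthly: vacuum and analyze every database, verbose output mailed by cron
0 3 1 * *  vacuumdb --all --analyze --verbose
```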
--
Bill Moran
that doesn't apply here.
Why not? I see no reason why an appropriate autovaccum schedule would not
apply to your scenario. I'm not saying it does, only that your response
does not indicate that it doesn't, and thus I'm concerned that you're
writing auto
exes --- VACUUM FULL isn't good at that.
> >
> > regards, tom lane
G's shared_buffers -- if that's the case, have
a look at the pg_buffercache contrib module.
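With the module installed, a query along these lines (column names per the 8.1/8.2 contrib docs) separates buffers actually holding a page from ones merely allocated:

```sql
-- relfilenode is NULL for buffers that have never been used
SELECT count(relfilenode)            AS buffers_in_use,
       count(*) - count(relfilenode) AS buffers_unused
FROM pg_buffercache;
```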
--
Bill Moran
(for me anyway) on a system I don't know so
> much about.
:D Isn't Open Source great!
--
Bill Moran
iodic.conf (just like other
periodic job):
daily_pgsql_vacuum_enable="YES"
daily_pgsql_backup_enable="NO"
are the defaults.
> Hmmm... I wonder why this would just start now, three days ago. Everything
> seemed to be normal for the last two weeks.
Someone alter /etc/periodic.conf? Perhaps it
> #cpu_operator_cost = 0.0025 # (same)
> #---
> # LOCK MANAGEMENT
> #---
>
> #deadlock_timeout = 1000
prove/disprove that theory:
http://www.postgresql.org/docs/8.2/interactive/view-pg-locks.html
If none of those are the case, then please describe the actual problem
you are having.
HTH.
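For the archives, the sort of query that page suggests (8.2-era columns; join pid against pg_stat_activity to see the statements involved):

```sql
-- ungranted rows sort first: those are the waiters
SELECT locktype, relation::regclass AS relation, mode, granted, pid
FROM pg_locks
ORDER BY granted, relation;
```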
--
Bill Moran
r problem, then
you either need to implement a data expiration policy to get rid of
old data, or increase the amount of storage to accommodate the data
you want to keep.
--
Bill Moran
that process, unless you're running
a 486dx2. You didn't mention your hardware or your postgresql.conf
settings. What other activity is occurring during this long count()?
Can you give us a shot of the iostat output and/or top during this
phenomenon?
>
> Jozsef
>
>
>
fers = 16
> >
> > checkpoint_segments = 16
> >
> > top reports
> >
> > PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> >
> > 19478 postgres 25 0 740m 721m 536m R 99.7 4.4 609:41.16
>
UM seems to fix. A FULL seems to fix that, but it appears
to bloat the indexes, thus a REINDEX helps.
I would expect that setting fillfactor to 100 will encourage indexes to bloat
faster, and would only be recommended if you didn't expect the index contents
to change?
--
Bill Moran
In response to "Decibel!" <[EMAIL PROTECTED]>:
> On Wed, Aug 08, 2007 at 03:27:57PM -0400, Bill Moran wrote:
> > I've had similar experience. One thing you didn't mention that I've noticed
> > is that VACUUM FULL often bloats indexes. I've mad
e any difference ?
Does the documentation leave anything unanswered?
http://www.postgresql.org/docs/8.2/static/runtime-config-locks.html
--
Bill Moran
ons on a server if needed, the
> ability to quickly revert to a previous version, etc however this is a
> discussion for another day - I only mention it in case the question "why not
> just use RPM's?" arises...
>
>
> So here's my questions:
>
> 1
> You can't use these techniques for a major version upgrade.
> Use pg_dump piped to psql. That will also eliminate all bloat.
If you can't afford any downtime, you may be able to use Slony to
do your upgrade. However, slony adds overhead, and if this system
is tapped out already, i
ou autovacuum or vacuum manually?
> Tell us more...
>
>
> Bye,
> Chris.
>
>
>
re not obvious to me. The explain analyze output is
going to be key to working this out -- unless it's something like
your postgresql.conf isn't properly tuned.
> I vacuum and
> reindex the database daily.
>
> I'd prefer not to have to rewrite th
ay with a lot less
space than many other usages, so they might be very practical.
--
Bill Moran
mance related stuff.
>
> Thank you all in advance
To add to Mikko's comments:
Periodic vacuuming and analyzing is a mandatory part of running a
PostgreSQL database server. You'll probably be best served to configure
the autovacuum daemon to handle this for you. See the postgre
'll see detailed log
messages about its activities.
There were discussions on other lists about improving autovacuum's log
messages, I'm pretty sure it will log more helpful information in 8.3.
--
Bill Moran
> Generally it's faster to run more commands in a single transaction but what
> I'm worried about is that you may have a transaction open which you aren't
> committing for a long time. This can stop vacuum from being able to clean up
> dead space and if it's in t
anner faster.
>
> Do you think it could be reasonable?
Based on the information you've given and the responses you've made,
I think you're as likely to roll a 1d6 and get the right solution as
anything else.
Good luck.
>
> Index Cond: (qa.answer_id = "outer".id)
>
>
> -> BitmapOr (cost=55.08..55.08 rows=6596 width=0) (never e
ery, in the old one, in the new one), without difference and I only
> retrieve the first 100 records (I didn't count the network time in any case).
> But the weird thing is running the query in the new server the are many disk
> access and cpu usage. And wi
in the same server there
> aren't networks delays!
Not network, no. But the results of your explains seem to show that the
query is executing much faster on the new system than the old, so the
problem still becomes, "what is happening after the query completes that
is so slow?"
ep up the vacuum schedule you've already
established. You may want to (as has already been suggested) explicitly
vacuum this table after large delete operations as well.
--
Bill Moran
us the explain.
However, 2 guesses:
1) You never analyzed the table, thus PG has awful statistics and
doesn't know how to pick a good plan.
2) You have so few rows in the table that a seq scan is actually
faster than an index scan, which is why PG uses it
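Both guesses are quick to check; table and query below are placeholders:

```sql
ANALYZE my_table;                      -- refresh the planner's statistics
EXPLAIN ANALYZE
SELECT * FROM my_table WHERE id = 42;  -- compare estimated vs. actual rows
```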
78 kB shared
> memory.
> VACUUM
This doesn't look problematic, so I doubt your vacuum policy is to blame.
--
Bill Moran
2000
> deadlock_timeout = 1
> max_locks_per_transaction = 640
> add_missing_from = on
>
> As I mentioned, any insights into changing the configuration to optimize
> performance are most welcome.
>
> Thanks
>
> Ron
>
> ---(end of broadc
In response to Ron St-Pierre <[EMAIL PROTECTED]>:
> Bill Moran wrote:
> > In response to Ron St-Pierre <[EMAIL PROTECTED]>:
> >
> >
> >> We vacuum only a few of our tables nightly, this one is the last one
> >> because it takes longer to run
CPUs seem to be the limiting
factor. As a result, this decision on what kind of CPUs to get in
the next round of servers is pretty important.
Any advice is much appreciated.
--
Bill Moran
s done once to get you back on track, assuming that #1 is
done properly.
A little bit of wasted space in the database is OK, and lazy vacuum done
on a reasonable schedule will keep the level of wasted space to an
acceptable level.
--
Bill Moran
"Jim C. Nasby" <[EMAIL PROTECTED]> wrote:
> On Sun, Apr 30, 2006 at 10:03:46AM -0400, Bill Moran wrote:
> > At this point, it seems like you need to do 2 things:
> > 1) Schedule lazy vacuum to run, or configure autovacuum.
> > 2) Schedule some downtime to r
s that we did. We pulled the plug before doing any extensive
testing, because it just didn't seem as if it was going to be worth it.
--
Bill Moran
ose some problems - e.g. effective_cache_size.
The only reason I can see for doing this is when you need to run two
different versions of PostgreSQL. Which is what I've been forced to
do on one of our servers.
It works, but it's a pain to admin. If you can just put all the databases
in
P. If you've got a cheap,
laggy switch, no amount of TCP or PostgreSQL tuning is going to overcome
it.
Hope some of this is helpful.
--
Bill Moran
made almost no difference in our tests. If your DB is
very IO intensive, you may have different results.
--
Bill Moran
In response to "Gourish Singbal" <[EMAIL PROTECTED]>:
If you look in the mail headers:
List-Unsubscribe: <mailto:[EMAIL PROTECTED]>
--
Bill Moran
r:
http://www.postgresql.org/docs/8.1/interactive/manage-ag-templatedbs.html
--
Bill Moran
autovac, you may want to experiment with
vacuum_cost_delay and associated parameters, which can lessen the
impact of vacuuming.
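Illustrative postgresql.conf starting points, not tuned recommendations:

```
vacuum_cost_delay = 10              # ms vacuum sleeps each time it hits the cost limit
vacuum_cost_limit = 200             # accumulated page-cost that triggers the sleep
autovacuum_vacuum_cost_delay = -1   # -1: autovacuum inherits vacuum_cost_delay
```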
--
Bill Moran
w do you define 20% load? 20% of the
CPU? Does that mean that it's OK for autovac to use 3% cpu and 100% of
your IO? Ok, so we need to calculate an average of IO and CPU -- which
disks? If your WAL logs are on one disk, and you've used tablespaces
to spread the rest of you
gging. This would
give you a record of when things run and how long they take. A little
work analyzing should show you which queries are running when your
favorite query slows down.
--
Bill Moran
In response to "Jim C. Nasby" <[EMAIL PROTECTED]>:
> On Thu, Sep 28, 2006 at 11:28:43AM -0400, Bill Moran wrote:
> > In response to Matthew Schumacher <[EMAIL PROTECTED]>:
> > >
> > > What I really need is a way to profile my proc when it r
of data
in a table, manually running analyze is a good idea. If you know that
you're creating a lot of dead tuples, manually vacuuming is a good
idea. Especially during a big data load where these changes might be
taking place faster than autovac notices.
--
Bill Moran
cuum analyze it
> went to 511.
From your attached config file:
#autovacuum_vacuum_threshold = 1000 # min # of tuple updates before
# vacuum
--
Bill Moran
wer it may make up a bit in memory bandwidth/
latency.
Personally, I've been unimpressed by Dell/Xeon; I think the Sun might do
better than you think, comparatively. On all the Dell servers I've used
so far, I've not seen performance that comes even close to the hardware sp
3 pages of the admin manual, as it contains an excellent description of
why databases need to be vacuumed, which one can use to determine how
often vacuuming is necessary.
--
Bill Moran
; RAM), can anyone give me some advice?
Have you referenced this document?:
http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html
--
Bill Moran
t this doesn't seem like a wise thing to do.
Your thoughts appreciated in advance!
- Jeremy
7+ years experience in Oracle performance-tuning
relatively new to postgresql
--
Bill Moran
the server for it to become
reasonably fast.
If you've only got one disk, then a second disk for OS/logging.
Difficult to say more without knowing numbers of users/activity etc.
--
Bill Moran
Rajesh Kumar Mallah wrote:
Bill Moran wrote:
Rajesh Kumar Mallah wrote:
Hi,
The problem was solved by reloading the Table.
the query now takes only 3 seconds. But that is
not a solution.
If dropping/recreating the table improves things, then we can reasonably
assume that the table is pretty
ng easier than "CLUSTER ON
"
Since CLUSTER recreates the table, it implicitly removes the dead tuples.
However, it's going to be a LOT slower than vacuum, so if dead tuples are the
main problem, vacuum is still the way to go.
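The 8.2-era form, with names as placeholders; note that it holds an exclusive lock on the table for the duration:

```sql
CLUSTER my_table_pkey ON my_table;  -- rewrite in index order, discarding dead tuples
ANALYZE my_table;                   -- worth refreshing stats after the rewrite
```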
--
Bill Moran
query running have a lock on that table?
This may be a dumb question (but only because I don't know the answer)
Doesn't/shouldn't vacuum have some kind of timeout so if a table is locked
it will give up eventually (loudly complaining when it does so)?
--
Bill Moran
aybe a cron
that vacuums that table every 5 minutes.
You could also do a combination, i.e. enable autovacuum with conservative
settings and set a cron to vacuum the table every 10 minutes.
Vacuuming once a day is usually only enough if you have very minimal
updates.
--
Bill Moran
> involved and there are no joins. It's testing very low-level aspects of
> performance.
Actually, what it's really showing is parallelism, and I've always
expected PostgreSQL to come out on top in that arena.
--
Bill Moran
re to track when sorts don't fit in
work_mem.
--
Bill Moran
ram the
process holding the transaction open to do a vacuum full of that table
when it's done with its work.
--
Bill Moran
ates materialized_topics
any time the first table is altered. Thus you always have fast lookups.
Of course, this may be non-optimal if that table sees a lot of updates.
--
Bill Moran
it hasn't moved from
8.1 -> 8.2. The comment is that it's bad because it hasn't updated a
major branch with the latest bug fixes. i.e. it hasn't moved from 8.1.4
to 8.1.5.
If this is indeed the case, I agree that such a distro isn't worth using.
--
Bill Moran
g = sql_ascii# actually, defaults to
> database
> # encoding
>
> # These settings are initialized by initdb -- they might be changed
> lc_messages = 'it_IT.UTF-8' # locale for system
> error message
>
r faster disks will
improve this situation are probably 90% or better.
Other things that could cause this problem are poor schema design, and
unreasonable expectations.
--
Bill Moran