Arnau wrote:
Hi all,
>
> COPY FROM a file with all the ID's to delete, into a temporary
table, and do a joined delete to your main table (thus, only one query).
I already did this, but I have no idea how to write that joined delete.
Could you give me a hint ;-)?
Thank you very much
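A minimal sketch of the temp-table approach (table, column, and file names here are hypothetical, not from the original thread):

```sql
-- Load the IDs to delete into a temporary table.
CREATE TEMP TABLE ids_to_delete (id integer PRIMARY KEY);
COPY ids_to_delete FROM '/path/to/ids.txt';

-- One joined delete against the main table.
DELETE FROM main_table
USING ids_to_delete
WHERE main_table.id = ids_to_delete.id;
```

DELETE ... USING is the PostgreSQL spelling of a joined delete; a more portable alternative is WHERE id IN (SELECT id FROM ids_to_delete).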
ad service requests to minimize rotational latency, but
you can't improve bandwidth.
- Luke
For Solaris's software raid, the default settings for raid-1 sets is:
round-robin read, parallel write. I assumed this meant it would give
similar read performance to r
, the best way to
know what VPS specs you need is to do your own tests/benchmarks.
http://www.cl.cam.ac.uk/Research/SRG/netos/xen/performance.html
-Kevin
---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
memory.
Yes, throwing more ram at it is usually the better solution, but it's nice
linux gives you that knob to turn when adding ram isn't an option, at
least for me.
--
Kevin
was just
taking too long having to do 4 db transactions just to mirror one
command. I have eserv but was never really a java kind of guy.
Alright then - back to my code. Again thanks for the help and info.
Kevin
ity hasn't really been an issue, at least as far as
the basics go, I still agree with your general sentiment -- stay away
from the Dells, at least if they have the Perc3/Di controller. You'll
probably get much better performance out of something else.
--
Kevin Brown
le volumes can cause head movement where the kernel
might be treating the volumes as completely independent. But that
just means that you can't be dumb about how you configure your RAID
setup.
So what gives? Given the above, why is SCSI so much more efficient
than plain, dumb SATA? And why w
t of view of the kernel).
The specification doesn't require that the kernel act on the calls
immediately or write only the blocks referred to by the call in
question.
--
Kevin Brown [EMAIL PROTECTED]
Tom Lane wrote:
> Kevin Brown <[EMAIL PROTECTED]> writes:
> > I really don't see how this is any different between a system that has
> > tagged queueing to the disks and one that doesn't. The only
> > difference is where the queueing happens. In the case of
Tom Lane wrote:
> Kevin Brown <[EMAIL PROTECTED]> writes:
> > Tom Lane wrote:
> >> The reason this is so much more of a win than it was when ATA was
> >> designed is that in modern drives the kernel has very little clue about
> >> the physical geometry of
Tom Lane wrote:
> Kevin Brown <[EMAIL PROTECTED]> writes:
> > In the case of pure random reads, you'll end up having to wait an
> > average of half of a rotation before beginning the read.
>
> You're assuming the conclusion. The above is true if the disk i
Vivek Khera wrote:
>
> On Apr 14, 2005, at 10:03 PM, Kevin Brown wrote:
>
> >Now, bad block remapping destroys that guarantee, but unless you've
> >got a LOT of bad blocks, it shouldn't destroy your performance, right?
> >
>
> ALL disks have bad blocks
Rosser Schwarz wrote:
> while you weren't looking, Kevin Brown wrote:
>
> [reordering bursty reads]
>
> > In other words, it's a corner case that I strongly suspect
> > isn't typical in situations where SCSI has historically made a big
> > differen
ut have psql ignore the output?
> If so, you could use \timing.
Would timing "SELECT COUNT(*) FROM (query)" work?
--
Kevin Brown [EMAIL PROTECTED]
Josh Berkus wrote:
> Jim, Kevin,
>
> > > Hrm... I was about to suggest that for timing just the query (and not
> > > output/data transfer time) using explain analyze, but then I remembered
> > > that explain analyze can incur some non-trivial overhead with the t
h data and load.
Thanks for any info you can provide.
-Kevin
uct works for inner or outer joins and works
regardless of how complex the logic for picking the best choice is. I
think one reason this tends to optimize well is that an EXISTS test can
finish as soon as it finds one matching row.
-Kevin
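The early-exit property of EXISTS can be sketched like this (table and column names are illustrative only):

```sql
-- Form that must collect every matching row of the subquery first:
SELECT p.* FROM person p
WHERE p.id IN (SELECT f.person_id FROM food f);

-- Equivalent EXISTS form; the test can finish as soon as one
-- matching row is found for each outer row:
SELECT p.* FROM person p
WHERE EXISTS (SELECT 1 FROM food f WHERE f.person_id = p.id);
```

Modern planners often treat these identically, but the EXISTS form states the early-exit intent directly.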
>>> Junaili Lie <[EMAIL PROTECTED]>
rformance, and
preliminary tests look good. We should be able to get some load testing
going within a week, and we're shooting for slipping these machines into
the mix around the end of this month. (We've gone to some lengths to
keep our code portable.)
-Kevin
>>> Richard H
thing which is often necessary for performance.
If you add a column to the person table for "last_food_id" and triggers
to maintain it when the food table is modified, voila! You have a
simple and fast way to get the results you want.
-Kevin
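A hedged sketch of the trigger idea, assuming a hypothetical schema with food(id, person_id, ...) and a person.last_food_id column:

```sql
-- Trigger function: record the most recent food row for each person.
CREATE OR REPLACE FUNCTION set_last_food() RETURNS trigger AS $$
BEGIN
    UPDATE person
       SET last_food_id = NEW.id
     WHERE person.id = NEW.person_id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER food_last_food
    AFTER INSERT ON food
    FOR EACH ROW EXECUTE PROCEDURE set_last_food();
```

Queries for "each person's latest food" then become a simple join on last_food_id instead of a per-person MAX() subquery.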
>>> Junaili Lie <[EMAIL P
I've done a lot of work with a bookkeeping system where we have such
redundancy built in. The auditors, however, need to be able to generate
lists of the financial transaction detail to support balances. These
reports are among the most demanding in the system. I shudder to think
how unacceptabl
It sure would be nice if the optimizer would consider that it had the
leeway to add any column which was restricted to a single value to any
point in the ORDER BY clause. Without that, the application programmer
has to know what indexes are on the table, rather than being able to
just worry about
optimizer, then the indexes are modified such that that
column is no longer useful. Whether this merits the programming effort
and performance hit you describe seems highly questionable, though.
-Kevin
>>> Tom Lane <[EMAIL PROTECTED]> 06/13/05 11:22 AM >>>
I don't thin
.
So, it looks like this is right except for the assertion that the index
is still available for other queries.
-Kevin
>>> Tobias Brox <[EMAIL PROTECTED]> 06/21/05 2:46 PM >>>
[John A Meinel - Tue at 10:14:24AM -0500]
> I believe if you drop the indexes inside a t
appropriate for returning query results.
How this issue comes into play in PostgreSQL is beyond my ken, but
hopefully this observation is helpful to someone.
-Kevin
>>> "Merlin Moncure" <[EMAIL PROTECTED]> 06/30/05 9:10 AM >>>
> My collegue spent some
Just out of curiosity, does it do any better with the following?
SELECT ...
FROM a
JOIN b ON (a.key = b.key)
LEFT JOIN c ON (c.key = a.key)
LEFT JOIN d ON (d.key=a.key)
WHERE (b.column <= 100)
>>> "Dario Pudlo" <[EMAIL PROTECTED]> 07/06/05 4:54 PM >>>
(first at all,
You might want to set join_collapse_limit high, and use the JOIN
operators rather than the comma-separated lists. We generate the WHERE
clause on the fly, based on user input, and this has worked well for us.
-Kevin
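The idea, sketched with placeholder tables (the limit value is just an example; the default is 8):

```sql
-- Raise the limit so the planner still searches join orders even
-- when the FROM clause is written with explicit JOIN operators.
SET join_collapse_limit = 20;

SELECT a.id
FROM a
JOIN b ON b.a_id = a.id
JOIN c ON c.b_id = b.id;
```

Conversely, setting join_collapse_limit = 1 forces the planner to honor the JOIN order exactly as written, which can be useful when you know a good order the planner misses.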
>>> "Dario" <[EMAIL PROTECTED]> 07/18/05 2:24
than it needs to be, but when we got to something which ran well,
we stopped tinkering. The default value clearly caused a bottleneck.
You might find this page useful:
http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html
-Kevin
>>> Oliver Crosby <[EMAIL PR
This is sounding suspiciously similar to behavior I've seen with other types of
TCP database connections when the tcp-no-delay option is not on. Is it
possible that the ODBC driver for Windows is not successfully setting this up?
-Kevin
>>> Dalibor Sramek <[EMAIL PROTECTE
lready doing so.
-Kevin
>>> Dalibor Sramek <[EMAIL PROTECTED]> 09/14/05 8:02 AM >>>
On Tue, Sep 13, 2005 at 11:05:00AM -0400, Merlin Moncure wrote:
> 5. do select array_accum(q::text) from generate_series(1,1) q;
I made the tests you suggested and the pattern
Have you tried the "best choice" pattern -- where you select the set of
candidate rows and then exclude those for which a better choice
exists within the set? I often get better results with this pattern than
with the alternatives. Transmuting your query to use this pattern gives:
select PlayerI
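The query above is truncated in the archive, but the general shape of the "best choice" pattern looks something like this (table and ranking columns are hypothetical):

```sql
-- Select candidate rows, then exclude any row for which a "better"
-- candidate exists within the same group.
SELECT c.game_id, c.player_id, c.score
FROM candidate c
WHERE NOT EXISTS (
    SELECT 1
    FROM candidate better
    WHERE better.game_id = c.game_id
      AND better.score > c.score
);
```

The NOT EXISTS test can stop at the first better row it finds, which is often why this pattern optimizes well.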
I can't help wondering how a couple thousand context switches per
second would affect the attempt to load disk info into the L1 and
L2 caches. That's pretty much the low end of what I see when the
server is under any significant load.
ence.
Feel free to laugh at the old fart who decided to sip his Bushmill's
while reading through this thread and try to run with the young lions.
As someone else recently requested, though, please don't point while
you laugh -- that's just rude. :-)
-Kevin
>>> Ron
cle. They are using Sybase Adaptive Server Enterprise (ASE). I
believe named caches were added in version 12.0, long after Microsoft
split off with their separate code stream based on the Sybase effort.
-Kevin
>>> "Dario" <[EMAIL PROTECTED]> 10/05/05 6:16 AM >
with slowness, it could be a useful clue.
-Kevin
conds in the
optimizer finding the sub-second plan.
-Kevin
>>> Tom Lane <[EMAIL PROTECTED]> >>>
Rich Doughty <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> The reason these are different is that the second case constrains
only
>> the last-to-be-jo
the right
side of a UNION are immaterial.
Unless we have reason to believe that PostgreSQL is
non-compliant on this point, I don't think it is a good idea to
slow the query down with the subquery.
-Kevin
>>> "Merlin Moncure" <[EMAIL PROTECTED]> >>>
>
Just as an FYI, if you want to reassure yourself that the ORDER BY
is being applied as intended, you could do the following:
(
select 1 as hint, start_time as when [...]
union all
select 2 as hint, end_time as when [...]
) order by hint, "when"
This is ANSI/ISO standard, and works in PostgreSQL (
That sure seems to bolster the theory that performance is degrading
because you exhaust the cache space and need to start reading
index pages. When inserting sequential data, you don't need to
randomly access pages all over the index tree.
-Kevin
>>> Kelly Burkhart <[EMAIL PR
cations on the disk should remain the same, as should their data
blocks (roughly, depending on the implementation of the filesystem, of
course).
--
Kevin Brown [EMAIL PROTECTED]
I'll just start by warning that I'm new-ish to postgresql.
I'm running 8.1 installed from source on a Debian Sarge server. I have a
simple query that I believe I've placed the indexes correctly for, and I
still end up with a seq scan. It makes sense, kinda, but it should be able
to use the in
On Wednesday 14 December 2005 16:47, you wrote:
> Kevin Brown <[EMAIL PROTECTED]> writes:
> > I'm running 8.1 installed from source on a Debian Sarge server. I have a
> > simple query that I believe I've placed the indexes correctly for, and I
> > still en
On Wednesday 14 December 2005 17:23, you wrote:
> what hardware?
Via 800 mhz (about equiv to a 300 mhz pentium 2)
128 mb of slow ram
4200 rpm ide hard drive.
Told you it was slow. :-)
This is not the production system. I don't expect this to be "fast" but
everything else happens in under 2 sec
On Wednesday 14 December 2005 17:30, Mark Kirkwood wrote:
> You scan 60 rows from to_ship to get about 25000 - so some way to
> cut this down would help.
Yup. I'm open to anything too, as this is the only real part of the system
that cares. So either maintaining a denormalized copy column,
nable
> with the index on to_ship.ordered_product_id, right? Given the
> conditions on paid and suspended_sub.
>
> If you (Kevin) try adding such an index, ideally it would get used given
> that you're only pulling out a small fraction of the rows in to_ship.
> If it doesn&
On Wednesday 14 December 2005 18:36, you wrote:
> Well - that had no effect at all :-) You don't have and index on
> to_ship.ordered_product_id do you? - try adding one (ANALYZE again), and
> let use know what happens (you may want to play with SET
> enable_seqscan=off as well).
I _DO_ have an ind
specific situation, my opinion is that the proper
modification to PostgreSQL would be to give it (if it isn't already
there) the ability to include the cost of functions in the plan. The
cost needn't be something that it automatically measures -- it could
be specified at function
ably give the user greater
incentive to report the problem than use of planner hints.
--
Kevin Brown [EMAIL PROTECTED]
the next transaction.
In essence, the backend would be "polling" itself every second or so
and recording its state at that time, rather than on every
transaction.
Assuming that doing all that wouldn't screw something else up...
--
Kevin Brown
Craig A. James wrote:
> Kevin Brown wrote:
> >>Hints are dangerous, and I consider them a last resort.
> >
> >If you consider them a last resort, then why do you consider them to
> >be a better alternative than a workaround such as turning off
> >enable_seqsc
On Wednesday 21 December 2005 20:14, Stephen Frost wrote:
> * Madison Kelly ([EMAIL PROTECTED]) wrote:
> > If the performace difference comes from the 'COPY...' command being
> > slower because of the automatic quoting can I somehow tell PostgreSQL
> > that the data is pre-quoted? Could the perfo
at being
in tables which are insert-only except for a weekly purge of data over a
year old. We do nightly vacuums on the few tables with update/delete
activity, and a weekly vacuum of the whole database -- right after the
delete of old rows from the big tables.
-Kevin
-
Hi List;
I have a client looking to host/co-locate multiple PostgreSQL clusters
(inclusive of PL/pgSQL application code) per server. I did some co-location
work several years back with one of the bigger telcos and remember there
were dire consequences for not carefully evaluating the expected
ld be fixed
in a PostgreSQL environment, but it might give you another avenue to
search.
-Kevin
index bloat which can degrade performance. You should probably look
at whether the long running transaction could be broken down into a
number of smaller ones.
-Kevin
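One common way to break such work down is to purge in batches, so no single transaction holds the oldest-transaction horizon for hours (table, column, and batch size below are hypothetical):

```sql
-- Delete old rows in limited batches, committing between each.
DELETE FROM event_log
WHERE id IN (
    SELECT id
    FROM event_log
    WHERE logged_at < now() - interval '1 year'
    LIMIT 10000
);
-- Repeat (e.g. from a driver script) until zero rows are deleted,
-- letting vacuum reclaim space between batches.
```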
>>> On Fri, Dec 22, 2006 at 10:52 AM, in message
<[EMAIL PROTECTED]>, "Jeremy
Haile"
<[EMAIL P
>>> On Fri, Dec 22, 2006 at 12:14 PM, in message
<[EMAIL PROTECTED]>,
Tom Lane <[EMAIL PROTECTED]> wrote:
> "Kevin Grittner" <[EMAIL PROTECTED]> writes:
>> As I understand it, the log space accumulates for the oldest
transaction
>> which is sti
Hi List(s);
I'm wanting to find more details per the following methods, can someone
explain to me exactly what each of these methods is, how its implemented in
postgres or point me to some docs or README's that explain these methods?
Some of 'em are obviously no-brainers but I'm writing a postg
On 26 Dec 2006 at 2:55p -0500, Tom Lane wrote:
Kevin Hunter <[EMAIL PROTECTED]> writes:
A friend has asked me about creating a unique table for individual users
that sign up for his site. (In essence, each user who signs up would
essentially get a set of CREATE TABLE {users,friends,
wever, I'm hoping that I can give my stats/assumptions to
the list and someone would give me a configuration file that would /most
likely/ be best? I can search the documentation/archives, but I'm
hoping to get head start and tweak from there.
Any and all advice would be /much/ appr
On 19 Jan 2007 at 8:45a -0500, Merlin Moncure wrote:
On 1/17/07, Kevin Hunter [hunteke∈earlham.edu] wrote:
I am in the process of learning some of the art/science of benchmarking.
Given novnov's recent post about the comparison of MS SQL vs
PostgresQL, I felt it time to do a benc
On 19 Jan 2007 at 10:56a -0600, Bruno Wolff III wrote:
On Fri, Jan 19, 2007 at 09:05:35 -0500,
Kevin Hunter <[EMAIL PROTECTED]> wrote:
Seriously though, that would have bitten me. Thank you, I did not know
that. Does that mean that I can't publish the results outside of my
wo
cold queries. It would be fantastic for testing to be
able to signal both PG and the operating system to invalidate their caches.
-Kevin Murphy
;."countyNo" =
"H"."countyNo" AND "D"."caseNo" LIKE '2006TR%')
WHERE "H"."tranType" = 'A'
AND "A"."date" > DATE '2006-01-01'
AND "
>>> On Fri, Mar 23, 2007 at 4:49 PM, in message <[EMAIL PROTECTED]>,
Tom Lane <[EMAIL PROTECTED]> wrote:
> "Kevin Grittner" <[EMAIL PROTECTED]> writes:
>> explain analyze
>> SELECT "A"."adjustmentNo", "A"."
I don't understand -- TRUE OR UNKNOWN evaluates to TRUE, so why would the IN
need to continue? I'm not quite following the rest; could you elaborate or
give an example? (Sorry if I'm lagging behind the rest of the class here.)
-Kevin
>>> Martijn van Oosterhout
>>> On Fri, Mar 23, 2007 at 5:26 PM, in message
<[EMAIL PROTECTED]>, "Kevin Grittner"
<[EMAIL PROTECTED]> wrote:
> I tried something which seems
> equivalent, but it is running for a very long time. I'll show it with just
> the explain while I w
>>> On Fri, Mar 23, 2007 at 6:04 PM, in message
<[EMAIL PROTECTED]>, "Peter Kovacs"
<[EMAIL PROTECTED]> wrote:
> On 3/23/07, Kevin Grittner <[EMAIL PROTECTED]> wrote:
> [...]
>> That's the good news. The bad news is that I operate u
> order the load file by the index field in the table before loading it?
If you load the rows in the same order that the index would read them during
the cluster, there is no need to cluster and no benefit from doing so.
-Kevin
gainst this much of Y data, and working under these
constraints. You might get better performance (in this area ... ) if
you altered the configuration options like so: ..."
Certainly not for the masters, but perhaps for standard installation
sort of deals, sort of liking
with whom I
talk prefer MySQL? Because PostgreSQL is so "slooow" out of the box?*
Thanks,
Kevin
* Not trolling; I use PostgreSQL almost exclusively.
>>> On Thu, Jun 14, 2007 at 6:14 PM, in message <[EMAIL PROTECTED]>,
Craig James <[EMAIL PROTECTED]> wrote:
> Looking for replication solutions, I find:
>
> Slony-I
> Slony-II
> PGReplication
> PGCluster
You wouldn't guess it from the name, but pgpool actually supports replication:
http://pg
Postgres uses this for ...
Not something that necessarily needs to be spelled out in the .conf
file, but would, IMVHO, help minimally educate.
Kevin
wal_buffers = 160kB # min 32kB
# (change requires restart)
checkpoint_segments = 10    # in logfile segments, min 1, 16MB each
Since you're on 8.0 I think you'll need to specify wal_buffers as a number of
8KB pages.
-Kevin
--
tovac itself doesn't create or delete any files, so what's
> up here?
Have you ruled out checkpoints as the culprit?
-Kevin
searchName filter to the Bitmap
Index Scan phase, and save 97.5% of the page retrievals in the Bitmap Heap Scan.
-Kevin
cc=> explain analyze select * from "Warrant" where "soundex" = 'S530' and
"searchName" like '%,G%' and "county
regular basis. We do it nightly on most of our databases.
Without proper maintenance, dead space will accumulate and destroy your
performance.
Also, I don't generally recommend VACUUM FULL. If a table needs aggressive
maintenance, I recommend using CLUSTER, follow
FULL tends to cause index bloat, so you will probably see performance
issues in other queries at the moment. You will probably need to REINDEX
the table or use CLUSTER to clean that up.
-Kevin
asure to help make this easier
to read. You could, for example, say:
shared_buffers = 96MB
effective_cache_size = 1200MB
-Kevin
asurements"."MetricID"
WHERE "AnalysisModules"."AnalysisModuleName"::text = 'NMF'::text
AND "ParameterNames"."ParameterName"::text = 'NMF'::text
AND "PrintSamples"."TestPatternName"::text ~~ 'IQAF-TP8%'::text
AND "tblColors"."ColorID" <> 3
GROUP BY
"PrintSamples"."MachineID",
"PrintSamples"."PrintCopyID",
"tblColors"."ColorID"
;
I'd also be inclined to simplify the FROM clause by eliminating the parentheses
and putting the ON conditions closer to where they are used, but that would be
more for readability than any expectation that it would affect the plan.
-Kevin
be maintained on an ongoing basis
(possibly using triggers) or could be materialized periodically or prior to
running a series of reports or queries.
Such redundancies violate the normalization rules which are generally used
in database design, but some denormalization is often needed for acc
>>> On Mon, Aug 13, 2007 at 4:25 PM, in message
<[EMAIL PROTECTED]>, "Kevin Grittner"
<[EMAIL PROTECTED]> wrote:
>>>> On Mon, Aug 13, 2007 at 4:00 PM, in message
> <[EMAIL PROTECTED]>, "Relyea,
> Mike" <[EMAIL PROTECTED]>
"tblColors"."ColorID" <> 3
AND "PrintSamples"."TestPatternName" LIKE 'IQAF-TP8%';
)
I fear you may really want it evaluate to:
HAVING ("PrintSamples"."MachineID" = 4741 OR "PrintSamples".
issue in my prior email, I don't see any test for 4745 in the
EXPLAIN ANALYZE output, which makes me think it doesn't go with the posted
query.
-Kevin
ructure and the results of EXPLAIN ANALYZE (rather than just EXPLAIN).
-Kevin
st=0.00..34.26
rows=1026 width=74)
(9 rows)
Any thoughts, comments, Ideas for debugging, etc would be way helpful...
Thanks in advance.
/Kevin
s?" arises...
So here's my questions:
1) Does this sound like a good plan?
2) Are there other steps I should be taking, other Issues I should be
concerned about short-term, etc?
3) Does anyone have any additional advice for managing either this initial
mess, or the system(s) lon
>>> On Fri, Aug 24, 2007 at 2:57 PM, in message
<[EMAIL PROTECTED]>, Kevin Kempter
<[EMAIL PROTECTED]> wrote:
>c) setup WAL archiving on the 8.1.4 cluster
>
>d) do a full dump of the 8.1.4 cluster and restore it to the new 8.2.4
> cluster
>
>
On Friday 24 August 2007 15:39:22 Tom Lane wrote:
> Kevin Kempter <[EMAIL PROTECTED]> writes:
> > The development folks that have been here awhile tell me that it seems
> > like when they have a query (not limited to vacuum processes) that has
> > been running for a lon
? A good raid 10 array with
> write caching should be able to handle a 200G database fairly well
What other details were you looking for?
-Kevin
On Monday 27 August 2007 15:56:33 Kevin Grittner wrote:
> >>> Decibel! <[EMAIL PROTECTED]> 08/27/07 4:00 PM >>>
> >>>
> > > > They're running version 8.1.4
> >
> > As for your pg_dump idea... why not just do a CREATE TABLE AS SE
On Monday 27 August 2007 15:00:41 you wrote:
> On Fri, Aug 24, 2007 at 04:41:44PM -0400, Bill Moran wrote:
> > In response to Kevin Kempter <[EMAIL PROTECTED]>:
> > > Hi List;
> > >
> > > I've just started working with a client that has been runni
On Monday 27 August 2007 16:04:39 Decibel! wrote:
> On Mon, Aug 27, 2007 at 04:56:33PM -0500, Kevin Grittner wrote:
> > >>> Decibel! <[EMAIL PROTECTED]> 08/27/07 4:00 PM >>>
> > >>>
> > > > > They're running version 8.1.4
>
rvers are 4-CPU intel boxes (not dual-core) with 4Gig of memory and
attached to raid-10 arrays
Any thoughts on where to start?
Below are the current/relevant/changed postgresql.conf settings.
Thanks in advance...
/Kevin
== postgresql.c
>>> On Mon, Aug 27, 2007 at 11:13 PM, in message
<[EMAIL PROTECTED]>, Kevin Kempter
<[EMAIL PROTECTED]> wrote:
> Each night during the nightly batch processing several of the servers (2 in
> particular) slow to a crawl - they are dedicated postgres database servers.
&
>>> On Tue, Aug 28, 2007 at 10:22 AM, in message
<[EMAIL PROTECTED]>, "Evan Carroll"
<[EMAIL PROTECTED]> wrote:
> Yes, I ran vacuum full after loading both dbs.
Have you run VACUUM ANALYZE or ANALYZE?
-Kevin
If so, try adding 'where alias is not null' to the query.
-Kevin
>>> On Mon, Sep 3, 2007 at 11:15 AM, in message
<[EMAIL PROTECTED]>, Shawn
<[EMAIL PROTECTED]> wrote:
> On Sun, 02 Sep 2007 10:49:09 -0500 "Kevin Grittner"
> <[EMAIL PROTECTED]> wrote:
>
>> >>> On Sat, Sep 1, 2007 at 12:29 PM,
/round
bgwriter_all_maxpages = 600 # 0-1000 buffers max written/round
Don't leave these in effect permanently without close attention to the
overall impact. These settings have worked well for us, but are likely
not to work well for everyone.
-Kevin
-
ealize that each of those
indexes will have a new entry inserted whenever you update a row. If your
indexes are that expensive to maintain, you want to go out of your way
update rows only when something actually changes, which is not the case
for your second update statement yet.
I
y
help - you can bump the default_statistics_target for a single table in the
pg_autovacuum system catalog table.
Hope this helps...
/Kevin
r documentation, you could start with this:
http://www.postgresql.org/docs/8.2/interactive/app-postgres.html
-Kevin