bles, as the
> database
> is roughly 400GB on disk and this takes a very long time to run.
>
> I have been able to do this with tables, using a helpful view posted to this
> list a few months back, but I'm not sure if I can get the same results on
> indexes.
Use pgstatindex
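A sketch of its use (the index name is a placeholder; pgstatindex ships in the contrib module pgstattuple, installed via CREATE EXTENSION on 9.1+ or the contrib SQL script on older releases):

```sql
-- pgstatindex reads only the index, not the 400GB of heap,
-- so it is far cheaper than a full-table bloat query.
CREATE EXTENSION pgstattuple;

SELECT index_size, leaf_pages, avg_leaf_density, leaf_fragmentation
FROM pgstatindex('my_index_name');   -- placeholder index name
```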
egligible overall.
Replication solutions currently carry significantly more overhead than
this, which is one reason I'm starting to think about log-based
replication for future releases.
--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com
running DDL, TRUNCATE or CLUSTER? That will
interfere with the operation of VACUUM.
--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com
---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?
htt
e? Benchmark? Availability? Architecture?
> Limitations: users, volumes? Resources needed? Support?
I would suggest you make your comparison based upon your specific needs,
not a purely abstract comparison. If you're not sure what your
requirements are, research those first.
--
ackup() is the direct analogue of Oracle's ALTER DATABASE
BEGIN BACKUP. Snapshots work with Oracle too, in much the same way.
After reviewing the manual, if you honestly think there is a problem,
please let me know and I'll work with you to investigate.
--
Simon Riggs
RDBMS any
different?
--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com
we should tackle the problem for 8.4?
If you see a problem, please explain what it is, after careful review of
the manual.
--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com
verify whether that has happened or
not.
At the very least, you should issue a CHECKPOINT prior to taking the
snapshot, to ensure that the write barriers have gone through.
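A minimal sketch of the backup bracket being referred to (the label string is arbitrary; pg_start_backup/pg_stop_backup exist from 8.0 onwards):

```sql
SELECT pg_start_backup('nightly-snapshot');  -- forces a checkpoint, then marks the backup start
-- take the filesystem/device-level snapshot here
SELECT pg_stop_backup();                     -- ends the backup
```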
But that being said, I'm not quite sure why following the Continuous
Archiving procedures is a problem, since they don
error that occurs after the full and correct PostgreSQL hot backup
procedures have been executed, or
ii) present a conjecture that explains in detail how a device level
error might occur
then I will look into this further.
--
Simon Riggs
EnterpriseDB
messages = DEBUG2 and re-run the recovery please.
--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com
G: page 28902 of relation 1663/16384/76718 was uninitialized
> WARNING: page 26706 of relation 1663/16384/76719 was uninitialized
> WARNING: page 26708 of relation 1663/16384/76719 was uninitialized
Those two are interesting because we appear to have two valid pages in
the middle of some unin
L traffic from Greg Stark earlier, based
upon xlogdump.
http://archives.postgresql.org/pgsql-hackers/2007-03/msg01589.php
Other details of WAL volumes are in the code. Further analysis would be
welcome, to assist discussions of where to optimize next.
--
Simon Riggs
EnterpriseDB http://www.en
ow-medium complexity OLTP workloads.
EState is about 8300 bytes, so just above the large allocation limit.
This means that every time we request an EState, i.e. at least once per
statement we need to malloc() and then later free().
Would it be worth a special case in the palloc system to avoid
On Mon, 2007-07-23 at 10:11 -0400, Tom Lane wrote:
> "Simon Riggs" <[EMAIL PROTECTED]> writes:
> > EState is about 8300 bytes,
>
> What?
>
> (gdb) p sizeof(EState)
> $1 = 112
>
> This is on a 32-bit machine, but even on 64-bit it wouldn't be
On Mon, 2007-07-23 at 10:54 -0400, Tom Lane wrote:
> "Simon Riggs" <[EMAIL PROTECTED]> writes:
> > I looked at this last May and my notes say "ExecutorState". I guess that
> > was wrong, but my analysis showed there was a single malloc of 8228
> >
On Mon, 2007-07-23 at 16:48 +0100, Simon Riggs wrote:
> On Mon, 2007-07-23 at 10:54 -0400, Tom Lane wrote:
> > "Simon Riggs" <[EMAIL PROTECTED]> writes:
> > > I looked at this last May and my notes say "ExecutorState". I guess that
> > > was wr
On Mon, 2007-07-23 at 12:35 -0400, Tom Lane wrote:
> "Simon Riggs" <[EMAIL PROTECTED]> writes:
> > Well, I discover there is an allocation of 8232 (inflation...) made once
> > per statement by a memory context called... ExecutorState. Still not
> > sure exac
On Mon, 2007-07-23 at 14:19 -0400, Tom Lane wrote:
> "Simon Riggs" <[EMAIL PROTECTED]> writes:
> > currPos and markPos are defined as BTScanPosData, which is an array of
> > BTScanPosItems. That makes BTScanOpaqueData up to 8232 bytes, which
> > seems was
e reasonable evidence that Referential Integrity is the major
performance bottleneck and would like some objective evidence that this
is the case.
No rush, since it will be an 8.4 thing to discuss and improve this
substantially in any of the ways I envisage.
--
Simon Riggs
EnterpriseDB h
On Wed, 2007-07-25 at 15:07 +0200, Mario Weilguni wrote:
> Am Mittwoch 25 Juli 2007 schrieb Simon Riggs:
> > I have reasonable evidence that Referential Integrity is the major
> > performance bottleneck and would like some objective evidence that this
> > is the case.
>
On Wed, 2007-07-25 at 10:09 -0400, Merlin Moncure wrote:
> On 7/25/07, Simon Riggs <[EMAIL PROTECTED]> wrote:
> >
> > Should you get the chance I would appreciate a comparative test for
> > TPC-E.
> >
> > 1. Normal TPC-E versus
> > 2. TPC-E with all
ewhere in the hundreds so we have some statistical validity.
--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com
t for one process, or all processes aggregated in some way?
> CheckpointStartLock Shared 6
> CheckpointStartLock Exclusive 102
That's definitely whacked. Surely we didn't start 102 checkpoints yet
only attempt to commit 6 times?
--
Simon R
ren't big enough to hold clog pages for long enough and the SELECT FOR
SHARE processing is leaving lots of additional read locks that are
increasing the number of clog requests for older xids.
Try the enclosed patch.
--
Simon Riggs
EnterpriseDB http://www.enterpris
ctions?
Are you running with commit_delay > 0? It's possible that the call to
CountActiveBackends() is causing pinging of the procarray by other
backends while we're trying to read it during CommitTransaction(). If
so, try the attached patch.
--
Simon Riggs
Enterpri
es`CommitTransaction+0x104
>
> Yeah, ProcArrayLock is pretty contended. I think it would be kinda neat
> if we could split it up in partitions. This lock is quite particular
> though.
Maybe. If we did, we should set the partitions according to the number of
users, so lower num
Will this have a dramatic effect on a particular
benchmark, or for what reason might we need this? Tom has questioned the
use case here, so I think it would be good to explain a little more for
everyone. Thanks.
--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com
very clear to everybody why an extra 100 users is useful and it
would certainly help your case if you can explain.
--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com
On Fri, 2007-09-07 at 11:37 +0200, Maila Fatticcioni wrote:
> protocol C;
Try protocol B instead.
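For context, DRBD's protocols differ in when a write is acknowledged: A on local completion, B when the peer has received it, C only after the peer has also written it to disk. A minimal drbd.conf sketch (resource name assumed, all other required settings omitted):

```
resource r0 {
    protocol B;   # lower commit latency than C; small risk window only on double failure
}
```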
--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com
On Fri, 2007-09-07 at 23:54 +0200, Bernd Helmle wrote:
> --On Freitag, September 07, 2007 20:00:16 +0100 Simon Riggs
> <[EMAIL PROTECTED]> wrote:
>
> > On Fri, 2007-09-07 at 11:37 +0200, Maila Fatticcioni wrote:
> >
> >> protocol C;
> >
> > Try
ithout LIMIT and with/without cursor, for both b1
and b2?
--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com
t use the shared buffers?
Well, 6.5GB of shared_buffers could be swapped out and need not be
swapped back in to perform those 3 queries.
--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com
On Fri, 2007-09-21 at 13:29 +0200, Csaba Nagy wrote:
> > Can you post plans with/without LIMIT and with/without cursor, for both b1
> > and b2?
>
> The limit is unfortunately an absolutely needed part of the query
Understood, but that's not why I asked...
--
Simon Riggs
2ndQu
On Fri, 2007-09-21 at 14:12 +0200, Csaba Nagy wrote:
> On Fri, 2007-09-21 at 12:34 +0100, Simon Riggs wrote:
> > On Fri, 2007-09-21 at 13:29 +0200, Csaba Nagy wrote:
> >
> > > > Can you post plans with/without LIMIT and with/without cursor, for both b1
> > >
ill work with less than 1000 too.
Well, we know there are ways of optimizing ANALYZE.
ISTM we should be able to auto-select stats target based upon the shape
of the frequency distribution of the column values. We'd need to make
some calculations about the index cost model, bu
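Until something like that exists, the target is set per column by hand; a sketch with assumed table and column names (1000 is the maximum in releases of this era):

```sql
ALTER TABLE t ALTER COLUMN c SET STATISTICS 1000;
ANALYZE t;   -- re-sample the column with the larger target
```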
is so far
> off:
That's true, but it's not relevant, since the query would still be fast
even if that estimate was exactly right. With LIMIT 10, it wouldn't
matter how many rows were there as long as there were more than 10. The
true execution cost is limited, the cost model is not.
--
lternatives; nobody will agree without a balanced case that includes
what is best for everyone.
Your EXPLAIN looks edited. Have you also edited the sort costs? They
look slightly higher than we might expect. Please provide the full
normal EXPLAIN output.
--
Simon Riggs
On Fri, 2007-09-21 at 13:53 -0400, Tom Lane wrote:
> Simon Riggs <[EMAIL PROTECTED]> writes:
> > On Fri, 2007-09-21 at 12:08 -0400, Tom Lane wrote:
> >> The reason you get a bad plan is that this rowcount estimate is so far
> >> off:
>
> > That's tru
On Fri, 2007-09-21 at 19:30 -0400, Tom Lane wrote:
> Simon Riggs <[EMAIL PROTECTED]> writes:
> > That's not my perspective. If the LIMIT had been applied accurately to
> > the cost then the hashjoin would never even have been close to the
> > nested join in the firs
On Mon, 2007-09-24 at 16:04 +0200, Csaba Nagy wrote:
> On Mon, 2007-09-24 at 14:27 +0100, Simon Riggs wrote:
> > Csaba, please can you copy that data into fresh tables, re-ANALYZE and
> > then re-post the EXPLAINs, with stats data.
>
> Well, I can of course. I actually t
in PLs.
We could also force the plan to be -1 when the planning involves
something that would force it to be a one-time plan, e.g. constraint
exclusion. (So we could then get rid of that parameter).
--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com
ery row.
>
> Unfortunately, in this case it's not an option to alter the query. I'm
> just trying to figure out an explanation.
In the first query, Postgres cannot use the index because the sort order
of the index does not match the sort order of the query. When you change
the sor
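The general point can be sketched like this (table, columns and index are assumptions):

```sql
CREATE INDEX t_a_b_idx ON t (a, b);

SELECT * FROM t ORDER BY a, b LIMIT 10;  -- matches the index order: index scan, no sort
SELECT * FROM t ORDER BY b, a LIMIT 10;  -- does not match: a full sort is needed
```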
ery run in 1/1000 the time.
Can you confirm the two queries give identical outputs? It isn't clear
to me why the second sort is (never executed) in your second plan, which
I would only expect to see for an inner merge join.
Can you show the details for ControlRecord also?
--
Simon Riggs
On Wed, 2007-10-10 at 09:15 -0500, Kevin Grittner wrote:
> >>> On Wed, Oct 10, 2007 at 1:31 AM, in message
> <[EMAIL PROTECTED]>, Simon Riggs <[EMAIL PROTECTED]>
> wrote:
> > On Tue, 2007-10-09 at 15:09 -0500, Kevin Grittner wrote:
> >
> >&g
On Wed, 2007-10-10 at 13:30 -0500, Kevin Grittner wrote:
> >>> On Wed, Oct 10, 2007 at 1:07 PM, in message <[EMAIL PROTECTED]>,
> Tom Lane <[EMAIL PROTECTED]> wrote:
> > Simon Riggs <[EMAIL PROTECTED]> writes:
> >> Basically the planner doesn'
On Wed, 2007-10-10 at 14:07 -0400, Tom Lane wrote:
> Simon Riggs <[EMAIL PROTECTED]> writes:
> > Basically the planner doesn't ever optimise for the possibility of the
> > never-executed case because even a single row returned would destroy
> > that assumption.
On Wed, 2007-10-10 at 14:35 -0500, Kevin Grittner wrote:
> >>> On Wed, Oct 10, 2007 at 1:54 PM, in message
> <[EMAIL PROTECTED]>, Simon Riggs <[EMAIL PROTECTED]>
> wrote:
> >
> > But the planner doesn't work on probability. It works on a best-g
o the plan can't take into account
> the value of current_timestamp in forming the plan.
It could, but it doesn't yet. Partition removal can take place in the
executor and this is currently targeted for 9.1.
--
Simon Riggs www.2ndQuadrant.com
--
Sent via pgsql-performance mailing l
its with no
> scanning of ProcArray if the minimum is 0, and allows setting the
> siblings to 0 to enable that shortcut:
Minor patch, no downsides. Docs checked. Committed.
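For reference, the two settings the patch touches (values illustrative):

```sql
-- A committing backend with commit_delay > 0 sleeps briefly so other
-- transactions can share the WAL flush, but only if at least
-- commit_siblings other transactions are active. With this patch,
-- commit_siblings = 0 skips the ProcArray scan entirely.
SET commit_delay = 10;      -- microseconds
SET commit_siblings = 0;
```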
--
Simon Riggs http://www.2ndQuadrant.com/books/
PostgreSQL Development, 24x7 Support, Training and Services
favour the idea of dynamic sampling using a block sampling
approach; that was a natural extension of improving ANALYZE also. We can
use that approach for things such as LIKE, but also use it for
multi-column single-table and join selectivity.
Best Regards, Simon Riggs
On Tue, 2006-01-10 at 17:21 -0500, Tom Lane wrote:
> Simon Riggs <[EMAIL PROTECTED]> writes:
> > I think it's OK to use the MCV, but I have a problem with the current
> > heuristics: they only work for randomly generated strings, since the
> > selectivity goes dow
On Tue, 2006-01-10 at 22:40 -0500, Tom Lane wrote:
> Simon Riggs <[EMAIL PROTECTED]> writes:
> > I meant use the same sampling approach as I was proposing for ANALYZE,
> > but do this at plan time for the query. That way we can apply the
> > function directly to th
On Tue, 2006-01-10 at 17:21 -0500, Tom Lane wrote:
> Simon Riggs <[EMAIL PROTECTED]> writes:
> > I think it's OK to use the MCV, but I have a problem with the current
> > heuristics: they only work for randomly generated strings, since the
> > selectivity goes dow
EXPLAIN whether GEQO has been used to find
the plan, possibly along with other factors influencing the plan such as
enable_* settings.
Best Regards, Simon Riggs
inserted values (Logdate?) (or all 3?). (Requires
changes in the application layer: 3GL or db functions).
The truncates can wait until the data has stopped being used.
I'd be disinclined to using the locking system as a scheduling tool.
Best Regards, Simon Riggs
DBMS.
Best Regards, Simon Riggs
for sure.
Thanks,
Best Regards, Simon Riggs
-sweet-spot that we seem to have
> detected.
Agreed. Good analysis...
Best Regards, Simon Riggs
always
> takes a second or two. If I fill the column with all zeros everything is
> still OK.
Aside from the importance of investigating sort behaviour, have you
tried to build a partial index WHERE col > 0 ? That way you wouldn't
even be indexing the zeros.
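A sketch of that partial index (table and column names assumed):

```sql
CREATE INDEX t_col_nonzero ON t (col) WHERE col > 0;
-- The planner will use it for queries whose WHERE clause implies col > 0.
```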
Best Regards, Simon Rig
you at all when you are
doing final run merging on the external sort. Whatever we do, we should
look at the value high memory settings bring to each phase of a sort
separately from the other phases.
There is work underway on improving external sorts, so I hear (not me).
Plus my WIP on ran
ch made the estimates of rows more
> accurate, but didn't change the choice of plans.
The selectivity seems the same in both - clamped to a minimum of 1 row,
so changing that doesn't look like it would help.
Best Regards, Simon Riggs
. The reporting type queries
> after that are basically searches on those tables.
>
> Anybody any ideas?
This is caused by updating the commit status hint bits on each row
touched by the SELECTs. This turns the first SELECT into a write
operation.
Try running a scan of the whole tab
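The warm-up being suggested might look like this (table name assumed):

```sql
-- Touch every row once so the commit-status hint bits are set and
-- written; subsequent SELECTs then read without dirtying pages.
SELECT count(*) FROM reporting_table;

-- VACUUM sets hint bits too, and is worth running after a bulk load anyway.
VACUUM ANALYZE reporting_table;
```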
On Wed, 2006-03-15 at 14:17 -0500, Tom Lane wrote:
> It looks to me like this is a matter of bad cost estimation, ie, it's
> thinking the other index is cheaper to use. Why that is is not clear.
> Can we see the pg_stats rows for ctofcNo and calDate?
ISTM that when the BETWEEN constants match we
On Thu, 2006-03-16 at 00:07 +, Simon Riggs wrote:
> On Wed, 2006-03-15 at 14:17 -0500, Tom Lane wrote:
>
> > It looks to me like this is a matter of bad cost estimation, ie, it's
> > thinking the other index is cheaper to use. Why that is is not clear.
> > Can
Once we have the link, clauselist_selectivity() can trivially compare
opclasses for both OpExprs, then retrieve other information for that
opclass for various purposes.
Seems lots of work for such a corner case, but would be worth it if this
solves other problems as well.
Best Regards, Simon R
On Thu, 2006-03-16 at 10:57 -0500, Tom Lane wrote:
> Simon Riggs <[EMAIL PROTECTED]> writes:
> > Trying to get the information in the wrong place would be very
> > expensive, I agree. But preparing that information when we have access
> > to it and passing it through the
On Thu, 2006-03-16 at 15:41 -0400, Alvaro Herrera wrote:
> Simon Riggs wrote:
>
> > [BTW how do you add new indexes to system tables? I want to add one to
> > pg_inherits but not sure where to look.]
>
> See src/include/catalog/indexing.h -- I don't remember if there
On Thu, 2006-03-16 at 14:45 -0500, Tom Lane wrote:
> Simon Riggs <[EMAIL PROTECTED]> writes:
> > [BTW how do you add new indexes to system tables? I want to add one to
> > pg_inherits but not sure where to look.]
>
> src/include/catalog/indexing.h
>
> Offhand
cept when using PITR).
Best Regards, Simon Riggs
will just be slowing you down too because
you'll always have more clean buffers than you can use, since you have
132MB of shared_buffers yet are flushing all of them at every checkpoint.
Please read your logfile, which should contain the relevant WARNING messages.
Best Regards,
index cost calculation knows the whole index will be
cached, which it clearly could be with 4GB RAM.
If the cost is still wrong, it is because the index order doesn't
correlate physically with the key columns. Use CLUSTER.
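A sketch of that (pre-8.3 CLUSTER syntax; table and index names assumed):

```sql
CLUSTER t_key_idx ON t;   -- rewrite the table in index order
ANALYZE t;                -- refresh the correlation statistic the planner uses
```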
Best Regards, Simon Riggs
it
aggressively enough to keep up.
Your thoughts on this process are welcome...
Best Regards, Simon Riggs
LECTs don't produce WAL.
PITR wants all changes. Without PITR we can optimise certain logging
actions.
Best Regards, Simon Riggs
timating the number of rows retrieved
in both cases, then multiplying them together to make it worse.
Multi-column indexes provide less accurate estimates (right now).
Looks like a hash join might be faster. What is your work_mem set to?
Can you SET enable_nestloop=off and rerun the EXPLAIN ANA
On Tue, 2006-03-28 at 10:22 -0500, george young wrote:
> work_mem= 1024
Set that higher.
Try a couple of other plans using enable_* and let us have the EXPLAIN
ANALYZE plans.
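A sketch of that experiment, run in one session (values illustrative):

```sql
SET work_mem = 16384;        -- in KB on 8.x: 16MB, up from the reported 1024
SET enable_nestloop = off;   -- check whether a hash or merge join wins
-- EXPLAIN ANALYZE <the problem query>;
RESET enable_nestloop;
```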
Best Regards, Simon Riggs
e
value. You might want to try using codes rather than textual
KeywordTypes.
You might try using partial indexes also, along the lines of
CREATE INDEX ix_keyword_type ON "Keyword" USING btree ("KeywordType") WHERE
KeywordType IS NOT NULL;
Best Regards, Simon Riggs
he EXPLAIN as to why the
Index Scan exists.
Perhaps it should be a TODO item to make the EXPLAIN say explicitly when
an Index Scan is being used to provide sorted output?
Best Regards, Simon Riggs
'. That
> should only do that table, correct? I'm running 8.0.3.
Read this http://www.postgresql.org/docs/8.0/static/release-8-0-5.html
and you'll probably decide to upgrade.
Best Regards, Simon Riggs
CPUs the contention will vary according to what each CPU is doing at any
one time - when they all hit the contention spot, things will get worse.
The pSeries has good CPUs and great caching, so I'd expect contention to
be somewhat more apparent as a bottleneck.
Best Regards, Simon Rigg
, which seems to fill a functional
gap/bug.
--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com/
'll either document this better, or build an overlap into the restore
command processing itself, so the script doesn't need to do this.
--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com/
ieval of the files. It
> only
> took maybe 1/2 a second to retrieve the file, but often took anywhere
> from
> 5 to 30 seconds to process the file. More so on the longer end of the
> scale.
Sorry, thought you meant the decompression time.
--
Simon Riggs
EnterpriseDB
the risk of only 5
> minutes data loss or much worse?
That's up to you.
fsync can be turned on and off, so you can make critical changes with
fsync on, then continue with fsync off.
The risk and the decision are yours. You are warned.
--
Simon Riggs
EnterpriseDB http://ww
ations and many
yet to come cover smaller and smaller sub-cases.
At least internally, we could mark the cost-of-optimization as we go, so
we can play with the external interface later.
--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com
ls are below.
The rows are not randomly distributed, so the SeqScan takes longer to
find 1 row than the index scan.
--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com
and
what is happening on your system and summarise that as you choose.
--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com
e games with ignoring "" messages, but
> otherwise simple (and simplistic...)
The OP wanted statements/sec rather than just a total.
Having stats logged by time would be very useful, but I wouldn't limit
that just to numbers of statements in each time period.
stats_logging_int
ack. That would give us what we
need: pinpoint exactly which part of the plan is off-track and see how
far off track it is. If the limits were configurable, we'd be able to
opt for faster-but-less-accurate or slower-yet-100% accuracy behaviour.
We wouldn't need to worry about t
On Tue, 2006-06-06 at 10:43 -0400, Tom Lane wrote:
> Simon Riggs <[EMAIL PROTECTED]> writes:
> > On Mon, 2006-06-05 at 17:06 -0400, Tom Lane wrote:
> >> Well, it's a big query. If it ought to take a second or two, and
> >> instead is taking an hour or two (
two EXPLAINS that could be examined for
differences. Often, you have just one EXPLAIN and no idea which estimate
is incorrect, or whether they are all exactly correct. That is when an
EXPLAIN ANALYZE becomes essential - yet a *full* execution isn't
required in order to tell y
ply being transaction-aware and doing multiple inserts
> per transaction greatly increases performance, giving an easy order of
> magnitude increase (wrapping 10 inserts in a transaction gives a 10x
> performance boost).
This is exactly the same as most other transacti
On Mon, 2006-06-26 at 17:20 -0400, Michael Stone wrote:
> On Mon, Jun 26, 2006 at 08:33:34PM +0100, Simon Riggs wrote:
> >of the SQL standard, so being unaware of them when using SQL is strange
> >to me.
>
> Welcome to the world of programs designed for mysql. You'l
> min_IO_cost = ceil(indexSelectivity * T) ?
That part is sensible. The min_IO_cost is when the access is sequential,
which by definition has a cost of 1.0.
The bit you might have issue with is how we extrapolate from the
min_IO_cost and correlation to arrive at a cost.
--
Simon Riggs
Enterp
as a result of the regular expression comparison. As a result the
planner thinks it can choose a nested loops scan, though ends up doing
1892 seq scans of persons, when it thought it would do only one.
The underestimation is a known issue. Posting to -perform for the
record.
--
approach will also aid debugging
of the load suite. Dynamic SQL programs are particularly susceptible to
this kind of bug because you can't eyeball the code.
--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com
and if it fails (and has been failing) then return a
"success" message to the server to allow it reuse files. That means you
start dropping WAL data and hence would prevent a recovery from going
past the point you started dropping files - I'd never do that, but some
have argued previo
On Thu, 2006-09-28 at 15:18 -0700, Ben wrote:
> distinct public.album.id
> group by public.album.id
You can remove the distinct clause for starters...
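The simplification being suggested (schema qualification shortened for readability):

```sql
-- GROUP BY album.id already yields one row per id,
-- so DISTINCT adds work without changing the result:
SELECT album.id
FROM public.album
GROUP BY album.id;
```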
--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com
On Fri, 2006-09-29 at 10:29 -0400, Tom Lane wrote:
> Simon Riggs <[EMAIL PROTECTED]> writes:
> > We discussed putting PreallocXlogFiles() in bgwriter once before, but I
> > think last time we discussed that idea it was rejected, IIRC.
>
> We already do that: it's
never be required,
so the Mark can be avoided also and thus the materialization of the sort
can also be avoided. So some way of telling the MJ node that the sort
order is also unique would be very useful.
--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com
specific reason to do so (and AFAICT no version back to 7.0 or so
> has done so either...) So there's something you are not telling us that
> is relevant. Let's see the exact table schema (psql \d output is good),
> the exact query, and EXPLAIN output for that query.
Is WHER