optimizations if they don't come at the expense of
scalability under more complex workloads.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's Slony Replication support!
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
fewer processes
and lower max_connections. If you set max_connections to 64 and eliminate the
wait time you should, in theory, be able to get 100% cpu usage. It would be
very interesting to track down the contention which is preventing that.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
MB seems low
but consider the following:
$ units
2445 units, 71 prefixes, 33 nonlinear units
You have: 8kB / .5|7200min
You want: MB/s
* 1.92
/ 0.5208
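For anyone who wants to sanity-check that units(1) invocation, here is a quick Python sketch of the same arithmetic (one 8 kB page read per half revolution of a 7200 rpm disk; the variable names are mine, not from units):

```python
# Redo the units(1) arithmetic by hand: one 8 kB page per average
# rotational latency (half a revolution) of a 7200 rpm drive.
page_bytes = 8_000               # units(1) reads kB as 10^3 bytes
revs_per_s = 7200 / 60           # 120 revolutions per second
reads_per_s = revs_per_s / 0.5   # one read per half revolution

rate_mb_s = page_bytes * reads_per_s / 1e6
print(rate_mb_s)  # 1.92, matching the "* 1.92" line above
```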
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's 24x7 Postgres support!
Greg Smith writes:
> On Mon, 16 Mar 2009, Gregory Stark wrote:
>
>> Why would checkpoints force out any data? It would dirty those pages and then
>> sync the files marking them clean, but they should still live on in the
>> filesystem cache.
>
> The bulk of the b
really using much RAM before everything has to get forced to disk.
Why would checkpoints force out any data? It would dirty those pages and then
sync the files marking them clean, but they should still live on in the
filesystem cache.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
here is why you
say you weren't able to saturate the 64 threads with 64 processes when they're
all RAM-resident.
From what I see you still have 400+ processes? Is that right?
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Get trained by Bruce Momjian - a
th raising as an issue but I didn't realize we were
currently doing prefetching by default. Even on a
system with posix_fadvise there's nothing much to be gained unless the data is
on a RAID device, so the original objection holds anyways. We shouldn't do any
pre
Tom Lane writes:
> Alan Stange writes:
>> Gregory Stark wrote:
>>> AFAIK Opensolaris doesn't implement posix_fadvise() so there's no benefit.
>
>> It's implemented. I'm guessing it's not what you want to see though:
>> http://src.o
()) is 48 lines.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's Slony Replication support!
"Jignesh K. Shah" writes:
> Gregory Stark wrote:
>> Keep in mind when you do this that it's not interesting to test a number of
>> connections much larger than the number of processors you have. Once the
>> system reaches 100% cpu usage it would be a misconfigured connection pooler
>> that kept more than that number of connections open.
() so there's no benefit. It
would be great to hear if you could catch the ear of the right people to get
an implementation committed. Depending on how the i/o scheduler system is
written it might not even be hard -- the Linux implementation of WILLNEED is
all of 20 lines.
--
Gregory Stark
You would want each backend to use the most light-weight primitives such as
threads, and to hold the least possible state in local memory. That would look
like quite a different system. The locking contention is the least of the
issues we would want to deal with to get there.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
larger than the number of processors you have. Once the
system reaches 100% cpu usage it would be a misconfigured connection pooler
that kept more than that number of connections open.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's PostGIS support
s under properly configured setups would lead us to make changes
which would have to be tested twice, once with and once without this option.
What do we do if dtrace says some unrelated change helps systems with this
option disabled but hurts systems with it enabled?
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
the application. You want
the application, the filesystem, the partition layout, and the raid device
geometry to all consistently maintain alignment with erase blocks.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's PostGIS support!
l the query completes but
it will have much less information to diagnose any problems.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's Slony Replication support!
What libpq (or other interface) operations are you doing exactly?
[also, your Mail-Followup-To has a bogus email address in it. Please don't do
that]
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's On-Demand Production Tuning
that and the main benefit of that was precisely that this type of
expression could use a multi-column index effectively.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's On-Demand Production Tuning
e pretty terrible numbers for fifteen 15k rpm drives. They're
about what you would expect for a PCI-X card which was bus bandwidth
limited. A PCI-e card should be able to get about 3x that from the drives.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask
a mirror set of the same drives under linux kernel
> software RAID.
Why is that surprising? I would expect software raid to be able to handle 8
drives perfectly well assuming you had a controller and bus you aren't
saturating.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
a merge or hash join between the side
> table and the child table...
It would be neat if we could feed the queued trigger tests into a plan node
like a Materialize and use the planner to determine which type of plan to
generate.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
o isn't perfect; the on-disk sort still
uses some ram, for example.
> What happens if you set work_mem to something REALLY big, like 5GB?
Don't set it larger than the available RAM though -- or you'll quite possibly
get an out-of-memory error.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
ery would generate identical plans. We don't live in an ideal
world and Postgres isn't perfect.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's Slony Replication support!
power cycling rapidly each iteration
drains the battery of the time it takes to save the state but only charges it
for the time the power is on. I wonder how many iterations that gives you.
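That iteration question can be turned into a toy simulation; every number below (charge rate, power-on time, save cost, starting capacity) is invented purely for illustration, not measured from any real BBU:

```python
# Toy model of rapid power cycling: each cycle the battery charges
# while power is on, then spends a fixed amount of energy saving
# state when the power drops. All parameters are hypothetical.
charge_rate = 1.0   # energy units gained per second on power
on_time_s = 2.0     # seconds of power per cycle
save_cost = 5.0     # energy spent saving state at power-off
capacity = 100.0

battery = capacity
cycles = 0
while battery >= save_cost:
    battery = min(capacity, battery + charge_rate * on_time_s)
    battery -= save_cost
    cycles += 1
print(cycles)  # 32 cycles before the battery can no longer save state
```

With a net drain of 3 units per cycle the battery runs down quickly, which is the point of the remark above.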
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB'
r there
are a lot of matching index pointers which fail those other constraints.
Assuming it's the latter perhaps some other index definition would let it zero
in on the right tuples more quickly instead of having to grovel through a lot
of irrelevant rows?
--
Gregory Stark
EnterpriseDB
Ron writes:
> At 10:36 AM 1/10/2009, Gregory Stark wrote:
>>
>> Or a system crash. If the kernel panics for any reason when it has dirty
>> buffers in memory the database will need to be restored.
>
> A power conditioning UPS should prevent a building wide or circ
ata written in the meantime or whether they randomly spike upwards.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's Slony Replication support!
tery backed cache you
can leave fsync on and not have any significant performance hit since the seek
times are very fast for SSD. They have limited bandwidth but bandwidth to the
WAL is rarely an issue -- just latency.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me
s usually an improvement over just guessing but if that first set is
unusual it could lead to strange results.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's On-Demand Production Tuning
Some graphs would be more informative if you excluded the
ramp-up portion of the test. As it is there are big spikes at the low end
but it's not clear whether they're really part of the curve or due to
ramp-up. This is especially visible in the stock-level graph where it
throws off
ring on a real production system.
I'm not exactly sure what changes it might make though.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's RemoteDBA services!
Mark Wong writes:
> On Dec 20, 2008, at 5:33 PM, Gregory Stark wrote:
>
>> "Mark Wong" writes:
>>
>>> To recap, dbt2 is a fair-use derivative of the TPC-C benchmark. We
>>> are using a 1000 warehouse database, which amounts to about 100GB
percentile response time?
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's On-Demand Production Tuning
.6384
/ 0.61035156
At 1.6MB/s per drive, if you find Postgres is cpu-bound doing sequential scans at
1GB/s you'll need about 640 drives to keep one cpu satisfied doing random I/O
-- assuming you have perfect read-ahead and the read-ahead itself doesn't add
cpu overhead. Both of which are false
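For what it's worth, the arithmetic behind that drive count is just a ratio; a Python sketch, assuming the ~1.6 MB/s per-drive random-read figure from the units calculation and a ~1 GB/s cpu-bound sequential scan rate:

```python
# Ratio behind the drive-count estimate (assumed inputs: ~1.6 MB/s of
# random 8 kB reads per drive, one CPU saturating at ~1 GB/s of
# sequentially scanned data).
per_drive_mb_s = 1.6
cpu_bound_mb_s = 1000.0

drives = cpu_bound_mb_s / per_drive_mb_s
print(drives)  # 625.0 -- roughly the "about 640 drives" figure
```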
rs, resulting in a lot more random access from disk and more disk bound
> periods of time. Great wonder, this operating system caching, eh?
How do you observe this?
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training
ory to the OS to manage.
Hm, sounds logical. At that point the slow drop-off should become even
shallower and possibly become completely flat. Greater shared_buffers might
start helping again at that point.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me a
ery
iterating over the partitions.
There are several groups working to improve this in different ways but none of
them appear to be on track to be in 8.4 so it will be 8.5 or later before they
appear. Sorry.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about Ente
table so I don't
believe this is from the example posted. I would suggest running ANALYZE
idtellUued at some point before the problematic query.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's Slony Replication support!
m curious if
> that choice was due to a bad estimate here.
>
> Nested Loop IN Join (cost=0.00..3850831.86 rows=128266 width=8)
Because it's an IN join it doesn't have to run the inner join to completion.
Once it finds a match it can return the outer tuple and continue to the next
stly just idle curiosity. Don't jump through hoops to
> get a test case.
Assuming it's not a bug...
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's RemoteDBA services!
n they happen no
data is lost, but media recovery of the affected database block may be
required
That "media recovery" it's referring to sounds like precisely our WAL full
page writes...
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask
cally to the NAS internal organization.
The previous pages discuss limitations of OSes, filesystems and especially NFS
clients which you may have to be concerned with as well.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's Slony Replication support!
ion
have. That doesn't mean we shouldn't do it but I would expect it to be
improved along with the other uses when we find a better metric.
I did happen to speak to a statistician the other day and was given some terms
to google. I'll investigate and see if I get anything useful.
What actual results does Postgres get for simple queries like:
EXPLAIN ANALYZE SELECT count(*) FROM my_table WHERE A IS NULL;
EXPLAIN ANALYZE SELECT count(*) FROM my_table WHERE B=21;
EXPLAIN ANALYZE SELECT count(*) FROM my_table WHERE A IS NULL AND B=21;
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
at you can run the
drives in write-back mode safely. It's that you can cache in the BBU safely.
The drives still need to have their write caches off (ie, in write-through
mode).
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Get trained by Bruce Momjian - ask me
ALTER TABLE x ALTER id TYPE integer USING id;
which will rewrite the whole table. This is effectively the same as CLUSTER
except it doesn't order the table according to an index. It will still require
enough space to hold two copies of the table but it will be significantly
faster.
--
Gregory Stark
p" join. That's usually a good
choice for small queries against big tables but if you're joining a lot of
data there are other join types which are much faster. You might find the
planner can do a better job if you write your query as a plain SQL query and
let the optimizer
"Gregory Stark" <[EMAIL PROTECTED]> writes:
> <[EMAIL PROTECTED]> writes:
>
>>> If you are completely over-writing an entire stripe, there's no reason to
>>> read the existing data; you would just calculate the parity information from
>
on that large (hopefully battery backed) caches help RAID-5
disproportionately. The larger the cache the more likely it'll be able to wait
until the entire raid stripe is replaced, avoiding having to read in the old
parity.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
se
> have to work with, so it can improve its decisions. "The values used in
> this join condition are highly correlated".
This sounds familiar:
http://article.gmane.org/gmane.comp.db.postgresql.devel.general/55730/match=hints
Plus ça change...
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
gle point in the future but rather whether you plan to
ever have had 2 billion users total over your history. I suspect large
networks like Yahoo or Google might be nearing or past that point now even
though they probably only have a few hundred million current users.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
ators.
You only need int8 if you might someday have more than 2 *billion* users...
Probably not an urgent issue.
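In case it isn't obvious where the 2 billion figure comes from, it's just the top of int4's signed 32-bit range; a trivial check:

```python
# The "2 billion" boundary is the signed 32-bit range of a Postgres
# int4 column; int8 is only needed for keys past this value.
int4_max = 2**31 - 1
print(int4_max)  # 2147483647
```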
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's Slony Replication support!
better design. If you use Username
you'll be cursing if you ever want to provide a facility to allow people to
change their usernames. You may not want such a facility now but one day...
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB'
and then just quickly
> sort. Is there any reason the planner doesn't do this?
Yeah, Heikki's suggested having a kind of "branch" plan node that knows
where the break-point is between two plans and can call the appropriate one.
We don't have anything like that yet.
you could make use of more than 24M for shared
buffers. Probably something in the region 64M-128M if your database is large
enough to warrant it.
And increase the statistics target on header_fields and re-analyze?
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me abo
records are actually requested. The LIMIT prevents the records beyond
11 from ever being requested even though they exist.
While the bitmap heap scan has to fetch all the records even though they don't
all get used, the nested loop only fetches the records as requested.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
h appear in
"users_processors" but not in "processors". I don't know your data model but
that sounds like broken referential integrity to me.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's 24x7 Postgres support
. estimated costs are in
> terms of the cost to sequentially scan a single tuple, while actual
> costs are in milliseconds.
s/tuple/page/
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's On-Demand Production Tuning
mory. If that large heap
doesn't actually save any passes and doesn't reduce the number of output tapes
then it's just wasted cpu time to maintain such a large heap. If you have any
clever ideas on how to auto-size the heap based on how many output tapes it
will create or avoid then
he choke points for large numbers of
connections. But I think not because it's limiting your benchmark results --
that would be better addressed by using fewer connections -- just for the sake
of knowing where problems loom on the horizon.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
e're starting a new tape. Not
sure how that would work out though.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's On-Demand Production Tuning
ng more and more stuff the
array_append becomes more and more expensive (it has to generate a new array
so it has to copy the existing stuff). So actually it would be O(n^2).
The only builtin aggregate which looks like it falls in this category would be
xmlagg()
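To illustrate why a copy-per-step transition function goes quadratic, here is a little Python analogue (not Postgres code, just a model of an array_append-style aggregate; the function name is made up):

```python
def array_append_style(items):
    """Mimic an aggregate whose transition function copies its state:
    each step builds a brand-new list, the way array_append generates
    a fresh array, so total work is 1 + 2 + ... + n = O(n^2)."""
    acc, copied = [], 0
    for x in items:
        acc = acc + [x]     # full copy of the accumulator every step
        copied += len(acc)  # count elements written this step
    return acc, copied

result, work = array_append_style(range(100))
print(work)  # 5050 element copies for n=100, i.e. n*(n+1)/2
```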
--
Gregory Stark
EnterpriseDB
WAL logging
trick insert the records pre-frozen. I recall there were problems with that
idea though but I don't recall if they were insurmountable or just required
more work.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's Slony Replication support!
act of course they all returned 0 rows except the correct partition. (This
was in a join so no constraint exclusion)
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's 24x7 Postgres support!
ace in the heap (fillfactor less than
> 100). My recommendation would be to leave space worth of one row or
> slightly more than that to let first UPDATE be an HOT update.
> Subsequent UPDATEs in the page may reuse the dead row created by
> earlier UPDATEs.
>
> 3. Avoid any long
actual benchmark runs you're referring to?
But I just made an off-hand comment that I doubt 8.2 could pass TPC-E which
has much more stringent requirements. It has requirements like:
the throughput computed over any period of one hour, sliding over the Steady
State by incre
mes and I doubt 8.2 would have been able to pass them. So the
performance limiting factors whether they be i/o, cpu, lock contention, or
whatever don't even come into play. We wouldn't have any conformant results
whatsoever, not even low values limited by contention. 8.3 however should be
ow.
>
> Could somebody remove the latter address from the list, please?
Unfortunately that's not the address causing the problem. This is a
particularly stupid spam filter which is just grabbing that address from your
To line.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
an even do better. If you know the two sets of mid are disjoint you can
use UNION ALL. If not you could remove the two DISTINCTs as the UNION will
take care of removing duplicates.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's On-Demand Production Tuning
e cases where the DISTINCT method wins too.
This is basically a bug, in an ideal world both queries would generate
precisely the same plans since they're equivalent. It's just not a high
priority since we have so many more interesting improvements competing for
time.
--
Gregory Stark
"Tom Lane" <[EMAIL PROTECTED]> writes:
> Gregory Stark <[EMAIL PROTECTED]> writes:
>> The transition domain where performance drops dramatically as the database
>> starts to not fit in shared buffers but does still fit in filesystem cache.
>
> It look
scaling factors
as well as the total of the index sizes. And how much memory Linux says is
being used for filesystem buffers.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's PostGIS support!
algorithm to "deadline" improved I/O performance by a factor 4 (!) for
> this specific load test.
What was the algorithm before?
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's On-Demand Production Tuning
LECT min(logdate) FROM part_master;
Er, yeah. Unfortunately this is just not a kind of query our planner knows how
to optimize when dealing with a partitioned table... yet. There are several
different pieces missing to make this work. There's some hope some of them
might show up for 8.4 but
bits were set.
Slony's inserts, updates, and deletes count as updates to the table as well.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's RemoteDBA services!
tgres goes to some effort to keep its reads sequential. So this
sounds like it destroys that feature. Random writes don't matter so much
because Postgres has its WAL which it writes sequentially. Writes to data
files aren't in the critical path and can finish after the transaction is
committed.
<[EMAIL PROTECTED]> writes:
> On Tue, 18 Mar 2008, Gregory Stark wrote:
>
>> You can have as many parity drives as you want with RAID 5 too.
>
> you can? I've never seen a raid 5 setup with more than a single parity drive
> (or even the option of having more the
during the load imposed by rebuild onto a
>>> spare) ?
>
> that's why you should use raid6 (allowing for dual failures)
You can have as many parity drives as you want with RAID 5 too.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about E
ceptions
listed in the 8.0.6 release notes which would require a REINDEX but they don't
affect most people.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's PostGIS support!
y're just a
baroque version of this central counter in this case.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's Slony Replication support!
nd doesn't happen.
I meant in this case, not in general. That is, does it introduce a subshell?
Sh traditionally has to introduce subshells to implement some of the logical control and
pipe operators. I'm not sure if a simple && is enough but often it's
surprising how quickly that ha
"Tom Lane" <[EMAIL PROTECTED]> writes:
> Erik Jones <[EMAIL PROTECTED]> writes:
>> On Feb 20, 2008, at 8:14 AM, Gregory Stark wrote:
>>> I would suggest leaving out the && which only obfuscate what's
>>> going on here.
>>>
> PGOPTIONS="-c maintenance_work_mem=512MB" && pg_restore
>
> now that's just plain cool
>
> /me updates our wiki
I would suggest leaving out the && which only obfuscate what's going on here.
PGOPTIONS=... pg_restore ...
would work just as well.
there are no records for the user then b) will be bad because it'll
have to go through all of the records to the beginning of time.
The suggested index lets it scan the records for the given user from most
recent to oldest without seeing any records for any other user.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
ld just be that the
distribution is highly skewed which is a hard case to estimate correctly.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's RemoteDBA services!
---(end of broadcast)---
ng about them?
(Not meant to be a knock on Solaris, I'm sure there are other cases Linux or
BSD handle poorly too)
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's Slony Replication support!
e this is read only
> and there is no separate disk reader all other processes are idle) and system
> is running at 1/Nth capacity (where N is the number of cores/threads)
Is the whole benchmark like this or is this just one part of it?
Is the i/o system really able to saturate the c
"Kevin Grittner" <[EMAIL PROTECTED]> writes:
> >>> On Tue, Jan 29, 2008 at 9:52 AM, in message
> <[EMAIL PROTECTED]>, Gregory Stark <[EMAIL PROTECTED]>
> wrote:
>
> > I got this from a back-of-the-envelope calculation which now that I
"Matthew" <[EMAIL PROTECTED]> writes:
> On Tue, 29 Jan 2008, Gregory Stark wrote:
>>> This was with 8192 random requests of size 8192 bytes from an 80GB test
>>> file.
>>> Unsorted requests ranged from 1.8 MB/s with no prefetching to 28MB/s with
"Matthew" <[EMAIL PROTECTED]> writes:
> On Tue, 4 Dec 2007, Gregory Stark wrote:
>> FWIW I posted some numbers from a synthetic case to pgsql-hackers
>>
>> http://archives.postgresql.org/pgsql-hackers/2007-12/msg00088.php
>...
> This was with 8192 rand
resent. It will also make your
ANALYZEs take a lot longer. I would suggest trying 100 first and incrementally
raising it rather than jumping straight to 1000. And preferably only on the
columns which really matter.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me abo
isn't "off". We could encourage other benchmark software to do
something similar.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's RemoteDBA services!
figure documentation. They read configure --help.
Fwiw I think you're all getting a bit caught up in this one context. While the
slowdown is significant when you take out the stopwatch, under normal
interactive use you're not going to notice your queries being especially slow.
--
Gregory Stark
h? I don't think it hurts but that's a
few orders of magnitude higher than what I would expect to help.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's PostGIS support!
"Joshua D. Drake" <[EMAIL PROTECTED]> writes:
> It is pretty much common knowledge that
I think we have too much "common knowledge".
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's PostGIS support!
e set and not the other. If
they're not read-only transactions and fsync=on then the TPS of 3000+ is not
credible and this is likely.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's RemoteDBA services!
do without the sort if you write some fancy
aggregate functions but for this large a query that's going to be awfully
complex.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL
training
o the normal value (or use SET LOCAL). You don't
have to change it in the config file and restart the whole server.
Also, try replacing the DISTINCT with GROUP BY. The code path for DISTINCT
unfortunately needs a bit of cleaning up and isn't exactly equivalent to GROUP
BY. In particular
"Craig James" <[EMAIL PROTECTED]> writes:
> Gregory Stark wrote:
>
>> And the only reason to do that would be to work around one bug in one small
>> range of glibc versions. If you're going to use a multi-threaded library
>> (which isn't very
's hard to do safely for all those other
reasons) surely using a version of your OS without any thread related bugs is
a better idea.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's Slony Replication support!