touch. I'd been thinking
about reverse key indexes as a way of relieving the hotspot down the
rightmost edge of an index during heavy insert traffic. I hadn't thought
this would speed up access as well.
Best Regards, Simon Riggs
---(end of broadcast)---
no rush to
upgrade on that account - unless you're using temporary tables
Best Regards, Simon Riggs
Neil Conway
Simon Riggs wrote:
Josh Berkus wrote
Simon Riggs wrote
Please set WAL_DEBUG to 1 so we can see a bit more info: thanks.
I'm pretty sure that WAL_DEBUG requires a compile-time option.
I'm surprised, but you are right: the manual does SAY this requires a
compile-time option.
Bruno Wolff
Simon Riggs [EMAIL PROTECTED] wrote:
I guess what I'm saying is it's not how many people you've
got working
on the optimizer, it's how many accurate field reports of less-than-perfect
optimization reach them. In that case, PostgreSQL
is likely in a
better position than
Regards, Simon Riggs
Jim C. Nasby
On Mon, Sep 13, 2004 at 11:07:35PM +0100, Simon Riggs wrote:
PostgreSQL's functionality is in many ways similar to Oracle
Partitioning.
Loading up your data in many similar tables, then creating a view like:
CREATE VIEW BIGTABLE (idate, col1, col2, col3...) AS
SELECT
Chris Browne [EMAIL PROTECTED] wrote on 15.09.2004, 04:34:53:
[EMAIL PROTECTED] (Simon Riggs) writes:
Well, it's fairly straightforward to auto-generate the UNION ALL view,
and
important as well, since it needs to be re-specified each time a new
partition is loaded or an old one is cleared
Joe,
Your application is very interesting. I've just read your OSCON paper. I'd
like to talk more about that. Very similar to Kalido.
...but back to partitioning momentarily: Does the performance gain come from
partition elimination of the inherited tables under the root?
Best Regards, Simon
queries as a
way of speeding up queries against very large tables: UNION ALL with
appended constants.
Best Regards, Simon Riggs
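The UNION ALL construct being discussed might look like the following sketch (table and column names are illustrative, not taken from the original thread):

```sql
-- One view over hand-built monthly partition tables; the appended
-- constant identifies the source partition in each output row.
CREATE VIEW bigtable (idate, col1, col2, part) AS
SELECT idate, col1, col2, 200409 AS part FROM bigtable_200409
UNION ALL
SELECT idate, col1, col2, 200410 AS part FROM bigtable_200410
UNION ALL
SELECT idate, col1, col2, 200411 AS part FROM bigtable_200411;
```

As noted above, such a view must be re-created whenever a partition table is loaded or cleared, which is why auto-generating it matters.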
as large a
shared_buffers cache as is required by the database workload, and this
should not be constrained to a small percentage of server RAM.
Best Regards,
Simon Riggs
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Josh Berkus
Sent: 08 October 2004 22:43
settings, such as the optimizer cost
parameters, bg writer defaults etc.
Best Regards,
Simon Riggs
2nd Quadrant
% and 400% say. That way you'd stand a chance of plotting the
curve and thereby assessing how much memory could be allocated. I've got a
few ideas, but I need to check out the code first.
I'll investigate both simple/complex options as an 8.1 feature.
Best Regards, Simon Riggs
haven't seen anything on
what the size should be.
wal_buffers if the databases are heavily updated.
Any help would be great. This server is very very slow at the moment.
Try *very fast disks*, especially for the logs.
Best regards, Simon Riggs
it, but not what I meant.
Best regards,
Simon Riggs
= 2
AND p.person_id = 19257;
Which should return the same answer, and also hopefully the same plan.
--
Best Regards, Simon Riggs
on this matter would be appreciated.
You can limit the number of connections overall?
--
Best Regards, Simon Riggs
On Wed, 2004-11-03 at 21:25, Martin Foster wrote:
Simon Riggs wrote:
On Tue, 2004-11-02 at 23:52, Martin Foster wrote:
Is there a way to restrict how much load a PostgreSQL server can take
before dropping queries in order to safeguard the server? I was
looking at the login.conf (5
or a vacuum wouldn't force useful data
out of cache.
ARC does almost exactly those two things in 8.0.
Seq scans do get put in cache, but in a way that means they don't spoil
the main bulk of the cache.
--
Best Regards, Simon Riggs
On Thu, 2004-11-04 at 19:34, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
On Thu, 2004-11-04 at 15:47, Chris Browne wrote:
Something like a read_uncached() call...
That would mean that a seq scan or a vacuum wouldn't force useful data
out of cache.
ARC does almost exactly
will show you whether this is the case. It seems likely
that the estimated cardinality of certain joins is incorrect.
--
Best Regards, Simon Riggs
joining the sub-queries together first.
--
Best Regards, Simon Riggs
post EXPLAIN ANALYZE output for any further questions.
--
Best Regards, Simon Riggs
compression levels are fast and nearly as good (in my testing)
as full compression. Using compression tends to use up the CPU that
would otherwise be wasted since the pg_dump is disk intensive, and then
saves further I/O by reducing the output file size.
--
Best Regards, Simon Riggs
UNION ALL views in 8.0 that makes them
slightly more inefficient than using a single table. Perhaps that would
explain your results.
There shouldn't be any need to do the 2800 table approach in this
instance.
--
Best Regards, Simon Riggs
the algorithms work if you are worried by what you
see. But mostly, if the system is performing OK, then no need to worry -
if your only measure of that is system performance data then you need to
instrument your application better, so you can look at the data that
really matters.
--
Best Regards, Simon
to updating.
This wouldn't remove the need for vacuuming, but it would act to prevent
severe performance degradation caused by frequent re-updating.
What do you think?
--
Best Regards, Simon Riggs
On Sun, 2005-01-23 at 15:40 -0500, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
Changing the idea slightly might be better: if a row update would cause
a block split, then if there is more than one row version then we vacuum
the whole block first, then re-attempt the update
if not all of them are easy to
understand why they exist at first glance.
I think it most likely that Phase I should be a simplified blend of both
ideas, with a clear view towards minimum impact and implementability,
otherwise it may not make the cut for 8.1
Best Regards, Simon Riggs
use too large
shared_buffers: that way your data is always in RAM, rather than maybe
in RAM.
Best Regards, Simon Riggs
Regards, Simon Riggs
On Wed, 2005-03-23 at 14:22 -0500, Keith Browne wrote:
Simon Riggs wrote:
The EXPLAINs you've enclosed are for SELECTs, yet your bug report
describes INSERTs as being the things that are slow.
[You may find better performance from using COPY]
We're starting with an empty database
this situation to continue. If users do create a FK like this, it just
becomes another performance problem on list...
Best Regards, Simon Riggs
On Fri, 2005-03-25 at 03:50 -0700, Karim Nassar wrote:
On Fri, 2005-03-25 at 10:18 +, Simon Riggs wrote:
When running with PostgreSQL 7.4 on a dual-CPU Athlon MP2400+ machine
with a gigabyte of RAM, running Debian Linux version 2.6.8.1, we were
able to insert all this data in 5-7
On Fri, 2005-03-25 at 10:17 -0500, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
I vote to make this an ERROR in 8.1 - I see little benefit in allowing
this situation to continue.
Other than spec compliance, you mean? SQL99 says
... The declared type of each
On Fri, 2005-03-25 at 08:23 -0800, Stephan Szabo wrote:
On Fri, 25 Mar 2005, Simon Riggs wrote:
On Fri, 2005-03-25 at 10:17 -0500, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
I vote to make this an ERROR in 8.1 - I see little benefit in allowing
this situation to continue
On Fri, 2005-03-25 at 13:47 -0500, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
On Fri, 2005-03-25 at 10:17 -0500, Tom Lane wrote:
Other than spec compliance, you mean? SQL99 says
... The declared type of each referencing column shall be
comparable to the declared type
On Fri, 2005-03-25 at 16:25 -0500, Tom Lane wrote:
Stephan Szabo [EMAIL PROTECTED] writes:
On Fri, 25 Mar 2005, Simon Riggs wrote:
Could it be that, because PostgreSQL has a very highly developed sense of
datatype comparison, we might be taking this to extremes? Would any
other RDBMS
and suffering brain
fade today?
Well, I think the original idea had some validity, but clearly
lazy_update_relstats isn't the way to do it even though we thought so at
the time.
Best Regards, Simon Riggs
SET enable_seqscan = off
Thanks,
Best Regards, Simon Riggs
On Mon, 2005-03-28 at 09:37 -0700, Karim A Nassar wrote:
On Mon, 28 Mar 2005, Stephan Szabo wrote:
On Mon, 28 Mar 2005, Simon Riggs wrote:
run the EXPLAIN after doing
SET enable_seqscan = off
...
I think you have to prepare with enable_seqscan=off, because it
affects
switch plans (if you reconnect) and use the index. But not
yet.
There are a few ways to (re)design around it, but the distribution of your
data is not *currently* conducive to using an index.
Best Regards, Simon Riggs
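The experiment described here can be run session-locally; a minimal sketch, with an illustrative table and predicate:

```sql
-- Disable seq scans for this session only, compare the plan the
-- planner is forced into, then restore the default.
SET enable_seqscan = off;
EXPLAIN ANALYZE
SELECT * FROM measurement WHERE sensor_id = 42;
RESET enable_seqscan;
```

Because the setting only changes plan choice at planning time, any prepared statements must be prepared after the SET for it to take effect.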
. Hmmm ... or maintenance_work_mem?
What gets used for FK checks? Simon?
In 8.0, maintenance_work_mem is used for index creation, vacuum, and
the initial validation of FK constraints at creation time. Everything else
uses work_mem as the limit.
Best Regards, Simon Riggs
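A sketch of how those two limits apply in 8.0 (table and constraint names are illustrative; 8.0 takes these settings as kilobytes):

```sql
SET maintenance_work_mem = 262144;  -- 256MB: index builds, VACUUM, initial FK check
SET work_mem = 16384;               -- 16MB: per-operation sorts and hashes

-- The initial validation of this constraint is bounded by
-- maintenance_work_mem, not work_mem:
ALTER TABLE orders
    ADD CONSTRAINT orders_customer_fk
    FOREIGN KEY (customer_id) REFERENCES customer (id);
```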
situation in terms of forcing an absolute, not relative, number of rows
returned.
Best Regards, Simon Riggs
On Tue, 2005-03-29 at 05:50 -0800, Stephan Szabo wrote:
On Tue, 29 Mar 2005, Simon Riggs wrote:
On Tue, 2005-03-29 at 01:48 -0700, Karim A Nassar wrote:
Each value has 1/13th of the table, which is too many rows per value to
make an IndexScan an efficient way of deleting rows from
then
we need to UPDATE or DELETE all rows that match the criteria
which means we mustn't use LIMIT and need to use FOR UPDATE
We know that at CONSTRAINT creation time, which always occurs before
plan preparation time.
Best Regards, Simon Riggs
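The two query shapes being contrasted can be sketched like this (hypothetical tables; the real work happens inside the RI triggers, not as user-visible SQL):

```sql
-- NO ACTION / RESTRICT: only existence of a referencing row matters,
-- so the check can stop at the first match.
SELECT 1 FROM order_line WHERE product_id = $1 LIMIT 1;

-- CASCADE / SET NULL / SET DEFAULT: every matching row must be
-- acted on, so no LIMIT, and the rows are locked first.
SELECT 1 FROM order_line WHERE product_id = $1 FOR UPDATE;
DELETE FROM order_line WHERE product_id = $1;
```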
On Tue, 2005-03-29 at 10:31 -0500, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
If action is NO ACTION or RESTRICT then
we need to SELECT at most 1 row that matches the criteria
which means we can use LIMIT 1
If action is CASCADE, SET NULL, SET DEFAULT then
we
On Tue, 2005-03-29 at 09:40 -0500, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
...but, I see no way for OidFunctionCall8 to ever return an answer of
always just 1 row, no matter how big the relation...so tuples_fetched
is always proportional to the size of the relation. Are unique
On Tue, 2005-03-29 at 12:31 -0500, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
That implies to me that LIMIT queries are not considered correctly in
the ML (Mackert-Lohman) formula and thus we are more likely to calculate a too-high cost
for using an index in those circumstances, and thus more
other systems that support this join type, turn off checking for it
by default. We could do the same with enable_starjoin = off.
Anyway, seems like a fair amount of work there... yes?
Best Regards, Simon Riggs
point.
Q: How many data files are there for these relations? Wouldn't be two,
by any chance, when we have 10 million rows in them?
Q: What is the average row length?
About 150-160 bytes?
Thanks,
Best Regards, Simon Riggs
On Mon, 2005-04-04 at 15:56 -0400, Christopher Petrilli wrote:
On Apr 4, 2005 3:46 PM, Simon Riggs [EMAIL PROTECTED] wrote:
On Mon, 2005-04-04 at 09:48 -0400, Christopher Petrilli wrote:
The point, in the rough middle, is where the program begins inserting
into a new table (inherited
On Mon, 2005-04-04 at 16:18 -0400, Christopher Petrilli wrote:
On Apr 4, 2005 4:11 PM, Simon Riggs [EMAIL PROTECTED] wrote:
I'm very interested in the graphs of elapsed time for COPY 500 rows
against rows inserted. The simplistic inference from those graphs are
that if you only
On Mon, 2005-04-04 at 17:03 -0400, Christopher Petrilli wrote:
On Apr 4, 2005 4:58 PM, Simon Riggs [EMAIL PROTECTED] wrote:
Can you do:
select relname from pg_class where relfilenode = 26488271
and confirm that the name is the table you've been loading...
It is.
Couldn't see all
application load
without the degradation? We don't need to look at the source code to
measure MySQL performance...
Best Regards, Simon Riggs
On Tue, 2005-04-05 at 16:05 -0400, Christopher Petrilli wrote:
On Apr 5, 2005 3:48 PM, Simon Riggs [EMAIL PROTECTED] wrote:
The indicated fix of course is to increase shared_buffers.
Splitting your tables at 4M, not 10M would work even better.
Unfortunately, given we are talking about
such a construct, and might even
conclude that it was a very poorly normalised model. I would suggest that
this is much less common in practical use.
Best Regards, Simon Riggs
in the
CRC and compression calculations also, and a similar process used to
assist decompression for recovery, in time.
I regret I do not currently have time to pursue further.
Best Regards, Simon Riggs
: PostgreSQL 8.0
How to increase PostgreSQL speed? Why does postgres take only 5.0% of CPU time?
When you say restore...what are you actually doing?
An archive recovery?
A reload?
A file-level restore of database?
Best Regards, Simon Riggs
actually needed ?
Best Regards, Simon Riggs
On Mon, 2005-04-25 at 11:23 -0400, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
My suggested hack for PostgreSQL is to have an option to *not* sample,
just to scan the whole table and find n_distinct accurately.
...
What price a single scan of a table, however large, when
,
especially if they include home page, adverts, images etc for each page.
Could it be that we have overlooked this simple explanation and that the
Haas and Stokes equation is actually quite good, but just not being
applied?
Best Regards, Simon Riggs
command
Best Regards, Simon Riggs
and complex. There are many TODO items that have lain untouched for
years, even though adding the feature has been discussed and agreed.
Best Regards, Simon Riggs
was
to encourage Alex and other readers to act themselves. If my words seem
arrogant, then I apologise to any and all that think so.
Best Regards, Simon Riggs
is, the more memory is required to
avoid doing any repeated I/Os during the scan. Of course, if there are
many users, the available cache may be much reduced.
Best regards, Simon Riggs
moving to PostgreSQL is it a good idea to move from using
multiple tables to one table for so many rows?
No. All of the same reasoning applies.
Try to keep each table small enough to fit easily in RAM.
Make sure you specify WITHOUT OIDS on the main data tables.
Best Regards, Simon Riggs
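In the 8.0 era user tables carried OIDs unless told otherwise, so the advice above translates to something like the following (illustrative table definition):

```sql
-- Omitting OIDs saves 4 bytes per row, which adds up on
-- large data tables.
CREATE TABLE readings (
    reading_time  timestamptz NOT NULL,
    sensor_id     integer     NOT NULL,
    value         numeric
) WITHOUT OIDS;
```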
that?
Please post your server hardware config all in one go. You have more
than 2 CPUs, yes?
Also, mention what bgwriter settings are. You may need to turn those
down a bit.
Best Regards, Simon Riggs
to have so many
tables in the first place?
Gotta say, never seen a 350-table join before in a real app.
Wouldn't it be possible to smooth out the model and end up with fewer
tables? Or simply break things up somewhere slightly down from the root
of the class hierarchy?
Best Regards, Simon Riggs
pieces.
Best Regards, Simon Riggs
.
Best Regards, Simon Riggs
and writer_thread2 etc. That way
your 2 CPUs would each be able to get through more
work independently, without locking each other out.
Best Regards, Simon Riggs
On Mon, 2005-07-11 at 09:07 -0400, Ian Westmacott wrote:
On Mon, 2005-07-11 at 07:31, Simon Riggs wrote:
The ANALYZE commands hold read locks on the tables you wish to write to.
If you slow them down, you merely slow down your write transactions
also, and then the read transactions
On Mon, 2005-07-11 at 15:51 +0100, Simon Riggs wrote:
On Mon, 2005-07-11 at 09:07 -0400, Ian Westmacott wrote:
On Mon, 2005-07-11 at 07:31, Simon Riggs wrote:
The ANALYZE commands hold read locks on the tables you wish to write to.
If you slow them down, you merely slow down your write
(20% each vs. 2%).
Is this associated with high context switching also?
Best Regards, Simon Riggs
it is
the instruction cache that is too small to fit all of the code required
for your application's workload mix.
Use Intel VTune or similar to show the results you seek.
Best Regards, Simon Riggs
ones. The problem is the new protocol (that the 8.0.2 driver uses) has
a bug where protocol-prepared queries don't get logged properly.
I don't know if it's been fixed...
Yes, there is a fix for this in 8.1
Brent has been sent the details.
Best Regards, Simon Riggs
On Wed, 2005-07-13 at 14:58 -0400, Tom Lane wrote:
Ian Westmacott [EMAIL PROTECTED] writes:
On Wed, 2005-07-13 at 11:55, Simon Riggs wrote:
On Tue, 2005-07-12 at 13:50 -0400, Ian Westmacott wrote:
It appears not to matter whether it is one of the tables
being written to that is ANALYZEd
know that it has ftindex=true. That way, you'll never have row
versions building up in the main table, which you'll still get even if
you VACUUM.
Best Regards, Simon Riggs
On Thu, 2005-07-14 at 14:29 +0800, Christopher Kings-Lynne wrote:
Try turning on query logging and using the 'pqa' utility on pgfoundry.org.
Have you got that to work for 8?
pqa 1.5 doesn't even work with its own test file.
Best Regards, Simon Riggs
.
Best Regards, Simon Riggs
of
such tables.
You can use such tables for the style of ETL known as ELT, but that is
not the only use.
Best Regards, Simon Riggs
observe occurs as a result of query planning.
Do your child tables have indexes on them? Indexes are not inherited
onto child tables, so it is possible that there is no index for the
planner to elect to use.
Best Regards, Simon Riggs
query type though...
Best Regards, Simon Riggs
on the postgresql configuration to avoid this ?
Can we force the planner to use a hash join as it does for the first
joins ?
Not required, IMHO.
Best Regards, Simon Riggs
for all
players at once and store the values in a summary table for when you
need them.
Best Regards, Simon Riggs
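Borrowing the Player/AtDate names that appear later in the thread, the suggested summary table might be built like this (a sketch only, refreshed on whatever schedule suits the application):

```sql
-- Compute the aggregate once for all players, then serve reads
-- from the small summary table instead of a correlated subquery.
CREATE TABLE player_summary AS
SELECT PlayerID, max(AtDate) AS last_atdate
FROM Player
GROUP BY PlayerID;
```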
references a more constant base view?
Is a 51ms query really such a problem for you?
Best Regards, Simon Riggs
immediately after the same select
which took 390 ms on a quiet system.
If the current value is used so often, use two tables - one with a
current view only of the row maintained using UPDATE. Different
performance issues maybe, but at least not correlated subquery ones.
Best Regards, Simon Riggs
for the tapesort. That way the
memory can be freed for use by other users or the OS while the tapesort
proceeds (which is usually quite a while...).
Feedback, please.
Best Regards, Simon Riggs
:
select distinct on (PlayerID) PlayerID, AtDate from Player a
where PlayerID='0' order by PlayerId, AtDate Desc;
Does that work for you?
Best Regards, Simon Riggs
On Fri, 2005-09-23 at 10:09 -0400, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
If not, I would propose that when we move from qsort to tapesort mode we
free the larger work_mem setting (if one exists) and allocate only a
lower, though still optimal setting for the tapesort
. Are you talking about amounts so high that you are throwing the OS
into paging and swapping thrash behavior? If not, then the above is weird.
Thanks for your thoughts. I'll retest, on the assumption that there is a
benefit, but there's something wrong with my earlier tests.
Best Regards, Simon
On Fri, 2005-09-23 at 11:31 -0400, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
Since we know the predicted size of the sort set prior to starting the
sort node, could we not use that information to allocate memory
appropriately? i.e. if sort size is predicted to be more than twice
collecting performance data, so any test results are most
welcome.
Best Regards, Simon Riggs
/tapes.
Tom, it's your idea, so you have first dibs. I'm happy to code this up if
you choose not to, once I've done my other immediate chores.
That just leaves these issues for a later time:
- CPU and I/O interleaving
- CPU cost of abstract data type comparison operator invocation
Best Regards, Simon
XPath and
friends.
Best Regards, Simon Riggs
unimplemented
features in TODO, where do we list things we regard as bugs?
Best Regards, Simon Riggs
before an EXPLAIN
(not EXPLAIN ANALYZE) chooses the HashAgg plan?
Please be aware that publishing Oracle performance results is against
the terms of their licence and we seek to be both fair and legitimate,
especially within this public discussion forum.
Best Regards, Simon Riggs
releases are not yet production (in comparison to MySQL or Sybase
who have been in beta for a very long time).
Best Regards, Simon Riggs
, then reset and try the test.
If that is a repeatable way of recreating one manifestation of the
problem then we will be further ahead than we are now.
Thanks,
Best Regards, Simon Riggs
, Simon Riggs