don't have a good understanding of the interplay between
checkpoints and WAL.
I'd be grateful for any thoughts on how to improve this, and better control
the amount of WAL kept in pg_xlog.
Thank you,
Stefan
[1] https://www.postgresql.org/docs/9.6/static/wal-configuration.html
[2]
https://
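For reference, the WAL-retention knobs described in the linked docs look like this in postgresql.conf (values here are illustrative only, parameter names per 9.5+):

```sql
-- postgresql.conf sketch (illustrative values, not recommendations)
-- checkpoint_timeout = 15min   -- max time between automatic checkpoints
-- max_wal_size      = 2GB      -- checkpoint is forced when WAL grows past this
-- min_wal_size      = 512MB    -- WAL files are recycled down to this amount
-- wal_keep_segments = 64       -- extra segments retained in pg_xlog for standbys
```

Roughly, pg_xlog holds the WAL written since the last checkpoint plus whatever replication/archiving still needs, so the knobs above bound how much can accumulate.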
On 02.03.2017 02:06, Tom Lane wrote:
Stefan Andreatta <s.andrea...@synedra.com> writes:
The same anti-join using the text fields, however, estimates just 1
resulting row, while there are of course still 9,999 of them:
=# explain analyze
select tmp_san_1.id
from tmp
r for
the join.
Thanks for any help,
Stefan
2015-08-31 21:46 GMT+02:00 twoflower wrote:
> I created a new boolean column and filled it for every row in DOCUMENT with
> *(doc.date_last_updated >= date(now() - '171:00:00'::interval))*, reanalyzed
> ...
... and you've put an index on that new boolean column (say
So, if I'm understanding you correctly, we're talking solely about
following clause in the query you gave initially:
WHERE doc.date_last_updated >= date(now() - '171:00:00'::interval)
which initially was
WHERE documenttype = 4
and now is being replaced by a temporary (I'd say derived) column
Stefan
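A sketch of the derived-column approach under discussion (the table is DOCUMENT from the thread; the column and index names below are invented):

```sql
-- hypothetical names; the predicate mirrors the one quoted above
ALTER TABLE document ADD COLUMN recently_updated boolean;
UPDATE document
   SET recently_updated =
       (date_last_updated >= date(now() - '171:00:00'::interval));
-- partial index: only the matching rows are stored in the index
CREATE INDEX document_recent_idx ON document (recently_updated)
  WHERE recently_updated;
ANALYZE document;  -- refresh statistics so the planner sees the new column
```

The catch, as the thread notes, is that such a column is derived from now() and goes stale, so it has to be refreshed periodically.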
Cell : 072-380-1479
Desk : 087-577-7241
On 2014/09/15 03:25 PM, Kevin Grittner wrote:
Van Der Berg, Stefan svanderb...@fnb.co.za wrote:
I get a similar plan selected on the original query if I set
enable_seqscan to off. I much prefer the second result.
My questions are:
1. Why
setting I can set?
(BTW the tables are analyzed, and I currently have no special
settings/attributes set for any of the tables.)
--
Kind Regards
Stefan
, which parameter I have to adjust to get a
query time like the example with 'enable_seqscan=off'.
Stefan
pd= set enable_seqscan=off;
pd= explain analyze select t.name from product p left join measurements m on
p.productid=m.productid inner join measurementstype t on
m.measurementstypeid
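Rather than disabling sequential scans outright, the usual first knob is random_page_cost; a session-level sketch (the value is illustrative):

```sql
-- default random_page_cost is 4.0; values closer to seq_page_cost (1.0)
-- make index scans look cheaper to the planner
SET random_page_cost = 1.5;
EXPLAIN ANALYZE SELECT ...;  -- re-run the query above and compare plans
RESET random_page_cost;
```

Lowering it globally is reasonable when the working set is mostly cached or on fast storage; enable_seqscan=off is better kept as a diagnostic tool.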
Hi Craig and Shawn
I fully agree with your argumentation.
Who's the elephant in the room who is reluctant to introduce explicit hints?
-S.
2014-04-14 17:35 GMT+02:00 Craig James cja...@emolecules.com:
Shaun Thomas stho...@optionshouse.com wrote:
these issues tend to get solved through
care,
/Stefan
On Thu, Mar 20, 2014 at 11:02 PM, Ilya Kosmodemiansky
ilya.kosmodemian...@postgresql-consulting.com wrote:
Hi Stefan!
Probably you need to rewrite your query like this (check it first):
with RECURSIVE qq(cont_key, anc_key) as
(
select min(a1.context_key), ancestor_key
to introduce planner hinting (like in
EnterpriseDB or Ora)?
Regards, Stefan
2014-03-20 18:08 GMT+01:00 Tom Lane t...@sss.pgh.pa.us:
Stefan Keller sfkel...@gmail.com writes:
I'd like to know from the query planner which query plan alternatives
have been generated and rejected. Is this possible
Hi,
I'd like to know from the query planner which query plan alternatives
have been generated and rejected. Is this possible?
--Stefan
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql
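The planner does not expose rejected alternatives (short of a source build with OPTIMIZER_DEBUG), but the enable_* settings let you surface and cost them by hand; a sketch:

```sql
BEGIN;
EXPLAIN SELECT ...;              -- the plan the planner chose
SET LOCAL enable_seqscan = off;  -- heavily penalize sequential scans
EXPLAIN SELECT ...;              -- the next-best alternative, with its cost
ROLLBACK;                        -- SET LOCAL dies with the transaction
```

Comparing the two cost estimates shows how close the rejected plan was.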
Hi Kevin
Well, you're right :-) But my use cases are un-specific by design
since I'm using FTS as a general purpose function.
So I still propose to enhance the planner too as Tom Lane and your
colleague suggest based on repeated similar complaints [1].
Yours, Stefan
[1]
http
Hi Yuri and Radu-Stefan
I wouldn't give up too fast on PostgreSQL!
When looking at your query plan I wonder if one could reformulate the query
to compute the ST_DWithin first (assuming you have an index on the node
geometries!) before it filters the tags.
To investigate that you could formulate
default values for enable_seqscan and set random_page_cost.
Yours, S.
2013/7/19 Stefan Keller sfkel...@gmail.com:
Hi
At 2013/2/8 I wrote:
I have problems with the performance of FTS in a query like this:
SELECT * FROM FullTextSearch WHERE content_tsv_gin @@
plainto_tsquery('english', 'good
a length of more than 8K so it's obviously having to do with
detoasting.
But the thoughts about @@ operators together with this GIN index seem
also to be valid.
I hope this issue is being tracked in preparation for 9.3.
Regards, Stefan
2013/7/19 Marc Mamin m.ma...@intershop.de:
SELECT * FROM
you think?
Yours, Stefan
Lets look at table fulltextsearch:
movies=# \d fulltextsearch
Table public.fulltextsearch
Column | Type | Modifiers
-+-+-
id
On Mon, Jul 8, 2013 at 10:44 AM, Richard Huxton d...@archonet.com wrote:
On 07/07/13 08:28, Radu-Stefan Zugravu wrote:
Each node has a geometry column called geom and a hstore column
called tags. I need to extract nodes along a line that have certain
keys in the tags column. To do that I use
nodes_tags_btree_tourist_idx on nodes USING BTREE ((tags ? 'tourist'));
Do you think this could make a difference?
On Mon, Jul 8, 2013 at 1:27 PM, Richard Huxton d...@archonet.com wrote:
On 08/07/13 10:20, Radu-Stefan Zugravu wrote:
Any improvement is welcomed. The overall performance
GIN(tags);
After creating the index I searched again for nodes using the same first
query but there is no change in performance.
How can I properly use GIN and GIST to index tags column so I can faster
search for nodes that have a certain key in tags column?
Thank you,
Radu-Stefan
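Two index shapes fit this question, using the nodes/tags/geom names from the thread (a sketch, not verified against the actual schema):

```sql
-- option 1: a single GIN index serves ?, ?|, ?& and @> on any hstore key
CREATE INDEX nodes_tags_gin_idx ON nodes USING GIN (tags);

-- option 2: a partial GiST index on the geometry, restricted to one hot key,
-- so the spatial search only ever visits nodes carrying that tag
CREATE INDEX nodes_tourism_geom_idx ON nodes USING GIST (geom)
  WHERE tags ? 'tourism';
```

For the "nodes along a line with a given key" query, option 2 combines the spatial filter and the tag filter in one small index.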
and a region, and table
promo2mission (each 1 to a dozen tuples).
* View all_errors (more than 20'000 tuples, based on table errors
without tuples from table fix)
* Table error_type (7 tuples)
Here's the EXPLAIN ANALYZE log: http://explain.depesz.com/s/tbF
Yours, Stefan
CTE Query:
WITH aggregation1
the memory of the entire database
~4GB, and uses the PostgreSQL stock settings.
Stefan
Hi Jesper and Pavel
Thx for your hints.
I'm rather reluctant to do tuning with unwanted side effects. We'll see.
I have to setup my system and db again before I can try out your tricks.
Yours, Stefan
2013/2/8 Jesper Krogh jes...@krogh.cc:
On 08/02/13 01:52, Stefan Keller wrote:
Hi,
I have
years?).
And I found a nice blog here [1] which uses 9.2/9.1 and proposes to
disable sequential table scan (SET enable_seqscan off;). But this is
no option for me since other queries still need seqscan.
Can anyone tell me if this is on some agenda here (e.g. as an open item for 9.2)?
Yours, Stefan
[1
databases. And
considering the commonplace conditions leading to it, I would expect
many systems to be affected. But searching the forums and the web I
hardly found any references to it - which amazes me to no end.
Best Regards,
Stefan
On 12/30/2012 07:02 PM, Stefan Andreatta wrote:
On 12/29
memory related value, that need to be
changed for optimal performance. E.g. effective_cache_size can have a
direct effect on use of nested loops. See:
http://www.postgresql.org/docs/9.2/static/runtime-config-query.html
Regards,
Stefan
On 12/29/2012 10:57 PM, Peter Geoghegan wrote:
On 29 December 2012 20:57, Stefan Andreatta s.andrea...@synedra.com wrote:
...
The general advice here is:
1) Increase default_statistics_target for the column.
I tried that, but to get good estimates under these circumstances, I
need to set
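The per-column form of that advice, so the global default stays untouched (table and column names below are invented for illustration):

```sql
-- hypothetical names; the per-column target can go up to 10000
ALTER TABLE measurements ALTER COLUMN device_id SET STATISTICS 1000;
ANALYZE measurements;  -- the larger sample is taken at the next ANALYZE
SELECT n_distinct FROM pg_stats
 WHERE tablename = 'measurements' AND attname = 'device_id';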
On 12/29/2012 10:57 PM, Peter Geoghegan wrote:
On 29 December 2012 20:57, Stefan Andreatta s.andrea...@synedra.com wrote:
Now, the 2005 discussion goes into great detail on the advantages and
disadvantages of this algorithm, particularly when using small sample sizes,
and several alternatives
be implemented.
Thanks for your help!
Stefan
*The Long Story:*
When Postgres collects statistics, it estimates the number of distinct
values for every column (see pg_stats.n_distinct). This is one important
source for the planner to determine the selectivity and hence can have
great influence
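Since 9.0 the n_distinct estimate can also be overridden outright when the sampling algorithm gets it wrong; a sketch with the same invented names:

```sql
-- negative values mean a fraction of the row count:
-- -0.2 declares that 20% of rows carry distinct values
ALTER TABLE measurements ALTER COLUMN device_id SET (n_distinct = -0.2);
ANALYZE measurements;  -- the override takes effect at the next ANALYZE
```

That sidesteps the sampling problem entirely, at the cost of having to maintain the override by hand.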
, Stefan Keller sfkel...@gmail.com wrote:
Say, there is a table with 250 mio. rows split into 250 tables with 1
mio. rows each. And say the index behavior is O(log n). Then a
search for a key takes O(log(250*n)) or 8.4 time units. What PG (9.1)
currently probably does is an iterative call
Hi,
Given I have a large table implemented with partitions and need fast
access to a (primary) key value in a scenario where every minute
updates (inserts/updates/deletes) are coming in.
Now since PG does not allow any index (nor constraint) on the master
table, I have a performance issue (and a
longer than an hour, it delays the next
update.
Any ideas? Partitioning?
Yours, S.
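With 9.1-era inheritance partitioning the indexes indeed have to live on the children; a minimal sketch (names and ranges invented):

```sql
-- the parent carries no usable index; each child gets its own
CREATE TABLE nodes_p0 (CHECK (id BETWEEN 0 AND 999999)) INHERITS (nodes);
CREATE INDEX nodes_p0_id_idx ON nodes_p0 (id);

-- with constraint exclusion, a key lookup prunes to one child's index
SET constraint_exclusion = partition;
SELECT * FROM nodes WHERE id = 123456;
```

So the O(log(250*n)) cost above becomes one CHECK-constraint pruning step plus one O(log n) descent into a single child index, provided the partition key appears in the WHERE clause.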
2012/9/3 Ivan Voras ivo...@freebsd.org:
On 03/09/2012 13:03, Stefan Keller wrote:
Hi,
I'm having performance issues with a simple table containing 'Nodes'
(points) from OpenStreetMap:
CREATE TABLE nodes
to know what
choices I have to tune it while keeping the database productive:
cluster index? partition table? use tablespaces? reduce physical block size?
Stefan
[1] http://www.postgresql.org/docs/9.1/static/non-durability.html
Hi
2012/8/8 Jeff Janes jeff.ja...@gmail.com:
On Tue, Aug 7, 2012 at 5:07 PM, Stefan Keller sfkel...@gmail.com wrote:
Hi Craig
Clever proposal!
I tried to adapt it slightly to the hstore involved.
Now I'm having a weird problem that PG says that relation 'p' does not
exist.
Why does PG
Hi
I have an interesting query to be optimized related to this one [1].
The query definition is: Select all buildings that have more than one
pharmacy and more than one school within a radius of 1000m.
The problem is that I think that this query is inherently O(n^2). In
fact the solution I
Your proposal lacks the requirement that it's the same building from
where pharmacies and schools are reachable.
But I'll think about it.
Yours, S.
2012/8/7 Tomas Vondra t...@fuzzy.cz:
On 7 Srpen 2012, 14:01, Stefan Keller wrote:
Hi
I have an interesting query to be optimized related to this one
Hi Craig
Clever proposal!
I tried to adapt it slightly to the hstore involved.
Now I'm having a weird problem that PG says that relation 'p' does not exist.
Why does PG recognize table b in the subquery but not table p?
Any ideas?
-- Stefan
SELECT b.way AS building_geometry
FROM
(SELECT way
2012/3/1 Jeff Janes jeff.ja...@gmail.com:
On Tue, Feb 28, 2012 at 3:46 PM, Stefan Keller sfkel...@gmail.com wrote:
2012/2/28 Claudio Freire klaussfre...@gmail.com:
In the OP, you say "There is enough main memory to hold all table
contents." I'm assuming there you refer to your current system
, that will not work. tar detects the
output is going to the bit bucket, and so doesn't bother to actually
read the data.
Right.
But what about the commands cp $PG_DATA/base /dev/null or cat
$PG_DATA/base > /dev/null ?
They seem to do something.
-Stefan
2012/2/29 Stefan Keller sfkel...@gmail.com:
2012/2/29 Jeff Janes jeff.ja...@gmail.com:
It's quite possible the vacuum full is thrashing your disk cache due
to maintenance_work_mem. You can overcome this issue with the tar
trick, which is more easily performed as:
tar cf /dev/null $PG_DATA
of that
(besides disk-oriented structures).
-Stefan
2012/2/28 Claudio Freire klaussfre...@gmail.com:
On Tue, Feb 28, 2012 at 5:48 PM, Stefan Keller sfkel...@gmail.com wrote:
P.S. And yes, the database is aka 'read-only' and truncated and
re-populated from scratch every night. fsync is off so I don't care
about ACID. After the indexes on name
like an in-memory database?
Yours, Stefan
2012/2/26 Andy Colson a...@squeakycode.net wrote:
On 02/25/2012 06:16 PM, Stefan Keller wrote:
1. How can I warm up or re-populate shared buffers of Postgres?
2. Are there any hints on how to tell Postgres to read in all table
contents into memory?
Yours, Stefan
How about after you load
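For the record, later releases grew a supported answer to question 1; a sketch (pg_prewarm ships as a contrib module since 9.4, table name from the appendix below):

```sql
-- 9.4+: load a whole relation into shared buffers explicitly
CREATE EXTENSION pg_prewarm;
SELECT pg_prewarm('osm_point');

-- a crude 9.1-era substitute: force a full read through the buffer cache
SELECT count(*) FROM osm_point;
```

The count(*) trick warms the OS cache reliably but only partially populates shared buffers, since large sequential scans use a small ring buffer.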
there's no WAL and locking needed.
But as soon as we allow writes I realize that the in-memory feature
needs to be coupled with other enhancements like replication (which
somehow would avoid WAL).
Yours, Stefan
, Stefan
APPENDIX
[0]
CREATE TABLE osm_point (
osm_id integer,
name text,
tags hstore,
geom geometry(Point,4326)
);
[1]
SELECT osm_id, name FROM osm_point
WHERE tags @> 'tourism=>viewpoint'
AND ST_Contains(
ST_SetSRID('BOX(8.42 47.072, 9.088 47.431)'::box2d::geometry, 4326),
geom)
[2
Hi Stephen
Thanks for your answer and hints.
2011/10/24 Stephen Frost sfr...@snowman.net wrote:
* Stefan Keller (sfkel...@gmail.com) wrote:
Adding more memory (say to total of 32 GB) would only postpone the problem.
Erm, seems like you're jumping to conclusions here...
Sorry. I actually only
?
Yours, Stefan
exploiting all advantages of a separate hash
index, wouldn't it?
Stefan
2011/9/18 Merlin Moncure mmonc...@gmail.com:
On Sat, Sep 17, 2011 at 4:48 PM, Jeff Janes jeff.ja...@gmail.com wrote:
On Tue, Sep 13, 2011 at 5:04 PM, Peter Geoghegan pe...@2ndquadrant.com
wrote:
On 14 September 2011 00:04, Stefan
this to encourage ourselves that enhancing hash
indexes could be worthwhile.
Stefan
2011/9/18 Kevin Grittner kevin.gritt...@wicourts.gov:
Stefan Keller wrote:
It's hard for me to imagine that btree is superior for all the
issues mentioned before.
It would be great if you could show
operations.
Stefan
P.S. Disclaimer (referring to my other thread about Hash): I'm not a
btree opponent :-) I'm just evaluating index alternatives.
Hi,
Sorry if this is an odd question:
I assume that Postgres indexes don't store records but only pointers
to the data.
This means that there is always an additional access needed (real table I/O).
Would an index containing data records make sense?
Stefan
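That idea later landed as index-only scans (9.2) and INCLUDE columns (11); a sketch with invented names:

```sql
-- 9.2+: if every referenced column is in the index (and the page is
-- all-visible), the heap is never touched ("index-only scan")
CREATE INDEX orders_cust_date_idx ON orders (customer_id, order_date);

-- 11+: carry payload columns in the leaf pages without making them key columns
CREATE INDEX orders_cust_incl_idx ON orders (customer_id) INCLUDE (total);

EXPLAIN SELECT customer_id, order_date FROM orders WHERE customer_id = 42;
```

Postgres still needs the visibility map because index entries carry no tuple visibility information, which is exactly the extra table I/O the question is about.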
... ?
Yours, Stefan
for more
improvements, so why are the knives out?
No knives from my side. Sorry for the exaggerated subject title.
I'm also in favor for an enhanced hash index for cases where only =
tests are processed and where only few inserts/deletes will occur.
Stefan
a ticket?
Stefan
2011/9/14 Tom Lane t...@sss.pgh.pa.us:
Peter Geoghegan pe...@2ndquadrant.com writes:
On 14 September 2011 00:04, Stefan Keller sfkel...@gmail.com wrote:
Has this been verified on a recent release? I can't believe that hash
performs so badly on all these points. Theory tells me
much work (in man-days) do you estimate would this mean for
someone who can program but has to learn PG internals first?
Stefan
2011/9/14 Tom Lane t...@sss.pgh.pa.us:
Peter Geoghegan pe...@2ndquadrant.com writes:
On 14 September 2011 00:04, Stefan Keller sfkel...@gmail.com wrote:
Has this been
/current/interactive/plpgsql-control-structures.html
)
So the doc isn't totally explicit about this. But whatever: what would
be the function of a subtransaction? To give the possibility to
recover and continue within the surrounding transaction?
Stefan
2011/9/13 Marti Raudsepp ma...@juffo.org
can't believe that hash
performs so badly on all these points. Theory tells me otherwise and
http://en.wikipedia.org/wiki/Hash_table seems to be a success.
Are there any plans to give hash index another chance (or to bury it
with a reason)?
Stefan
Shaun,
2011/9/2 Shaun Thomas stho...@peak6.com:
Ironically, this is actually the topic of my presentation at Postgres Open.
Do you think my problem would now be solved with NVRAM PCI card?
Stefan
-- Forwarded message --
From: Stefan Keller sfkel...@gmail.com
Date: 2011/9/3
2011/9/3 Jesper Krogh jes...@krogh.cc:
On 2011-09-03 00:04, Stefan Keller wrote:
It's not that hard to figure out: take some of your typical queries,
say the one above. Change the search term to something you'd expect
the user to enter in a minute, but that hasn't been run (could be museum
2011/9/2 Scott Marlowe scott.marl...@gmail.com:
On Tue, Aug 30, 2011 at 11:23 AM, Stefan Keller sfkel...@gmail.com wrote:
How big is your DB?
What kind of reads are most common, random access or sequential?
How big of a dataset do you pull out at once with a query?
SSDs are usually not a big
You mean something like Unlogged Tables in PostgreSQL 9.1 (=
in-memory database) or simply a large ramdisk?
Yours, Stefan
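A sketch of the 9.1 unlogged-table variant mentioned here (table name invented, based on osm_point from the appendix):

```sql
-- no WAL is written for this table, so loads and writes are much faster;
-- the trade-off: it is truncated automatically after a crash and is not
-- replicated to standbys
CREATE UNLOGGED TABLE osm_point_fast AS SELECT * FROM osm_point;
```

For a read-only database rebuilt nightly, losing the table on a crash is usually acceptable, which is why it approximates an in-memory table.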
2011/9/1 Jim Nasby j...@nasby.net:
On Aug 30, 2011, at 12:23 PM, Stefan Keller wrote:
I'm looking for summaries (or best practices) on SSD usage with PostgreSQL.
My use
Hi,
I'm looking for summaries (or best practices) on SSD usage with PostgreSQL.
My use case is mainly a read-only database.
Are there any around?
Yours, Stefan
are
stored in order of (keylength,key) with the key comparison done
bytewise (not locale-dependent). See e.g. function hstoreUniquePairs
in http://doxygen.postgresql.org/ . This ordered property is being
used by some hstore functions but not all - and I'm still wondering
why.
Yours, Stefan
2011/7
' or 'functions'?
What does this reversal of the convention mean concretely?
Yours, Stefan
arbitrary
queries.
Yours, Stefan
2011/5/25 Pierre C li...@peufeu.com:
You wrote
Try to create a btree index on (bench_hstore->'bench_id') WHERE
(bench_hstore->'bench_id') IS NOT NULL.
What do you mean exactly?
= CREATE INDEX myhstore_kps_gin_idx ON myhstore USING gin(kvps) WHERE
??? IS NOT NULL
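Spelled out, the suggestion is presumably an expression index over one hstore key, partial on the same expression (myhstore/kvps names from the thread; 'bench_id' per the quoted advice):

```sql
-- btree over the value of a single key; rows lacking the key are excluded,
-- so the index stays small and only serves lookups on that key
CREATE INDEX myhstore_bench_id_idx
    ON myhstore ((kvps -> 'bench_id'))
 WHERE (kvps -> 'bench_id') IS NOT NULL;
```

That is a different tool from the GIN index below: GIN accelerates containment over all keys, while the partial btree gives ordinary equality/range lookups on one extracted value.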
PRIMARY KEY, kvps hstore NOT NULL );
So I'm doing something like:
CREATE INDEX myhstore_kps_gin_idx ON myhstore USING gin(kvps);
Stefan
2011/5/23 Pierre C li...@peufeu.com:
Hi Merlin
The analyze command gave the following result:
On the KVP table:
Index Scan using kvpidx on bench_kvp (cost
from OpenStreetMap
(http://wiki.openstreetmap.org/wiki/Database_schema ).
Yours, Stefan
2011/5/17 Jim Nasby j...@nasby.net:
On May 16, 2011, at 8:47 AM, Merlin Moncure wrote:
On Sat, May 14, 2011 at 5:10 AM, Stefan Keller sfkel...@gmail.com wrote:
Hi,
I am conducting a benchmark to compare
KEY, obj hstore NOT NULL );
-- with GIST index on obj
Does anyone have experience with that?
Yours, Stefan
on speeding up/optimizing such database server?
Yours, Stefan
find it quite strange that people seem to be surprised by Dell now
starting with that as well (I actually find it really surprising they
have not done that before).
Stefan
Stefan Kaltenbrunner wrote:
Devrim GÜNDÜZ wrote:
On Mon, 2009-10-05 at 12:07 +0200, Jean-Michel Pouré wrote:
Go for Debian:
* It is a free community, very active.
Well, we need to state that this is not a unique feature.
* It is guaranteed to be upgradable.
Depends. I had lots of issues
Scott Carey wrote:
On 7/30/09 11:24 AM, Stefan Kaltenbrunner ste...@kaltenbrunner.cc wrote:
Kevin Grittner wrote:
Tom Lane t...@sss.pgh.pa.us wrote:
Kevin Grittner kevin.gritt...@wicourts.gov writes:
Since the dump to custom format ran longer than the full pg_dump
piped directly to psql
dump + single threaded restore in a pipe: 188min
custom dump to file + parallel restore: 179min
this is without compression, with the default custom dump + parallel
restore is way slower than the simple approach on reasonable hardware.
Stefan
- or for the
compression) I would recommend to not use it at all.
Stefan
Greg Smith wrote:
On Wed, 29 Jul 2009, Stefan Kaltenbrunner wrote:
Well the real problem is that pgbench itself does not scale too well
to lots of concurrent connections and/or to high transaction rates so
it seriously skews the result.
Sure, but that's what the multi-threaded pgbench code
in the lab)
Stefan
InputFunctionCall
37050 1.6433 LWLockAcquire
36853 1.6346 BufferGetBlockNumber
36428 1.6157 heap_compute_data_size
33818 1.5000 DetermineTimeZoneOffset
33468 1.4844 DecodeTime
30896 1.3703 tm2timestamp
30888 1.3700 GetCurrentTransactionId
Stefan
Greg Stark wrote:
On Mon, May 11, 2009 at 5:05 PM, Stefan Kaltenbrunner
ste...@kaltenbrunner.cc wrote:
Good to know!!! I imagine that on a PS3 it would be _really_ fast... :-)
well not really - while it is fairly easy to get postgresql running on a PS3
it is not a fast platform. While the main
of the binaries seem to contain a static copy of libpq
or such?
* how many queries per session is the toolkit actually using - some
earlier comments seem to imply you are doing a connect/disconnect cycle
for every query; is that actually true?
Stefan
Dimitri wrote:
Hi Stefan,
sorry, I did not have time to bring all the details into the toolkit -
but at least I published it instead of telling a nice story about it :-)
fair point and appreciated. But it seems important that benchmarking
results can be verified by others as well...
The client
what prompted my question on how the benchmark was operating.
For any kind of workloads that contain frequent connection
establishments one wants to use a connection pooler like pgbouncer(as
said elsewhere in the thread already).
Stefan
256MB of Ram and a single SATA disk
available(though you could add some USB disks).
Stefan
to improve that though.
Stefan
and you should see the difference getting leveled out.
Stefan
that large? - try reducing it to, say, 128MB for
a start and try again.
Stefan
scan is not done. The second query runs a hundred times faster than the
first one, which is surprising to me.
Any ideas?
Regards, Stefan
try to get a
bit more money from the management and build RAID 6 with 12 disks.
Here is a good SATA controller for 4/8/12/16 disks:
http://www.tekram.com/product2/product_detail.asp?pid=51
Stefan
HOT) - it is a fairly neat improvement though ...
Stefan
---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at
http://www.postgresql.org/about/donate
is not kicking in which happens way
more often than people would believe) they could still fail at the very
same time
Stefan
Joshua D. Drake wrote:
Stefan Kaltenbrunner wrote:
Joshua D. Drake wrote:
Gregory Stark wrote:
Simon Riggs [EMAIL PROTECTED] writes:
You're right, but the distinction is a small one. What are the chances
of losing two independent servers within a few milliseconds of each
other?
If they're
with:
http://archives.postgresql.org/pgsql-committers/2007-05/msg00315.php
Stefan
for:
effective_cache_size
and
random_page_cost
Stefan
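Typical starting points for those two settings (values are illustrative, not from the thread):

```sql
-- effective_cache_size only informs the planner, no memory is allocated;
-- roughly shared_buffers + OS cache, often 50-75% of RAM on a dedicated box
SET effective_cache_size = '4GB';

-- random_page_cost defaults to 4.0; lower it when the working set is
-- mostly cached or sits on fast storage
SET random_page_cost = 2.0;
```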
/faqs.FAQ_Solaris.html):
Do not use any flags that modify the behavior of floating point operations
and errno processing (e.g., -fast). These flags can cause nonstandard
PostgreSQL behavior, for example in date/time computation.
Stefan
or something - I think we
should just include the specific parameters to change.
Stefan
it is
advisable to make it accurate or even a bit less than that in 8.2 and up.
Stefan
doesn't
honor them, for example).
since 8.2 domain constraints are enforced everywhere ...
Stefan
() from /usr/local/pcm170/libcompress.so
/usr/local on RHEL should only contain software installed directly from
source - what exactly is pcm170/libdalkutil ?
beside that - is pg actually compiled with debugging symbols on that
platform ?
Stefan
) or rely on the OS/RAID controller implementing some sort of
FUA/write barrier feature (which Linux, for example, only does in pretty
recent kernels)
Stefan