INSERTs or UPDATEs, but the performance gets slower day by day...
I have no idea where to look for the slowdown!
Stefan
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
joining column's datatypes do not match
Rod Taylor wrote:
Lets start with an example. Please send us an EXPLAIN ANALYZE of a
couple of the poorly performing queries.
thanks for your answer. the problem was solved by using FULL(!) VACUUM.
regards,
Stefan
"cornish game hens" at our frequent dinner
> parties for months.
This method might have been safer (and it works great with Apaches):
http://eagle.auc.ca/~dreid/
cheers
stefan
http://www.postgresql.org/docs/8.0/static/plpgsql-structure.html
http://www.postgresql.org/docs/8.0/static/plperl.html
HTH,
stefan
"ECT" can mean "Ecuador Time" (offset -05) or
"Eastern Caribbean Time" (offset -04).
http://www.worldtimezone.com/wtz-names/timezonenames.html
cheers,
stefan
out smallish tables like users, groups, *types, etc., which
would be needed every 2-3 queries, but might be swept out of RAM by one
large query in between. Keeping a table like "users" on a RAM fs would not
be an option, because the information is not volatile.
cheers,
stefan
update to 8.2 (not 8.1) to get the
>> improved sorting code.
>
> By the way, is the new sorting code any better for platforms that already
> have a decent qsort() (like Linux)?
yes - especially on-disk sorts will get some tremendous speedups in 8.2.
Stefan
ory.
well, in recent versions of pg it should be pretty easy to do that from
within SQL by using pg_ls_dir() and pg_stat_file().
maybe something (rough sketch) along the lines of:
select sum((pg_stat_file('pg_xlog/' || file)).size)
  from pg_ls_dir('pg_xlog') as file;
solution(OpenView, IBM
Directory,...) your solution might look different.
Stefan
ouch - do I read that right that even after tom's fixes for the
"regressions" in 8.2.0 we are still 30% slower than the -HEAD checkout
from the middle of the 8.2 development cycle?
Stefan
Arjen van der Meijden wrote:
> Stefan Kaltenbrunner wrote:
>> ouch - do I read that right that even after tom's fixes for the
>> "regressions" in 8.2.0 we are still 30% slower than the -HEAD checkout
>> from the middle of the 8.2 development cycle ?
>
Tom Lane wrote:
> Stefan Kaltenbrunner <[EMAIL PROTECTED]> writes:
>> Arjen van der Meijden wrote:
>>> Stefan Kaltenbrunner wrote:
>>>> ouch - do I read that right that even after tom's fixes for the
>>>> "regressions" in 8.2.0 we ar
> Has anyone had experience setting up something similar with Nagios? We
> monitor servers using nagios and not having to install additional
> software (cacti/munin) for postgres resource usage monitoring would be
> great.
a lot of nagios plugins can supply performance data in a
lled in SATAII
world) or rely on the OS/raidcontroller implementing some sort of
FUA/write barrier feature(which linux for example only does in pretty
recent kernels)
Stefan
BBU, why would you turn off the cache?
the BBU usually only protects the memory of the (hardware) raid
controller, not the one in the drive ...
Stefan
0x0015243f in critSec::~critSec () from
> /usr/local/pcm170/libdalkutil.so
> #7 0x003a48b8 in Comp_ZipFiles () from /usr/local/pcm170/libcompress.so
/usr/local on RHEL should only contain software installed directly from
source - what exactly is pcm170/libdalkutil ?
bes
aints isn't universal (plpgsql doesn't
> honor them, for example).
since 8.2 domain constraints are enforced everywhere ...
Stefan
; to 5GB - 6GB. You might have to increase the kernel's shared memory
> settings before increasing shared_buffers.
some testing here has shown that while it is usually a good idea to set
effective_cache_size rather optimistically in versions <8.2 it is
advisable to make it accurate or even a bit less than that in 8.2 and up.
Stefan
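An untested sketch of how that advice can be tried per session before touching postgresql.conf; the value here is purely illustrative, not a recommendation:

```sql
-- assuming a box where the OS caches roughly 3GB of the database;
-- 8.2+ accepts memory units directly:
SET effective_cache_size = '3GB';
-- then re-check the plan of a problematic query with the new estimate:
EXPLAIN ANALYZE SELECT 1;  -- substitute the real query here
```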
that the configuration file it generates
seems to look like one for PostgreSQL 7.x or something - I think we
should just include the specific parameters to change.
Stefan
http://www.postgresql.org/docs/faqs.FAQ_Solaris.html):
"Do not use any flags that modify behavior of floating point operations
and errno processing (e.g.,-fast). These flags could raise some
nonstandard PostgreSQL behavior for example in the date/time computing."
Stefan
is your settings for:
effective_cache_size
and
random_page_cost
Stefan
s is that proposal different from what got implemented with:
http://archives.postgresql.org/pgsql-committers/2007-05/msg00315.php
Stefan
are on the same
UPS they are effectively on the same power bus ...
If the UPS fails (or the generator does not kick in, which happens way
more often than people would believe) they could still fail at the very
same time.
Stefan
Joshua D. Drake wrote:
> Stefan Kaltenbrunner wrote:
>> Joshua D. Drake wrote:
>>> Gregory Stark wrote:
>>>> "Simon Riggs" <[EMAIL PROTECTED]> writes:
>>>>> You're right, but the distinction is a small one. What are the chances
ard).
Other than that - how well is your postgresql instance tuned to your
hardware ?
Stefan
s already done in -HEAD at the initdb stage:
...
selecting default shared_buffers/max_fsm_pages ... 4000/20
...
Stefan
> aborted. I might as well shut down and restart the database, which is
> an unacceptable solution for a web site.
>
> I'm back to my original question: How do you kill a runaway query
> without bringing down the whole database? Is there really no answer to
> this?
are you maybe
severely
>> limiting factor for postgresql at least?
>
> Actually, its not in this benchmark. Its not a large enough dataset to
> put any pressure on IO, not even with just 2GB of memory.
interesting - so this is a mostly CPU-bound benchmark?
Out of curiosity, have you done any pro
Using multiple processes to load the data will help to scale up to about
900k/s (4 processes on 4 cores).
Stefan
Luke Lonergan wrote:
Stefan,
On 10/30/06 8:57 AM, "Stefan Kaltenbrunner" <[EMAIL PROTECTED]> wrote:
We've found that there is an ultimate bottleneck at about 12-14MB/s despite
having sequential write to disk speeds of 100s of MB/s. I forget what the
latest bottleneck wa
s a normal SCSI-Controller with Softwareraid on top fixed
this for us ...
stefan
s. See here:
http://www.spinics.net/lists/pgsql/msg05363.html
In my case it was not enough to create the database with a different
encoding, I had to re-initdb the whole cluster :-/
cheers,
stefan
Hi *,
I am looking for the fastest wal_sync_method
(postgres 8, Linux (Redhat) 2.4.29, ext3, SCSI HW-Raid 5).
Any experiences and/or tips?
Thanks in advance
Stefan
d benefits from
HOT) - it is a fairly neat improvement though ...
Stefan
I'll try to get a
bit more money from the management and build a RAID 6 with 12 disks.
Here is a good SATA controller for 4/8/12/16 disks:
http://www.tekram.com/product2/product_detail.asp?pid=51
Stefan
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
m)
Total runtime: *109*ms
so in both queries there are AND conditions: two in the first query and
one in the second. unfortunately i am not an expert at reading the
postgres query plan. basically i am wondering why in
the first query a second index sca
maintenance_work_mem setting that large? - try reducing it to, say, 128MB
for a start and try again.
Stefan
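A minimal sketch of the suggested experiment; the table name is hypothetical and the setting only affects the current session:

```sql
SET maintenance_work_mem = '128MB';
-- re-run whichever maintenance command was slow, e.g.:
VACUUM ANALYZE mytable;
```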
l variations a few dozen times both in cached and
uncached state and you should see the difference getting leveled out.
Stefan
that though.
Stefan
t only has 256MB of RAM and a single SATA disk
available (though you could add some USB disks).
Stefan
e information on how the toolkit talks to the
database - some of the binaries seem to contain a static copy of libpq
or such?
* how many queries per session is the toolkit actually using - some
earlier comments seem to imply you are doing a connect/disconnect cycle
for every query - is that
Dimitri wrote:
Hi Stefan,
sorry, I did not have time to bring all the details into the toolkit -
but at least I published it instead of telling a "nice story" about it :-)
fair point and appreciated. But it seems important that benchmarking
results can be verified by others as well...
hence what prompted my question on how the benchmark was operating.
For any kind of workload that contains frequent connection
establishment one wants to use a connection pooler like pgbouncer (as
said elsewhere in the thread already).
Stefan
Greg Stark wrote:
On Mon, May 11, 2009 at 5:05 PM, Stefan Kaltenbrunner
wrote:
Good to know!!! I imagine that on a PS3 it would be _really_ fast... :-)
well not really - while it is fairly easy to get postgresql running on a PS3
it is not a fast platform. While the main CPU there is a pretty
53 1.7055 InputFunctionCall
37050 1.6433 LWLockAcquire
36853 1.6346 BufferGetBlockNumber
36428 1.6157 heap_compute_data_size
33818 1.5000 DetermineTimeZoneOffset
33468 1.4844 DecodeTime
30896 1.3703 tm2timestamp
30888 1.3700 GetCurrentTransactionId
Stefan
at are closer to what
pgbench does in the lab)
Stefan
Greg Smith wrote:
On Wed, 29 Jul 2009, Stefan Kaltenbrunner wrote:
Well the real problem is that pgbench itself does not scale too well
to lots of concurrent connections and/or to high transaction rates so
it seriously skews the result.
Sure, but that's what the multi-threaded pgbench
single threaded restore in a pipe: 188min
custom dump to file + parallel restore: 179min
this is without compression; with the default compression level, custom
dump + parallel restore is way slower than the simple approach on
reasonable hardware.
Stefan
n) I would recommend to not use it at all.
Stefan
Scott Carey wrote:
On 7/30/09 11:24 AM, "Stefan Kaltenbrunner" wrote:
Kevin Grittner wrote:
Tom Lane wrote:
"Kevin Grittner" writes:
Since the dump to custom format ran longer than the full pg_dump
piped directly to psql would have taken, the overall time to u
Stefan Kaltenbrunner wrote:
Devrim GÜNDÜZ wrote:
On Mon, 2009-10-05 at 12:07 +0200, Jean-Michel Pouré wrote:
Go for Debian:
* It is a free community, very active.
Well, we need to state that this is not a unique feature.
* It is guaranteed to be upgradable.
Depends. I had lots of issues
tream is releasing a security update, I'd like to be able to find
new packages as upstream announces updated sets. Yes, I'm talking about
PostgreSQL here.
This is exactly what Debian has been doing for a while now (at least for PostgreSQL)...
I.e.: Debian Etch has 8.1.18 and Debian Lenny has 8.
ng maintenance operations.
I find it quite strange that people seem to be surprised by Dell now
starting with that as well (I actually find it really surprising they
have not done that before).
Stefan
Dear You all,
(please tell me if this has already been discussed, I was unable to find any
convincing information)
I'm developing a small application, tied to a PG 7.4 beta 5 (i didn't
upgrade). The DB i use has roughly 20 tables, each of them containing at most 30
records (I'm still in developme
't give me the proper dev environment, bastards :))
Thanks for all the answers.
Stefan
> Stefan Champailler wrote:
> > Dear You all,
> >
> > (please tell me if this has already been discussed, I was unable to find
> > any convincing information)
> >
> > I
hello
i have postgres 7.3.2, linux redhat 9.0,
a database, and 20 tables
a lot of fields are char(x)
when i have to update all the fields except the index
postgres works very hard
what should i change in the configuration to make it work faster?
thanks
bogdan
on speeding up/optimizing such database server?
Yours, Stefan
ng joins, sorts, equality and range (sub-)queries...
=> What are the suggested postgresql.conf and session parameters for
such a "read-only database" to "Whac-A-Mole" (i.e. to consider :->)?
Stefan
2011/4/23 Robert Haas :
> On Apr 18, 2011, at 6:08 PM, Stefan Kelle
IMARY KEY, obj hstore NOT NULL );
-- with GIST index on obj
Does anyone have experience with that?
Yours, Stefan
ase is coming from OpenStreetMap
(http://wiki.openstreetmap.org/wiki/Database_schema ).
Yours, Stefan
2011/5/17 Jim Nasby :
> On May 16, 2011, at 8:47 AM, Merlin Moncure wrote:
>> On Sat, May 14, 2011 at 5:10 AM, Stefan Keller wrote:
>>> Hi,
>>>
>>> I am condu
:
> CREATE TABLE myhstore ( id bigint PRIMARY KEY, kvps hstore NOT NULL );
So I'm doing something like:
CREATE INDEX myhstore_kps_gin_idx ON myhstore USING gin(kvps);
Stefan
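For context, a GIN index like the one above serves hstore's containment and key-existence operators but not plain `->` extraction; a sketch using the thread's table name ('amenity' is a hypothetical key):

```sql
-- served by the GIN index:
SELECT id FROM myhstore WHERE kvps @> 'amenity=>pharmacy'::hstore;
SELECT id FROM myhstore WHERE kvps ? 'amenity';

-- a '->' lookup needs an expression btree index instead:
CREATE INDEX myhstore_amenity_idx ON myhstore ((kvps -> 'amenity'));
SELECT id FROM myhstore WHERE kvps -> 'amenity' = 'pharmacy';
```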
2011/5/23 Pierre C :
>
>> Hi Merlin
>>
>> The analyze command gave the following result:
>
in application where users can enter arbitrary
queries.
Yours, Stefan
2011/5/25 Pierre C :
>> You wrote
>>>
>>> Try to create a btree index on "(bench_hstore->bench_id) WHERE
>>> (bench_hstore->bench_id) IS NOT NULL".
>>
>> What do you
wed by the core geometric data types!
Why names? Why not rather 'operators' or 'functions'?
What does this "reversed from the convention" mean concretely?
Yours, Stefan
tore are
stored in order of (keylength,key) with the key comparison done
bytewise (not locale-dependent). See e.g. function hstoreUniquePairs
in http://doxygen.postgresql.org/ . This ordered property is being
used by some hstore functions but not all - and I'm still wondering
why.
Yours, S
Hi,
I'm looking for summaries (or best practices) on SSD usage with PostgreSQL.
My use case is mainly a "read-only" database.
Are there any around?
Yours, Stefan
You mean something like "Unlogged Tables" in PostgreSQL 9.1 (=
in-memory database) or simply a large ramdisk?
Yours, Stefan
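A sketch of the 9.1 feature just mentioned, with a hypothetical table name; note that unlogged tables skip WAL but are truncated after a crash, so they act as a cache, not a ramdisk replacement:

```sql
CREATE UNLOGGED TABLE hot_lookup (
    id   bigint PRIMARY KEY,
    data text
);
```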
2011/9/1 Jim Nasby :
> On Aug 30, 2011, at 12:23 PM, Stefan Keller wrote:
>> I'm looking for summaries (or best practices) on SSD usage with Postg
2011/9/2 Scott Marlowe :
> On Tue, Aug 30, 2011 at 11:23 AM, Stefan Keller wrote:
> How big is your DB?
> What kind of reads are most common, random access or sequential?
> How big of a dataset do you pull out at once with a query.
>
> SSDs are usually not a big winner for r
2011/9/3 Jesper Krogh :
> On 2011-09-03 00:04, Stefan Keller wrote:
> It's not that hard to figure out.. take some of your "typical" queries.
> say the one above.. Change the search-term to something "you'd expect
> the user to enter in a minute, but hasn'
Shaun,
2011/9/2 Shaun Thomas :
> Ironically, this is actually the topic of my presentation at Postgres Open.
Do you think my problem would now be solved with an NVRAM PCI card?
Stefan
-- Forwarded message --
From: Stefan Keller
Date: 2011/9/3
Subject: Re: [PERFORM] Summar
http://www.postgresql.org/docs/current/interactive/plpgsql-control-structures.html
)
So the doc isn't totally explicit about this. But whatever: what would
be the function of a subtransaction? To give the possibility to
recover and continue within the surrounding transaction?
Stefan
2011/9/13 Marti Rauds
ypes.html )
Has this been verified on a recent release? I can't believe that hash
performs so badly on all these points. Theory tells me otherwise and
http://en.wikipedia.org/wiki/Hash_table seems to be a success.
Are there any plans to give the hash index another chance (or to bury it
with a reas
Should I open a ticket?
Stefan
2011/9/14 Tom Lane :
> Peter Geoghegan writes:
>> On 14 September 2011 00:04, Stefan Keller wrote:
>>> Has this been verified on a recent release? I can't believe that hash
>>> performs so bad over all these points. Theory tells me other
How much work (in man-days) do you estimate this would mean for
someone who can program but has to learn PG internals first?
Stefan
2011/9/14 Tom Lane :
> Peter Geoghegan writes:
>> On 14 September 2011 00:04, Stefan Keller wrote:
>>> Has this been verified on a rec
e's doubtless room for more
> improvements, so why are the knives out?
No knives from my side. Sorry for the exaggerated subject title.
I'm also in favor of an enhanced hash index for cases where only "="
tests are processed and where only few inserts/deletes occur.
Stefan
roposal
* more... ?
Yours, Stefan
exploiting all the advantages of a separate hash
index, wouldn't it?
Stefan
2011/9/18 Merlin Moncure :
> On Sat, Sep 17, 2011 at 4:48 PM, Jeff Janes wrote:
>> On Tue, Sep 13, 2011 at 5:04 PM, Peter Geoghegan
>> wrote:
>>> On 14 September 2011 00:04, Stefan Keller wrot
collected this to encourage ourselves that enhancing hash
indexes could be worthwhile.
Stefan
2011/9/18 Kevin Grittner :
> Stefan Keller wrote:
>
>> It's hard for me to imagine that btree is superior for all the
>> issues mentioned before.
>
> It would be great i
operations.
Stefan
P.S. Disclaimer (referring to my other thread about Hash): I'm not a
btree opposer :-> I'm just evaluating index alternatives.
Hi,
Sorry if this is an odd question:
I assume that Postgres indexes don't store records but only pointers
to the data.
This means that there is always an additional access needed (real table I/O).
Would an index containing data records make sense?
Stefan
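What the question asks about has since appeared: PostgreSQL 9.2 added index-only scans, and PostgreSQL 11 added covering indexes with INCLUDE; a sketch with hypothetical table and column names:

```sql
-- btree on (id) that also stores "name" in its leaf pages (PG 11+):
CREATE INDEX users_id_name_idx ON users (id) INCLUDE (name);

-- a query touching only indexed columns can then skip the table I/O,
-- shown as "Index Only Scan" in EXPLAIN:
SELECT name FROM users WHERE id = 42;
```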
What could I do to speed up such queries (the first time, i.e.
without caching) besides simply adding more memory?
Yours, Stefan
Hi Stephen
Thanks for your answer and hints.
2011/10/24 Stephen Frost wrote:
> * Stefan Keller (sfkel...@gmail.com) wrote:
>> Adding more memory (say to total of 32 GB) would only postpone the problem.
> Erm, seems like you're jumping to conclusions here...
Sorry. I actual
tells me that the indexes are used [2].
The problem is that the initial queries are too slow - and there is no
second chance. I do have to trash the buffer every night. There is
enough main memory to hold all table contents.
1. How can I warm up or re-populate shared buffers of Postgres?
2. Are the
Filesystem (tmpfs).
Still, wouldn't it be more flexible if I could dynamically instruct
PostgreSQL to behave like an in-memory database?
Yours, Stefan
2012/2/26 Andy Colson wrote:
> On 02/25/2012 06:16 PM, Stefan Keller wrote:
>> 1. How can I warm up or re-populate shared buffers of Postgres?
>> 2. Are there any hints on how to tell Postgres to read in all table
>> contents into memory?
>>
>> Yours, Stefan
>
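Since this thread, PostgreSQL 9.4 has answered question 1 directly with the pg_prewarm extension (the table name here is hypothetical):

```sql
CREATE EXTENSION pg_prewarm;
-- reads the whole table into shared buffers, returning the number of
-- blocks read:
SELECT pg_prewarm('osm_point');
```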
n they fit into RAM).
Since I have a "read-only" database there's no WAL and locking needed.
But as soon as we allow writes I realize that the in-memory feature
needs to be coupled with other enhancements like replication (which
somehow would avoid WAL).
Yours, Stefan
lity which I'll try out.
But what I'm finally after is a solution where records don't get
pushed back to disk a.s.a.p. but rather are held in memory as long as
possible, assuming that there is enough memory.
I suspect that currently there is quite some overhead because of that
(besides the disk-oriented structures).
-Stefan
2012/2/28 Claudio Freire :
> On Tue, Feb 28, 2012 at 5:48 PM, Stefan Keller wrote:
>> P.S. And yes, the database is aka 'read-only' and truncated and
>> re-populated from scratch every night. fsync is off so I don't care
>> about ACID. After the inde
many implementations, that will not work. tar detects the
> output is going to the bit bucket, and so doesn't bother to actually
> read the data.
Right.
But what about the commands cp $PG_DATA/base /dev/null or cat
$PG_DATA/base > /dev/null ?
They seem to do something.
-Ste
2012/2/29 Stefan Keller :
> 2012/2/29 Jeff Janes :
>>> It's quite possible the vacuum full is thrashing your disk cache due
>>> to maintenance_work_mem. You can overcome this issue with the tar
>>> trick, which is more easily performed as:
>>>
>>
2012/3/1 Jeff Janes :
> On Tue, Feb 28, 2012 at 3:46 PM, Stefan Keller wrote:
>> 2012/2/28 Claudio Freire :
>>>
>>> In the OP, you say "There is enough main memory to hold all table
>>> contents.". I'm assuming, there you refer to your curre
Hi
I have an interesting query to be optimized related to this one [1].
The query definition is: Select all buildings that have more than 1
pharmacies and more than 1 schools within a radius of 1000m.
The problem is that I think that this query is inherently O(n^2). In
fact the solution I propos
Your proposal lacks the requirement that it's the same building from
which pharmacies and schools are reachable.
But I'll think about it.
Yours, S.
2012/8/7 Tomas Vondra :
> On 7 Srpen 2012, 14:01, Stefan Keller wrote:
>> Hi
>>
>> I have an interesting query to be optimi
Hi Craig
Clever proposal!
I tried to adapt it slightly to the hstore involved.
Now I'm having a weird problem: PG says that "relation 'p' does not exist".
Why does PG recognize table b in the subquery but not table p?
Any ideas?
-- Stefan
SELECT b.way AS building_g
Hi
2012/8/8 Jeff Janes :
> On Tue, Aug 7, 2012 at 5:07 PM, Stefan Keller wrote:
>> Hi Craig
>>
>> Clever proposal!
>> I slightly tried to adapt it to the hstore involved.
>> Now I'm having a weird problem that PG says that "relation 'p' doe
non-durable settings [1] I'd like to know what
choices I have to tune it while keeping the database productive:
cluster index? partition table? use tablespaces? reduce physical block size?
Stefan
[1] http://www.postgresql.org/docs/9.1/static/non-durability.html
takes longer than an hour, it delays the next
update.
Any ideas? Partitioning?
Yours, S.
2012/9/3 Ivan Voras :
> On 03/09/2012 13:03, Stefan Keller wrote:
>> Hi,
>>
>> I'm having performance issues with a simple table containing 'Nodes'
>> (points)
Hi,
Given I have a large table implemented with partitions and need fast
access to a (primary) key value in a scenario where every minute
updates (inserts/updates/deletes) are coming in.
Now since PG does not allow any index (nor constraint) on "master"
table, I have a performance issue (and a po
adapt itself when partitions are attached or removed.
That's probably how Oracle resolves it; Oracle has had global indexes
probably since version 8(!) [1]
Yours, S.
[1] http://www.oracle-base.com/articles/8i/partitioned-tables-and-indexes.php
2012/10/14 Jeff Janes :
> On Sat, Oct 13, 2012 at
implemented.
Thanks for your help!
Stefan
*The Long Story:*
When Postgres collects statistics, it estimates the number of distinct
values for every column (see pg_stats.n_distinct). This is one important
source for the planner to determine the selectivity and hence can have
great influence
On 12/29/2012 10:57 PM, Peter Geoghegan wrote:
On 29 December 2012 20:57, Stefan Andreatta wrote:
Now, the 2005 discussion goes into great detail on the advantages and
disadvantages of this algorithm, particularly when using small sample sizes,
and several alternatives are discussed. I do not