performance by mounting the database from a RAM disk, or if I would be
better off keeping that RAM free and increasing the effective_cache_size
appropriately.
I'd also be interested in knowing if this is dependent on whether I am
running 7.4, 8.0 or 8.1.
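As a rough sketch of the alternative being weighed, here is a hypothetical postgresql.conf fragment for a box where the data files stay on ordinary disk and the OS page cache is left to do the caching. All values are invented for illustration and are not from this thread:

```
# Hypothetical settings for a machine with 16GB RAM.
# Rather than carving RAM out for a RAM disk, leave it to the kernel
# page cache and tell the planner it is there:
shared_buffers = 2GB            # memory PostgreSQL manages itself
effective_cache_size = 12GB     # planner hint only; allocates nothing
```

Because effective_cache_size allocates no memory, raising it is cheap; it only changes how attractive index scans look to the planner.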
--
Stuart Bishop [EMAIL PROTECTED]
http://www.canonical.com/
=50 loops=1)
- Index Scan using person_sort_key_idx on person (cost=0.00..41129.52
rows=527773 width=553) (actual time=0.079..1274.952 rows=527050 loops=1)
Total runtime: 1999.858 ms
(3 rows)
--
Stuart Bishop [EMAIL PROTECTED] http://www.canonical.com/
Canonical Ltd
Tom Lane wrote:
> Stuart Bishop [EMAIL PROTECTED] writes:
>> I would like to understand what causes some of my indexes to be slower to
>> use than others with PostgreSQL 8.1.
>
> I was about to opine that it was all about different levels of
> correlation between the index order and physical table order
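The effect Tom describes can be illustrated outside PostgreSQL. The toy model below (all names and numbers invented, not from this thread) packs rows onto heap pages and counts how many page transitions a scan makes when the index order matches the heap order versus when it is uncorrelated:

```python
# Toy illustration of index/heap correlation, not PostgreSQL code.
# Rows are packed 100 per heap page; we count page switches while
# reading rows in a given scan order.
import random

ROWS_PER_PAGE = 100
N_ROWS = 10_000

def page_switches(scan_order):
    """Count transitions between heap pages while reading rows in scan_order."""
    switches = 0
    last_page = None
    for row in scan_order:
        page = row // ROWS_PER_PAGE
        if page != last_page:
            switches += 1
            last_page = page
    return switches

correlated = list(range(N_ROWS))      # index order == heap order
uncorrelated = correlated[:]
random.seed(42)
random.shuffle(uncorrelated)          # index order unrelated to heap order

print(page_switches(correlated))      # 100: each page visited exactly once
print(page_switches(uncorrelated))    # close to N_ROWS: nearly every row
                                      # lands on a different page
```

A well-correlated index turns the heap accesses into sequential reads; an uncorrelated one degenerates into random I/O, which is why two indexes on the same table can have such different scan times. PostgreSQL exposes its estimate of this as the `correlation` column in `pg_stats`.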
Stuart Bishop wrote:
> I would like to understand what causes some of my indexes to be slower to
> use than others with PostgreSQL 8.1. On a particular table, I have an int4
> primary key, an indexed unique text 'name' column and a functional index of
> type text. The function (person_sort_key
on Opteron?
With PG 8.2 and 8.3, is it still pretty much limited to 8 cores making 2 of
the quad core Xeons redundant or detrimental?
I expect we will be running this hardware for 8.2, 8.3 and 8.4. Anyone aware
of anything that might change the landscape for 8.4?
--
Stuart Bishop [EMAIL PROTECTED]
can measure, with a guesstimate of
the disk cache hit rate. It would be lovely if these two variables
were separate. It would be even lovelier if the disk cache hit rate
could be probed at run time and didn't need setting at all, but I
suspect that isn't possible on some platforms.
--
Stuart
)
WHERE id = EffectiveId
--
Stuart Bishop stu...@stuartbishop.net
http://www.stuartbishop.net/
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
the load is comparable to when it was
running Ubuntu 10.04.
My big systems are still all on Ubuntu 10.04 (cut over in January I expect).
--
Stuart Bishop stu...@stuartbishop.net
http://www.stuartbishop.net/
(rather than the slave
having trouble, for example). Lowering the number of concurrent
connections in your pgbouncer connection pool could help here.
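A minimal sketch of the knob in question, assuming a transaction-pooling pgbouncer setup. The database name and sizes are invented, not taken from the original report:

```
; hypothetical pgbouncer.ini fragment
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
pool_mode = transaction
default_pool_size = 20   ; lower this to reduce concurrent server connections
max_client_conn = 500    ; excess clients queue instead of piling onto the server
```

With a smaller pool, bursts of client activity queue in pgbouncer rather than turning into hundreds of simultaneous backends fighting for the same resources.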
--
Stuart Bishop stu...@stuartbishop.net
http://www.stuartbishop.net/
also seem to be very high, but so far have not
posed a problem and may well be correct. I'm trusting pgtune here
rather than my outdated guesses.
--
Stuart Bishop stu...@stuartbishop.net
http://www.stuartbishop.net/
pinned in shared_buffers. Urgh, so many variables.
--
Stuart Bishop stu...@stuartbishop.net
http://www.stuartbishop.net/
,
or is pessimism a dial?
--
Stuart Bishop stu...@stuartbishop.net
http://www.stuartbishop.net/
own custom storage. At the PG level it is a single table, but you can shard
the data yourself into 8 bazillion separate stores, in whatever structure
suits your read and write operations (maybe reusing an embedded db engine, an
ordered flat file+log+index, whatever).
--
Stuart Bishop <stu...@stuartbishop.net>
http://www.stuartbishop.net/
has the sharding key, can calculate exactly which store(s) need to
be hit, and returns the rows; to PostgreSQL it looks like one big table
with 1.3 trillion rows. And if it doesn't do that in 30ms you get to blame
yourself :)
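The "calculate exactly which store(s) need to be hit" step could be as simple as hashing the sharding key. A minimal sketch, with invented names (`shard_for`, `N_SHARDS`) that are illustrative rather than from any real system:

```python
# Hypothetical shard routing for a wrapper-style frontend: map each
# sharding key deterministically onto one of N_SHARDS backing stores.
import hashlib

N_SHARDS = 64  # assumed shard count

def shard_for(key: str) -> int:
    """Return the store number responsible for this sharding key."""
    digest = hashlib.sha1(key.encode("utf-8")).digest()
    # Use the first 8 bytes of the digest as an unsigned integer.
    return int.from_bytes(digest[:8], "big") % N_SHARDS

# A query carrying the sharding key would be routed to shard_for(key)
# only, instead of fanning out to all N_SHARDS stores.
```

Because the mapping is a pure function of the key, every node computes the same answer with no lookup table to keep in sync; the cost is that resizing N_SHARDS reshuffles most keys.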
--
Stuart Bishop <stu...@stuartbishop.net>
http://www.stuartbishop.net/
Set up WAL shipping and
point-in-time recovery on your primary, and rebuild your reporting database
regularly from these backups. You get your fresh reporting database on
demand without overloading the primary, and regularly test your backups.
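In configuration terms, the moving parts might look like the fragment below. The archive path and target time are placeholders, not values from this thread:

```
# On the primary (postgresql.conf): ship every completed WAL segment
# to a hypothetical shared archive directory.
archive_mode = on
archive_command = 'cp %p /archive/%f'

# On the reporting host: restore a base backup, then replay archived
# WAL up to the desired point in time before opening for queries.
restore_command = 'cp /archive/%f %p'
recovery_target_time = '2017-01-13 00:00:00'
```

Each rebuild exercises the full restore path, so the reporting refresh doubles as an ongoing test that the backups actually work.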
--
Stuart Bishop <stu...@stuartbishop.net>
http://www.stuartbishop.net/
On 13 January 2017 at 18:17, Ivan Voras <ivo...@gmail.com> wrote:
> On 13 January 2017 at 12:00, Stuart Bishop <stu...@stuartbishop.net>
> wrote:
>> On 7 January 2017 at 02:33, Ivan Voras <ivo...@gmail.com> wrote: