I just got this crazy, stupid or maybe genius idea :-)
One thing that I've learned in this thread is that fat indexes (i.e.
an index on some_table(a,b,c,d,e,f)) are to be avoided as much as
possible.
One of our biggest indexes looks like this:
acc_trans(customer_id, trans_type, created)
For
On 29/09/10 19:41, Tobias Brox wrote:
I just got this crazy, stupid or maybe genius idea :-)
Now, my idea is to drop that fat index and replace it with partial
(conditional) indexes for a dozen heavy users - like these:
acc_trans(trans_type, created) where customer_id=224885;
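In PostgreSQL DDL, the idea above could be sketched roughly like this (the index name is made up here; the customer_id value is the one from the post, and one such index would be created per "hot" customer):

```sql
-- A partial index covering only one heavy customer's rows:
CREATE INDEX acc_trans_cust_224885_idx
    ON acc_trans (trans_type, created)
    WHERE customer_id = 224885;

-- The planner will only consider this index when a query's WHERE
-- clause implies customer_id = 224885, e.g.:
--   SELECT * FROM acc_trans
--    WHERE customer_id = 224885 AND trans_type = 'invoice'
--    ORDER BY created;
```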
On 29 September 2010 10:03, Mark Kirkwood
<mark.kirkw...@catalyst.net.nz> wrote:
Yeah, I think the idea of trying to have a few smaller indexes for the
'hot' customers is a good idea. However I am wondering if just using
single column indexes and seeing if the bitmap scan/merge of smaller
indexes is
Hi,
After reading lots of documentation, I still don't fully understand how
PG knows whether some needed data is in memory or in secondary storage.
While choosing the best query plan, the optimizer must take this into
account. Does PG consider this? If so, how does it know?
I presume it
Fabrício dos Anjos Silva <fabricio.si...@linkcom.com.br> wrote:
After reading lots of documentation, I still don't fully understand
how PG knows whether some needed data is in memory or in secondary
storage.
Does PG consider this?
No.
When setting seq_page_cost and random_page_cost, do I have to
Hi,
I have this situation:
Database size: 7,6GB (on repository)
Memory size: 7,7GB
1 CPU: approx. 2GHz Xeon
Number of tables: 1 (empresa)
CREATE TABLE public.empresa (
cdempresa INTEGER NOT NULL,
razaosocial VARCHAR(180),
cnpj VARCHAR(14) NOT NULL,
ie VARCHAR(13),
endereco
2010/9/29 Fabrício dos Anjos Silva <fabricio.si...@linkcom.com.br>
When setting seq_page_cost and random_page_cost, do I have to consider
the probability that data will be in memory? Or does seq_page_cost mean
sequential access on disk and random_page_cost mean random access on
disk?
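As the earlier reply notes, the planner does not track which pages happen to be cached: seq_page_cost and random_page_cost are abstract per-page fetch costs, and effective_cache_size is the planner's assumption about available cache. A sketch of how these might be tuned for a mostly-in-RAM database like the one described (the values are illustrative assumptions, not recommendations from the thread):

```sql
-- When the working set fits in RAM, a "random" page fetch costs about
-- the same as a sequential one, so the two costs can be set closer:
SET random_page_cost = 2.0;        -- default 4.0; lower if mostly cached
SET seq_page_cost = 1.0;           -- the baseline cost unit
SET effective_cache_size = '6GB';  -- planner's estimate of OS + PG cache
```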
The
Fabrício dos Anjos Silva <fabricio.si...@linkcom.com.br> wrote:
explain analyze select max(cnpj) from empresa where dtcriacao =
current_date-5;
Result (cost=32.24..32.24 rows=1 width=0) (actual
time=5223.937..5223.938 rows=1 loops=1)
  InitPlan 1 (returns $0)
    ->  Limit
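A 5-second runtime for this query suggests there is no usable index on the filter column. One hedged way to address it, using the column names from the posted schema and query (the index name is hypothetical):

```sql
-- An index on the filter column lets the planner locate the matching
-- rows directly instead of scanning the whole table:
CREATE INDEX empresa_dtcriacao_idx ON empresa (dtcriacao);

-- Re-running the query should then show an index scan in the plan:
EXPLAIN ANALYZE
SELECT max(cnpj) FROM empresa WHERE dtcriacao = current_date - 5;
```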
Hi,
Are there any significant performance improvements or regressions from 8.4 to
9.0? If yes, which areas (inserts, updates, selects, etc) are those in?
In a related question, is there any public data that compares the performances
of various Postgresql versions?
Thanks
Andy wrote:
Are there any significant performance improvements or regressions from 8.4 to
9.0? If yes, which areas (inserts, updates, selects, etc) are those in?
There were two major rounds of tinkering to the query planner/optimizer
that can impact the types of plans you get from some