I have a DB table with 25M rows, ~3KB each (i.e. ~75GB), that together with
the multiple indexes I use (an additional 15-20GB) will not fit entirely in
memory (64GB on the machine). A typical query locates 300 rows through an
index, optionally filters them down to ~50-300 rows using other indexes, finally
On Sun, May 31, 2009 at 10:26 PM, S Arvind arvindw...@gmail.com wrote:
Having a doubt: we want to vacuum and reindex the 50 most-used tables daily
at a specific time. Is it best to have a function in Postgres and call it from
cron, or is there another good way to run the two processes for specified
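For context, the cron route needs no Postgres function at all: the maintenance
commands can be issued directly from cron via psql. A minimal sketch, where the
database name, table name, and schedule are all hypothetical:

```shell
# Hypothetical crontab entry: vacuum/analyze and reindex one hot table
# at 02:30 daily, an assumed low-traffic window.
30 2 * * *  psql -d mydb -c "VACUUM ANALYZE hot_table; REINDEX TABLE hot_table;"
```

Note that REINDEX takes an exclusive lock on the table, so it still blocks
queries against that table for its duration.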
On Mon, 1 Jun 2009, Shaul Dar wrote:
1. At any given time how can I check what portion (%) of specific tables and
indexes is cached in memory?
This is a bit tricky. PostgreSQL caches information in its shared_buffers
cache, and you can get visibility into that if you install the
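The truncated sentence presumably refers to the pg_buffercache contrib module.
A sketch of the kind of query it enables, assuming the module is installed in
the current database:

```sql
-- Per-relation view of shared_buffers usage: how many buffers each
-- relation occupies and what fraction of the relation that represents
-- (assumes the default 8kB block size).
SELECT c.relname,
       count(*) AS buffers,
       round(100.0 * count(*) * 8192 / pg_relation_size(c.oid), 1)
         AS pct_cached
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = c.relfilenode
WHERE b.reldatabase = (SELECT oid FROM pg_database
                       WHERE datname = current_database())
GROUP BY c.relname, c.oid
ORDER BY buffers DESC
LIMIT 20;
```

This only accounts for PostgreSQL's own shared_buffers; pages held in the OS
filesystem cache are not visible here.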
Erik Aronesty e...@q32.com writes:
but why wasn't autovac enough to reclaim at least *most* of the space?
Autovac isn't meant to reclaim major amounts of bloat; it's more in the
line of trying to prevent it from happening in the first place. To
reclaim bloat it would have to execute VACUUM
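The cut-off presumably continues with VACUUM FULL. To illustrate the
distinction (table name hypothetical):

```sql
-- Routine vacuuming (what autovacuum runs) marks dead row space as
-- reusable but rarely shrinks the file on disk:
VACUUM ANALYZE bloated_table;

-- Reclaiming accumulated bloat requires rewriting the table, which
-- holds an exclusive lock for the duration:
VACUUM FULL bloated_table;
REINDEX TABLE bloated_table;  -- indexes bloat too; rebuild them after
```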
On Mon, 1 Jun 2009, S Arvind wrote:
Having a doubt: we want to vacuum and reindex the 50 most-used tables daily
at a specific time. Is it best to have a function in
Postgres and call it from cron, or is there another good way to run the two
processes for specified tables at a specified time?
If
Hi,
I have a column that is a bit array of 16, each bit specifying whether a
certain property, out of 16, is present or not. Our typical query selects 300
random rows (which could be located in different blocks) from the table based
on another column+index, and then filters them down to ~50 based on this
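For reference, a sketch of how such a filter can be expressed with
PostgreSQL's bit-string operators; the table, column names, and mask are
hypothetical:

```sql
-- Assumed schema: props BIT(16), plus an indexed column "category".
-- The index locates the ~300 candidate rows; the bitwise AND then
-- keeps only rows where properties 0 and 2 are both set:
SELECT *
FROM items
WHERE category = 42
  AND props & B'0000000000000101' = B'0000000000000101';
```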
Hi all,
I've been staring at this for hours (if not days). Any hints are appreciated!
At first I thought there must be a way to make PostgreSQL perform well on this
thing, but I've lost hope that pgsql can actually deal with it.
The query is:
SELECT DISTINCT
posrel.pos,
questions.number,
Shaul Dar wrote:
Hi,
I have a column that is a bit array of 16, each bit specifying whether a
certain property, out of 16, is present or not. Our typical query selects 300
random rows (which could be located in different blocks) from the table based
on another column+index, and then filters them down to
Shaul Dar shaul...@gmail.com writes:
I have a column that is a bit array of 16, each bit specifying whether a
certain property, out of 16, is present or not. Our typical query selects 300
random rows (which could be located in different blocks) from the table based
on another column+index, and then
2009/6/1 Koen Martens pg...@metro.cx
Now, when I split up the OR in two distinct queries, everything is nice and
fast. Both queries run in sub-second time.
Hi.
PostgreSQL simply does not like ORs (it can't use indexes in this case), so
UNION/UNION ALL is your friend.
Best regards, Vitalii
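A hedged illustration of the rewrite Vitalii describes, with hypothetical
table and column names:

```sql
-- The OR form can prevent index use (forcing a sequential scan):
SELECT * FROM t WHERE a = 1 OR b = 2;

-- Rewritten as a UNION, each branch can use its own index; UNION
-- (rather than UNION ALL) also removes rows matching both conditions:
SELECT * FROM t WHERE a = 1
UNION
SELECT * FROM t WHERE b = 2;
```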
Disclaimer: I'm very much a newbie here!
But I am on the path in my new job to figure this stuff out as well,
and went to PGCon here in Ottawa 2 weeks ago and attended quite a few
lectures on this topic. Have a look at:
http://wiki.postgresql.org/wiki/PgCon_2009
And in particular Database
Hi Smith,
The reason why we need to do it manually is that we can't afford any
performance drop during our production hours. So we figured out the
lowest-usage working time and the most frequently used tables, and we want to
perform the maintenance on those daily; then on weekends we can vacuum and
reindex the entire DB. Is the model not
Robert Haas wrote:
On Fri, May 29, 2009 at 5:57 PM, Anne Rosset aros...@collab.net wrote:
Robert Haas wrote:
On Thu, May 28, 2009 at 6:46 PM, Anne Rosset aros...@collab.net wrote:
- Index Scan using
item_pk on item
On 5/31/09 9:37 AM, Fabrix fabrix...@gmail.com wrote:
2009/5/29 Scott Carey sc...@richrelevance.com
On 5/28/09 6:54 PM, Greg Smith gsm...@gregsmith.com wrote:
2) You have very new hardware and a very old kernel. Once you've done the
above, if you're still not happy with performance,
On Mon, Jun 1, 2009 at 2:14 PM, Anne Rosset aros...@collab.net wrote:
The table has 468173 rows and the value for default_statistics_target is
750.
Anne
Hi Robert,
we did a vacuum analyze and the results are the same.
Here are the results of the queries :
SELECT SUM(1) FROM item WHERE
Robert Haas wrote:
On Mon, Jun 1, 2009 at 2:14 PM, Anne Rosset aros...@collab.net wrote:
The table has 468173 rows and the value for default_statistics_target is
750.
Anne
Hi Robert,
we did a vacuum analyze and the results are the same.
Here are the results of the queries :
-Original Message-
From: Anne Rosset
Subject: Re: [PERFORM] Unexpected query plan results
SELECT SUM(1) FROM item WHERE is_deleted = 'f';
   sum
---------
 1824592
(1 row)

SELECT SUM(1) FROM item WHERE folder_id = 'tracker3641
Dave Dutcher wrote:
-Original Message-
From: Anne Rosset
Subject: Re: [PERFORM] Unexpected query plan results
SELECT SUM(1) FROM item WHERE is_deleted = 'f';
   sum
---------
 1824592
(1 row)

SELECT SUM(1) FROM item WHERE folder_id = 'tracker3641
On Mon, Jun 1, 2009 at 4:53 PM, Anne Rosset aros...@collab.net wrote:
On Mon, Jun 1, 2009 at 2:14 PM, Anne Rosset aros...@collab.net wrote:
SELECT SUM(1) FROM item WHERE is_deleted = 'f';
   sum
---------
 1824592
(1 row)
SELECT SUM(1) FROM item WHERE folder_id = 'tracker3641
S Arvind arvindw...@gmail.com wrote:
The reason why we need to do it manually is that we can't afford any
performance drop during our production hours. So we figured out the
lowest-usage working time and the most frequently used tables, and we
want to perform the maintenance on those daily; then on weekends we can
vacuum and reindex
On Mon, Jun 1, 2009 at 8:35 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
S Arvind arvindw...@gmail.com wrote:
The reason why we need to do it manually is that we can't afford any
performance drop during our production hours. So we figured out the
lowest-usage working time and the most frequently used tables
Grzegorz Jaśkiewicz wrote:
I thought that's mostly where the difference between PostgreSQL and Oracle
lies: the ability to handle more transactions and better scalability.
Which were you suggesting had this better scalability?
I recall someone summarizing to a CFO where I used to work:
Oracle
On Sat, 30 May 2009, Scott Marlowe wrote:
8.04 was a frakking train wreck in many ways. It wasn't until 8.04.2
came out that it was even close to usable as a server OS, and even
then, not for databases yet. It's still got broken bits and pieces
marked fixed in 8.10... Uh, hello, it's your