On 17/01/06, [EMAIL PROTECTED] wrote:
Try a), b), and c) in order on the offending tables as they address the problem at increasing cost...
Thanks a lot for the detailed information! The entire concept of vacuum isn't yet that clear to me, so your explanations and hints are very
hi,
I'm curious as to why autovacuum is not designed to do full vacuum. I
know that the necessity of doing full vacuums can be reduced by
increasing the FSM, but in my opinion that is the wrong decision for
many applications. My application does not continuously
insert/update/delete tuples
So my question is: What's the use of an autovacuum daemon if I still
have to use a cron job to do full vacuums? wouldn't it just be a minor
job to enhance autovacuum to be able to perform full vacuums, if one
really wants it to do that - even if some developers think that it's the
wrong
Hi,
did you read my post? In the first part I explained why I don't want to
increase the FSM that much.
Mike
So my question is: What's the use of an autovacuum daemon if I still
have to use a cron job to do full vacuums? wouldn't it just be a minor
job to enhance autovacuum to be able to
Hi,
I already read the documentation on how to use SPI_PREPARE and
SPI_EXEC... but honestly I don't understand how to use this
resource in my statements.
I looked for examples, but I didn't find good ones :(
Can somebody help me?
Thanks.
Marcos.
You should never have to do full vacuums...
I would rather say, You should never have to do full vacuums by any
periodic means. It may be done on an ad-hoc basis, when you have figured
out that your table is never going to grow that big again.
On 1/17/06, Christopher Kings-Lynne [EMAIL PROTECTED]
On Tue, Jan 17, 2006 at 11:33:02AM +0100, Michael Riess wrote:
did you read my post? In the first part I explained why I don't want to
increase the FSM that much.
Since you didn't quantify it, that wasn't much of a data point. (IOW,
you'd generally have to be seriously resource constrained
Michael Riess wrote:
hi,
I'm curious as to why autovacuum is not designed to do full vacuum.
Because a VACUUM FULL is too invasive. Lazy vacuum is so light on the
system w.r.t. locks that it's generally not a problem to start one at
any time. On the contrary, vacuum full could be a disaster
On Tue, Jan 17, 2006 at 09:14:27AM +0100, Antoine wrote:
think about it - we do very little removing, pretty much only inserts and
selects. I will give it a vacuum full and see what happens.
UPDATES? Remember that, in Postgres, UPDATE is effectively DELETE +
INSERT (from the point of view of
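The point above can be made visible with the system columns (a sketch; `t` is a throwaway table, not from the thread):

```sql
CREATE TABLE t (id int);
INSERT INTO t VALUES (1);
SELECT ctid, xmin FROM t;  -- physical location and creating transaction
UPDATE t SET id = 2;       -- internally: old tuple marked dead, new tuple inserted
SELECT ctid, xmin FROM t;  -- both values changed: this is a brand-new row version
-- The dead version still occupies space until (auto)vacuum reclaims it.
```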
Hi, Tom,
Tom Lane wrote:
in our db system (for a website), i notice performance boosts after a vacuum
full. but then, a VACUUM FULL takes 50min+ during which the db is not really
accessible to web-users. is there another way to perform maintenance tasks
AND leaving the db fully operable and
Hi, Thomas,
[EMAIL PROTECTED] wrote:
Try a), b), and c) in order on the offending tables as they address
the problem at increasing cost...
Thanks a lot for the detailed information! The entire concept of vacuum
isn't yet that clear to me, so your explanations and hints are very much
I'm curious as to why autovacuum is not designed to do full vacuum.
Because that's terribly invasive due to the locks it takes out.
Lazy vacuum may chew some I/O, but it does *not* block your
application for the duration.
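The locking difference can be seen directly (a sketch; `mytable` is a placeholder, not a table from the thread):

```sql
-- Lazy vacuum: SHARE UPDATE EXCLUSIVE lock only, so ordinary reads and
-- writes continue while dead tuples are reclaimed into the free space map.
VACUUM ANALYZE mytable;

-- VACUUM FULL: ACCESS EXCLUSIVE lock; compacts the file but blocks every
-- query on the table until it finishes.
VACUUM FULL mytable;
```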
VACUUM FULL blocks the application. That is NOT something that anyone
wants to throw into the activity mix randomly.
There must be a way to implement a daemon which frees up space of a
relation without blocking it too long. It could abort after a certain
number of blocks have been freed
Well,
I think that the documentation is not exactly easy to understand. I
always wondered why there are no examples for common postgresql
configurations. All I know is that the default configuration seems to be
too low for production use. And while running postgres I get no hints as
to which
On Tue, Jan 17, 2006 at 11:18:59AM +0100, Michael Riess wrote:
hi,
I'm curious as to why autovacuum is not designed to do full vacuum. I
Because nothing that runs automatically should ever take an exclusive
lock on the entire database, which is what VACUUM FULL does.
activity. Increasing
Hi,
hi,
I'm curious as to why autovacuum is not designed to do full vacuum. I
Because nothing that runs automatically should ever take an exclusive
lock on the entire database, which is what VACUUM FULL does.
I thought that vacuum full only locks the table which it currently
operates on?
On Tue, Jan 17, 2006 at 03:50:38PM +0100, Michael Riess wrote:
about the FSM: You say that increasing the FSM is fairly cheap - how
should I know that?
Why would you assume otherwise, to the point of not considering changing
the setting?
The documentation explains how much memory is used
On Tue, Jan 17, 2006 at 03:50:38PM +0100, Michael Riess wrote:
always wondered why there are no examples for common postgresql
configurations.
You mean like this one? (for 8.0):
http://www.powerpostgresql.com/Downloads/annotated_conf_80.html
All I know is that the default configuration
On Tue, Jan 17, 2006 at 09:09:02AM -0500, Matthew T. O'Connor wrote:
vacuum. As long as that percentage is small enough, the effect on
performance is negligible. Have you measured to see if things are truly
Actually, as long as the percentage is small enough and the pages are
really empty,
On 1/17/06, Michael Riess [EMAIL PROTECTED] wrote:
about the FSM: You say that increasing the FSM is fairly cheap - how
should I know that?
The comment from the original postgresql.conf file seems pretty obvious:
#max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each
#max_fsm_relations = 1000 # min
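For what it's worth, a database-wide VACUUM VERBOSE in 8.1 reports actual FSM usage, which takes the guesswork out of sizing (the numbers below are illustrative, not from the thread):

```sql
VACUUM VERBOSE;
-- The tail of the output looks roughly like:
--   INFO:  free space map contains 41000 pages in 512 relations
--   A total of 60000 page slots are in use (including overhead).
-- Set max_fsm_pages comfortably above the reported need; at ~6 bytes per
-- slot, even 200000 slots cost only about 1.2 MB of shared memory.
```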
Michael Riess [EMAIL PROTECTED] writes:
I'm curious as to why autovacuum is not designed to do full vacuum.
Locking considerations. VACUUM FULL takes an exclusive lock, which
blocks any foreground transactions that want to touch the table ---
so it's really not the sort of thing you want being
On Tue, Jan 17, 2006 at 04:04:41PM +0100, Michael Riess wrote:
I thought that vacuum full only locks the table which it currently
operates on? I'm pretty sure that once a table has been vacuumed, it can
be accessed without any restrictions while the vacuum process works on
the next table.
Michael Riess [EMAIL PROTECTED] writes:
But actually I never understood why the database system slows down at
all when there is much unused space in the files.
Perhaps some of your common queries are doing sequential scans? Those
would visit the empty pages as well as the full ones.
On Tue, 2006-01-17 at 09:08, Andrew Sullivan wrote:
On Tue, Jan 17, 2006 at 03:50:38PM +0100, Michael Riess wrote:
always wondered why there are no examples for common postgresql
configurations.
You mean like this one? (for 8.0):
[EMAIL PROTECTED] (Andrew Sullivan) writes:
On Tue, Jan 17, 2006 at 11:18:59AM +0100, Michael Riess wrote:
hi,
I'm curious as to why autovacuum is not designed to do full vacuum. I
Because nothing that runs automatically should ever take an exclusive
lock on the entire database, which is
8.1.1, everything vacuumed/analyzed. basically i have two queries that
when executed individually run quite quickly, but if I try to left join
the second query onto the first, everything gets quite a bit slower.
rms=# explain analyze
rms-# SELECT
rms-# software_download.*
rms-# FROM
rms-#
On Tue, Jan 17, 2006 at 09:59:25AM -0600, Scott Marlowe wrote:
I have to admit, looking at the documentation, that we really don't
explain this all that well in the administration section, and I can see
how easily led astray beginners are.
I understand what you mean, but I suppose my reaction
Hi, Michael,
Michael Riess wrote:
But actually I never understood why the database system slows down at
all when there is much unused space in the files. Are the unused pages
cached by the system, or is there another reason for the impact on the
performance?
No, they are not cached as such,
Chris Browne wrote:
[EMAIL PROTECTED] (Andrew Sullivan) writes:
On Tue, Jan 17, 2006 at 11:18:59AM +0100, Michael Riess wrote:
hi,
I'm curious as to why autovacuum is not designed to do full vacuum. I
Because nothing that runs automatically should ever take an exclusive
lock on
On Tue, 2006-01-17 at 11:16, Andrew Sullivan wrote:
On Tue, Jan 17, 2006 at 09:59:25AM -0600, Scott Marlowe wrote:
I have to admit, looking at the documentation, that we really don't
explain this all that well in the administration section, and I can see
how easily led astray beginners are.
On Tue, Jan 17, 2006 at 11:43:14AM -0500, Chris Browne wrote:
[EMAIL PROTECTED] (Andrew Sullivan) writes:
Because nothing that runs automatically should ever take an exclusive
lock on the entire database,
That's a bit more than what autovacuum would probably do...
Or even VACUUM FULL, as
[EMAIL PROTECTED] (Alvaro Herrera) writes:
Chris Browne wrote:
[EMAIL PROTECTED] (Andrew Sullivan) writes:
On Tue, Jan 17, 2006 at 11:18:59AM +0100, Michael Riess wrote:
hi,
I'm curious as to why autovacuum is not designed to do full vacuum. I
Because nothing that runs
Chris Browne [EMAIL PROTECTED] writes:
[EMAIL PROTECTED] (Alvaro Herrera) writes:
Even a database-wide vacuum does not take locks on more than one table.
The table locks are acquired and released one by one, as the operation
proceeds.
Has that changed recently? I have always seen vacuumdb
Tom Lane wrote:
Alessandro Baretta [EMAIL PROTECTED] writes:
I am aware that what I am dreaming of is already available through
cursors, but in a web application, cursors are bad boys, and should be
avoided. What I would like to be able to do is to plan a query and run
the plan to retrieve a
On Tue, Jan 17, 2006 at 08:56:00PM +0100, Alessandro Baretta wrote:
I understand most of these issues, and expected this kind of reply. Please,
allow me to insist that we reason on this problem and try to find a
solution. My reason for doing so is that the future software industry is
likely to
Craig A. James wrote:
Alessandro Baretta [EMAIL PROTECTED] writes:
I think you're trying to do something at the wrong layer of your
architecture. This task normally goes in your middleware layer, not
your database layer.
I am developing my applications in Objective Caml, and I have
Hi,
I have a postgres 8.1.1 table with over 29 million rows in it. The column
(file_name) that I need to search on has entries like the following:
MOD04_L2.A2005311.1400.004.2005312013848.hdf
MYD04_L2.A2005311.0700.004.2005312013437.hdf
Alessandro Baretta [EMAIL PROTECTED] writes:
* When the cursor state is pushed back to the backend, no new
transaction is instantiated, but the XID of the original transaction
is reused. In the MVCC system, this allows us to achieve a perfectly
consistent view of the database at the instant
On Tue, Jan 17, 2006 at 09:06:53PM +0100, Alessandro Baretta wrote:
Craig A. James wrote:
Alessandro Baretta [EMAIL PROTECTED] writes:
I think you're trying to do something at the wrong layer of your
architecture. This task normally goes in your middleware layer, not
your database
Yantao Shi [EMAIL PROTECTED] writes:
testdbspc=# explain select file_name from catalog where file_name like
'MOD04_L2.A2005311.%.004.2005312013%.hdf';
QUERY PLAN
Seq Scan on catalog (cost=0.00..429.00 rows=1 width=404)
Filter: (file_name ~~ 'MOD04_L2.A2005311.%.004.2005312013%.hdf'::text)
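One likely fix (my sketch, not a suggestion from the thread): in a non-C locale a plain btree index cannot serve LIKE at all, but an index built with the pattern operator class lets the planner turn the constant prefix before the first % into an index range scan:

```sql
-- Use text_pattern_ops instead if file_name is of type text.
CREATE INDEX catalog_file_name_idx
    ON catalog (file_name varchar_pattern_ops);
ANALYZE catalog;

EXPLAIN SELECT file_name FROM catalog
 WHERE file_name LIKE 'MOD04_L2.A2005311.%.004.2005312013%.hdf';
-- Expect an index scan driven by the prefix 'MOD04_L2.A2005311.';
-- the rest of the pattern is applied as a filter on the matching rows.
```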
Hmmm, this looks like a planner bug to me:
Hash Join (cost=870.00..992.56 rows=1 width=96) (actual time=90.566..125.782 rows=472 loops=1)
  Hash Cond: (("outer".host_id = "inner".host_id) AND ("outer"."?column2?" = "inner".mtime))
  -> HashAggregate (cost=475.88..495.32 rows=1555 width=16) (actual
On Tue, Jan 17, 2006 at 08:56:00PM +0100, Alessandro Baretta wrote:
I understand most of these issues, and expected this kind of reply. Please,
allow me to insist that we reason on this problem and try to find a
solution. My reason for doing so is that the future software industry is
likely
I am developing my applications in Objective Caml, and I have written the
middleware layer myself. I could easily implement a cursor-pooling strategy,
but
there is no perfect solution to the problem of guaranteeing that cursors be
closed. Remember that web applications require the user to
I'm trying to query a table with 250,000+ rows. My
query requires I provide 5 columns in my "order by" clause:
select column
from table
where column = '2004-3-22 0:0:0'
order by
ds.receipt desc,
ds.carrier_id asc,
ds.batchnum asc,
encounternum asc,
ds.encounter_id ASC
limit 100 offset 0
J,
I have an index built for each of these columns in my order by clause.
This query takes an unacceptable amount of time to execute. Here are the
results of the explain:
You need a single index which has all five columns, in order.
--
--Josh
Josh Berkus
Aglio Database Solutions
San
I created the index, in order. Did a vacuum analyze on the table and my
explain still says:
Limit (cost=229610.78..229611.03 rows=100 width=717)
  -> Sort (cost=229610.78..230132.37 rows=208636 width=717)
       Sort Key: receipt, carrier_id, batchnum, encounternum, encounter_id
       -> Seq
On Tue, 17 Jan 2006, Josh Berkus wrote:
J,
I have an index built for each of these columns in my order by clause.
This query takes an unacceptable amount of time to execute. Here are the
results of the explain:
You need a single index which has all five columns, in order.
I think
On Tue, Jan 17, 2006 at 03:50:38PM +0100, Michael Riess wrote:
Well,
I think that the documentation is not exactly easy to understand. I
always wondered why there are no examples for common postgresql
configurations. All I know is that the default configuration seems to be
too low for
On Tue, 17 Jan 2006 16:12:59 -0500
[EMAIL PROTECTED] wrote:
In the mean time, I successfully use LIMIT and OFFSET without such an
optimization, and things have been fine for me.
Same here.
-
Frank Wiles [EMAIL PROTECTED]
http://www.wiles.org
Alessandro,
I understand most of these issues, and expected this kind of reply.
Please, allow me to insist that we reason on this problem and try to
find a solution. My reason for doing so is that the future software
industry is likely to see more and more web applications retrieving data
try adding the keyword 'date' before the date in your query.
I ran into this quite a while back, but I'm not sure I remember the solution.
In Reply to: Tuesday January 17 2006 04:29 pm, [EMAIL PROTECTED] wrote:
I created the index, in order. Did a vacuum analyze on the
People:
To follow up further, what Alessandro is talking about is known as a
keyset cursor. Sybase and SQL Server used to support them; I believe
that they were strictly read-only and had weird issues with record
visibility.
--
--Josh
Josh Berkus
Aglio Database Solutions
San Francisco
On Tue, 17 Jan 2006 [EMAIL PROTECTED] wrote:
I created the index like this:
CREATE INDEX rcbee_idx
ON detail_summary
USING btree
(receipt, carrier_id, batchnum, encounternum, encounter_id);
Is this correct ?
That would work if you were asking for all the columns ascending or
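To spell that out (a sketch of mine, not from the thread): a btree index matches an ORDER BY only when every column's direction matches the index, or is its exact mirror via a backward scan. The thread's mix (receipt DESC, the rest ASC) fits neither in 8.1; later releases (8.3+) can encode the directions in the index itself:

```sql
-- 8.3+ syntax; in 8.1 the usual workaround was a reverse-sort operator
-- class, or rewriting the query to a direction an index can satisfy.
CREATE INDEX rcbee_mixed_idx
    ON detail_summary
    (receipt DESC, carrier_id ASC, batchnum ASC,
     encounternum ASC, encounter_id ASC);
```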
On Sat, Jan 14, 2006 at 01:41:43PM -0500, Tom Lane wrote:
Benjamin Arai [EMAIL PROTECTED] writes:
Right now I run sync after the updates have finished to ensure that the
data is synced to disk but I am concerned about the segment data and
anything else I am missing that PostgreSQL explicitly
I've read all of this info, closely. I wish when I was searching for an
answer for my problem these pages came up. Oh well.
I am getting an idea of what I need to do to make this work well. I was
wondering if there is more information to read on how to implement this
solution in a more simple
[EMAIL PROTECTED] wrote:
On Tue, Jan 17, 2006 at 08:56:00PM +0100, Alessandro Baretta wrote:
What is wrong with LIMIT and OFFSET? I assume your results are ordered
in some manner.
Especially with web users, who become bored if the page doesn't flicker
in a way that appeals to them, how could
Hi,
I have two tables foobar and foobar2 (which inherits from foobar, no
extra columns).
foobar2 has all the data (574,576 rows), foobar is empty.
Both foobar and foobar2 have an index on the only column 'id'. Now I
have a list of ids in a tmp_ids tables.
A query on foobar2 (child table)
Alessandro Baretta wrote:
I think you're trying to do something at the wrong layer of your
architecture. This task normally goes in your middleware layer, not
your database layer.
I am developing my applications in Objective Caml, and I have written
the middleware layer myself. I could
Mark Kirkwood [EMAIL PROTECTED] writes:
SELECT ... FROM table WHERE ... ORDER BY id LIMIT 20;
Suppose this displays records for id 1 - 10020.
When the user hits next, the page saves id=10020 in the session state
and executes:
SELECT ... FROM table WHERE ... AND id > 10020 ORDER BY id
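The pattern as two concrete statements (a sketch; table and column names follow the thread's placeholders):

```sql
-- Page 1: remember the last id the user saw (here, 10020).
SELECT ... FROM table WHERE ... ORDER BY id LIMIT 20;

-- Page 2: restart the index scan just past it; no OFFSET needed.
SELECT ... FROM table WHERE ... AND id > 10020 ORDER BY id LIMIT 20;
-- With an index on id this costs about the same for page 1 and page 500,
-- while OFFSET must walk and discard every earlier row each time.
```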
Hari Warrier [EMAIL PROTECTED] writes:
A query on foobar2 (child table) uses the index, whereas the same query
via foobar (parent) doesn't.
A query just on foobar should be able to use the index AFAIR. The
problem here is that you have a join, and we are not very good about
situations
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of [EMAIL PROTECTED]
Sent: Wednesday, 18 January 2006 07:23
To: Stephan Szabo
Cc: Josh Berkus; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Multiple Order By Criteria
I've read all of this info,
Tom Lane wrote:
Mark Kirkwood [EMAIL PROTECTED] writes:
SELECT ... FROM table WHERE ... ORDER BY id LIMIT 20;
Suppose this displays records for id 1 - 10020.
When the user hits next, the page saves id=10020 in the session state
and executes:
SELECT ... FROM table WHERE ... AND id