Yeah, I saw it. It says full-page-writes can be disabled without problems.
But I wanted to confirm that fsync cannot be disabled, even though I have a battery-backed cache.
Thanks!! :-)
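For reference, the two settings under discussion live in postgresql.conf; a minimal sketch with the 8.1-era parameter names (defaults shown):

```ini
# postgresql.conf (sketch)
fsync = on              # leave on: disabling risks data loss on any crash
full_page_writes = on   # only consider turning this off with a battery-backed
                        # controller that guarantees atomic page writes
```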
We do mention battery-backed cache in our docs:
http://www.postgresql.org/docs/8.1/static/wal.html
If it is unclear,
Hi,
Is it interesting to do it when using RAID 1+0?
Thx
Hi all
I've a question about vacuuming, ...
Vacuum: cleans out obsolete and deleted rows...
Analyze: update statistics for the planner
Reindex: rebuild indexes
I think the correct order to perform the database maintenance for performance is:
1 - Vacuum
2 - Reindex
3 - Analyze
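In psql, the three steps above would look like this (mytable is just a placeholder name):

```sql
VACUUM mytable;        -- 1: reclaim space from dead rows
REINDEX TABLE mytable; -- 2: rebuild the table's indexes
ANALYZE mytable;       -- 3: refresh planner statistics
```

VACUUM ANALYZE mytable; combines steps 1 and 3 in a single pass.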
Centuries ago, Nostradamus foresaw when [EMAIL PROTECTED] (Javier Somoza) would
write:
Hi all
I've a question about vacuuming, ...
Vacuum: cleans out obsolete and deleted rows...
Analyze: update statistics for the planner
At 04:45 AM 2/28/2006, Javier Somoza wrote:
Hi,
is interesting to do it (use different HD sets AKA LUNs for xlogs
than for data) when using RAID 1+0?
If "interesting" means that this increases performance, this is not a
simple question.
Regardless of what RAID level you use, under the proper
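For completeness, the usual way to put the xlogs on a separate disk set in 8.1 is to move the pg_xlog directory and symlink it back, with the server stopped (the paths here are just examples):

```sh
pg_ctl -D /var/lib/pgsql/data stop
mv /var/lib/pgsql/data/pg_xlog /mnt/wal_lun/pg_xlog
ln -s /mnt/wal_lun/pg_xlog /var/lib/pgsql/data/pg_xlog
pg_ctl -D /var/lib/pgsql/data start
```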
Just a "wouldn't it be nice if" sort of feature request. I'm not sure
how practical it is.
Someone in our organization wrote a data fix query, which has sort of
odd logic, but it does what they need. The problem is that it ran for
14 hours in a test against a copy of the data. I looked at it and
On Tue, Feb 28, 2006 at 09:44:08AM -0600, Kevin Grittner wrote:
It struck me that it would be outstanding if the planner could
recognize this sort of situation, and build a temporary index based on
the snapshot of the data visible to the transaction. It seems to me
that the obvious downside
Kevin Grittner [EMAIL PROTECTED] writes:
It struck me that it would be outstanding if the planner could
recognize this sort of situation, and build a temporary index based on
the snapshot of the data visible to the transaction.
I don't think that's an appropriate solution at all. What it
Jim C. Nasby [EMAIL PROTECTED] writes:
FWIW, Sybase supported something similar a long time ago. It had the
ability to build a temporary 'clustered table' (think index organized
table) when there was enough benefit to do so. This is actually
much easier to make happen inside a transaction for
On Tue, Feb 28, 2006 at 11:05 am, in message
[EMAIL PROTECTED],
Tom Lane [EMAIL PROTECTED] wrote:
The issue at hand really has nothing to do with temp indexes; it's with
the constrained way that the planner deals with EXISTS subplans.
Yet when the index exists, the query is optimized well.
Oops, sorry.
Actually, you can't assume that a BBU means you can safely disable
full-page-writes. Depending on the controller, it's still possible to
end up with partially written pages.
BTW, if your mailer makes doing so convenient, it would be nice to trim
down your .signature; note that
Kevin Grittner [EMAIL PROTECTED] writes:
EXISTS works in situations where
you need to compare on multiple columns, so it is useful in many
situations where IN or MIN/MAX techniques just don't work.
IN works fine on multiple columns:
(foo, bar, baz) IN (SELECT x, y, z FROM ...)
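Spelled out with made-up table and column names, the two equivalent forms look like this:

```sql
-- EXISTS form: not flattened by the 8.1 planner
SELECT o.* FROM orders o
WHERE EXISTS (SELECT 1 FROM shipments s
              WHERE s.order_id = o.id AND s.warehouse = o.warehouse);

-- Row-wise IN form: same result, and the planner can flatten it
SELECT o.* FROM orders o
WHERE (o.id, o.warehouse) IN (SELECT s.order_id, s.warehouse FROM shipments s);
```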
On Tue, Feb 28, 2006 at 11:55:32AM -0600, Kevin Grittner wrote:
Also, EXISTS works in situations where
you need to compare on multiple columns, so it is useful in many
situations where EXISTS or MIN/MAX techniques just don't work.
Sorry. That should have read:
EXISTS works in situations
Ok, you absolutely can't guarantee you won't get partial page writes
then. A UPS buys you no more data safety than being plugged directly
into the wall. UPS's fail. People trip over cords. Breakers fail. Even
if you have multiple power supplies on multiple circuits fed by
different UPS's you can
On Tue, Feb 28, 2006 at 11:36:28AM -0600, Kevin Grittner wrote:
digression
I'm all for that. So far, we've been going after the low-hanging fruit
in our use of PostgreSQL. When we get to the main applications, we're
going to be dealing with a lot more in the way of EXISTS clauses. The
On Tue, Feb 28, 2006 at 11:05 am, in message
[EMAIL PROTECTED],
Tom Lane [EMAIL PROTECTED] wrote:
The limiting factor is that EXISTS subplans aren't flattened ... and
once that's fixed, I doubt the example would need any new kind of join
support.
I rewrote the query to use IN predicates
PFC [EMAIL PROTECTED] writes:
Just a stupid question about the various fsync settings.
There is fsync=off, but is there fsync=fflush ?
fflush would mean only an OS crash could cause data loss,
I think. It could be useful for some applications where you need a speed
Hi list
I'm fairly new to Postgres so bear with me. Googling and searching the
list, I didn't find anything that resembled my problem.
I have a large table with ca. 10 million inserts per day (fairly simple
data: a timestamp, a couple of IDs, and a varchar message).
I run a query every couple of
sorry: Postgres 8.0.2 server. The EXPLAIN ANALYZE for the query looks
like this:
explain analyze
select syslog.logs.eventtime,
       assets.hosts.name,
       syslog.processes.name as process
from syslog.logs, assets.hosts, assets.ipaddrs, assets.macaddrs, syslog.processes
where msg like '%session opened for user
I have a table that has only a few records in it at a time, and they
get deleted every few seconds and new records are inserted. Table never
has more than 5-10 records in it.
However, I noticed a deteriorating performance in deletes and inserts
on it. So I performed vacuum analyze on it three
I am using triggers for all the events (insert, delete, update); please
find the details below.
trg_delpointtable BEFORE DELETE ON pointtable FOR EACH ROW EXECUTE
PROCEDURE pp_delpointtable()
trg_insdelpoints AFTER DELETE ON pointtable FOR EACH ROW EXECUTE
PROCEDURE pp_insdelpoints()
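Reconstructed as the CREATE TRIGGER statements that would produce those definitions (the trigger function bodies themselves are not shown in the message):

```sql
CREATE TRIGGER trg_delpointtable
    BEFORE DELETE ON pointtable
    FOR EACH ROW EXECUTE PROCEDURE pp_delpointtable();

CREATE TRIGGER trg_insdelpoints
    AFTER DELETE ON pointtable
    FOR EACH ROW EXECUTE PROCEDURE pp_insdelpoints();
```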
On Mon, Feb 27, 2006 at 06:48:02 -0800,
Nik [EMAIL PROTECTED] wrote:
I have a table that has only a few records in it at a time, and they
get deleted every few seconds and new records are inserted. Table never
has more than 5-10 records in it.
However, I noticed a deteriorating
On Feb 24, 2006, at 23:13 , jcfischer wrote:
Is there a way to limit the expensive query to only those last 1000
(or whatever) results?
I have tried to nest SELECTs but my SQL-fu is too limited to get
anything through the SQL processor :-)
The basics of a subquery are:
SELECT expensive
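A sketch of the idea, with assumed names (a logs table with an eventtime column): the subquery grabs only the newest 1000 rows, so the expensive work in the outer query touches just those.

```sql
SELECT recent.*                     -- expensive expressions would go here
FROM (
    SELECT *
    FROM logs                       -- assumed table name
    ORDER BY eventtime DESC
    LIMIT 1000
) AS recent;
```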