On 19/11/2009 12:22 PM, Scott Carey wrote:
3: Have PG wait a half second (configurable) after the checkpoint fsync()
completes before deleting/overwriting any WAL segments. This would be a
trivial feature to add to a postgres release, I think.
How does that help? It doesn't provide any
Scott Carey wrote:
For your database DATA disks, leaving the write cache on is 100% acceptable,
even with power loss, and without a RAID controller. And even in high write
environments.
That is what the XLOG is for, isn't it? That is where this behavior is
critical. But that has completely
Greg Smith wrote:
Scott Carey wrote:
For your database DATA disks, leaving the write cache on is 100% acceptable,
even with power loss, and without a RAID controller. And even in high write
environments.
That is what the XLOG is for, isn't it? That is where this behavior is
critical.
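(For anyone who wants to check the drive-level cache being discussed: on Linux, hdparm can report or disable a SATA disk's volatile write cache. A minimal sketch; /dev/sda is only an example device name, and you would point it at the disk holding pg_xlog.)

    hdparm -W /dev/sda      # report whether the drive's write cache is enabled
    hdparm -W 0 /dev/sda    # disable it (run as root; example device)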
Scott Carey wrote:
Moral of the story: Nothing is 100% safe, so sometimes a small bit of KNOWN
risk is perfectly fine. There is always UNKNOWN risk. If one risks losing
256K of cached data on an SSD if you're really unlucky with timing, how
dangerous is that versus the chance that the raid
Hello,
I've inherited some very...interestingly...
designed tables, and am trying to figure out how to make them usable. I've
got an ugly hack in place, but it will not use an index properly, and I'm
hoping someone will be able to point me in the right direction.
Production is running 8.1.3,
On Thu, Nov 19, 2009 at 10:01 AM, Merlin Moncure mmonc...@gmail.com wrote:
On Wed, Nov 18, 2009 at 11:39 PM, Scott Carey sc...@richrelevance.com wrote:
Well, that is sort of true for all benchmarks, but I do find that bonnie++
is the worst of the bunch. I consider it relatively useless
On Thursday, 19 November 2009 13:29:56, Craig Ringer wrote:
On 19/11/2009 12:22 PM, Scott Carey wrote:
3: Have PG wait a half second (configurable) after the checkpoint
fsync() completes before deleting/overwriting any WAL segments. This
would be a trivial feature to add to a
Are the FSM parameters for each database, or the entire Postgres system? In
other words, if I have 100 databases, do I need to increase max_fsm_pages and
max_fsm_relations by a factor of 100, or keep them the same as if I just have
one database?
I suspect they're per-database, i.e. as I add
Craig James wrote:
Are the FSM parameters for each database, or the entire Postgres
system? In other words, if I have 100 databases, do I need to increase
max_fsm_pages and max_fsm_relations by a factor of 100, or keep them the
same as if I just have one database?
I suspect they're
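(For context: max_fsm_pages and max_fsm_relations are set in postgresql.conf and are shared by the whole cluster (through 8.3), so they should be sized for the sum of all databases rather than per database. The values below are purely illustrative.)

    # postgresql.conf -- one setting for the entire cluster, not per database
    max_fsm_relations = 10000     # roughly: tables + indexes, summed over all databases
    max_fsm_pages = 2000000       # pages with free space, summed over all databases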
On Thu, 2009-11-19 at 19:01 +0100, Anton Rommerskirchen wrote:
On Thursday, 19 November 2009 13:29:56, Craig Ringer wrote:
On 19/11/2009 12:22 PM, Scott Carey wrote:
3: Have PG wait a half second (configurable) after the checkpoint
fsync() completes before deleting/overwriting any
Scott Carey wrote:
Have PG wait a half second (configurable) after the checkpoint fsync()
completes before deleting/overwriting any WAL segments. This would be a
trivial feature to add to a postgres release, I think. Actually, it
already exists! Turn on log archiving, and have the script
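(What the script should do is cut off above; as one illustrative reading only, here is a minimal sketch in which a hypothetical wrapper script delays before copying. PostgreSQL will not recycle a WAL segment until archive_command exits with status 0, so the delay keeps the segment on disk. The script name and paths are assumptions, not from the thread.)

    # postgresql.conf (8.3 and later also need archive_mode = on)
    archive_command = '/usr/local/bin/delayed_archive.sh %p %f'

    #!/bin/sh
    # delayed_archive.sh -- hypothetical wrapper: wait a bit, then archive the segment
    sleep 1                                    # extra settling time before the copy
    cp "$1" /var/lib/pgsql/wal_archive/"$2"    # a non-zero exit would keep the segment around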
Scott Marlowe wrote:
On Thu, Nov 19, 2009 at 10:01 AM, Merlin Moncure mmonc...@gmail.com wrote:
pgbench is actually a pretty awesome i/o tester assuming you have a big
enough scaling factor
Seeing as how pgbench only goes to a scaling factor of 4000, are there any
plans on enlarging that number?
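(For reference, the scaling factor is chosen at initialization time; each unit is roughly 15 MB of data, so a scale of 4000 builds something on the order of 60 GB. A minimal sketch, with pgbench_db as a made-up database name:)

    pgbench -i -s 4000 pgbench_db       # initialize with scaling factor 4000
    pgbench -c 16 -t 10000 pgbench_db   # then run 16 clients, 10000 transactions each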
On Thu, Nov 19, 2009 at 4:10 PM, Greg Smith g...@2ndquadrant.com wrote:
You can use pgbench to either get interesting peak read results, or peak
write ones, but it's not real useful for things in between. The standard
test basically turns into a huge stack of writes to a single table, and the
On Thu, Nov 19, 2009 at 2:39 PM, Merlin Moncure mmonc...@gmail.com wrote:
On Thu, Nov 19, 2009 at 4:10 PM, Greg Smith g...@2ndquadrant.com wrote:
You can use pgbench to either get interesting peak read results, or peak
write ones, but it's not real useful for things in between. The standard
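(The read/write split described above maps onto pgbench's modes: the default transaction mix is write-heavy, while -S runs select-only transactions for a read-mostly test. Sketch only; the database name is made up.)

    pgbench -S -c 16 -t 10000 pgbench_db   # select-only: peak read behaviour
    pgbench -c 16 -t 10000 pgbench_db      # default mix: write-heavy behaviour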
Hi All,
I have a stats collection system where I collect stats at specific
intervals (from network monitoring nodes), and stuff them into a
PostgreSQL DB. To make the retrieval faster, I'm using a
partitioning scheme as follows:
stats_300: data gathered at 5 mins, child tables named
Dear All,
I've just joined this list, so let me first thank you in advance for
your hospitality.
I'm having lots of trouble with variously slow running queries on a
production system. I've tried all the obvious fixes: changing the
query planner, checking for indexing, autovacuum, making
Richard --
You might post the results of EXPLAIN ANALYZE your SQL here; ... be sure
to run it in a transaction if you want to be able to roll it back. Perhaps try
EXPLAIN your SQL; first, as it is faster, but EXPLAIN ANALYZE shows what the
planner is doing.
You wrote:
P.S. Sometimes, some
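(To make the rollback advice concrete: EXPLAIN ANALYZE actually executes the statement, so a data-modifying query can be wrapped like this. The UPDATE is only a placeholder.)

    BEGIN;
    EXPLAIN ANALYZE
        UPDATE my_table SET hits = hits + 1 WHERE id < 1000;   -- placeholder statement
    ROLLBACK;   -- keeps the plan and timings, discards the data change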
Thanks for your help. This issue splits into 2 bits:
1. Fixing specific queries.
2. Finding out when a specific running query is going to complete.
(At the moment, this is the bit I really need to know).
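(On point 2: PostgreSQL won't estimate when a running query will finish, but you can at least see how long each backend's current query has been running. A small sketch using the 8.x column names that match the versions in this thread; command-string tracking must be enabled for the query text to show.)

    SELECT procpid, now() - query_start AS runtime, current_query
      FROM pg_stat_activity
     ORDER BY query_start;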
Greg Williamson wrote:
Richard --
You might post the results of EXPLAIN ANALYZE your
Greg Williamson wrote:
Richard --
You might post the results of EXPLAIN ANALYZE your SQL here; ...
be sure to run it in a transaction if you want to be able to roll it
back. Perhaps try EXPLAIN your SQL; first, as it is faster, but
EXPLAIN ANALYZE shows what the planner is doing.
Is
Greg Williamson wrote:
Richard --
You might post the results of EXPLAIN ANALYZE your SQL here; ...
be sure to run it in a transaction if you want to be able to roll it
back. Perhaps try EXPLAIN your SQL; first, as it is faster, but
EXPLAIN ANALYZE shows what the planner is doing.
Here's
Hrishikesh (हृषीकेश मेहेंदळे) hashincl...@gmail.com writes:
To make the retrieval faster, I'm using a
partitioning scheme as follows:
stats_300: data gathered at 5 mins, child tables named stats_300_t1_t2
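(For readers following along, a bare-bones sketch of that kind of scheme using 8.x inheritance partitioning; the column names and date range are invented for illustration.)

    CREATE TABLE stats_300 (
        captured_at  timestamptz NOT NULL,
        node_id      integer,
        value        bigint
    );

    -- one child per interval, e.g. stats_300_t1_t2 covering [t1, t2)
    CREATE TABLE stats_300_t1_t2 (
        CHECK (captured_at >= '2009-11-01' AND captured_at < '2009-11-02')
    ) INHERITS (stats_300);
    CREATE INDEX stats_300_t1_t2_time_idx ON stats_300_t1_t2 (captured_at);

    -- postgresql.conf: constraint_exclusion = on lets scans skip irrelevant children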