If your write size is smaller than chunk_size*N (N = number of data blocks
in a stripe), in order to calculate correct parity you have to read data
from the remaining drives.
Neil explained it in this message:
http://marc.theaimsgroup.com/?l=linux-raid&m=108682190730593&w=2
Guy
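The arithmetic behind that read-modify-write penalty can be sketched in a few lines (an illustrative sketch, not code from the md driver; the function names are mine). A full-stripe write computes parity as the XOR of all data chunks with no reads; a smaller write must first read the old data chunk and old parity so the parity can be patched:

```python
from functools import reduce

def stripe_parity(chunks):
    """Full-stripe write: parity is the XOR of all data chunks; no reads needed."""
    return bytes(reduce(lambda a, b: a ^ b, cols) for cols in zip(*chunks))

def rmw_parity(old_parity, old_chunk, new_chunk):
    """Small write (read-modify-write): the old chunk and old parity must be
    read from disk first, then new_parity = old_parity XOR old_chunk XOR new_chunk."""
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_chunk, new_chunk))
```

The two extra reads per small write are exactly why writes smaller than chunk_size*N pay a latency penalty on parity RAID.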
really, really bad fragmentation,
which affects sequential scan operations (VACUUM, ANALYZE, REINDEX ...)
quite drastically. We have in-house patches that somewhat alleviate this,
but they are not release quality. Has anybody else suffered this?
Guy Thornley
journalling filesystem, without an 'ordered write' mode, it's
possible to end up with corrupt heaps after a crash because of garbage data
in the extended files.
If/when we move to postgres 8 I'll try to ensure the patches get re-done
with releasable quality
Guy Thornley
no longer need a VACUUM when
postgres starts, to learn about free space ;)
- Guy Thornley
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
http://www.postgresql.org/docs/faq
and says 'deal with this'.
(Clearly the state object needs to contain all user and transaction state
the connection is involved in).
- Guy Thornley
to detect this, and prevent
it occurring anyway. I don't know anything about Linux's behaviour in this
area.
.Guy
the drive, you get the correct setting.
I tested this a while ago by writing a program that did fsync() to test
write latency and random-reads to test read latency, and then comparing
them.
- Guy
* I did experience a too-close-to-call case, where after write-cache was
disabled, the write
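A minimal version of such a write-latency probe might look like this (a sketch under my own assumptions; the original program isn't shown, and the function name and parameters here are invented). Each loop iteration writes a block and forces it to stable storage with fsync(), so the average per-iteration time approximates synchronous write latency:

```python
import os
import time

def fsync_latency(path, n=100, blocksize=8192):
    """Rough average latency of an fsync'd write, in seconds per write."""
    buf = os.urandom(blocksize)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        t0 = time.perf_counter()
        for _ in range(n):
            os.write(fd, buf)
            os.fsync(fd)  # force the block to stable storage (unless the drive cache lies)
        return (time.perf_counter() - t0) / n
    finally:
        os.close(fd)
```

If the drive's write cache is enabled and lying, the measured latency will be implausibly low (well under one platter rotation), which is the tell the original test relied on.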
to improve this picture any further. I'd
appreciate some suggestions. Thanks.
--
Guy Rouillier
against
BigDBMS, so any penalty from this approach should be evenly felt.
--
Guy Rouillier
your tests...
Thanks to everyone for providing suggestions, and I apologize for my
delay in responding to each of them.
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)
On 12/28/06, Guy Rouillier [EMAIL PROTECTED] wrote:
logic is unchangeable; millions of rows of data
in a single table will be updated throughout the day. If PG can't
handle high volume updates well, this may be a brick wall.
--
Guy Rouillier
= 128MB
work_mem = 16MB
maintenance_work_mem = 64MB
temp_buffers = 32MB
max_fsm_pages = 204800
checkpoint_segments = 30
redirect_stderr = on
log_line_prefix = '%t %d'
--
Guy Rouillier
Dave Cramer wrote:
On 6-Jan-07, at 11:32 PM, Guy Rouillier wrote:
Dave Cramer wrote:
The box has 3 GB of memory. I would think that BigDBMS would be
hurt by this more than PG. Here are the settings I've modified in
postgresql.conf:
As I said you need to set shared_buffers to at least
changing the implementation.
--
Guy Rouillier
Dave Cramer wrote:
Is it possible that providing 128G of RAM is too much? Will other
systems in the server bottleneck?
What CPU and OS are you considering?
--
Guy Rouillier
returns from the Xeon northbridge
memory access. If you are willing to spend that kind of money on
memory, you'd be better off with Opteron or Sparc.
--
Guy Rouillier
significantly to long run times.
Guy Rouillier wrote:
I don't want to violate any license agreement by discussing performance,
so I'll refer to a large, commercial PostgreSQL-compatible DBMS only as
BigDBMS here.
I'm trying to convince my employer to replace BigDBMS with PostgreSQL
for at least some
as well as production
maturity with Barcelona, and can judge then which better fits your needs.
--
Guy Rouillier
applications, I imagine even that mark won't take
but a year or two to surpass.
--
Guy Rouillier
time
More platters means more tracks under the read heads at a time, so
generally *better* performance. All other things (like rotational
speed) being equal, of course.
--
Guy Rouillier
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
provide you
any performance numbers.
Differences in data structures, etc., are fairly easy to determine.
Anyone can read the Oracle documentation.
--
Guy Rouillier
was
introduced into the SQL vernacular by Codd and Date expressly to
represent unknown values.
--
Guy Rouillier
wrote
my own data access layer years ago, I expressly checked for empty
strings on input and changed them to null. I did this because empty
strings had a nasty way of creeping into our databases; writing queries
to produce predictable results got to be very messy.
--
Guy Rouillier
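Such a normalization layer can be as small as one function (an illustrative sketch; the helper name is mine, not the actual in-house code). The point is to coerce empty strings to None, which database drivers bind as SQL NULL, so queries against the column behave predictably:

```python
def normalize_empty(value):
    """Coerce empty or whitespace-only strings to None (bound as SQL NULL).
    Non-string values pass through unchanged."""
    if isinstance(value, str) and value.strip() == "":
        return None
    return value
```

Applying this at the data-access layer, before any INSERT or UPDATE bind, keeps empty strings from ever reaching the database.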
. That seems preferable to adding an additional column to every
nullable column.
But as you say, that would have to be taken up with the SQL
standardization bodies, and not PostgreSQL.
--
Guy Rouillier