On Sep 12, 2005, at 6:02 PM, Brandon Black wrote:
> - using COPY instead of INSERT ?
> (should be easy to do from the aggregators)

Possibly, although it would kill the current design of returning the database transaction status for a single client packet back to the client on
On Sep 12, 2005, at 6:02 PM, Brandon Black wrote:
> - splitting the xlog and the data on distinct physical drives or arrays

That would almost definitely help; I haven't tried it yet. Speaking of the xlog, does anyone know anything specific about the WAL tuning parameters for heavy concurrent write
* Brandon Black ([EMAIL PROTECTED]) wrote:
Ideally I'd like to commit the data separately, as the data could contain
errors which abort the transaction, but it may come down to batching it and
coding things such that I can catch and discard the offending row and retry
the transaction if it
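The batch-and-retry idea above can be sketched in pure Python. This is only an illustration of the control flow: the `insert_batch` and `insert_one` callables are hypothetical stand-ins for the real database layer (none of these names come from the thread).

```python
def insert_with_fallback(rows, insert_batch, insert_one):
    """Try all rows in one transaction; if the batch aborts, retry
    row by row so a single bad row doesn't discard the whole batch."""
    try:
        insert_batch(rows)            # one transaction for the whole batch
        return []                     # nothing rejected
    except Exception:
        rejected = []
        for row in rows:
            try:
                insert_one(row)       # each row in its own transaction
            except Exception:
                rejected.append(row)  # discard only the offending row
        return rejected
```

The happy path costs one transaction per batch; only a failing batch pays the per-row price.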
On 9/12/05, Christopher Petrilli [EMAIL PROTECTED] wrote:
3) Use 8.1 and strongly look at Bizgres. The data partitioning is critical.
I started looking closer at my options for partitioning (inheritance,
union all), and at Bizgres today. Bizgres partitioning appears to be
basically the same kind
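For reference, the inheritance approach mentioned above amounts to creating child tables with CHECK constraints. A hedged sketch of generating one per-day child's DDL; the parent table name, the `ts` timestamp column, and the daily granularity are all assumptions for illustration, not details from the thread:

```python
from datetime import date, timedelta

def partition_ddl(parent, day):
    """DDL for one per-day child partition using the PostgreSQL 8.x
    inheritance + CHECK-constraint scheme (hypothetical 'ts' column)."""
    child = parent + "_" + day.strftime("%Y%m%d")
    nxt = day + timedelta(days=1)
    return (
        "CREATE TABLE " + child + " (\n"
        "    CHECK (ts >= '" + str(day) + "' AND ts < '" + str(nxt) + "')\n"
        ") INHERITS (" + parent + ");"
    )
```

Queries then go either through 8.1's constraint exclusion against the parent, or through a UNION ALL view over the children.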
> I know I haven't provided a whole lot of application-level detail here,

You did!

What about:
- using COPY instead of INSERT?
  (should be easy to do from the aggregators)
- using Bizgres?
  (which was designed for your
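Switching the aggregators to COPY mostly means emitting PostgreSQL's COPY text format instead of INSERT statements: tab-separated fields, newline-terminated rows, \N for NULL, and special characters escaped. A small sketch of that encoding (the helper name is invented):

```python
def copy_encode(rows):
    """Encode rows for COPY ... FROM STDIN in text format:
    tab-separated fields, newline-terminated rows, \\N for NULL."""
    def field(value):
        if value is None:
            return r"\N"
        s = str(value)
        # backslash must be escaped first, then COPY's delimiter characters
        return (s.replace("\\", "\\\\")
                 .replace("\t", "\\t")
                 .replace("\n", "\\n")
                 .replace("\r", "\\r"))
    return "".join("\t".join(field(v) for v in row) + "\n" for row in rows)
```

The resulting string can be streamed to the server in one round trip per batch rather than one per row.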
"Brandon Black" [EMAIL PROTECTED] wrote ...
> Increasing shared_buffers seems to always help, even out to half of
> the dev box's ram (2G).

Though officially PG does not prefer a huge shared_buffers size, I did see several times that performance was boosted in case IO is the
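For reference, in the 8.0-era postgresql.conf this setting is counted in 8 kB buffer pages, so "half of the dev box's 2G" works out roughly as below. The value is purely illustrative sizing arithmetic, not a recommendation from the thread:

```
# postgresql.conf -- shared_buffers is counted in 8 kB pages in 8.0
# 262144 pages * 8 kB = 2 GB (illustrative sizing only)
shared_buffers = 262144
```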
Split your system into multiple partitions of RAID 10s. For max
performance, ten drive RAID 10 for pg_xlog (this will max out a
PCI-X bus) on Bus A, multiple 4/6 drive RAID 10s for tablespaces on Bus
B. For max performance I would recommend using one RAID 10 for raw data
tables, one for aggregate
On 9/12/05, PFC [EMAIL PROTECTED] wrote:
> > I know I haven't provided a whole lot of application-level detail here,
> You did!
> What about:
> - using COPY instead of INSERT?
> (should be easy to do from the aggregators)

Possibly, although it would kill the current design of returning the
database
Brandon Black wrote:
On 9/12/05, *PFC* [EMAIL PROTECTED] wrote:
> - benchmarking something else than ext3
> (xfs ? reiser3 ?)

We've had bad experiences under extreme and/or strange workloads with
XFS here in general, although this
Brandon Black [EMAIL PROTECTED] writes:
The vast, overwhelming majority of our database traffic is pretty much a
non-stop stream of INSERTs filling up tables.
That part Postgres should handle pretty well. It should be pretty much limited
by your I/O bandwidth so big raid 1+0 arrays are
On 9/12/05, Brandon Black [EMAIL PROTECTED] wrote:
I'm in the process of developing an application which uses PostgreSQL for
data storage. Our database traffic is very atypical, and as a result it has
been rather challenging to figure out how to best tune PostgreSQL on what
development
On 12 Sep 2005 23:07:49 -0400, Greg Stark [EMAIL PROTECTED] wrote:
The WAL parameters like commit_delay and commit_siblings are a bit of a
mystery. Nobody has done any extensive testing of them. It would be quite
helpful if you find anything conclusive and post it. It would also be
surprising if
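For anyone wanting to experiment, both knobs live in postgresql.conf. The values below are purely illustrative starting points for such testing; as the post notes, nothing conclusive has been published about them:

```
# postgresql.conf -- WAL group-commit knobs (untested, illustrative values)
commit_delay = 100       # usec to sleep before WAL flush, hoping to group commits
commit_siblings = 5      # only delay when at least this many other
                         # transactions are active
```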
On 9/12/05, Christopher Petrilli [EMAIL PROTECTED] wrote:
2) Tune ext3. The default configuration wrecks high-write situations.
Look into data="" for mounting, turning off atime updates (I hope
you've done this already), and also modifying the scheduler to
the elevator model. This is poorly
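As a sketch of what those ext3 tweaks might look like in practice. The device, mount point, and the choice of data=writeback are assumptions for illustration (writeback journaling trades crash-safety guarantees for write speed), not settings confirmed by the post:

```
# /etc/fstab -- write-heavy ext3 data volume (illustrative)
/dev/sdb1  /var/lib/pgsql  ext3  noatime,data=writeback  0 0

# 2.6-kernel boot parameter to switch the I/O scheduler (illustrative)
elevator=deadline
```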