Hello,

On 4-Apr-07, at 2:01 AM, Peter Schuller [EMAIL PROTECTED] wrote:

The next question then is whether anything in your postgres configuration
is preventing it from getting useful performance from the OS. What
settings have you changed in postgresql.conf?

The only options not commented out are the following (it's not even
tweaked for buffer sizes and such,
Hello,
I'd always do benchmarks with a realistic value of shared_buffers (i.e.
much higher than that).
Another thought that comes to mind is that the bitmap index scan does
depend on the size of work_mem.
Try increasing your shared_buffers to a reasonable working value (say
10%-15% of
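As a sketch of the kind of settings being discussed, a postgresql.conf fragment for an 8.1/8.2-era server, assuming roughly 4 GB of RAM (the values are illustrative, not taken from the thread):

```
# postgresql.conf -- illustrative values, assuming ~4 GB of RAM.
# On 8.1, shared_buffers counts 8 kB buffers and work_mem is in kB.
shared_buffers = 50000         # ~400 MB, roughly 10% of RAM
work_mem = 32768               # 32 MB per sort / bitmap scan node
effective_cache_size = 350000  # ~2.7 GB: pages the OS is expected to cache
```

work_mem matters here because a bitmap index scan that overflows it degrades to a lossy bitmap, which forces rechecking whole pages instead of individual tuples.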
Hello,
If you are dealing with timed data or similar, you may consider
partitioning your table(s).
Unfortunately this is not the case; the insertion is more or less
random (not quite, but for the purpose of this problem it is).
Thanks for the pointers though. That is sure to be useful in some
Hello,
SELECT * FROM test WHERE value = 'xxx' LIMIT 1000;
I tested this on a 14-way software RAID10 on FreeBSD, using PostgreSQL
8.1.6, and couldn't reproduce anything like it. With one client I get
about 200 disk requests per second, scaling almost exactly linearly for
the first 5 or so clients,
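The disk-level behaviour described above can be checked outside PostgreSQL with a small random-read microbenchmark. A minimal sketch in Python (the function name, file sizes, and duration are made up for illustration; this is not the poster's actual tool, and os.pread is POSIX-only):

```python
import os
import random
import tempfile
import threading
import time

def random_read_benchmark(path, n_threads, duration=0.2, block=8192):
    """Spawn n_threads workers doing random single-block reads from path
    and return the total number of reads completed across all workers."""
    size = os.path.getsize(path)
    counts = [0] * n_threads          # one slot per worker, no locking needed
    stop = time.monotonic() + duration

    def worker(i):
        fd = os.open(path, os.O_RDONLY)
        try:
            while time.monotonic() < stop:
                # Pick a random block-aligned-ish offset and read one block.
                off = random.randrange(0, size - block)
                os.pread(fd, block, off)
                counts[i] += 1
        finally:
            os.close(fd)

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(counts)

if __name__ == "__main__":
    # Small stand-in file; a real test needs a file far larger than RAM
    # so reads actually hit the disks rather than the page cache.
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(os.urandom(4 * 1024 * 1024))
        path = f.name
    try:
        for clients in (1, 2, 4):
            print(clients, random_read_benchmark(path, clients))
    finally:
        os.unlink(path)
```

On a cached small file this measures syscall overhead; the interesting numbers only appear when the working set dwarfs RAM and each read costs a seek.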
On 2007-04-02, Peter Schuller [EMAIL PROTECTED] wrote:
I have confirmed that I am seeing expected performance for random,
short and highly concurrent reads in one large (> 200 GB) file. The
I/O is done using libaio, however, so depending on the implementation I
suppose the I/O scheduling behavior of
On 2007-03-30, Peter Schuller [EMAIL PROTECTED] wrote:
[...]
Other than absolute performance, an important goal is to be able to
scale fairly linearly with the number of underlying disk drives. We
are fully willing to take a disk seek per item selected, as long as it
scales.
To this end I
Hello Peter,
If you are dealing with timed data or similar, you may consider
partitioning your table(s).
In order to deal with large data, I've built a logical partition
system, where the target partition is determined by the date of my data
(the date is part of the filenames that I import...).
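On PostgreSQL of that era (8.1/8.2), date-based partitioning like this is typically built from table inheritance plus CHECK constraints. A minimal sketch (the table and column names are invented for illustration, not taken from the thread):

```sql
-- Parent table; the child tables hold the actual rows.
CREATE TABLE measurements (
    logdate  date NOT NULL,
    value    text
);

-- One child per month, with a CHECK constraint so that
-- constraint_exclusion can skip irrelevant partitions at query time.
CREATE TABLE measurements_2007_03 (
    CHECK (logdate >= DATE '2007-03-01' AND logdate < DATE '2007-04-01')
) INHERITS (measurements);

CREATE TABLE measurements_2007_04 (
    CHECK (logdate >= DATE '2007-04-01' AND logdate < DATE '2007-05-01')
) INHERITS (measurements);

-- The loader picks the target child from the date in the filename
-- and inserts into it directly (or a trigger/rule routes the rows).
INSERT INTO measurements_2007_04 VALUES (DATE '2007-04-02', 'xxx');
```

For the pruning to kick in, constraint_exclusion must be enabled (SET constraint_exclusion = on) and queries must filter on logdate.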
Hello,
I am looking to use PostgreSQL for storing some very simple flat data
mostly in a single table. The amount of data will be in the hundreds
of gigabytes range. Each row is on the order of 100-300 bytes in size;
in other words, small enough that I am expecting disk I/O to be seek
bound (even