Kenneth Marshall wrote:
I have found that increasing maintenance_work_mem speeds up
index rebuilds; turn off synchronous_commit, or fsync if
you can really afford to start over. Another big help is
to use the parallel pg_restore from PostgreSQL 8.4.0 to
perform the restore.
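A minimal sketch of those settings, assuming a session-level restore on 8.4; the values are illustrative assumptions, and the database/dump names in the pg_restore line are hypothetical:

```sql
-- Session-level settings for a faster restore (tune to your RAM):
SET maintenance_work_mem = '1GB';  -- speeds up index rebuilds
SET synchronous_commit = off;      -- risks losing only the latest commits

-- fsync = off can only be set in postgresql.conf, and a crash with it
-- off can corrupt the cluster; use it only if you can afford to start
-- the restore over from scratch.

-- Parallel restore (PostgreSQL 8.4+), run from the shell:
--   pg_restore -j 4 -d mydb mydump.dump
```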
And make sure
Thanks for your response.
I think your analysis is correct. When there are more than 100 rows that
match this query, LIMIT 100 is fast.
However, we often have fewer than a hundred rows, so this is not sufficient
for us.
This suggestion (the 'OFFSET 0' trick) did not show differences in response time.
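For reference, the trick in question wraps the query in a subselect with OFFSET 0, which acts as an optimization fence so the planner does not push the LIMIT into the subquery's plan. A sketch, using a hypothetical table and columns:

```sql
-- Hypothetical schema for illustration only.
-- The inner OFFSET 0 is a no-op for results but fences the planner,
-- forcing it to plan the subquery independently of the outer LIMIT:
SELECT *
FROM (SELECT * FROM events WHERE status = 'open' OFFSET 0) AS sub
ORDER BY created_at
LIMIT 100;
```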
On Fri, Aug 7, 2009 at 4:00 AM, Kees van Dieren <keesvandie...@gmail.com> wrote:
Would it get attention if I submit this to
http://www.postgresql.org/support/submitbug ? (in fact it is not really a
bug, but an improvement request).
I think that many of the people who read that mailing list also
We have actually gone the opposite way and switched to using syslog
for logging purposes some time ago, with no performance issues.
Syslog files are easily read by a lot of applications out there. We have
been using rsyslog for aggregating logs from multiple servers, and Splunk
for analysis purposes.
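A sketch of the PostgreSQL side of such a setup; the directives are standard postgresql.conf parameters, but the facility and ident values are assumptions to adapt to your rsyslog configuration:

```
# postgresql.conf: send server logs to syslog
log_destination = 'syslog'
syslog_facility = 'LOCAL0'
syslog_ident = 'postgres'
```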
Michael Nacos wrote:
I would be very surprised if logging had a significant overhead with any
method you choose. There's probably something very wrong with your setup
if that is the case.
Either something is very wrong, or the load is extremely high. In the
latter case perhaps it would make sense
On 8/7/09 5:53 AM, Robert Haas <robertmh...@gmail.com> wrote:
On Fri, Aug 7, 2009 at 4:00 AM, Kees van Dieren <keesvandie...@gmail.com> wrote:
Would it get attention if I submit this to
http://www.postgresql.org/support/submitbug ? (in fact it is not really a
bug, but an improvement request).
Hi Everyone,
I manage a FreeBSD server that is dedicated to PostgreSQL. The
machine has 4 GB of RAM, and there is a single database powering a
web application that is hosted on a neighboring machine. The web
application mostly reads from the database, but there are considerable
writes and I
Just stumbled across this recent article published in the
Communications of the ACM:
http://cacm.acm.org/magazines/2009/8/34493-the-pathologies-of-big-data/fulltext
The author shares some insights relating to difficulties processing a
6.75-billion-row table, a dummy table representing
On Fri, Aug 7, 2009 at 9:17 PM, Josh Kupershmidt <schmi...@gmail.com> wrote:
Just stumbled across this recent article published in the
Communications of the ACM:
http://cacm.acm.org/magazines/2009/8/34493-the-pathologies-of-big-data/fulltext
The author shares some insights relating to
On Fri, Aug 7, 2009 at 2:17 PM, Josh Kupershmidt <schmi...@gmail.com> wrote:
Just stumbled across this recent article published in the
Communications of the ACM:
http://cacm.acm.org/magazines/2009/8/34493-the-pathologies-of-big-data/fulltext
The author shares some insights relating to
Oh, I just noticed his graphic is in values per second, but he had
originally said they were 16-bit values. Even if they were 32- or 64-bit
values, I'd expect way more than what he's getting there.
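As a back-of-envelope check, assuming the roughly 300 MB/sec effective scan rate cited later in this thread, 16-bit (2-byte) values should stream at well over a hundred million per second:

```sql
-- Throughput sketch; the 300 MB/s scan rate is an assumed figure:
SELECT (300 * 1024 * 1024) / 2 AS values_per_second;  -- about 157 million/s
```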
On Fri, Aug 7, 2009 at 6:40 PM, Scott Marlowe <scott.marl...@gmail.com> wrote:
Well, from the article, I
Well, there is CPU overhead for reading Postgres pages and tuples. On a
disk subsystem that gets 1GB/sec sequential reads, I can't get more than
about 700MB/sec of I/O on a select count(*) query on very large tables
with large rows (600 bytes), and it's closer to 300MB/sec if the rows are
On Fri, Aug 7, 2009 at 7:34 PM, Scott Carey <sc...@richrelevance.com> wrote:
Well, there is CPU overhead for reading Postgres pages and tuples. On a
disk subsystem that gets 1GB/sec sequential reads, I can't get more than
about 700MB/sec of I/O on a select count(*) query on very large tables
On Fri, Aug 7, 2009 at 5:09 PM, Scott Carey <sc...@richrelevance.com> wrote:
On 8/7/09 5:53 AM, Robert Haas <robertmh...@gmail.com> wrote:
On Fri, Aug 7, 2009 at 4:00 AM, Kees van Dieren <keesvandie...@gmail.com> wrote:
Would it get attention if I submit this to