Kenneth Marshall wrote:
> I have found that increasing maintenance_work_mem speeds up
> index rebuilds, and turning off synchronous_commit or fsync
> helps if you can really afford to start over. Another big help
> is to use the parallel pg_restore from PostgreSQL 8.4.0 to
> perform the restore.
>
And make sure
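A minimal sketch of those session-level settings, assuming PostgreSQL 8.4
(the values here are illustrative, not from the thread); fsync can only be
changed in postgresql.conf, and the parallel restore itself is invoked as
pg_restore -j <jobs>:

  -- Illustrative session settings before a bulk restore or reindex.
  SET maintenance_work_mem = '1GB';  -- speeds up index builds
  SET synchronous_commit = off;      -- risks losing the last few commits
                                     -- on a crash, but not corruption
  -- fsync = off must be set in postgresql.conf; it risks data corruption
  -- on a crash, so use it only if you can afford to start over.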
Thanks for your response.
I think your analysis is correct: when there are more than 100 rows that
match this query, LIMIT 100 is fast.
However, we often have fewer than a hundred rows, so this is not
sufficient for us.
This suggestion (the 'OFFSET 0' trick) did not show differences in
response time (r
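For readers following along, a sketch of the two query shapes being
compared; the table and column names are assumptions, not from the thread:

  -- Plain form: fast when more than 100 rows match, slow otherwise.
  SELECT * FROM events
  WHERE account_id = 42
  ORDER BY created_at DESC
  LIMIT 100;

  -- The 'OFFSET 0' trick: the subquery acts as an optimization fence,
  -- which can keep the planner from choosing a plan that only pays off
  -- when many rows match.
  SELECT *
  FROM (SELECT * FROM events
        WHERE account_id = 42
        ORDER BY created_at DESC
        OFFSET 0) AS sub
  LIMIT 100;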
On Fri, Aug 7, 2009 at 4:00 AM, Kees van Dieren wrote:
> Would it get attention if I submit this to
> http://www.postgresql.org/support/submitbug ? (in fact it is not really a
> bug, but an improvement request).
I think that many of the people who read that mailing list also read
this one, including
We have actually gone the opposite way and switched to using syslog
for logging purposes some time ago, with no performance issues.
syslog files are easily read by a lot of applications out there. We have
been using rsyslog for aggregating logs from multiple servers, Splunk
for analysis purposes a
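As a sketch, the postgresql.conf settings that route server logs to syslog
look like this (the facility and ident values are illustrative):

  # postgresql.conf
  log_destination = 'syslog'
  syslog_facility = 'LOCAL0'   # match the facility your rsyslog config routes
  syslog_ident = 'postgres'    # tag prepended to each syslog line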
Michael Nacos wrote:
> I would be very surprised if logging had a significant overhead with any
> method you choose. There's probably something very wrong with your setup
> if this is the case.
Either something is very wrong, or the load is extremely high. In the
latter case perhaps it would make sense
On 8/7/09 5:53 AM, "Robert Haas" wrote:
> On Fri, Aug 7, 2009 at 4:00 AM, Kees van Dieren
> wrote:
>> Would it get attention if I submit this to
>> http://www.postgresql.org/support/submitbug ? (in fact it is not really a
>> bug, but an improvement request).
>
> I think that many of the people who read that mailing list also read
> this one, including
Hi Everyone,
I manage a FreeBSD server that is dedicated to PostgreSQL. The
machine has 4 GB of RAM, and there is a single database powering a
web application that is hosted on a neighboring machine. The web
application mostly reads from the database, but there are considerable
writes and I don't
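A hypothetical starting point for a dedicated 4 GB machine of that era
(these values are my assumptions, not from the thread, and need validating
against the real workload):

  # postgresql.conf -- illustrative values for a dedicated 4 GB box
  shared_buffers = 1GB          # ~25% of RAM is a common starting point
  effective_cache_size = 3GB    # planner hint: RAM available as OS cache
  checkpoint_segments = 16      # spread out checkpoint I/O under write load
  work_mem = 16MB               # per sort/hash; keep modest with many connections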
Just stumbled across this recent article published in the
Communications of the ACM:
http://cacm.acm.org/magazines/2009/8/34493-the-pathologies-of-big-data/fulltext
The author shares some insights relating to difficulties processing a
6.75 billion-row
table, a dummy table representing census-type
On Fri, Aug 7, 2009 at 9:17 PM, Josh Kupershmidt wrote:
> Just stumbled across this recent article published in the
> Communications of the ACM:
>
> http://cacm.acm.org/magazines/2009/8/34493-the-pathologies-of-big-data/fulltext
>
> The author shares some insights relating to difficulties processing
On Fri, Aug 7, 2009 at 2:17 PM, Josh Kupershmidt wrote:
> Just stumbled across this recent article published in the
> Communications of the ACM:
>
> http://cacm.acm.org/magazines/2009/8/34493-the-pathologies-of-big-data/fulltext
>
> The author shares some insights relating to difficulties processing
Oh, I just noticed his graphic is "values per second" but he had
originally said they were 16-bit values. Even if they were 32- or 64-bit
values, I'd expect way more than what he's getting there.
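For scale (my arithmetic, not the article's): 16-bit values are 2 bytes
each, so even a modest 100 MB/sec of sequential I/O works out to roughly
50 million values per second, and still about 12.5 million per second if
the values were 8-byte 64-bit ones.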
On Fri, Aug 7, 2009 at 6:40 PM, Scott Marlowe wrote:
> Well, from the article, I got the feeling he never
Well, there is CPU overhead for reading postgres pages and tuples. On a
disk subsystem that gets 1GB/sec sequential reads, I can't get more than
about 700MB/sec of I/O on a select count(*) query on very large tables
with large rows (600 bytes), and it's closer to 300MB/sec if the rows are
smaller.
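A minimal way to reproduce that kind of measurement from psql (big_table
is a placeholder name, not from the thread):

  \timing
  SELECT count(*) FROM big_table;
  -- Divide the table's on-disk size by the elapsed time to get MB/sec:
  SELECT pg_size_pretty(pg_relation_size('big_table'));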
On Fri, Aug 7, 2009 at 7:34 PM, Scott Carey wrote:
> Well, there is CPU overhead for reading postgres pages and tuples. On a
> disk subsystem that gets 1GB/sec sequential reads, I can't get more than
> about 700MB/sec of I/O on a select count(*) query on very large tables
> with large rows (60
On Fri, Aug 7, 2009 at 5:09 PM, Scott Carey wrote:
> On 8/7/09 5:53 AM, "Robert Haas" wrote:
>
>> On Fri, Aug 7, 2009 at 4:00 AM, Kees van Dieren
>> wrote:
>>> Would it get attention if I submit this to
>>> http://www.postgresql.org/support/submitbug ? (in fact it is not really a
>>> bug, but an improvement request).
Tom Lane wrote:
> Alvaro Herrera writes:
> > toruvinn wrote:
> >> I was always wondering, though, why PostgreSQL uses this approach and not
> >> its catalogs.
>
> > It does use the catalog for most things. The flatfile is used for the
> > situations where the catalogs are not yet ready to be re
Hello,
I'm trying to optimize the following query, which returns the top users
ordered by ranking. I'll show you my schema and "explain analyze" output
for each case.
So, I'm asking two things:
1) Why is the "ranking" index not used in the second query when sorting?
2) Am I missing some obvious optimization li
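For what it's worth, the usual answer to question 1 is that the planner
only uses an index to satisfy an ORDER BY when the requested ordering
matches the index's ordering; a sketch, with assumed table and column
names:

  -- Index whose order matches the query's ORDER BY clause:
  CREATE INDEX users_ranking_idx ON users (ranking DESC);

  -- This sort can now be satisfied by an index scan:
  EXPLAIN ANALYZE
  SELECT id, ranking FROM users
  ORDER BY ranking DESC
  LIMIT 10;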