I have a postgres 8.4.5 instance on CentOS 5 (x86_64) which appears to
go crazy with the amount of memory it consumes.
When I run the query below, in a matter of a few seconds memory
balloons to 5.3G (virtual), 4.6G (resident) and 1840 (shared), and
eventually the OOM killer is invoked, killing the
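Not specific to this query, but the standard Linux-side mitigation from the PostgreSQL manual is to turn off memory overcommit, so a runaway allocation surfaces as an out-of-memory error inside the backend instead of an OOM kill of some process:

    # as root; add vm.overcommit_memory=2 to /etc/sysctl.conf to make it persistent
    sysctl -w vm.overcommit_memory=2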
Fair enough; I'm so used to bumping wal_buffers up to 16MB nowadays that
I forget sometimes that people actually run with the default where this
becomes an important consideration.
Do you have any testing in favor of 16mb vs. lower/higher?
From some tests I had done some time ago, using sepa
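For reference, this is the whole change being discussed, as it would appear in postgresql.conf; 16MB is the figure quoted above rather than a measured optimum, and values beyond a single 16MB WAL segment are generally not thought to help:

    # default in 8.4 is 64kB (8 WAL pages); raising it mainly helps bursty,
    # write-heavy workloads by reducing pressure on the WAL buffer
    wal_buffers = 16MB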
On Sat, Nov 6, 2010 at 00:06, Greg Smith wrote:
> Please refrain from making changes to popular documents like the
> tuning guide on the wiki based on speculation about what's happening.
I will grant you that the details were wrong, but I stand by the conclusion.
I can state for a fact that Pos
On Nov 5, 2010, at 1:19 PM, Josh Berkus wrote:
>
>> The serious problems with this appear to be (a) that Linux/Ext4 PG
>> performance still hasn't fully recovered, and, (b) that RHEL6 is set to
>> ship with kernel 2.6.32, which means that we'll have a whole generation
> >> of RHEL which is off-limits to PostgreSQL.
> The main change here was discussed back in January:
> http://archives.postgresql.org/message-id/4b512d0d.4030...@2ndquadrant.com
>
> What I've been doing about this is the writing leading up to
> http://wiki.postgresql.org/wiki/Reliable_Writes so that when RHEL6 does
> ship, we have a place to
> Fair enough; I'm so used to bumping wal_buffers up to 16MB nowadays that
> I forget sometimes that people actually run with the default where this
> becomes an important consideration.
Do you have any testing in favor of 16mb vs. lower/higher?
--
-- Josh Berkus
Marti Raudsepp wrote:
In fact, I was wrong in my earlier post. Linux always offered O_DSYNC
behavior. What's new is POSIX-compliant O_SYNC, and the fact that
these flags are now distinguished.
While I appreciate that you're trying to help here, I'm unconvinced
you've correctly diagnosed a c
On Thu, Nov 4, 2010 at 8:07 AM, Vitalii Tymchyshyn wrote:
> 04.11.10 16:31, Nick Matheson wrote:
>
> Heikki-
>>
>>>
>>> Try COPY, ie. "COPY bulk_performance.counts TO STDOUT BINARY".
>>>
>>> Thanks for the suggestion. A preliminary test shows an improvement
>> closer to our expected 35 MB/s.
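For anyone who wants to repeat that kind of measurement, a sketch from the shell, assuming the bulk_performance.counts table from this thread and a database reachable as yourdb (a placeholder name); BINARY skips the text-format conversion on output:

    # time a binary COPY of the whole table, discarding the data
    time psql -d yourdb -c "COPY bulk_performance.counts TO STDOUT BINARY" > /dev/null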
On Fri, Nov 5, 2010 at 12:23 PM, Samuel Gendler wrote:
> On Thu, Nov 4, 2010 at 8:07 AM, Vitalii Tymchyshyn wrote:
>
>> 04.11.10 16:31, Nick Matheson wrote:
>>
>> Heikki-
>>>
Try COPY, ie. "COPY bulk_performance.counts TO STDOUT BINARY".
Thanks for the suggestion. A preliminary test shows an improvement closer to our expected 35 MB/s.
Josh Berkus wrote:
Domas (of Facebook/Wikipedia, MySQL geek) pointed me to this report:
http://www.phoronix.com/scan.php?page=article&item=linux_perf_regressions&num=1
http://www.phoronix.com/scan.php?page=article&item=ext4_then_now&num=6
The main change here was discussed back in January:
On Friday 05 November 2010 22:10:36 Greg Smith wrote:
> Andres Freund wrote:
> > On Sunday 31 October 2010 20:59:31 Greg Smith wrote:
> >> Writes are only sync'd out when you do a commit, or the database does a
> >> checkpoint.
> >
> > Hm? WAL is written out to disk after the space provided by
On Fri, Nov 5, 2010 at 23:10, Greg Smith wrote:
>> Not having a real O_DSYNC on linux until recently makes it even more
>> dubious to have it as a default...
>>
>
> If Linux is now defining O_DSYNC
Well, Linux always defined both O_SYNC and O_DSYNC, but they used to
have the same value. The defau
Andres Freund wrote:
On Sunday 31 October 2010 20:59:31 Greg Smith wrote:
Writes are only sync'd out when you do a commit, or the database does a
checkpoint.
Hm? WAL is written out to disk after the space provided by wal_buffers (default 8) * XLOG_BLCKSZ (default 8192) is used. The default is thus 64kB.
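Spelled out, that default is 8 * 8192 bytes = 64kB, and both numbers can be read back from a running server (values shown assume a stock build with the default setting):

    SHOW wal_block_size;  -- 8192, i.e. XLOG_BLCKSZ
    SHOW wal_buffers;     -- 64kB, i.e. 8 WAL pages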
On Fri, Nov 5, 2010 at 2:32 PM, Josh Berkus wrote:
>
>> Why would it be off limits? Is it likely to lose data due to power failure
>> etc?
>
> If fsyncs are taking 5X as long, people can't use PostgreSQL on that
> platform.
I was under the impression that from 2.6.28 through 2.6.31 or so that
t
Mladen Gogala wrote:
Where can I find the documentation describing the buffer replacement
policy? Are there any parameters governing the page replacement policy?
I wrote a pretty detailed description of this in my "Inside the
PostgreSQL Buffer Cache" presentation at
http://projects.2ndquadran
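The short answer to the question is that the replacement policy is a clock sweep over shared_buffers driven by a per-buffer usage count, not a strict LRU, and there is no GUC that changes it. If the contrib pg_buffercache module is installed, those counters can be looked at directly; a sketch, not a tuning tool:

    -- distribution of clock-sweep usage counts across shared_buffers
    SELECT usagecount, count(*) AS buffers
    FROM pg_buffercache
    GROUP BY usagecount
    ORDER BY usagecount;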
On Friday 05 November 2010 21:15:20 Josh Berkus wrote:
> All,
>
> Domas (of Facebook/Wikipedia, MySQL geek) pointed me to this report:
>
> http://www.phoronix.com/scan.php?page=article&item=linux_perf_regressions&n
I guess that's the O_DSYNC thingy. See the "Defaulting wal_sync_method to
fdatasync" thread.
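For anyone who wants to take the platform default out of the equation while this gets sorted out, the sync method can be pinned explicitly in postgresql.conf; a sketch, not a recommendation, since which value is both fast and safe on a given kernel and filesystem is exactly what is in dispute:

    wal_sync_method = fdatasync   # other possibilities: open_datasync, fsync, open_sync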
> Why would it be off limits? Is it likely to lose data due to power failure
> etc?
If fsyncs are taking 5X as long, people can't use PostgreSQL on that
platform.
> Are you referring to improvements due to write barrier support getting
> fixed up for ext4 to run faster but still be safe? I wou
On Fri, Nov 5, 2010 at 2:15 PM, Josh Berkus wrote:
> All,
>
> Domas (of Facebook/Wikipedia, MySQL geek) pointed me to this report:
>
> http://www.phoronix.com/scan.php?page=article&item=linux_perf_regressions&num=1
> http://www.phoronix.com/scan.php?page=article&item=ext4_then_now&num=6
>
> The se
> The serious problems with this appear to be (a) that Linux/Ext4 PG
> performance still hasn't fully recovered, and, (b) that RHEL6 is set to
> ship with kernel 2.6.32, which means that we'll have a whole generation
> of RHEL which is off-limits to PostgreSQL.
Oh. Found some other information o
All,
Domas (of Facebook/Wikipedia, MySQL geek) pointed me to this report:
http://www.phoronix.com/scan.php?page=article&item=linux_perf_regressions&num=1
http://www.phoronix.com/scan.php?page=article&item=ext4_then_now&num=6
The serious problems with this appear to be (a) that Linux/Ext4 PG
performance still hasn't fully recovered, and, (b) that RHEL6 is set to ship with kernel 2.6.32, which means that we'll have a whole generation of RHEL which is off-limits to PostgreSQL.
On 11/03/2010 04:52 PM, Nick Matheson wrote:
We have an application that needs to do bulk reads of ENTIRE
Postgres tables very quickly (i.e. select * from table). We have
observed that such sequential scans run two orders of magnitude slower
than observed raw disk reads (5 MB/s versus 100 MB/s).
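One rough way to put those two numbers side by side; /some/large/file, yourdb and some_table are placeholders, and the psql run deliberately includes the row-to-text conversion and protocol overhead, since that is part of what the application pays:

    # raw sequential read speed of the storage
    dd if=/some/large/file of=/dev/null bs=8M
    # versus pulling the whole table through the server and client
    time psql -d yourdb -A -t -c "SELECT * FROM some_table" > /dev/null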
Devrim GÜNDÜZ wrote:
On Fri, 2010-11-05 at 11:59 +0100, A B wrote:
If you just wanted PostgreSQL to go as fast as possible WITHOUT any
care for your data (you accept 100% dataloss and datacorruption if any
error should occur), what settings should you use then?
You can initdb to ramdisk
gentosa...@gmail.com (A B) writes:
> If you just wanted PostgreSQL to go as fast as possible WITHOUT any
> care for your data (you accept 100% dataloss and datacorruption if any
> error should occur), what settings should you use then?
Use /dev/null. It is web scale, and there are good tutorials.
On Fri, 2010-11-05 at 11:59 +0100, A B wrote:
>
> If you just wanted PostgreSQL to go as fast as possible WITHOUT any
> care for your data (you accept 100% dataloss and datacorruption if any
> error should occur), what settings should you use then?
You can initdb to ramdisk, if you have enough RAM.
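Roughly what that looks like on Linux, as a sketch with invented paths; it assumes a postgres OS user and enough free RAM, and the whole cluster evaporates on reboot, which is the point of the exercise:

    mount -t tmpfs -o size=4G tmpfs /mnt/pgram
    chown postgres /mnt/pgram
    su - postgres -c "initdb -D /mnt/pgram/data && pg_ctl -D /mnt/pgram/data -l /tmp/pg.log start"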
On Fri, Nov 5, 2010 at 7:08 AM, Guillaume Cottenceau wrote:
> Marti Raudsepp writes:
>
>> On Fri, Nov 5, 2010 at 13:32, A B wrote:
>>> I was just thinking about the case where I will have almost 100%
>>> selects, but still needs something better than a plain key-value
>>> storage so I can do some sql queries.
Marti Raudsepp writes:
> On Fri, Nov 5, 2010 at 13:32, A B wrote:
>> I was just thinking about the case where I will have almost 100%
>> selects, but still needs something better than a plain key-value
>> storage so I can do some sql queries.
>> The server will just boot, load data, run, hopefu
Marti Raudsepp writes:
> On Fri, Nov 5, 2010 at 13:11, Guillaume Cottenceau wrote:
>> Don't use PostgreSQL, just drop your data, you will end up with
>> the same results and be even faster than any use of PostgreSQL.
>> If anyone needs data, then just say you had data corruption, and
>> that sin
On Fri, Nov 5, 2010 at 13:11, Guillaume Cottenceau wrote:
> Don't use PostgreSQL, just drop your data, you will end up with
> the same results and be even faster than any use of PostgreSQL.
> If anyone needs data, then just say you had data corruption, and
> that since 100% dataloss is accepted, t
On 5 November 2010 11:36, Marti Raudsepp wrote:
> On Fri, Nov 5, 2010 at 13:32, A B wrote:
> > I was just thinking about the case where I will have almost 100%
> > selects, but still needs something better than a plain key-value
> > storage so I can do some sql queries.
> > The server will just
On Fri, Nov 5, 2010 at 13:32, A B wrote:
> I was just thinking about the case where I will have almost 100%
> selects, but still needs something better than a plain key-value
> storage so I can do some sql queries.
> The server will just boot, load data, run, hopefully not crash but if
> it would
>> If you just wanted PostgreSQL to go as fast as possible WITHOUT any
>> care for your data (you accept 100% dataloss and datacorruption if any
>> error should occur), what settings should you use then?
>
> Others have suggested appropriate parameters ("running with scissors").
>
> I'd like to add
>> If you just wanted PostgreSQL to go as fast as possible WITHOUT any
>> care for your data (you accept 100% dataloss and datacorruption if any
>> error should occur), what settings should you use then?
>>
>
>
> I'm just curious, what do you need that for?
>
> regards
> Szymon
I was just thinking about the case where I will have almost 100% selects, but still needs something better than a plain key-value storage so I can do some sql queries.
On 05/11/10 18:59, A B wrote:
> Hi there.
>
> If you just wanted PostgreSQL to go as fast as possible WITHOUT any
> care for your data (you accept 100% dataloss and datacorruption if any
> error should occur), what settings should you use then?
Others have suggested appropriate parameters ("running with scissors").
> Turn off fsync and full_page_writes (i.e. running with scissors).
> Also depends on what you mean by "as fast as possible". Fast at doing
> what? Bulk inserts, selecting from massive tables?
I guess some tuning has to be done to make it work well with the
particular workload (in this case most
On 5 November 2010 11:59, A B wrote:
> Hi there.
>
> If you just wanted PostgreSQL to go as fast as possible WITHOUT any
> care for your data (you accept 100% dataloss and datacorruption if any
> error should occur), what settings should you use then?
>
>
I'm just curious, what do you need that for?
A B writes:
> Hi there.
>
> If you just wanted PostgreSQL to go as fast as possible WITHOUT any
> care for your data (you accept 100% dataloss and datacorruption if any
> error should occur), what settings should you use then?
Don't use PostgreSQL, just drop your data, you will end up with
the same results and be even faster than any use of PostgreSQL.
On 5 November 2010 11:14, Thom Brown wrote:
> On 5 November 2010 10:59, A B wrote:
>
>> Hi there.
>>
>> If you just wanted PostgreSQL to go as fast as possible WITHOUT any
>> care for your data (you accept 100% dataloss and datacorruption if any
>> error should occur), what settings should you u
On 5 November 2010 10:59, A B wrote:
> Hi there.
>
> If you just wanted PostgreSQL to go as fast as possible WITHOUT any
> care for your data (you accept 100% dataloss and datacorruption if any
> error should occur), what settings should you use then?
>
>
Turn off fsync and full_page_writes (i.e. running with scissors).
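Collected in one place, the parameters named in this thread plus synchronous_commit, which usually travels with them; a sketch of the "running with scissors" configuration, on the stated premise that losing or corrupting everything after a crash is acceptable:

    fsync = off
    full_page_writes = off
    synchronous_commit = off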
Hi there.
If you just wanted PostgreSQL to go as fast as possible WITHOUT any
care for your data (you accept 100% dataloss and datacorruption if any
error should occur), what settings should you use then?
On Thu, 04 Nov 2010 15:42:08 +0100, Nick Matheson wrote:
I think your comments really get at what our working hypothesis was, but
given that our experience is limited compared to you all here on the
mailing lists, we really wanted to make sure we weren't missing any
alternatives. Also the w