Hi
But maybe Postgres should provide its own subsystem, something like Linux's
active/inactive memory lists, over and/or alongside shared buffers? Such an
approach could include Postgres-specific heuristics.
And does anyone know how the MySQL/InnoDB people are dealing with similar issues?
Thank you!
> hm, in that case, wouldn't adding 48gb of physical memory have
> approximately the same effect? or is something else going on?
IMHO, adding 48 GB would have no effect.
The server already has 376 GB of memory and still has many unused GB.
Let me repeat: we added 80 GB for the file cache by decreasing b
Hi, list. Here is my proposal: I would like to talk about O_DIRECT for WAL sync
when wal_level is higher than minimal. I think in my case, when WAL traffic is up
to 1 GB per 2-3 minutes but the disk hardware has a 2 GB BBU cache (or maybe SSD
under WAL), it would be better if WAL traffic could not h
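For a sense of scale, that WAL volume is a fairly modest sustained rate; a quick back-of-the-envelope check (taking 150 s as the midpoint of the 2-3 minute window above):

```shell
# implied WAL write rate for ~1 GB every 2-3 minutes (150 s midpoint)
awk 'BEGIN {
    gb = 1; seconds = 150
    printf "%.1f MB/s sustained WAL traffic\n", gb * 1024 / seconds
}'
```

A 2 GB BBU cache or an SSD can absorb bursts well above this average, which is what motivates the proposal.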
Hi, list, again. The next proposal is for auto_explain. One would be happy to be
able to set a list of target tables and indexes. Sometimes it is very hard to
detect who is using your indexes, but turning total logging on under thousands of
transactions per second does not seem like a nice idea because of the size of re
I was talking about wal_level higher than MINIMAL, i.e.
wal_level != minimal
http://doxygen.postgresql.org/xlogdefs_8h_source.html
"48 * Because O_DIRECT bypasses the kernel buffers, and because we never
49 * read those buffers except during crash recovery or if wal_level != minimal"
>> hi, list.
Why do we pay so much attention to the fsync issue? But there are also
tablespaces in tmpfs, WAL in tmpfs, disks with cache but without BBU, writeback
writes and filesystems without ordering and journaling, all kinds of CLOUDS,
etc. etc., in our real-world installations.
Moreover, not all of these issues are usually in
Hello!
I wrote to -general ( [GENERAL] standby, pg_basebackup and last xlog file )
some time ago, but still haven't got any feedback.
Hello!
Is there any reason why pg_basebackup has a limitation for an online backup from
the standby: "The backup history file is not created in the datab
Hello all, and Heikki personally.
Thank you for your answer. I have some new points:
Monday, 21 January 2013, 10:08 +02:00 from Heikki Linnakangas:
>On 21.01.2013 09:14, Миша Тюрин wrote:
>>Is there any reason why pg_basebackup has limitation in an online backup
>> f
Hi all
I've got suspicious behavior for a transaction with a deferrable trigger:
if the trigger updates a row of its own target table, we get infinite "recursion"
with no stack-depth limitation.
trigger -> update -> trigger -> update -> ... -- an infinite pending list :)
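A minimal sketch of that shape (all names here are made up for illustration, and this intentionally reproduces the runaway behavior, so don't run the final COMMIT anywhere you care about):

```sql
-- a constraint trigger, deferred to commit time, whose function
-- updates a row of its own target table
CREATE TABLE t (id int PRIMARY KEY, n int);
INSERT INTO t VALUES (1, 0);

CREATE FUNCTION t_bump() RETURNS trigger AS $$
BEGIN
    UPDATE t SET n = n + 1 WHERE id = NEW.id;  -- queues the trigger again
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE CONSTRAINT TRIGGER t_deferred
    AFTER UPDATE ON t
    DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW EXECUTE PROCEDURE t_bump();

BEGIN;
UPDATE t SET n = 1 WHERE id = 1;
COMMIT;  -- each deferred firing issues another UPDATE: the pending list never drains
```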
- Misha
Thanks a lot for the responses.
1) Just to remind you of my case:
Intel 32 cores = 2 sockets * 8 cores * 2 threads
Linux 2.6.32-5-amd64 #1 SMP Sun May 6 04:00:17 UTC 2012 x86_64 GNU/Linux
PostgreSQL 9.2.2 on x86_64-unknown-linux-gnu, compiled by gcc-4.4.real (Debian
4.4.5-8) 4.4.5, 64-bit
shared_buffers 64GB / constant hit ra
Typo:
> if ( user cpu + io wait ) is ~140% then i have ~140GB free.
140% should read 1400%.
If ~14 cores are busy then ~140GB is free,
10GB per process.
Hmmm...
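Spelled out, the corrected arithmetic (assuming, per the estimate above, that each busy backend displaces roughly 10 GB of cache):

```shell
# 14 busy cores show as ~1400% in per-core CPU accounting,
# and at ~10 GB per busy process that is ~140 GB
busy_cores=14
gb_per_process=10
echo "$(( busy_cores * 100 ))% CPU -> $(( busy_cores * gb_per_process )) GB free"
```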
VM state:
root@avi-sql09:~# /sbin/sysctl -a|grep vm
vm.overcommit_memory = 0
vm.panic_on_oom = 0
vm.oom_kill_allocating_task = 0
vm.oom_dump_tasks = 0
vm.overcommit_ratio = 50
vm.page-cluster = 3
vm.dirty_background_ratio = 10
vm.dirty_background_bytes = 0
vm.dirty_ratio = 20
vm.dirty_bytes = 0
vm.
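With vm.dirty_bytes and vm.dirty_background_bytes unset, the two ratios above are what apply; for the 376 GB machine mentioned earlier, the writeback thresholds come out roughly as follows (approximate, since the kernel computes the ratios against available rather than total memory):

```shell
# approximate writeback thresholds implied by dirty_background_ratio=10
# and dirty_ratio=20 on a 376 GB machine
awk 'BEGIN {
    mem_gb = 376
    printf "background writeback starts at ~%.0f GB dirty\n", mem_gb * 0.10
    printf "writers blocked at ~%.0f GB dirty\n", mem_gb * 0.20
}'
```

Tens of gigabytes of dirty pages accumulating before the background flusher even starts is part of why sudden write stalls show up on large-memory boxes.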
Hello!
Could anyone review the patch suggested by Jeff Janes?
Initial thread:
http://www.postgresql.org/message-id/flat/1384356585.995240...@f50.i.mail.ru#1384356585.995240...@f50.i.mail.ru
Thanks in advance!
>
> On Wed, Nov 13, 2013 at 3:53 PM, Sergey Burladyan < eshkin...@gmail.com >
> wrote:
> This should be a very common setup in the field, so how are people doing it
> in practice?
One possible workaround with archive plus streaming was to use pg_receivexlog
from the standby to copy/save WALs to the archive, but pg_receivexlog also had
an issue with fsync.
[ master ] -- streaming
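As a command-line sketch of that workaround (host and directory are placeholders; this needs a replication-capable connection to the standby):

```shell
# hypothetical host/path: stream WAL from the standby into the archive
# instead of relying on archive_command there
pg_receivexlog -h standby-host -U replication_user -D /archive/wal
```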
Maybe Vertica's approach would be a useful example:
http://my.vertica.com/docs/7.1.x/HTML/index.htm#Authoring/AdministratorsGuide/Partitions/PartitioningTables.htm
http://my.vertica.com/docs/7.1.x/HTML/index.htm#Authoring/SQLReferenceManual/Statements/CREATETABLE.htm
... [ PARTITION BY partition-c