On Mon, Jun 3, 2013 at 11:08 AM, Миша Тюрин tmih...@bk.ru wrote:
Hi all hackers again!
Since I raised this topic, our team has run many tests and we have read many
papers. I then noticed that the OS page replacement algorithm (CLOCK and
related features) might interfere / overlap
hm, in that case, wouldn't adding 48gb of physical memory have
approximately the same effect? or is something else going on?
imho, adding 48gb would have no effect.
the server already has 376GB of memory and still has a lot of unused GB.
let me repeat: we added 80GB to the file cache by decreasing
On 05/03/2013 07:09 AM, Andres Freund wrote:
We've got that in 9.3 which is absolutely fabulous! But that's not
related to doing DMA which you cannot (and should not!) do from
userspace.
You can do zero-copy DMA directly into userspace buffers. It requires
root (or suitable capabilities that
On Fri, May 3, 2013 at 12:09 AM, Andres Freund and...@2ndquadrant.com wrote:
But that brings up an interesting question. How hard / feasible would it be
to add DIO functionality to PG itself?
I don't think there is too much chance of that - but I also don't really
see the point in trying to
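As a rough, hedged illustration of what direct I/O means here (the data-file
path below is made up; any large file on the data volume would do), dd's
iflag=direct opens its input with O_DIRECT, so the read bypasses the kernel
page cache entirely:

# Hypothetical path -- substitute any large relation file on the data volume.
F=/var/lib/postgresql/9.2/main/base/16384/16397

# Buffered read: goes through (and populates) the kernel page cache.
dd if="$F" of=/dev/null bs=8k

# Direct read: O_DIRECT bypasses the page cache, so the kernel's replacement
# heuristics never see these pages at all.
dd if="$F" of=/dev/null bs=8k iflag=direct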
On 05/01/2013 06:37 PM, Bruce Momjian wrote:
Sorry to be dense here, but what is the problem with that output?
That there is a lot of memory marked as free? Why would it mark
any memory free?
That's kind of my point. :) That 14GB isn't allocated to cache, buffers,
any process, or anything else. It's just... free. In the middle of the
day, when 800 PG threads are pulling 7000 TPS on average. Based on that
scenario, I'd like to think it would cache pretty aggressively, but
instead,
On 05/02/2013 12:04 PM, Josh Berkus wrote:
There is a good, but sad, reason for this: IBM and Oracle and their
partners are the largest employers of people hacking on core Linux
memory/IO functionality, and both of those companies use DirectIO
extensively in their products.
I never thought of
On 2013-05-02 16:13:42 -0500, Shaun Thomas wrote:
On 05/02/2013 12:04 PM, Josh Berkus wrote:
Yeah, this is why I want to go to Linux Plumbers this year. The
Kernel.org engineers are increasingly doing things which make Linux
unsuitable for applications that depend on the filesystem.
Uh.
On Wed, Apr 24, 2013 at 08:39:09AM -0500, Shaun Thomas wrote:
On 04/24/2013 08:24 AM, Robert Haas wrote:
Are you referring to the fact that vm.zone_reclaim_mode = 1 is an
idiotic default?
Well... it is. But even on systems where it's not the default or is
explicitly disabled, there's just
On 04/24/2013 09:39 PM, Shaun Thomas wrote:
On 04/24/2013 08:24 AM, Robert Haas wrote:
Are you referring to the fact that vm.zone_reclaim_mode = 1 is an
idiotic default?
Servers are getting shafted in a lot of cases, and it's actually
starting to make me angry.
A significant part of that
On Tue, Apr 23, 2013 at 10:50 AM, Shaun Thomas stho...@optionshouse.com wrote:
This is most likely a NUMA issue. There really seems to be some kind of
horrible flaw in the Linux kernel when it comes to properly handling NUMA on
large memory systems.
Are you referring to the fact that
On 04/24/2013 08:24 AM, Robert Haas wrote:
Are you referring to the fact that vm.zone_reclaim_mode = 1 is an
idiotic default?
Well... it is. But even on systems where it's not the default or is
explicitly disabled, there's just something hideously wrong with NUMA in
general. Take a look at
On 2013-04-24 08:39:09 -0500, Shaun Thomas wrote:
The memory pressure code in Linux is extremely fucked up. I can't find it
right now, but the memory management algorithm makes some pretty ridiculous
assumptions once you pass half memory usage, regarding what is in active and
inactive cache.
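Whatever one makes of that claim, the active/inactive split the kernel is
working from is easy to watch; a minimal check looks like this:

# Show how much memory the kernel currently considers active vs. inactive
# (the LRU lists its reclaim decisions are based on).
grep -E '^(Active|Inactive)' /proc/meminfo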
On 04/24/2013 08:49 AM, Andres Freund wrote:
Uh. Ranting can be a rather healthy thing every now and then, and it's good
for the soul and such. But. Did you actually try reporting those issues?
That's actually part of the problem. How do you report:
Throwing a lot of processes at a high-memory
On 2013-04-24 09:06:39 -0500, Shaun Thomas wrote:
On 04/24/2013 08:49 AM, Andres Freund wrote:
Uh. Ranting can be a rather healthy thing every now and then, and it's good
for the soul and such. But. Did you actually try reporting those issues?
That's actually part of the problem. How do you
Thanks a lot for the responses.
1) Just to remind you of my case:
Intel, 32 cores = 2 sockets * 8 cores * 2 threads
Linux 2.6.32-5-amd64 #1 SMP Sun May 6 04:00:17 UTC 2012 x86_64 GNU/Linux
PostgreSQL 9.2.2 on x86_64-unknown-linux-gnu, compiled by gcc-4.4.real (Debian
4.4.5-8) 4.4.5, 64-bit
shared_buffers = 64GB / constant hit rate
typo: 140% should read 1400%.
if (user CPU + I/O wait) is ~1400% then I have ~140GB free,
i.e. if ~14 cores are busy then ~140GB is free,
roughly 10GB per process.
hmmm...
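Regarding the "constant hit rate" note under point 1: a rough way to confirm
the shared_buffers hit rate is to query pg_stat_database (sketch only;
connection options are omitted and would depend on the setup):

# Per-database shared_buffers hit percentage since the last stats reset.
psql -c "SELECT datname,
                round(100 * blks_hit::numeric / (blks_hit + blks_read), 2) AS hit_pct
         FROM pg_stat_database
         WHERE blks_hit + blks_read > 0;"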
vm state
root@avi-sql09:~# /sbin/sysctl -a|grep vm
vm.overcommit_memory = 0
vm.panic_on_oom = 0
vm.oom_kill_allocating_task = 0
vm.oom_dump_tasks = 0
vm.overcommit_ratio = 50
vm.page-cluster = 3
vm.dirty_background_ratio = 10
vm.dirty_background_bytes = 0
vm.dirty_ratio = 20
vm.dirty_bytes = 0
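One thing this sysctl excerpt does not show is vm.zone_reclaim_mode, which
several replies in the thread point at. Checking and disabling it is a
one-liner (takes effect immediately; it is not persistent across reboot
unless also added to /etc/sysctl.conf):

# 1 means the kernel prefers reclaiming pages on the local NUMA node over
# using free memory on a remote node; 0 disables that behaviour.
cat /proc/sys/vm/zone_reclaim_mode
/sbin/sysctl -w vm.zone_reclaim_mode=0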
On 04/22/2013 05:12 PM, Merlin Moncure wrote:
free -g
             total       used       free     shared    buffers     cached
Mem:           378        250        128          0          0        229
-/+ buffers/cache:          20        357
This is most likely a NUMA issue. There really
On Mon, Apr 22, 2013 at 1:22 PM, Миша Тюрин tmih...@bk.ru wrote:
My first message has been banned for too many letters.
Hi all
There is something wrong and ugly.
1)
Intel, 32 cores = 2 sockets * 8 cores * 2 threads
Linux avi-sql09 2.6.32-5-amd64 #1 SMP Sun May 6 04:00:17 UTC 2012 x86_64
GNU/Linux
On Mon, Apr 22, 2013 at 11:22 AM, Миша Тюрин tmih...@bk.ru wrote:
free -g
             total       used       free     shared    buffers     cached
Mem:           378        250        128          0          0        229
-/+ buffers/cache:          20        357
and
! disk usage 100%
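If the NUMA theory is right, the system-wide free figure above can hide one
node being squeezed while another sits idle; the per-node view makes that
visible (a hedged sketch; numactl comes from the numactl package):

# Per-node memory totals and free memory as the kernel sees them.
numactl --hardware

# The same information straight from sysfs, one line per node.
grep MemFree /sys/devices/system/node/node*/meminfo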