Aurora doesn't use a typical file system, so the RAM usually reserved for the OS 
file system buffer cache is instead used for shared_buffers. 
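
If you want to see what an instance actually ended up with, the settings 
are easy to pull from pg_settings. A minimal check (works on any 
PostgreSQL, Aurora included):

    SELECT name, setting, unit
      FROM pg_settings
     WHERE name IN ('shared_buffers', 'work_mem', 'maintenance_work_mem',
                    'max_connections', 'effective_cache_size');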

We run multiple Aurora/PG instances and they work quite well.  There are 
limitations on superuser privileges, so be aware of that, but generally speaking 
Aurora/PG works well. 

Bob Lunney

Sent from my PDP11

> On Dec 8, 2018, at 2:03 PM, Square Bob <square_...@yahoo.com> wrote:
> 
> This question is probably a better fit for the performance list; sorry for 
> the cross-post.
> 
> 
> 
> -------- Forwarded Message --------
> Subject:      amazon aurora config - seriously overcommitted defaults? (May be 
> Off Topic)
> Date: Sat, 8 Dec 2018 12:00:33 -0700
> From: Square Bob <square_...@yahoo.com>
> To:   pgsql-gene...@lists.postgresql.org
> 
> 
> All;
> 
> 
> My apologies if this is off topic.
> 
> 
> Our company is moving to Aurora. In the past I would take care not to allow 
> PostgreSQL to overcommit memory beyond the actual memory on the server, 
> which meant I would add the buffer pool + (work_mem * max_connections) + 
> (maintenance_work_mem * autovacuum workers).
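> 
> (For what it's worth, a rough way to sanity-check that sum against a live 
> instance -- assuming the stock units of 8 kB pages for shared_buffers and 
> kB for work_mem/maintenance_work_mem, and ignoring that a single query can 
> use several work_mem allocations -- is something like:)
> 
>     SELECT pg_size_pretty(
>              sb.setting::bigint * 8192                       -- shared_buffers
>            + wm.setting::bigint * 1024 * mc.setting::bigint  -- work_mem * max_connections
>            + mw.setting::bigint * 1024 * av.setting::bigint  -- maintenance_work_mem * autovacuum workers
>            ) AS worst_case
>       FROM (SELECT setting FROM pg_settings WHERE name = 'shared_buffers') sb,
>            (SELECT setting FROM pg_settings WHERE name = 'work_mem') wm,
>            (SELECT setting FROM pg_settings WHERE name = 'max_connections') mc,
>            (SELECT setting FROM pg_settings WHERE name = 'maintenance_work_mem') mw,
>            (SELECT setting FROM pg_settings WHERE name = 'autovacuum_max_workers') av;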
> 
> 
> However, as I look at the Aurora defaults they are all off the charts. For 
> example, based on the calculations in the config (Amazon doesn't make it 
> easy: some settings are in pages, some in kB, some in who knows what), I 
> see the following settings as defaults in our Aurora config:
> 
> 
> The instance size is db.r4.xlarge
> 
> 
> This instance size is listed as having 30.5GB of RAM.
> 
> 
> Here are the default settings:
> 
> 
> shared_buffers: {DBInstanceClassMemory/10922}
> 
> which equates to 24GB
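> 
> (Working that out, assuming DBInstanceClassMemory is the 30.5 GB expressed 
> in bytes and shared_buffers is counted in 8 kB pages:)
> 
>     SELECT (32749125632 / 10922) * 8192 AS shared_buffers_bytes;
>     -- 24,563,343,360 bytes, i.e. the ~24GB above, about 75% of RAM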
> 
> 
> work_mem:   64000 (kB)
> 
> which equates to 65.5MB
> 
> 
> maintenance_work_mem: GREATEST({DBInstanceClassMemory/63963136*1024},65536)
> 
> which equates to 4.2GB
> 
> 
> max_connections: LEAST({DBInstanceClassMemory/9531392},5000)
> 
> which equates to 3,380
> 
> 
> According to my math (if I got it right), in a worst-case scenario, if we 
> maxed out the max_connections, work_mem and maintenance_work_mem limits, 
> the database would request 247GB of memory.
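> 
> (As a rough check, plugging those rounded numbers into the formula, and 
> assuming the default of 3 autovacuum workers, lands in the same ballpark:)
> 
>     SELECT pg_size_pretty(
>              24 * 1024::numeric^3          -- shared_buffers, ~24GB
>            + 3380 * 64000 * 1024::numeric  -- max_connections * work_mem (64000 kB each)
>            + 3 * 4.2 * 1024::numeric^3     -- autovacuum workers * maintenance_work_mem
>            ) AS worst_case;
>     -- ~243GB; the spread vs. 247GB is just rounding of the per-setting figures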
> 
> 
> Additionally, Amazon has set effective_cache_size = 
> {DBInstanceClassMemory/10922}, which equates to about 2.9MB (which, given 
> the other outlandish settings, may be the only appropriate setting in the 
> system).
> 
> 
> 
> What the hell is Amazon doing here? Am I missing the boat on tuning 
> PostgreSQL memory? Is Amazon simply counting on the bet that users will 
> never fully utilize an instance?
> 
> 
> Thanks in advance
> 
> 
> 
