Re: [PERFORM] Low priority batch insert

2017-10-19 Thread Michael Paquier
On Fri, Oct 20, 2017 at 1:10 AM, Jean Baro  wrote:
> That's my first question in this mailing list! :)

Welcome!

> Is it possible (node.js connecting to PG 9.6 on RDS) to set a lower priority
> to a connection so that that particular process (BATCH INSERT) would have a
> low impact on other running processes on PG, like live queries and single
> inserts/updates?
>
> Is that a good idea? Is this feasible with Node.js + PG?

The server could be changed so that backend processes call setpriority
and getpriority based on a GUC parameter, and you could leverage
process priorities that way. The good news is that this can be done as
a module; see an example from Fujii Masao's pg_cheat_funcs that caught
my attention just yesterday:
https://github.com/MasaoFujii/pg_cheat_funcs/commit/a39ec1549e2af72bf101da5075c4e12d079f7c5b
The bad news is that you are on RDS, where vendor lock-in prevents
you from loading any custom modules.
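
For what it's worth, on a self-managed server you can get the same
effect from the OS side with renice, without any module at all (again,
not an option on RDS, since you have no shell on the host). A minimal
sketch, reniceing the current shell as a stand-in for the backend PID
you would get from the batch session:

```shell
# Sketch only: on a self-managed host you could renice the backend
# serving the batch connection. Get its PID from within that session:
#   SELECT pg_backend_pid();
# Here the current shell stands in for a real backend PID.
pid=$$
renice -n 10 -p "$pid" >/dev/null   # raise the nice value = lower priority
ps -o ni= -p "$pid" | tr -d ' '     # prints the new nice value: 10
```

Note that an unprivileged user can only raise the nice value (lower
the priority), never lower it back, and that nice only affects CPU
scheduling, not I/O or lock contention.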
-- 
Michael


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] Low priority batch insert

2017-10-19 Thread Jean Baro
Hi there,

That's my first question in this mailing list! :)

Is it possible (node.js connecting to PG 9.6 on RDS) to set a lower
priority to a connection so that that particular process (BATCH INSERT)
would have a low impact on other running processes on PG, like live queries
and single inserts/updates?

I would like the batch insert to complete as soon as possible, but at the
same time keep individual queries and inserts running at maximum speed.

*SINGLE SELECTS (HIGH PRIORITY)*
*SINGLE INSERTS/UPDATES (HIGH PRIORITY)*
BATCH INSERT (LOW PRIORITY)
BATCH SELECT (LOW PRIORITY)



Is that a good idea? Is this feasible with Node.js + PG?

Thanks


Re: [PERFORM] memory allocation

2017-10-19 Thread Laurenz Albe
nijam J wrote:
> our server is getting too slow again and again

Use "vmstat 1" and "iostat -mNx 1" to see if you are
running out of memory, CPU capacity or I/O bandwidth.

Figure out if the slowness is due to slow queries or
an overloaded system.
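
One common way to spot the slow queries themselves (my suggestion, a
standard setting rather than anything from this thread) is to log every
statement that exceeds a duration threshold:

```
# postgresql.conf: log each statement running longer than 1 second
log_min_duration_statement = 1000   # milliseconds; -1 disables
```

Then check the server log for the statements that show up repeatedly.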

Yours,
Laurenz Albe




[PERFORM] memory allocation

2017-10-19 Thread nijam J
we are using a cloud server

*this is the memory info:*

free -h
             total       used       free     shared    buffers     cached
Mem:           15G        15G       197M       194M       121M        14G
-/+ buffers/cache:        926M        14G
Swap:          15G        32M        15G

*this is the disk info:*
 df -h

FilesystemSize  Used Avail Use% Mounted on
/dev/vda1  20G  1.7G   17G  10% /
devtmpfs  7.9G 0  7.9G   0% /dev
tmpfs 7.9G  4.0K  7.9G   1% /dev/shm
tmpfs 7.9G   17M  7.9G   1% /run
tmpfs 7.9G 0  7.9G   0% /sys/fs/cgroup
/dev/mapper/vgzero-lvhome  99G  189M   94G   1% /home
/dev/mapper/vgzero-lvdata 1.2T   75G  1.1T   7% /data
/dev/mapper/vgzero-lvbackup   296G  6.2G  274G   3% /backup
/dev/mapper/vgzero-lvxlog 197G   61M  187G   1% /pg_xlog
/dev/mapper/vgzero-lvarchive  197G   67G  121G  36% /archive



I allocated memory as per the following list:

shared_buffers = 2GB                 (10-30 %)
effective_cache_size = 7GB           (70-75 %; shared_buffers + page cache, for a dedicated server only)
work_mem = 128MB                     (0.3-1 %)
maintenance_work_mem = 512MB         (0.5-4 %)
temp_buffers = 8MB                   (the default is better; can be changed within individual sessions)
checkpoint_segments = 64
checkpoint_completion_target = 0.9
random_page_cost = 3.5
cpu_tuple_cost = 0.05
wal_buffers = 32MB                   (leaving the default, about 3% of shared_buffers, is better)



Is this allocation good, or do I need to modify anything?

Our server is getting too slow again and again.

Please give me a suggestion.