Re: Query slow for new participants

2019-02-25 Thread MichaelDBA
Regarding shared_buffers, please install the pg_buffercache extension and run the recommended queries with that extension during high-load times to really get an idea of the right value for shared_buffers. Let's take the guesswork out of it. Regards, Michael Vitale
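The "recommended queries" are not spelled out in the excerpt above; a minimal sketch of the kind of inspection Michael is describing (assumed, not necessarily the exact queries he has in mind) could be run during a high-load window like this:

-- Assumed illustrative query: which relations occupy the most shared buffers,
-- and how "hot" (frequently reused) those buffers are.
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

SELECT c.relname,
       count(*)                                                     AS buffers,
       pg_size_pretty(count(*) * current_setting('block_size')::bigint) AS buffered,
       round(avg(b.usagecount), 2)                                  AS avg_usagecount
FROM   pg_buffercache b
JOIN   pg_class c
       ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE  b.reldatabase IN (0, (SELECT oid FROM pg_database
                             WHERE datname = current_database()))
GROUP  BY c.relname
ORDER  BY buffers DESC
LIMIT  20;

If the working set reported here is much smaller than shared_buffers, or the buffers are dominated by low usage counts, that is evidence toward lowering or raising the setting rather than guessing.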

Re: Query slow for new participants

2019-02-25 Thread Justin Pryzby
On Tue, Feb 26, 2019 at 12:22:39AM +0100, supp...@mekong.be wrote: > Hardware > Standard DS15 v2 (20 vcpus, 140 GB memory) > "effective_cache_size" "105GB" "configuration file" > "effective_io_concurrency" "200" "configuration file" > "maintenance_work_mem" "2GB" "configuration file" >

Re: Query slow for new participants

2019-02-25 Thread supp...@mekong.be
Hi, thank you for your reply. Yes, I will go through this page. Regards, Kim. On Mon, 25 Feb 2019 at 17:16, Justin Pryzby wrote: > On Mon, Feb 25, 2019 at 03:41:18AM -0700, Kim wrote: > > Is there any way I can make the queries fast for new participants? This is a big problem,

Re: Massive parallel queue table causes index deterioration, but REINDEX fails with deadlocks.

2019-02-25 Thread MichaelDBA
I was wondering when that would come up: taking queuing logic outside the database. Architecting queuing logic in relational databases can be overly painful, IMHO. Regards, Michael Vitale. Jeff Janes wrote on Monday, February 25, 2019, 3:30 PM: On Sat, Feb 23, 2019 at 4:06

Re: Massive parallel queue table causes index deterioration, but REINDEX fails with deadlocks.

2019-02-25 Thread Jeff Janes
On Mon, Feb 25, 2019 at 11:13 AM Gunther Schadow wrote: > Anyway, I think the partitioned table is the right and brilliant solution, > because an index really isn't required. The actual pending partition will > always remain quite small, and being a queue, it doesn't even matter how > big it

Re: Massive parallel queue table causes index deterioration, but REINDEX fails with deadlocks.

2019-02-25 Thread Jeff Janes
On Sat, Feb 23, 2019 at 4:06 PM Gunther wrote: > Hi, > > I am using an SQL queue for distributing work to massively parallel > workers. > You should look into specialized queueing software. ... > I figured I might just pause all workers briefly to schedule the REINDEX > Queue command, but the
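For context on the "pause all workers and REINDEX" idea quoted above: on the PostgreSQL versions current at the time (REINDEX CONCURRENTLY only arrived later, in version 12), a common way to rebuild a bloated index without a long exclusive lock was to build a replacement concurrently and swap it in. A minimal sketch with hypothetical table and index names, not taken from the thread:

-- Hypothetical names; illustrative sketch only.
-- Build a fresh copy of the bloated queue index without blocking writers.
CREATE INDEX CONCURRENTLY work_queue_pending_idx_new
    ON work_queue (created_at)
    WHERE status = 'pending';

-- Drop the bloated original and give the new index its name.
DROP INDEX CONCURRENTLY work_queue_pending_idx;
ALTER INDEX work_queue_pending_idx_new RENAME TO work_queue_pending_idx;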

Re: Aggregate and many LEFT JOIN

2019-02-25 Thread Michael Lewis
On Mon, Feb 25, 2019 at 2:44 AM kimaidou wrote: > I have better results with this version. Basically, I run a first query > only made for aggregation, and then do a JOIN to get other needed data. > > * SQL : http://paste.debian.net/1070007/ > * EXPLAIN: https://explain.depesz.com/s/D0l > > Not

Re: Massive parallel queue table causes index deterioration, but REINDEX fails with deadlocks.

2019-02-25 Thread Corey Huinker
> > > Anyway, I think the partitioned table is the right and brilliant solution, > because an index really isn't required. The actual pending partition will > always remain quite small, and being a queue, it doesn't even matter how > big it might grow, as long as new rows are inserted at the end

Re: Query slow for new participants

2019-02-25 Thread Justin Pryzby
On Mon, Feb 25, 2019 at 03:41:18AM -0700, Kim wrote: > Is there any way I can make the queries fast for new participants? This > is a big problem, because for new participants, speed is even more > important. > > Thank you for your help. Could you include the information requested here?

Query slow for new participants

2019-02-25 Thread Kim
Hello, I have an article query which returns the articles enabled for a participant. There is an Article table, a Participant table, and a table in between which stores the links between Article and Participant, including characteristics such as enabled. It is possible to search on the articles by number,
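The actual table and column names are not shown in the excerpt; a minimal sketch of the schema as described, with hypothetical names, might be:

-- Hypothetical schema, assumed from the description above.
CREATE TABLE article (
    article_id bigint PRIMARY KEY,
    number     text NOT NULL
);

CREATE TABLE participant (
    participant_id bigint PRIMARY KEY,
    name           text NOT NULL
);

-- Link table storing the article/participant relation and its characteristics.
CREATE TABLE article_participant (
    article_id     bigint NOT NULL REFERENCES article,
    participant_id bigint NOT NULL REFERENCES participant,
    enabled        boolean NOT NULL DEFAULT false,
    PRIMARY KEY (article_id, participant_id)
);

-- The query in question would return enabled articles for one participant,
-- optionally filtered by number.
SELECT a.*
FROM   article a
JOIN   article_participant ap USING (article_id)
WHERE  ap.participant_id = 123
AND    ap.enabled
AND    a.number = '100-200';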

Re: Massive parallel queue table causes index deterioration, but REINDEX fails with deadlocks.

2019-02-25 Thread Gunther Schadow
Wow, yes, partition instead of index, that is interesting. Thanks Corey and Justin. The index isn't required at all if all my pending jobs are in a partition of only pending jobs. In that case the plan can just be a sequential scan. And Jeff Janes, sorry, I failed to show the LIMIT 1 clause
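A minimal sketch of the partition-instead-of-index idea, using hypothetical names and PostgreSQL 11-style list partitioning (assumed, not Gunther's actual DDL): pending jobs live in their own small partition, so claiming one is a cheap sequential scan and no queue index is needed.

-- Hypothetical sketch; assumed names.
CREATE TABLE job_queue (
    job_id     bigint      NOT NULL,
    status     text        NOT NULL,          -- 'pending' or 'done'
    payload    text,
    created_at timestamptz NOT NULL DEFAULT now()
) PARTITION BY LIST (status);

CREATE TABLE job_queue_pending PARTITION OF job_queue FOR VALUES IN ('pending');
CREATE TABLE job_queue_done    PARTITION OF job_queue FOR VALUES IN ('done');

-- A worker claims one pending job (SKIP LOCKED is a common pattern for SQL
-- queues, not necessarily what the thread used) and marks it done; the UPDATE
-- moves the row into the 'done' partition (row movement works from PG 11).
WITH next_job AS (
    SELECT job_id
    FROM   job_queue_pending
    LIMIT  1
    FOR UPDATE SKIP LOCKED
)
UPDATE job_queue q
SET    status = 'done'
FROM   next_job
WHERE  q.job_id = next_job.job_id
RETURNING q.job_id;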

Re: Idle backends outside a transaction holding onto large amounts of memory / swap space?

2019-02-25 Thread Pavel Stehule
Hi. On Mon, 25 Feb 2019 at 11:37, Tobias Gierke <tobias.gie...@code-sourcery.de> wrote: > Hi, > > Recently we started seeing the Linux OOM killer kicking in and killing > PostgreSQL processes on one of our development machines. > > The PostgreSQL version we're using was compiled by us, is

Re: Aggregate and many LEFT JOIN

2019-02-25 Thread kimaidou
I have better results with this version. Basically, I run a first query that only does the aggregation, and then JOIN to get the other data I need. * SQL: http://paste.debian.net/1070007/ * EXPLAIN: https://explain.depesz.com/s/D0l Not really "fast", but I gained 30%. On Mon, 25 Feb 2019 at 09:54,
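The linked paste is not reproduced here; a minimal sketch of the "aggregate first, then join" rewrite kimaidou describes, with hypothetical table and column names, looks like this:

-- Hypothetical sketch of the pattern.
-- Step 1: an inner query does only the grouping/aggregation on the join key.
-- Step 2: the outer query joins the small aggregated result back to the
--         other tables to pick up the remaining columns, instead of dragging
--         every LEFT JOIN through the GROUP BY.
SELECT agg.parent_id,
       agg.child_count,
       p.name,
       p.created_at
FROM (
    SELECT parent_id, count(*) AS child_count
    FROM   child
    GROUP  BY parent_id
) AS agg
JOIN parent p ON p.id = agg.parent_id;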

Re: Aggregate and many LEFT JOIN

2019-02-25 Thread kimaidou
Thanks for your answers. I tried with > set session work_mem = '250MB'; > set session geqo_threshold = 20; > set session join_collapse_limit = 20; It seems to have no real impact: https://explain.depesz.com/s/CBVd Indeed, an index cannot really be used for sorting here, based on the complexity of