Re: Zero throughput on a query on a very large table.

2019-01-25 Thread l...@laurent-hasson.com
Correct, but in the Java code, it's multiple statements in a single transaction, so it should stick. Not sure if something else stupid is going on. Good to know about the ALTER DATABASE effect. I didn't realize that. Thanks a billion. Laurent. From: Tom Lane
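
A minimal sketch of what "sticking" within one transaction could look like on the SQL side, assuming the intent is to lower random_page_cost only for the statements the Java code runs in that transaction (the value 1.0 is illustrative): a plain SET lasts for the rest of the session, while SET LOCAL is reverted at COMMIT or ROLLBACK.

    BEGIN;
    SET LOCAL random_page_cost = 1.0;   -- visible only inside this transaction
    SHOW random_page_cost;              -- reports 1
    COMMIT;
    SHOW random_page_cost;              -- back to the session/database default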

Re: Zero throughput on a query on a very large table.

2019-01-25 Thread Tom Lane
"l...@laurent-hasson.com" writes: > Second, here is what i found and what messed us up. > select current_setting('random_page_cost'); --> 4 > alter database "CMS_TMP" set random_page_cost=0.0001; > select current_setting('random_page_cost'); --> 4 ALTER DATABASE only affects

Re: Zero throughput on a query on a very large table.

2019-01-25 Thread l...@laurent-hasson.com
Just a correction to my previous message regarding the throughput we get. On that one table with 1.2B rows, the index-scan plan actually delivers 50K rows/s in read speed to the application, almost immediately. It would go through the entire table in under 7h vs the other approach
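
As a quick sanity check of the "under 7h" figure, using the row count and rate quoted above:

    SELECT 1.2e9 / 50000 / 3600 AS hours;   -- ~6.7 hours for 1.2B rows at 50K rows/s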

Re: Zero throughput on a query on a very large table.

2019-01-25 Thread l...@laurent-hasson.com
OK... I think we may have cracked this. First, do you think that 128MB work_mem is OK? We have a 64GB machine and expect fewer than 100 connections. This is really an ETL workload environment at this time. Second, here is what I found and what messed us up. select current_setting('ran
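
For context on that question: work_mem is allocated per sort or hash operation, not per connection, so 128MB with up to 100 connections can in theory add up to well past 12.8GB on a 64GB box. A common pattern for ETL-style work is to keep a modest default and raise it only in the session doing the heavy sorts; the values below are illustrative, not a recommendation.

    ALTER SYSTEM SET work_mem = '64MB';   -- instance-wide default
    SELECT pg_reload_conf();

    SET work_mem = '512MB';               -- only in the ETL session that needs it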

Re: Zero throughput on a query on a very large table.

2019-01-25 Thread l...@laurent-hasson.com
Sorry :) It's what the "SQL" tab in pgAdmin shows when I select the index in the schema browser. But you are right that \d doesn't show that. From: Tom Lane Sent: Friday, January 25, 2019 1:34:01 PM To: l...@laurent-hasson.com Cc: pgsql-performa...@postgresql.org
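
If it helps, the normalized definition pgAdmin displays can also be pulled straight from the server, which is presumably where its "SQL" tab gets it (a sketch, using the index and table names from the other messages in this thread):

    SELECT pg_get_indexdef('i_outprev_ptclaim'::regclass);

    SELECT indexname, indexdef
    FROM pg_indexes
    WHERE tablename = 'tmp_outpatient_rev';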

Re: Zero throughput on a query on a very large table.

2019-01-25 Thread Tom Lane
"l...@laurent-hasson.com" writes: > Also, the original statement i implemented did not have all of that. This is > the normalized SQL that Postgres now gives when looking at the indices. [ squint... ] What do you mean exactly by "Postgres gives that"? I don't see any redundant COLLATE clauses i

Re: Zero throughput on a query on a very large table.

2019-01-25 Thread l...@laurent-hasson.com
Agreed on the 2 indices. I only added the second non-unique index to test the hypothesis that I was doing an ORDER BY col1, col2 when the original unique index was on col1, col2, col3... Also, the original statement I implemented did not have all of that. This is the normalized SQL that Postgr
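
On that hypothesis: a btree index on (col1, col2, col3) can already return rows in (col1, col2) order, because that is a leading prefix of the index key, so the extra two-column index should not be needed just for the ORDER BY. A sketch with hypothetical names standing in for the real table:

    CREATE TABLE t (col1 text, col2 text, col3 text);
    CREATE UNIQUE INDEX t_uk ON t (col1, col2, col3);

    -- on a sizable table the planner can walk t_uk and skip the explicit Sort
    EXPLAIN SELECT * FROM t ORDER BY col1, col2;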

Re: Zero throughput on a query on a very large table.

2019-01-25 Thread l...@laurent-hasson.com
Since the pgAdmin4 client timed out when creating the index, you piqued my interest here and I was wondering if the index creation itself had failed... but: \d tmp_outpatient_rev Indexes: "ui_outprev_ptclaimline" UNIQUE, btree (desy_sort_key, claim_no, clm_line_num) "i_outprev_ptclaim
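
On the worry about the build failing after the client timed out: a plain CREATE INDEX that is interrupted is simply rolled back, while an interrupted CREATE INDEX CONCURRENTLY can leave an index behind marked invalid. Either way, a quick check (a sketch; psql's \d also tags such indexes with INVALID):

    SELECT c.relname AS index_name, i.indisvalid
    FROM pg_index i
    JOIN pg_class c ON c.oid = i.indexrelid
    WHERE i.indrelid = 'tmp_outpatient_rev'::regclass;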

Re: Zero throughput on a query on a very large table.

2019-01-25 Thread Tom Lane
"l...@laurent-hasson.com" writes: > The indices are defined as: > CREATE INDEX i_outprev_ptclaim > ON public.tmp_outpatient_rev USING btree > (desy_sort_key COLLATE pg_catalog."default", claim_no COLLATE > pg_catalog."default") > TABLESPACE pg_default; > CREATE UNIQUE INDEX ui_outpr

Re: Zero throughput on a query on a very large table.

2019-01-25 Thread l...@laurent-hasson.com
Sorry, the web Outlook client may be "prepending" this message instead of appending it, as is the custom on this mailing list. The indices are defined as: CREATE INDEX i_outprev_ptclaim ON public.tmp_outpatient_rev USING btree (desy_sort_key COLLATE pg_catalog."default", claim_no COLLATE

RE: Zero throughput on a query on a very large table.

2019-01-25 Thread l...@laurent-hasson.com
> -Original Message- > From: Andreas Kretschmer > Sent: Friday, January 25, 2019 00:55 > To: pgsql-performance@lists.postgresql.org > Subject: Re: Zero throughput on a query on a very large table. > > > > On 25.01.19 at 06:20, l...@laurent-hasson.com wrote: > > > > Hello, > > > > We ha

Re: ERROR: found xmin from before relfrozenxid

2019-01-25 Thread Mariel Cherkassky
I'm getting this issue when I try to connect to a specific db. Does it matter what table I specify? Should I just choose a random table from the problematic db? If I dump the db and restore it, can that help? On Fri, Jan 25, 2019, 10:19 AM Adrien NAYRAT On 1/24/19 3:14 PM, Mariel Cherkassky wr
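
Not a fix, but the horizons that error is comparing against can be inspected directly, which sometimes helps narrow down which tables in the problematic db are affected (a sketch):

    -- tables with the oldest unfrozen transaction IDs
    SELECT relname, relfrozenxid, age(relfrozenxid) AS xid_age
    FROM pg_class
    WHERE relkind = 'r'
    ORDER BY age(relfrozenxid) DESC
    LIMIT 10;

    -- and the per-database horizon
    SELECT datname, datfrozenxid, age(datfrozenxid) FROM pg_database;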

Re: ERROR: found xmin from before relfrozenxid

2019-01-25 Thread Adrien NAYRAT
On 1/24/19 3:14 PM, Mariel Cherkassky wrote: I'm checking the full version. As you said, I saw that 9.6.9 included a fix for the following bug: Avoid spuriously marking pages as all-visible (Dan Wood, Pavan Deolasee, Álvaro Herrera) This could happen if some tuples were locked (but not deleted
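
Since that fix shipped in 9.6.9, a quick way to confirm the exact minor version being run (a sketch):

    SELECT version();          -- full version string
    SHOW server_version_num;   -- 90609 corresponds to 9.6.9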