PG and I guess blindly adding the "materialized" keyword will cause
errors.
Yeah, prior to 12 CTEs are an optimizer barrier. You can try to rewrite
the queries not to use CTEs - or upgrade. If I were you, I would upgrade.
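On 12 and later the old fencing behavior can still be requested explicitly; a minimal sketch (table and column names invented for illustration):

```sql
-- PostgreSQL 12+: CTEs are inlined by default; MATERIALIZED restores
-- the pre-12 optimizer fence. On 11 and earlier the keyword is a
-- syntax error, which is why it must not be added blindly.
WITH t AS MATERIALIZED (
    SELECT id, payload FROM big_table WHERE flag
)
SELECT * FROM t WHERE id = 42;
```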
Regards, Andreas
--
Andreas Kretschmer - currently still (garden leave)
to force_custom_plan; the default is auto, and with that and bind
variables PG will use a generic plan.
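The setting being referred to is presumably plan_cache_mode (available since PostgreSQL 12); a sketch:

```sql
-- Force a fresh custom plan for every execution of a prepared
-- statement, instead of letting the planner switch to a cached
-- generic plan after a few runs (the "auto" default).
SET plan_cache_mode = force_custom_plan;

-- Revert to the default behavior:
SET plan_cache_mode = auto;
```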
Regards, Andreas
--
Andreas Kretschmer - currently still (garden leave)
Technical Account Manager (TAM)
www.enterprisedb.com
the
statement is run by the JDBC driver. Then you can compare the execution plans
and spot the difference.
Yours,
Laurenz Albe
On 11 September 2023 03:15:43 CEST, Laurenz Albe wrote:
>On Fri, 2023-09-08 at 18:51 +0800, Mikhail Balayan wrote:
>> I have three tables:
>> - test_db_bench_1
>> - test_db_bench_tenants
>> - test_db_bench_tenant_closure
>>
>> And the query to join them:
>> SELECT
On 20 May 2022 10:27:50 CEST, aditya desai wrote:
>Hi,
>One of our applications needs 3000 max_connections to the database.
>A connection pooler like pgbouncer or pgpool is not certified within the
>organization yet, so they are looking at setting up high-configuration
>hardware with CPU and
On 05.04.20 at 19:48, Arya F wrote:
Am I missing something in my setup? Or is this expected? I do know that
having more than 100 partitions in versions prior to PostgreSQL 12
would cause a major slowdown, but from what I read, PostgreSQL 12
addresses that now?
to say more about your problem we
On 21.03.20 at 13:02, Anders Steinlein wrote:
default_statistics_target = 1000
Not sure if this is the culprit here, but I think this is way too high.
Leave it at the normal value of 100 and raise it only for particular
tables and columns.
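Per-column statistics targets can be raised without touching the global setting; a sketch with made-up table and column names:

```sql
-- Raise the sample target only where the planner demonstrably
-- needs finer histograms, then re-gather statistics.
ALTER TABLE my_table ALTER COLUMN my_column SET STATISTICS 1000;
ANALYZE my_table;
```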
Regards, Andreas
--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com
On 27.01.20 at 14:15, Kemal Ortanca wrote:
https://explain.depesz.com/s/G4vq
The estimates and the real values are very different; seems like
problems with autoanalyze.
Which version?
Andreas
--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com
On 19.09.19 at 17:59, Joao Junior wrote:
I have a table that is not being used anymore and I want to drop it.
The table is huge, around 800GB, and it has some indexes on it.
When I execute the DROP TABLE command it goes very slowly; I realised
that the problem is the filesystem.
It seems that XFS
On 05.08.19 at 22:47, Mariel Cherkassky wrote:
Hey,
I have a very big query that consists of 3-4 subqueries that use
window functions. There is a chance that I'll need to rewrite the
query, but first I'm trying to search for other ways to improve it, and
I'll be happy to hear if one of u
On 03.08.19 at 16:06, Thomas Kellerer wrote:
it's possible to rewrite the query to:
test=# explain analyse select count(*) from foo_bar_baz as fbb where
foo_id = (select foo_id from foo where foo_name = 'eeny');
I know, that's not a solution, but a workaround. :-(
Yes, I discovered
Hi,
On 03.08.19 at 15:16, MichaelDBA wrote:
I too am a bit perplexed by why runtime partition pruning does not
seem to work with this example. Anybody got any ideas on this?
Please don't top-post.
It's possible to rewrite the query to:
test=# explain analyse select count(*) from
On 17.07.19 at 14:48, Tomas Vondra wrote:
Either that, or try creating a covering index, so that the query can
do an index-only scan. That might reduce the amount of IO against the
table, and in the index the data should be located close to each other
(same page or pages close to each
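A covering index keeps the extra output columns in the index leaf pages so the heap never has to be visited; a sketch with invented names (the INCLUDE clause requires PostgreSQL 11+):

```sql
-- The WHERE/ORDER BY columns form the key; the extra output
-- columns ride along via INCLUDE so an index-only scan can
-- satisfy the whole query.
CREATE INDEX orders_cust_date_idx
    ON orders (customer_id, order_date)
    INCLUDE (total_amount);
```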
On 17.04.19 at 11:51, laurent.decha...@orange.com wrote:
Here are the logs (with log_error_verbosity = verbose) :
2019-04-17 11:30:42 CEST;35895;thedbuser;thedb;0;LOG: 0:
execute : SELECT COUNT(1) FROM big_table
2019-04-17 11:30:42 CEST;35895;thedbuser;thedb;0;LOCATION:
On 17.04.19 at 08:30, laurent.decha...@orange.com wrote:
SELECT current_setting('max_parallel_workers_per_gather') gives 10 from my
session.
Is there a client configuration that prevents from using parallelism ?
Unlikely.
If I were you, I would compare all settings, using the different
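One way to compare settings between the two clients is to dump each session's non-default configuration and diff the output; a sketch (the exact comparison the truncated advice had in mind is an assumption):

```sql
-- Every setting that differs from its built-in default for the
-- current session; run from both clients and diff the results.
SELECT name, setting
FROM pg_settings
WHERE source <> 'default'
ORDER BY name;
```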
On 10.04.19 at 07:40, Daulat Ram wrote:
We have a two-node PostgreSQL 9.6 database with streaming
replication, running in a Docker environment on Linux (Ubuntu),
and we have to migrate to PostgreSQL 11. I need your suggestions and
steps to complete the upgrade process
On 29.03.19 at 15:29, Stephan Schmidt wrote:
PostgreSQL version: 11.2
Operating system: Linux
Description:
We have a quite complex CTE which collects data fast enough for us and
has an OK execution plan.
When we insert the result into a table like
WITH some_data AS (
SELECT ...
),
On 25.01.19 at 06:20, l...@laurent-hasson.com wrote:
Hello,
We have been stuck for the past week on a query that simply won’t
“execute”. We have a table with 1.2B rows that took around 14h to
load, but a simple select takes forever and after 10h, no records are
coming through still.
On 21.09.2018 at 21:07, Patrick Molgaard wrote:
Andreas -- just following up to say that this was indeed the root
cause. Thanks again.
Glad I could help you.
Regards, Andreas
--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com
>
>Intermittently (one or two times a week), all queries on that host are
>simultaneously blocked for extended periods (10s of seconds).
>
>The blocked queries are trivial & not related to locking - I'm seeing
>slowlogs of the form:
>
Please check whether transparent hugepages (THP) are enabled.
Regards, Andreas
--
On 03.09.2018 at 09:06, Justin Pryzby wrote:
Note, I believe it's planned in the future for foreign keys to support
references to partitioned tables, at which point you could just DROP the
monthly partition... but that's not supported right now.
The future is close - that's possible in 11 ;-)
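Dropping a monthly partition under declarative partitioning is then a plain DDL operation; a sketch with invented table names:

```sql
-- Detach first if other objects still reference the child;
-- with nothing referencing it, DROP alone is enough.
ALTER TABLE measurements DETACH PARTITION measurements_2018_07;
DROP TABLE measurements_2018_07;
```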
On 29.08.2018 at 20:10, David wrote:
On Wed, Aug 29, 2018 at 7:25 AM, Andreas Kretschmer
<andr...@a-kretschmer.de> wrote:
Okay, another solution. The problem is the nested loop; we can
disable that:
test=*# set enable_nestloop to false;
Is it OK t
On 29.08.2018 at 05:31, David wrote:
For now, I can bypass the GIST index by avoiding range operators in my
queries. But why is the GIST index so slow?
Your GiST index contains (member_id, group_id, valid_period), but your
query is only on the latter 2 fields.
test=*# create index
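The truncated command was presumably creating an index whose leading columns match what the query actually filters on; a hedged sketch, with the table name invented:

```sql
-- A composite GiST index mixing a scalar column with a range
-- column needs the btree_gist extension.
CREATE EXTENSION IF NOT EXISTS btree_gist;

-- Put the queried columns first so the index can be used
-- without scanning per-member_id subtrees.
CREATE INDEX membership_group_period_idx
    ON memberships USING gist (group_id, valid_period);
```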
On 23.02.2018 at 20:29, Daulat Ram wrote:
We have the following requirements, in a single query or any proper
solution. Please help with this.
How many sessions are currently open?
Ask pg_stat_activity, via select * from pg_stat_activity
- and if open, then how many queries have
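A sketch of answering both questions from pg_stat_activity (the columns used here are the standard ones in 9.6+):

```sql
-- Total client sessions:
SELECT count(*) FROM pg_stat_activity;

-- Sessions with a query currently executing:
SELECT pid, usename, state, query
FROM pg_stat_activity
WHERE state = 'active';
```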
On 05.02.2018 at 17:22, Andrew Kerber wrote:
Oracle has a problem with transparent hugepages; Postgres may well
have the same problem, so consider disabling transparent hugepages.
Yes, that's true.
Regards, Andreas
--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com
On 05.02.2018 at 14:14, Thomas Güttler wrote:
What do you suggest to get some reliable figures?
sar is often recommended, see
https://blog.2ndquadrant.com/in-the-defense-of-sar/.
Can you exclude other reasons like vacuum / vacuum freeze?
Regards, Andreas
--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com
On 04.02.2018 at 13:19, Mariel Cherkassky wrote:
I checked the plan of the following query:
explain select count(*) from log_full where end_date between
to_date('2017/12/03','YY/MM/DD') and to_date('2017/12/03','YY/MM/DD');
Can you rewrite the query to
... where end_date between
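The 'YY/MM/DD' pattern does not match a four-digit year like '2017/12/03', so the truncated suggestion was presumably heading toward the correct 'YYYY/MM/DD' format or plain date literals; a sketch:

```sql
-- Date literals avoid the format-string mismatch entirely
-- ('YY/MM/DD' applied to '2017/12/03' is error-prone):
SELECT count(*)
FROM log_full
WHERE end_date BETWEEN DATE '2017-12-03' AND DATE '2017-12-03';
```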
Hi,
On 24.01.2018 at 12:48, Stefan Petrea wrote:
We're using PostgreSQL 9.6.6 on Ubuntu 16.04.3 LTS.
During some database imports (using pg_restore), we're noticing fast
and unbounded growth of pg_xlog, up to the point where the
partition (280G in size for us) that stores it fills up and
On 23.01.2018 at 16:20, Pavan Teja wrote:
Hi David,
If yes, what needs to be done in order to stabilize this issue?
Don't top-post ;-)
You can't prevent the generation of WAL (apart from using unlogged
tables, but I'm sure that will not be your solution).
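For completeness, unlogged tables skip WAL at the cost of being truncated after a crash and not being replicated; a sketch:

```sql
-- No WAL is written for this table's data, so bulk loads are
-- fast - but the table is emptied on crash recovery and is
-- invisible to streaming replicas.
CREATE UNLOGGED TABLE staging_events (
    id      bigint,
    payload text
);
```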
Regards, Andreas
--
On 08.12.2017 05:21, Alex Tokarev wrote:
I have made a minimally reproducible test case consisting of a table
with 848 columns
Such a high number of columns may be a sign of a wrong table /
database design. Why do you have so many columns? How many indexes
do you have?
Regards,