Exciting! Since I still have the same performance issues on compressed btrfs,
I'm looking forward to testing the patches, probably when a 17 Beta is out and
I can find binaries on my platform (OpenSUSE). It looks like it will make a
huge difference.
Thank you for persisting and getting
On Wed, Jul 12, 2023 at 1:11 AM Dimitrios Apostolou wrote:
> So would it make sense for postgres to perform reads in bigger blocks? Is it
> easy-ish to implement (where would one look for that)? Or must the I/O unit be
> tied to postgres' page size?
FYI as of last week we can do a little bit of
From: Laurenz Albe
Sent: Thursday, February 29, 2024 9:32:48 AM
To: David Kelly ;
pgsql-performance@lists.postgresql.org
Subject: Re: Table Partitioning and Indexes Performance Questions
On Thu, 2024-02-29 at 11:42 -0500, David Kelly wrote:
> I
t have to start with it).
> Any other performance considerations when it comes to partitioned table
> indexing?
> Specifically, partitioning by range where the range is a single value.
Not particularly - selecting from a partitioned table is like selecting
from a UNION ALL of all par
?
Any other performance considerations when it comes to partitioned table
indexing? Specifically, partitioning by range where the range is a single
value.
very poor sql
> performance. Why does the generic plan change to a poor plan "nestloop"? How
> to fix that?
>
please, send the result of EXPLAIN ANALYZE, and try to run VACUUM ANALYZE
first; probably there is a bad estimation there
>
> explain execute j2eemt
Hi,
we created statistics (dependencies, ndistinct) on (cccid,sssid); with
real bind variables it makes a good Hash Join plan, but when it switches to
the generic plan, it automatically converts to a Nested Loop and then very
poor sql performance. Why does the generic plan change to a poor "nestloop"
plan?
On Fri, 2024-02-23 at 18:21 +0800, James Pang wrote:
> it's a third-party vendor application, not easy to change their code.
Then the application is broken, and you should make the vendor fix it.
> is it possible to 1) in Postgresql JDBC driver connection, set
>
it's a third-party vendor application, not easy to change their code.
is it possible to 1) set, in the PostgreSQL JDBC driver connection,
plan_cache_mode=force_custom_plan
or 2) can some other parameter work around this issue?
Thanks,
James
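For reference, the workaround asked about above can be sketched as follows (a sketch only; `app_user` and the connection URL are hypothetical placeholders):

```sql
-- Force custom plans without touching application code (PostgreSQL 12+).
-- Per role, applies to new sessions of that role:
ALTER ROLE app_user SET plan_cache_mode = force_custom_plan;
-- Or per data source, via the JDBC connection URL's options parameter:
-- jdbc:postgresql://host:5432/db?options=-c%20plan_cache_mode%3Dforce_custom_plan
```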
Laurenz Albe wrote on Friday, February 23, 2024 at 5:17 PM:
> On Fri,
On Fri, 2024-02-23 at 15:20 +0800, James Pang wrote:
> we have a SQL from Postgresql JDBC; the primary key is based on
> (bigint,varchar2,bigint),
> but in the sql plan it converts to ::numeric, so the plan just uses one "varchar"
> key column and uses the other 2 bigint keys as filters. what's the cause
Hi experts,
we have a SQL from Postgresql JDBC; the primary key is based on
(bigint,varchar2,bigint), but in the sql plan it converts to ::numeric, so
the plan just uses one "varchar" key column and uses the other 2 bigint keys
as filters. What's the cause of that?
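A common cause is the client binding the parameters with a type that resolves to numeric, which makes the planner cast the bigint key columns. A minimal sketch of the difference (table and column names are hypothetical):

```sql
-- bigint = numeric: there is no bigint=numeric operator, so the indexed
-- column is cast to numeric and can only be applied as a filter:
EXPLAIN SELECT * FROM t WHERE id1 = 42::numeric AND name = 'x' AND id2 = 7::numeric;
-- bigint = bigint: all three key columns can be used as index conditions:
EXPLAIN SELECT * FROM t WHERE id1 = 42::bigint AND name = 'x' AND id2 = 7::bigint;
```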
Table
From: Laurenz Albe
Sent: Tuesday, February 20, 2024 8:29 AM
>Re: "not related" code blocks for removal of dead rows when using vacuum and
>this kills the performance
>Laurenz Albe
>Lars Aksel Opsahl;
>pgsql-performance@lists.postgresql.org
>On Tue, 2024-02-2
On Tue, 2024-02-20 at 05:46 +, Lars Aksel Opsahl wrote:
> If this is expected behavior it means that any user on the database that
> writes
> a long running sql that does not even insert any data can kill performance for
> any other user in the database.
Yes, that is the c
back as user lop and creates the test table
test_null and inserts data in many tiny operations.
User joe who has the long running operation does not know anything about table
test_null and does not have any grants to that table.
The table test_null is not granted to public either.
The problem is the
From: Laurenz Albe
>
>It is not entirely clear what you are doing, but it seems like you are holding
>a database transaction open, and yes, then it is expected behavior that
>VACUUM cannot clean up dead rows in the table.
>
>Make sure that your database transactions are short.
>Don't use table
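A sketch of how to spot the sessions holding back cleanup; any backend with an old backend_xmin pins a snapshot, so VACUUM cannot remove rows deleted after that point:

```sql
-- Sessions whose snapshot still pins dead rows, oldest first:
SELECT pid, state, xact_start, backend_xmin
FROM pg_stat_activity
WHERE backend_xmin IS NOT NULL
ORDER BY xact_start;
```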
On Mon, 2024-02-19 at 16:14 +, Lars Aksel Opsahl wrote:
> Then we start testing VACUUM and very simple SQL testing in another window.
>
> We can now show we have performance of "3343.794 ms" and not "0.123 ms", which
> is what we get when we are able to remove
testing VACUUM and very simple SQL testing in another window.
We can now show we have performance of "3343.794 ms" and not "0.123 ms", which
is what we get when we are able to remove dead rows and run a new analyze.
The problem is that as long as the master code is active, we cann
On Thu, 2024-02-01 at 10:23 +0100, Dirk Krautschick wrote:
> I have run a test with pgbench against two cloud vendors (settings,
> parameters almost the same).
> Both Postgres (or whatever they do internally when they call it as Postgres
> offering, NOT Aurora or so :-) )
>
> I have got a
Hi,
I have run a test with pgbench against two cloud vendors (settings and
parameters almost the same).
Both are Postgres (or whatever they run internally when they offer it as
Postgres, NOT Aurora or the like :-) )
I have got a strange result that cloud vendor 1 is performing almost
you are not thinking of keeping business logic on the application
side and querying the database from different locations. If you treat the
database as a regular application's database and run multiple DML's for
each request through the internet, performance of the application will be
horrible due to l
Hi,
We want to work with PostgreSQL in our new project. I need your opinion on
the best way to create a database.
Description of our Project:
It will be in Client/Server Architecture. Windows Application users will
access the server as clients and they are all in different locations. There
will
On Wed, Dec 27, 2023 at 2:23 PM Ranier Vilela wrote:
> ...
>
> Although granule_file has an index as a foreign key, it seems to me that
> it is not being considered.
>
You seem to be mistaken here: a foreign key does not automatically create
an index on the referencing columns; you need to do it by
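For example (a sketch; the table and column names are taken from the thread):

```sql
-- The FOREIGN KEY constraint indexes nothing by itself;
-- create the index on the referencing side explicitly:
CREATE INDEX ON granule_file (file_id);
```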
; ERROR: column "granule_uuid" contains null values
>
>
>
Seems like an odd design for your table. Check if those rows with null
value make any sense for your design.
In any case, for performance, you can try a plain index:
CREATE INDEX ON granule_file (granule_uuid)
-> Seq Scan on visibility visibility_1 (cost=0.00..1.52
rows=52 width=16) (actual time=0.026..0.029 rows=52 loops=1)
Buffers: shared hit=1
Planning Time: 7.354 ms
Execution Time: 64789.927 ms
(52 rows)
From: Ranier Vilela
Date: Wednesday, December 27, 2023
On Wed, Dec 27, 2023 at 14:11, Wilson, Maria Louise
(LARC-E301)[RSES] wrote:
> Thanks for the reply!! Having some issues due to nulls…. Any other
> thoughts?
>
>
>
> i=# ALTER TABLE granule_file ADD PRIMARY KEY (granule_uuid, file_id);
>
> ERROR: column "granule_uuid" contains null
"Wilson, Maria Louise (LARC-E301)[RSES]"
Cc: Frits Hoogland ,
"pgsql-performance@lists.postgresql.org"
Subject: Re: [EXTERNAL] Need help with performance tuning pg12 on linux
>
>-> Hash Join (cost=644250.54..10734700.30 rows=22333224
> width=223) (actual time=7864.023..44546.392 rows=22325462 loops=1)
> Hash Cond: (file_1.id = granule_file_1.file_id)
> Buffers: shared hit=780882 read=8345236
>
Hi Maria, could you please run explain analyse for the problem query?
The ‘analyze’ addition will track actual spent time and show statistics to
validate the planner’s assumptions.
Frits Hoogland
> On 27 Dec 2023, at 16:38, Wilson, Maria Louise (LARC-E301)[RSES]
> wrote:
>
> Hello folks!
Hello folks!
I am having a complex query slowing over time increasing in duration. If
anyone has a few cycles that they could lend a hand or just point me in the
right direction with this – I would surely appreciate it! Fairly beefy Linux
server with Postgres 12 (latest) – this particular
Jean-Christophe Boggio writes:
> I did upgrade :-) But we have many users for which we don't decide on
> when they do upgrade so we have to keep compatibility with most versions
> of PG and in that particular case (non-existence of the materialized
> keyword for PG 11 and before) it is a real
Andreas,
On 22/11/2023 at 15:25, Andreas Kretschmer wrote:
On 22.11.23 at 12:38, Jean-Christophe Boggio wrote:
>> Also, adding "materialized" to both "withcwrack" and "withcwrack0"
>> CTEs gets the result in acceptable timings (a few seconds). The
>> problem with this is that we have some
On 22.11.23 at 12:38, Jean-Christophe Boggio wrote:
Also, adding "materialized" to both "withcwrack" and "withcwrack0"
CTEs gets the result in acceptable timings (a few seconds). The
problem with this is that we have some clients with older versions of
PG and I guess blindly adding the
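The version-compatibility problem is that the keyword only exists since PostgreSQL 12; a minimal sketch:

```sql
-- PostgreSQL 12+: CTEs can be inlined by default; MATERIALIZED restores
-- the old always-materialize behavior:
WITH w AS MATERIALIZED (SELECT 1 AS x)
SELECT * FROM w;
-- On PG 11 and earlier the MATERIALIZED keyword is a syntax error,
-- but every CTE is materialized implicitly anyway.
```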
John,
On 22/11/2023 at 14:30, John Naylor wrote:
> Note that "vacuum full" is not recommended practice in most
> situations. Among the downsides, it removes the visibility map,
> which is necessary to allow index-only scans. Plain vacuum should
> always be used except for certain dire
On Wed, Nov 22, 2023 at 6:39 PM Jean-Christophe Boggio
wrote:
>
> Hello,
>
> I just switched from PG11 to PG15 on our production server (Version is
> 15.5). Just made a vacuum full analyze on the DB.
Note that "vacuum full" is not recommended practice in most
situations. Among the downsides, it
Hello,
I just switched from PG11 to PG15 on our production server (Version is
15.5). Just made a vacuum full analyze on the DB.
I have a relatively simple query that used to be fast and is now taking
very long (from less than 10 seconds to over 3 minutes).
If I remove a WHERE condition, the
Hi guys,
Thanks for the help.
I was able to recreate the problem , on the same DB, with PSQL only. No JDBC.
A plain run of a complicated query : 50ms
A prepare and then execute of the same query: 2500ms.
The plans are different, as discussed above. The fast one is using
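The psql-only reproduction described above can be sketched like this (statement and parameter are hypothetical); with the default plan_cache_mode, the plan cache may switch to a generic plan from the sixth execution on:

```sql
PREPARE q(bigint) AS SELECT count(*) FROM pgbench_accounts WHERE aid = $1;
EXECUTE q(42);              -- custom plan on the first five executions
EXECUTE q(42); EXECUTE q(42); EXECUTE q(42); EXECUTE q(42);
EXPLAIN EXECUTE q(42);      -- sixth run: a generic plan may be chosen
```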
On Mon, 6 Nov 2023 at 09:59, Jose Osinde wrote:
>
> Dear all,
>
> I'm running a query from Java on a postgres database:
>
> Java version: 17
> JDBC version: 42.4.2
> Postgres version: 13.1
>
> In parallel I'm testing the same queries from pgAdmin 4 version 6.13
>
> The tables I'm using contains
Dear all,
I'm running a query from Java on a postgres database:
Java version: 17
JDBC version: 42.4.2
Postgres version: 13.1
In parallel I'm testing the same queries from pgAdmin 4 version 6.13
The tables I'm using contains more than 10million rows each and I have two
questions here:
1. I
Very good point from Danny: generic and custom plans.
One thing that is almost certainly not at play here, and is mentioned: there
are some specific cases where the planner does not optimise for the query in
total to be executed as fast/cheap as possible, but for the first few rows. One
reason
On Mon, 6 Nov 2023 at 08:37, Abraham, Danny wrote:
>
> Both plans refer to the same DB.
JDBC is making use of PREPARE statements, whereas psql, unless you're
using PREPARE, is not.
> #1 – Fast – using psql or old JDBC driver
The absence of any $1 type parameters here shows that's a custom plan
Thanks for the help.
Both plans refer to the same DB.
#1 – Fast – using psql or old JDBC driver
==>
Sort (cost=13113.27..13113.33 rows=24 width=622)
Output: dm.calname, dm.jobyear, dm.caltype, ((dm.daymask)::character
varying(400))
Sort Key: dm.calname, dm.jobyear
-> HashAggregate
On Sun, Nov 5, 2023 at 11:20 AM Abraham, Danny
wrote:
> Thanks Laurenz,
>
> Traced two huge plans. They differ.
> The fast one does use Materialize and Memoize (the psql).
> Is there something in JDBC 42 that blocks these algorithms?
Directly blocking those is not likely. Maybe the way the
blocks these algorithms?
>
> Thanks again
>
> Danny
>
> -Original Message-
> From: Laurenz Albe
> Sent: Saturday, November 4, 2023 11:07 PM
> To: Abraham, Danny ; psql-performance
>
> Subject: [EXTERNAL] Re: Performance down with JDBC 42
>
>>
name, setting from pg_settings where name ~ 'enable';
using the JDBC-connection.
Regards, Andreas
Thanks again
Danny
-Original Message-
From: Laurenz Albe
Sent: Saturday, November 4, 2023 11:07 PM
To: Abraham, Danny ; psql-performance
Subject: [EXTERNAL] Re: Performance down
, Danny ; psql-performance
Subject: [EXTERNAL] Re: Performance down with JDBC 42
On Sat, 2023-11-04 at 19:08 +, Abraham, Danny wrote:
> Asking for help with a JDBC related issue.
> Environment: Linux 7.9 PG 14.9 , very busy PG Server.
>
> A big query - 3 unions and about 1
On Sat, 2023-11-04 at 19:08 +, Abraham, Danny wrote:
> Asking for help with a JDBC related issue.
> Environment: Linux 7.9 PG 14.9 , very busy PG Server.
>
> A big query - 3 unions and about 10 joins runs :
> - 70ms on psql , DBeaver with JDBC 42 and in our Server using old JDBC 9.2
> -
Hi,
Asking for help with a JDBC related issue.
Environment: Linux 7.9 PG 14.9 , very busy PG Server.
A big query - 3 unions and about 10 joins runs :
- 70ms on psql , DBeaver with JDBC 42 and in our Server using old JDBC 9.2
- 2500 ms in our Server using new JDBC 42 driver. ( and this is
On 29/08/2023, David Rowley wrote:
> On Tue, 29 Aug 2023 at 19:40, Philippe Pepiot wrote:
> > I'm trying to implement some range partitioning on timeseries data. But it
> > looks some queries involving date_trunc() doesn't make use of partitioning.
> >
> > BEGIN;
> > CREATE TABLE test (
> >
On Tue, 29 Aug 2023 at 19:40, Philippe Pepiot wrote:
> I'm trying to implement some range partitioning on timeseries data. But it
> looks some queries involving date_trunc() doesn't make use of partitioning.
>
> BEGIN;
> CREATE TABLE test (
> time TIMESTAMP WITHOUT TIME ZONE NOT NULL,
>
Hi,
I'm trying to implement range partitioning on timeseries data, but it looks
like some queries involving date_trunc() don't make use of partitioning.
BEGIN;
CREATE TABLE test (
time TIMESTAMP WITHOUT TIME ZONE NOT NULL,
value FLOAT NOT NULL
) PARTITION BY RANGE (time);
CREATE INDEX
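Partition pruning needs a predicate directly on the partition key; wrapping the key in date_trunc() defeats it. A sketch of the usual rewrite, using the test table above:

```sql
-- Instead of:  WHERE date_trunc('day', time) = '2023-08-29'
-- use a half-open range on the raw column, which the planner can prune:
SELECT * FROM test
WHERE time >= '2023-08-29' AND time < '2023-08-30';
```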
Thanks for the feedback. Appreciate it.
On Thu, 27 Jul, 2023, 01:09 Laurenz Albe, wrote:
> On Wed, 2023-07-26 at 18:15 +0530, Piyush Katariya wrote:
> > I have a few queries regarding the TOAST Fields
> serialisation/deserialization performance.
> >
> > The use case i
On Wed, 2023-07-26 at 18:15 +0530, Piyush Katariya wrote:
> I have a few queries regarding the TOAST Fields serialisation/deserialization
> performance.
>
> The use case i am trying to solve here is to have millions of partitions and
> aggregate the data in array field.
>
>
Hello Gents,
I have a few queries regarding the TOAST Fields
serialisation/deserialization performance.
The use case I am trying to solve here is to have millions of partitions
and aggregate the data in an array field.
I wish to know: if I declare a certain column in a table as "array of UDT/
configure --with-liburing (I'm on linux) and run with
> io_method=io_uring?
It's probably worth trying out both io_uring and worker. I've not looked at
performance on btrfs. I know that some of the optimized paths for io_uring
(being able to perform filesystem IO without doing so s
Thanks, it sounds promising! Are the changes in the 16 branch already,
i.e. is it enough to fetch sources for 16-beta2? If
so do I just configure --with-liburing (I'm on linux) and run with
io_method=io_uring? Else, if I use the io_method=worker what is a sensible
amount of worker threads? Should
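As a config sketch, using the GUC names mentioned in this thread (assumes a build configured --with-liburing for the io_uring case):

```
io_method = io_uring     # needs a --with-liburing build, Linux only
# io_method = worker     # portable alternative
# io_workers = 3         # worker-pool size, only used with io_method = worker
```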
On Thu, Jul 13, 2023 at 6:50 AM Dimitrios Apostolou wrote:
> Interesting and kind of sad that the last update on the wiki page is from
> 2021. What is the latest prototype? I'm not sure I'm up to the task of
> putting my database to the test. ;-)
It works pretty well, certainly well enough to
Hello and thanks for the feedback!
On Wed, 12 Jul 2023, Thomas Munro wrote:
On Wed, Jul 12, 2023 at 1:11 AM Dimitrios Apostolou wrote:
Note that I suspect my setup being related, (btrfs compression behaving
suboptimally) since the raw device can give me up to 1GB/s rate. It is however
On Wed, Jul 12, 2023 at 5:12 AM Thomas Munro wrote:
> "gathering"
(Oops, for reads, that's "scattering". As in scatter/gather I/O but I
picked the wrong one...).
On Wed, Jul 12, 2023 at 1:11 AM Dimitrios Apostolou wrote:
> Note that I suspect my setup being related, (btrfs compression behaving
> suboptimally) since the raw device can give me up to 1GB/s rate. It is however
> evident that reading in bigger chunks would mitigate such setup
>
Hello list,
I have noticed that the performance during a SELECT COUNT(*) command is
much slower than what the device can provide. Parallel workers improve the
situation but for simplicity's sake, I disable parallelism for my
measurements here by setting max_parallel_workers_per_gather to 0
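The measurement setup described above, as a sketch (the table name is a placeholder):

```sql
SET max_parallel_workers_per_gather = 0;  -- disable parallel workers
EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM big_table;
```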
Hi,
Is there a way to visualize RLS policy check(s) in the query plan?
Regards,
Akash Anand
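There is no dedicated RLS node in query plans, but policy USING clauses are injected into the query as ordinary quals, so they appear in EXPLAIN output as Filter conditions. A sketch with a hypothetical table and policy:

```sql
CREATE POLICY own_rows ON accounts USING (owner = current_user);
ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;
EXPLAIN SELECT * FROM accounts;  -- the policy qual shows up in the Filter line
```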
On Mon, Jul 10, 2023 at 11:33 AM Akash Anand wrote:
> Hi,
>
> --
> Postgres version
> --
> postgres=# SELECT version();
> version
>
Hi,
--
Postgres version
--
postgres=# SELECT version();
version
---
PostgreSQL 15.3 (Debian
Hello! I tried asking this over on the general listserv before realizing
pgsql-performance is probably better suited.
Hi all,
I recently started at a new firm and have been trying to help to grok
certain planner behavior. A strip-down example of the sort of join we do in
the database looks like
On Tue, May 23, 2023 at 08:43, Druckenmueller, Marc <
marc.druckenmuel...@philips.com> wrote:
> Hi there,
>
>
>
> I am investigating possible throughput with PostgreSQL 14.4 on an ARM
> i.MX6 Quad CPU (NXP sabre board).
>
> Testing with a simple python script (running on the same CPU),
6: this = 0.56 tot = 3.96
7: this = 0.55 tot = 4.51
8: this = 0.59 tot = 5.11
9: this = 0.60 tot = 5.71
That's with governor=performance and a couple of background tasks
running as well as the python. PostgreSQL 15 in a container on a Debian
O.S. I've not done any tuning on PostgreSQL
"Druckenmueller, Marc" writes:
> I am investigating possible throughput with PostgreSQL 14.4 on an ARM i.MX6
> Quad CPU (NXP sabre board).
> Testing with a simple python script (running on the same CPU), I get ~1000
> request/s.
That does seem pretty awful for modern hardware, but it's hard to
On Tue, 23 May 2023 at 13:43, Druckenmueller, Marc
wrote:
> Testing with a simple python script (running on the same CPU), I get ~1000
> request/s.
Is the time spent in the client or in the server? Are there noticeable
differences if you execute that statement in a loop in psql (with the
precision[])
AS $$
BEGIN
END
$$ LANGUAGE plpgsql;
This seems to be a quite low number of requests/s, given that there are no
changes to the database.
Looking for suggestions what could cause this poor performance and where to
start investigations.
Tha
On Fri, 2023-04-28 at 15:19 +0300, Παρασκευη Πασσαρη wrote:
> We are facing a performance issue with the following query. Executing this
> query takes about 20 seconds.
> (the database version is 14.1)
The execution plan seems to be incomplete.
Yours,
Laurenz Albe
Παρασκευη Πασσαρη writes:
> We are facing a performance issue with the following query. Executing this
> query takes about 20 seconds.
Raising join_collapse_limit (to more than the number of joins in
the query) might help. But I think
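As a sketch (the value is illustrative; it just needs to be at least the number of joins in the query):

```sql
-- Let the planner reorder across all joins in this session:
SET join_collapse_limit = 16;
SET from_collapse_limit = 16;  -- companion setting for subquery flattening
```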
On Friday, April 28, 2023, Παρασκευη Πασσαρη
wrote:
> Dear all,
>
> We are facing a performance issue with the following query. Executing this
> query takes about 20 seconds.
> (the database version is 14.1)
>
Given the possibility of this working better in the supported 14.7
Dear all,
We are facing a performance issue with the following query. Executing this
query takes about 20 seconds.
(the database version is 14.1)
The query:
- SLOW QUERY -
SELECT lead_record.id AS id
FROM "lead_record" lead_record
On Mon, Feb 13, 2023 at 10:09 AM Mkrtchyan, Tigran
wrote:
>
> 0.524 0 BEGIN;
> 0.819 0 INSERT INTO t_inodes (inumber, icrtime,
> igeneration)
> 0.962 0 UPDATE t_inodes SET igeneration = igeneration
> + 1 where inumber = :inumber;
>
Maybe reconsider your expectation.
Note: every UPDATE has to SELECT before modifying data.
Even if the page is in memory, there is still work: reading, acquiring locks,
modifying, and requesting the write to disk.
Regards,
Tobi
> On 13 Feb 2023, at 18:48, Laurenz Albe wrote:
>
> On Mon,
On Mon, 2023-02-13 at 16:09 +0100, Mkrtchyan, Tigran wrote:
> Typically we expect that UPDATE is a slow operation in PostgreSQL, however,
> there are cases where it's hard to understand why. In particular, I have a
> table like
>
> ```
> CREATE SEQUENCE t_inodes_inumber_seq
> START WITH 1
>
Dear Postgres Folks,
Typically we expect that UPDATE is a slow operation in PostgreSQL, however,
there are cases where it's hard to understand why. In particular, I have a
table like
```
CREATE SEQUENCE t_inodes_inumber_seq
START WITH 1
INCREMENT BY 1
NO MINVALUE
NO MAXVALUE
On Tue, Nov 29, 2022 at 4:07 PM David Rowley wrote:
> On Wed, 30 Nov 2022 at 03:31, Tom Lane wrote:
> >
> > Alvaro Herrera writes:
> > > IMO it was a mistake to turn JIT on in the default config, so that's
> one
> > > thing you'll likely want to change.
> >
> > I wouldn't necessarily go quite
On 2022-11-30 We 11:36, Tom Lane wrote:
> Andres Freund writes:
>> On November 30, 2022 3:47:32 AM PST, Andrew Dunstan
>> wrote:
>>> I think Alvaro's point is that it would have been better to work out
>>> these wrinkles before turning on JIT by default. Based on anecdotal
>>> reports from
Andres Freund writes:
> On November 30, 2022 3:47:32 AM PST, Andrew Dunstan
> wrote:
>> I think Alvaro's point is that it would have been better to work out
>> these wrinkles before turning on JIT by default. Based on anecdotal
>> reports from the field I'm inclined to agree.
> The problem is
Hi,
On November 30, 2022 3:47:32 AM PST, Andrew Dunstan wrote:
>
>On 2022-11-29 Tu 16:06, David Rowley wrote:
>> On Wed, 30 Nov 2022 at 03:31, Tom Lane wrote:
>>> Alvaro Herrera writes:
>>>> IMO it was a mistake to turn JIT on in the default config, so that's one
>>>> thing you'll likely want
On 2022-11-29 Tu 16:06, David Rowley wrote:
> On Wed, 30 Nov 2022 at 03:31, Tom Lane wrote:
>> Alvaro Herrera writes:
>>> IMO it was a mistake to turn JIT on in the default config, so that's one
>>> thing you'll likely want to change.
>> I wouldn't necessarily go quite that far, but I do think
On Tue, 2022-11-29 at 19:09 +0100, Alvaro Herrera wrote:
> On 2022-Nov-29, Mladen Gogala wrote:
>
> > Hmmm, I think I will run pgbench with and without JIT on and see
> > the
> > difference.
>
> I doubt you'll notice anything, because the pgbench queries will be
> far
> below the JIT cost, so
On Wed, 30 Nov 2022 at 03:31, Tom Lane wrote:
>
> Alvaro Herrera writes:
> > IMO it was a mistake to turn JIT on in the default config, so that's one
> > thing you'll likely want to change.
>
> I wouldn't necessarily go quite that far, but I do think that the
> default cost thresholds for
On 2022-Nov-29, Mladen Gogala wrote:
> Hmmm, I think I will run pgbench with and without JIT on and see the
> difference.
I doubt you'll notice anything, because the pgbench queries will be far
below the JIT cost, so nothing will get JIT compiled at all. Or are you
planning on using a custom
Alvaro Herrera writes:
> IMO it was a mistake to turn JIT on in the default config, so that's one
> thing you'll likely want to change.
I wouldn't necessarily go quite that far, but I do think that the
default cost thresholds for invoking it are enormously too low,
or else there are serious bugs
On 11/29/22 03:36, Alvaro Herrera wrote:
> On 2022-Nov-28, Mladen Gogala wrote:
> > As for JIT, I've recently asked that question myself. I was told that
> > PostgreSQL with LLVM enabled performs approximately 25% better than without
> > it.
> Hmm, actually, normally you're better off turning JIT off,
On 11/29/22 03:31, Alvaro Herrera wrote:
> On 2022-Nov-28, Mladen Gogala wrote:
> > You'll probably be glad to learn that we have hints now.
> What hints are you talking about? As I understand, we still don't have
> Oracle-style query hints.
https://github.com/ossc-db/pg_hint_plan
--
Mladen Gogala
On 2022-Nov-28, Mladen Gogala wrote:
> As for JIT, I've recently asked that question myself. I was told that
> PostgreSQL with LLVM enabled performs approximately 25% better than without
> it.
Hmm, actually, normally you're better off turning JIT off, because it's
very common to diagnose cases
On 2022-Nov-28, Mladen Gogala wrote:
> You'll probably be glad to learn that we have hints now.
What hints are you talking about? As I understand, we still don't have
Oracle-style query hints.
--
Álvaro Herrera        Breisgau, Deutschland — https://www.EnterpriseDB.com/
lz4 # v14
recovery_init_sync_method = syncfs # v14
check_client_connection_interval = ... # v14
wal_compression = {lz4,zstd} # v15
If anyone has links to blogs or other things that discuss the
performance implications of the above settings that would be wonderful!
--
Josh Berkus
On 11/28/22 21:59, Josh Berkus wrote:
Hey, folks:
I haven't configured a PostgreSQL server since version 11 (before
that, I did quite a few).
What's changed in terms of performance configuration since then? Have
the fundamentals of shared_buffers/work_mem/max_connections changed at
all
On Mon, Nov 28, 2022 at 06:59:41PM -0800, Josh Berkus wrote:
> Hey, folks:
>
> I haven't configured a PostgreSQL server since version 11 (before that, I
> did quite a few).
>
> What's changed in terms of performance configuration since then? Have the
> fundamentals of sh
Hey, folks:
I haven't configured a PostgreSQL server since version 11 (before that,
I did quite a few).
What's changed in terms of performance configuration since then? Have
the fundamentals of shared_buffers/work_mem/max_connections changed at
all? Which new settings are must-tunes
Rick Otten writes:
> I was able to reproduce a similar issue with using `= ANY(VALUES)` instead
> of `= ANY(ARRAY)`:
>
> 1. fast query using =ANY(ARRAY): https://explain.depesz.com/s/dwP8
> 2. slow query using =ANY(ARRAY(expr)): https://explain.depesz.com/s/3hGb
> 3. slow query using
I was able to reproduce a similar issue with using `= ANY(VALUES)`
> instead of `= ANY(ARRAY)`:
>
> 1. fast query using =ANY(ARRAY): https://explain.depesz.com/s/dwP8
> 2. slow query using =ANY(ARRAY(expr)): https://explain.depesz.com/s/3hGb
> 3. slow query using =ANY(VALUES):
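The three shapes compared above, sketched with the thread's table (the id values and the source table `g` are placeholders):

```sql
SELECT * FROM report WHERE reporter_id = ANY (ARRAY[1,2,3]);             -- 1: constant array
SELECT * FROM report WHERE reporter_id = ANY (ARRAY(SELECT id FROM g));  -- 2: array built from a subquery
SELECT * FROM report WHERE reporter_id = ANY (VALUES (1),(2),(3));       -- 3: VALUES subquery
```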
Okay, increasing the collection of statistics seems to have helped. I used
`ALTER TABLE report ALTER COLUMN reporter_id SET STATISTICS 1` and now
queries which previously didn't finish at all now finish in < 1 ms.
The following gave me the hint:
“The amount of information stored in
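The fix described above, sketched in full (the raised statistics target of 1000 is illustrative, since the snippet above is truncated):

```sql
ALTER TABLE report ALTER COLUMN reporter_id SET STATISTICS 1000;
ANALYZE report;  -- the new target only takes effect at the next ANALYZE
```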
Hello!
I have been struggling with finding a proper solution for this query for
some time and wanted to ask if someone here knows how to approach this?
I have a table named "report" which has an index on report.reporter_id.
This column consists of IDs which are grouped together using a table
On Sat, 2022-09-17 at 01:05 +0200, Hubert Rutkowski wrote:
> Hello! I have written python program to benchmark view efficiency, because in
> our platform
> they have a role to play and we noticed the performance is less than expected.
If your platform plans to use millions of views, y
Hello! I have written python program to benchmark view efficiency,
because in our platform they have a role to play and we noticed the
performance is less than expected.
Basically, benchmark creates table:
CREATE TABLE IF NOT EXISTS foobar ( id int, text varchar(40) );
for i in range(1200300
1 - 100 of 911 matches