To whom it may concern,

The system I'm referring to was built by my company, beginning in 2019/2020, on PG 10.2.
The SQL functionality and related tables within the PG database have changed little since initial delivery. Very few complex forms of SQL are used and the tables are quite simple - the largest being ~400,000 rows based directly on ~50,000 instances of the primary key. The functions do not update anything in this 'large' table (from which data is only read) but do produce several significantly smaller tables - the largest being ~40 columns x 50-80 rows, about 80% of which are fixed-point numbers (6 decimal places).

Initially (in 2020, before any indexing was added) the time to produce and output the overall result was 30-60 minutes. After adding quite basic indexing, that time was reduced to 1 or 2 minutes. PG 10.2 remained in place until 2023, when the DB was upgraded to PG 15.2 with no problems. In fact, performance improved.

When I recently upgraded to PG 18.1 (without any changes to the DB functions, tables or indexing) it took 7-10 hours to produce the same result - a roughly 200-600x decrease in performance. My first step was to narrow the version jump by stepping back from PG 18 to PG 16. That produced the same very poor performance result as PG 18.1.

To try and save time I employed the help of Claude Pro (an excellent AI tool for software development). After implementing all sorts of suggestions (indexing, configuration, etc.) and undertaking all sorts of checks (using available PG functions), the conclusion is that some significant regression was introduced in PG 16 and hasn't been removed in (at least) PG 18 (I didn't try PG 17). Of course, I do understand that bringing Claude Pro into the fray may have its own issues, but Claude's suggestions and explanations do make sense, and at the very least they may help a PG expert to more easily home in on a fix.

Sticking with PG 15 in the short term is fine for us. However, the current performance of PG 16 - PG 18 is totally unacceptable relative to expectations.
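In case it helps a PG expert reproduce the comparison, the kind of diagnostic capture one would typically run identically on PG 15 and PG 18 and then diff is sketched below. The function name is a placeholder, not our real schema; nested-statement logging via auto_explain is needed because plain EXPLAIN does not show the plans of statements inside PL/pgSQL functions:

```sql
-- Capture timing and buffer usage for the overall call
-- (produce_report() is a placeholder for our actual function):
EXPLAIN (ANALYZE, BUFFERS, VERBOSE)
SELECT * FROM produce_report();

-- Log per-statement plans from inside the function bodies,
-- which plain EXPLAIN cannot see:
LOAD 'auto_explain';
SET auto_explain.log_min_duration = '1s';
SET auto_explain.log_nested_statements = on;
SET auto_explain.log_analyze = on;
```

Diffing the logged plans between the two versions should show which statement's plan changed across the PG 15 → 16 boundary.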
As an aside, we did increase the number of rows in the largest table (mentioned above) by roughly a third, so that instead of ~400,000 rows there were ~550,000 rows to read from. The number of instances of the primary key was not changed. On PG 15 this only added an extra 20-30 seconds to the production of the result - more or less as expected. The same actions were performed on PG 18, but after several hours we simply killed the process.

BTW, the 'tests' I did were carried out in both a macOS (Sequoia) and a Linux (Rocky Linux 9.6) environment.

For reasons of security I can't provide all the SQL code, but I can (if necessary) provide an outline of what the code does.

Regards,
Clive.

Dr Clive Boughton
Software Improvements
Street address: 97 Bankers Road, NSW 2621 Australia
Mobile Phone: +61 (0)410 632 055
Telephone: +61 (0)2 6230 3195
