t know
> maybe this is already how PostgreSQL works).
That will happen automatically: all data you read or modify get loaded into
shared buffers and stay there.
The cache will be empty after a restart of the database. You can use the
pg_prewarm extension (in shared_preload_libraries) to have PostgreSQL load
the data into the cache again automatically after a restart.
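For illustration, a minimal sketch (the table name is a placeholder; the
autoprewarm worker requires pg_prewarm in shared_preload_libraries and a
server restart):
ALTER SYSTEM SET shared_preload_libraries = 'pg_prewarm';
-- after restarting the server:
CREATE EXTENSION pg_prewarm;
-- one-off manual warm-up of a single table:
SELECT pg_prewarm('my_table');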
Yours,
Laurenz Albe
s
of connections established. That can easily use your CPU.
Other than that, I am out of guesses.
> On Wed, 25 Jun 2025 at 22:27, Frits Hoogland wrote:
> > > On 25 Jun 2025, at 07:59, Laurenz Albe wrote:
> > >
> > > Run "sar -P all 1" and see if "%iowait"
insert/delete
> other tables in the trigger.
One thing that almost certainly would improve your situation is to run fewer
concurrent statements, for example by using a reasonably sized connection pool.
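To gauge how many of your connections are actually doing work, a sketch
using only the system view pg_stat_activity:
SELECT state, count(*) FROM pg_stat_activity GROUP BY state;
If most sessions are idle, a pool much smaller than the current connection
count will usually do.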
Yours,
Laurenz Albe
lding the lock.
If you have hundreds of sessions, you are allowing too many connections.
Yours,
Laurenz Albe
tion Time: 0.066 ms
Which index is bigger (you can use \di+ in "psql")?
Could you run the pgstatindex() function from the "pgstattuple" extension on
both indexes and compare the output?
Does ANALYZE on the table make a difference?
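For the pgstatindex() comparison, a minimal sketch (the index names are
placeholders):
CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT * FROM pgstatindex('index_a');
SELECT * FROM pgstatindex('index_b');
-- compare avg_leaf_density and leaf_fragmentation in particular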
Yours,
Laurenz Albe
r-defined constant, you
could write the query as
SELECT ...
FROM test_constraint
WHERE value = $1
AND length($1) <= 10;
If the query planner knows the value of the constant, it will evaluate
the second condition when it plans the query and replace it with a
"One-Time Filter: false", which would do exactly what you want.
Yours,
Laurenz Albe
much smaller) connection pool.
That will probably take care of the problem and improve your overall
performance.
Yours,
Laurenz Albe
On Thu, 2025-03-13 at 12:28 +0100, I wrote:
> Then the best you can do is to use COPY rather than INSERT.
> It will perform better (but now vastly better).
Sorry, I meant "*not* vastly better".
Yours,
Laurenz Albe
T.
It will perform better (but now vastly better).
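For illustration, a hedged sketch of server-side COPY (table, columns and
path are placeholders):
COPY my_table (id, payload) FROM '/tmp/data.csv' WITH (FORMAT csv);
From a client, psql's \copy variant reads the file on the client side instead.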
Yours,
Laurenz Albe
if you want the
features that a relational database provides.
To get better performance, the best I can think of is to parallelize
loading the data until you saturate CPU, disk or hit internal contention
in the database.
Yours,
Laurenz Albe
On Thu, 2025-02-27 at 16:54 +0300, Alexey Borschev wrote:
> I see poor performance of text sorting of collate "en_US.utf8" in PG 17.4.
I'd say that you would have to complain to the authors of the
GNU C library, which provides this collation.
Yours,
Laurenz Albe
h case the WHERE (col_1, col_2,
> col_3) > (10, 20, 29)
> syntax is not an option (unless I somehow derive "reversed" data from the
> column,
> which I would like to avoid).
Here are my ideas for this situation:
https://www.cybertec-postgresql.com/en/keyset-pagination-with-descending-order/
Yours,
Laurenz Albe
4 ms
> Execution Time: 2712.873 ms
How is the index "entity_data_model_id_primary_key_uniq" defined?
Yours,
Laurenz Albe
help, it would be interesting to run the query with the low "work_mem"
setting, but with "enable_seqscan = off".
- Does PostgreSQL choose the correct index then?
- What are the cost estimates for the index scan?
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
On Mon, 2025-01-27 at 12:29, Praveen Kumar Mandula wrote:
> Attaching linux and windows logs, are they useful in identifying why its
> taking time more in windows ?
No.
I told you what to investigate, right?
Yours,
Laurenz Albe
than that, activate pg_stat_statements and see which statements take
more time on the Windows machine. Investigating those statements could help
solve the mystery.
Yours,
Laurenz Albe
ch won't conflict with data modifications.
Yours,
Laurenz Albe
on for
that resource is the bottleneck.
It is hard to say what the root cause is without further analysis, but very
often the cause is that you have too many connections to the database.
Using an effective connection pool *might* solve that particular problem.
Yours,
Laurenz Albe
er | 43
> 2024-04-08 09:00:07.114015+00 | DataFileRead | 28
> 2024-04-08 09:00:07.114015+00 | ClientRead | 11
> 2024-04-08 09:00:07.114015+00 | | 11
That's quite obvious: too many connections cause internal contention in the
database.
Reduce the number of connections by using a reasonably sized connection pool.
Yours,
Laurenz Albe
er. I have 128GB memory.
> 1. Any advice or thoughts?
> 2. Is there any other parameter that can accelerate index creation?
It is safe as long as you have enough free memory on the machine.
You can verify with tools like "free" on Linux (look for "available" memory).
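A minimal sketch of such a session (names and sizes are illustrative only):
SET maintenance_work_mem = '8GB';
SET max_parallel_maintenance_workers = 4;
CREATE INDEX my_index ON my_table (my_column);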
Yours,
Laurenz Albe
1681 loops=3)
Why does it take over 41 seconds to read a table with less than
3 million rows? Are the rows so large? Is the table bloated?
What is the size of the table as measured with pg_relation_size()
and pg_table_size()?
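For example (the table name is a placeholder):
SELECT pg_size_pretty(pg_relation_size('my_table')) AS heap_only,
       pg_size_pretty(pg_table_size('my_table')) AS with_toast_etc;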
Yours,
Laurenz Albe
l partitions, except that sometimes PostgreSQL
can forgo scanning some of the partitions.
If you use very many partitions, the overhead for query planning and
execution can become noticeable.
Yours,
Laurenz Albe
62,080 instead
> of 1,292,010).
Looking at the samples you provided, I get the impression that the statistics
for the table are quite outdated. That will affect the estimates.
Try running ANALYZE and see if that improves the estimates.
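A quick way to check and refresh (a sketch; the table name is a placeholder):
SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'my_table';
ANALYZE my_table;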
Yours,
Laurenz Albe
lan_cache_mode=force_custom_plan or 2) some other parameters can
> workaround this issue?
You can set "prepareThreshold" to 0 to keep the JDBC driver from using
prepared statements in PostgreSQL. I am not sure if that is enough to
fix the problem.
Yours,
Laurenz Albe
rsions
of PostgreSQL.
The problem is that you bind the query parameters with the wrong data types.
Don't use "setBigDecimal()", but "setLong()" if you want to bind a "bigint".
An alternative is "setObject()" with "targetSqlType" set to "Types.BIGINT".
Yours,
Laurenz Albe
No, that is not a problem. Keeping *connections* open is a good thing. It is
keeping data modifying transactions, cursors or long-running queries open
that constitutes a problem.
Yours,
Laurenz Albe
ke sure that your database transactions are short.
Don't use table or row locks to synchronize application threads.
What you could use to synchronize your application threads are advisory locks;
they are not tied to a database transaction.
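For example (the key 42 is an arbitrary application-defined number):
SELECT pg_advisory_lock(42);    -- blocks until the lock is available
-- ... do the synchronized work ...
SELECT pg_advisory_unlock(42);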
Yours,
Laurenz Albe
e available from the index. In this example, x is not needed except in
the context f(x), but the planner does not notice that and concludes that
an index-only scan is not possible. If an index-only scan seems sufficiently
worthwhile, this can be worked around by adding x as an included column,
for example
CREATE INDEX tab_f_x ON tab (f(x)) INCLUDE (x);
Yours,
Laurenz Albe
On Fri, 2024-02-02 at 02:04 -0800, Nikolay Samokhvalov wrote:
> On Thu, Feb 1, 2024 at 04:42 Laurenz Albe wrote:
> > Today, the only feasible solution is not to create more than 64
> > subtransactions
> > (savepoints or PL/pgSQL EXCEPTION clauses) per transaction.
>
>
w to resolve this
> waits ?
Today, the only feasible solution is not to create more than 64 subtransactions
(savepoints or PL/pgSQL EXCEPTION clauses) per transaction.
Don't use extensions or the JDBC driver option to simulate statement-level
rollback; that is the road to hell.
Yours,
Laurenz Albe
e initial data load is fast, because you have not yet exceeded your
I/O quota,
and then I/O is throttled.
Yours,
Laurenz Albe
ow statements.
One theory could be that there was a long running transaction or something else
that prevented VACUUM from cleaning up. For that, the output of
"VACUUM (VERBOSE) shortened_url" would be interesting.
> Additional details
> PostgreSQL version: 14.7 on db.t3.micro RDS
> PG configuration: Default of RDS
We can only speak about real PostgreSQL...
Yours,
Laurenz Albe
able auto_explain and look at the execution plan
when the statement is run by the JDBC driver. Then you can compare the
execution plans and spot the difference.
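A sketch of a per-session setup (requires superuser; the parameters shown
are one reasonable choice, not the only one):
LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;
SET auto_explain.log_analyze = on;
The plans then appear in the server log.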
Yours,
Laurenz Albe
rmat please, no JSON.
Yours,
Laurenz Albe
On Fri, 2023-09-22 at 10:35 +0200, Koen De Groote wrote:
> On Thu, Sep 21, 2023 at 9:30 PM Laurenz Albe wrote:
> > On Thu, 2023-09-21 at 17:05 +0200, Koen De Groote wrote:
> > > I'm doing the following query:
> > > select * from my_table where hasbeench
the first reader has to set hint bits, which is an extra chore.
You can avoid that if you VACUUM the table before you query it.
Yours,
Laurenz Albe
imes. If a generic plan is used (which should happen), you will
see $1 instead of the literal argument in the execution plan.
Prepared statements are probably your best bet.
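For illustration (table, column and parameter type are placeholders):
PREPARE stmt(bigint) AS SELECT * FROM my_table WHERE id = $1;
EXPLAIN EXECUTE stmt(42);
After several executions PostgreSQL may switch to a generic plan, at which
point the plan shows $1 instead of 42.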
Yours,
Laurenz Albe
g Time: 0.732 ms
> Execution Time: 0.039 ms
>
>
> Where the planning time gets in the way as it takes an order of magnitude
> more time than the actual execution.
>
> Is there a possibility to reduce this time? And, in general, to understand
> why planning takes so much time.
You could try to VACUUM the involved tables; indexes with many entries
pointing to dead tuples can cause a long planning time.
Also, there are quite a lot of indexes on "test_db_bench_1". On a test
database, drop some indexes and see if that makes a difference.
Finally, check if "default_statistics_target" is set to a high value, or if
the "Stats target" for some column in the "\d+ tablename" output is set
higher than 100.
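To check both (the table name is from this thread, the rest is a sketch):
SHOW default_statistics_target;
SELECT attname, attstattarget
FROM pg_attribute
WHERE attrelid = 'test_db_bench_1'::regclass AND attstattarget > 0;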
Yours,
Laurenz Albe
;
> Btw this is Postgres 9.6
>
> (we tried unlogged table (that did nothing), we tried creating indexes after
> (that helped), we're experimenting with RRI)
Why are you doing this the hard way, when pg_squeeze or pg_repack could do it?
You definitely should not be using PostgreSQL 9.6 at this time.
Yours,
Laurenz Albe
plit the JSONB in several parts and store each
of those parts in a different table row. That would reduce the impact.
Yours,
Laurenz Albe
of 8140 bytes.
If you want your block size to be a power of two, the limit would be 4kB,
which would waste almost half your storage space.
Yours,
Laurenz Albe
make backend to cache every SQL
> statement
> plan in that function too? and for table triggers, have similar caching
> behavior ?
Yes, as long as the functions are written in PL/pgSQL.
It only affects static SQL, that is, nothing that is run with EXECUTE.
Yours,
Laurenz Albe
want to benefit from plan caching, you can set the configuration parameter
"plan_cache_mode" to "force_custom_plan".
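For example, for the current session:
SET plan_cache_mode = force_custom_plan;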
Yours,
Laurenz Albe
On Fri, 2023-04-28 at 15:19 +0300, Παρασκευη Πασσαρη wrote:
> We are facing a performance issue with the following query. Executing this
> query takes about 20 seconds.
> (the database version is 14.1)
The execution plan seems to be incomplete.
Yours,
Laurenz Albe
E) to see the effects that
partitioning has on your queries.
Yours,
Laurenz Albe
ITH (MODULUS 26, REMAINDER 0);
[...]
CREATE TABLE humans_2002_25
PARTITION OF humans_2002 FOR VALUES WITH (MODULUS 26, REMAINDER 25);
and so on for the other years.
Yours,
Laurenz Albe
ve?
Updating a newly inserted row is about as expensive as inserting the row in the
first place.
You can reduce the overall impact somewhat by creating the table with a
"fillfactor" below 100; in your case, 90 would probably be enough.
That won't speed up the UPDATE itself, but it should greatly reduce the
need for VACUUM.
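For example (the table name is a placeholder; the setting only affects
newly written pages):
ALTER TABLE my_table SET (fillfactor = 90);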
Yours,
Laurenz Albe
see anything obvious
> in the catalog.
The view is "pg_stat_replication", but you won't see there if an entry is
abandoned before PostgreSQL does and terminates it. You can set
"tcp_keepalived_idle"
low enough so that the kernel will detect broken connections early on.
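For example (60 seconds is just an illustrative value):
ALTER SYSTEM SET tcp_keepalives_idle = 60;
SELECT pg_reload_conf();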
Yours,
Laurenz Albe
LTER TABLE ... VALIDATE CONSTRAINT, which takes a while too, but does not
lock the table quite that much.
But I don't think there is a way to do that with a domain.
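For a table constraint, the pattern looks like this (the names and the
check expression are placeholders):
ALTER TABLE my_table
  ADD CONSTRAINT my_check CHECK (length(value) <= 10) NOT VALID;
ALTER TABLE my_table VALIDATE CONSTRAINT my_check;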
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
ee structure, the random key values will cause more index page splits
> and merges as
> there is no pre-determined order of filling the tree structure.
I'd say that is quite accurate.
Yours,
Laurenz Albe
cade;
>
> Wihout the views, table can be dropped in 20ms.
You misconfigured your operating system and didn't disable memory overcommit,
so you got killed by the OOM killer. Basically, the operation ran out of memory.
Yours,
Laurenz Albe
just make there citext column, and try searching for value in it,
> and check explain from the search. If it will cast column - it's no
> good.
>
> Sorry I can't tell you what to fix, but perhaps this will be enough for
> you to find solution.
Quite so.
You are pro
he second case, the whole IN list is shipped to the remote side.
In short, the queries are quite different, and I don't think it is possible
to get the first query to perform as well as the second.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
dp1.daily_production_id) prodId
FROM ps_daily_production_v dp1
WHERE dp1.fleet_object_number = cast(coalesce(nullif
(cast(40001000277313 AS varchar), ''), NULL) AS numeric)
AND dp1.activity_code IS NOT NULL
GROUP BY dp1.fleet_object_number
Remove the superfluous GRO
. I may have made a mistake, and I have no reproducer, but I would
be curious to know if there is an explanation for that.
(I am aware that "top" shows shared buffers multiple times).
Yours,
Laurenz Albe
ons
> worked fine.
Some application that uses the database has a connection leak: it opens new
connections without closing old ones. Examine "pg_stat_activity" to find out
which application is at fault, and then go and fix that application.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
your premises?
Apart from all other things, compare the network latency. If a single request
results in 500 database queries, you will be paying 1000 times the network
latency per request.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
. at
> database and it will not
> create any issues to queries running in database.
That will very likely cause problems in your database, because sometimes a
nested loop join is by far the most efficient way to run a query.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
ly disable nested loop joins for the whole query, rather than doing the
right thing and fixing the estimates:
BEGIN;
SET LOCAL enable_nestloop = off;
SELECT ...;
COMMIT;
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
> |
>
> It also returns 45 rows, but in 25 seconds which is unacceptable.
You should create an index that supports LIKE; for example
CREATE INDEX ON media.oo_file (relpath COLLATE "C");
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
On Wed, 2021-12-22 at 20:26 +0100, Zbigniew Kostrzewa wrote:
> Yes, I am aware 9.6 is pretty old, soon I will be replacing it with 11.x.
> Thanks.
v11 is old as well. I suggest v14.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
> FROM song
> WHERE (song.artist %> :'query' OR song.title %> :'query')
> ;
The table is quite small; with a bigger table, the test would be more
meaningful.
Since you have SSDs, you should set "random_page_cost = 1.1".
This makes the planner prefer index scans, and it leads to the index scan
being chosen in your case.
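For example, to set it instance-wide:
ALTER SYSTEM SET random_page_cost = 1.1;
SELECT pg_reload_conf();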
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
ike you would get into trouble just from the
sheer number of sessions, so it must be something else.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
uffer
>
> 1.Is there a way to tune the lock contention ?
How many concurrent sessions are you running?
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
gettimeofday(2) is portable enough.
Yours,
Laurenz Albe
g indexes:
https://www.cybertec-postgresql.com/en/index-your-foreign-key/
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
On Mon, 2021-10-04 at 21:51 -0400, Mladen Gogala wrote:
>
> On 10/4/21 02:34, Laurenz Albe wrote:
> > On Fri, 2021-10-01 at 15:06 -0500, Jeff Holt wrote:
> > > TLDR; If I spend the time necessary to instrument the many functions that
> > > are the equivalent
&g
QL, I can
simply read the code or attach a debugger to a backend, and when it comes to
profiling, "perf" works pretty well. So there is less need for these things.
I don't want to discourage you, but contributing to PostgreSQL can be a lengthy
and tedious process. On the upside, things that make it into core are usually
fairly mature.
Yours,
Laurenz Albe
LAIN output
(without ANALYZE) will also help.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
$5))
If it scanned the index for 6 seconds before finding the first result,
I'd suspect one of the following:
- the index is terribly bloated
- there were lots of deleted rows, and the index entries were marked as "dead"
- something locked the table for a long time
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
On Mon, 2021-08-23 at 17:54 +, Nagaraj Raj wrote:
> Wouldn’t be easy if we have option to_schema ?
Sure, but it wouldn't be easy to implement that.
It would have to be a part of "pg_dump".
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
is to create a new database, import the data there,
rename the schema and dump again.
Then import that dump into the target database.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
acle has given
its permission.
The question cannot be answered, because performance depends on your workload,
configuration, software and hardware. Perhaps PostgreSQL will be faster,
perhaps not.
Test and see.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
action is done, the tuples cannot be removed,
no matter if they are HOT or not.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
ry
creates the temporary files.
If it is a statement in a function called from your top level query,
auto_explain with the correct parameters can get you that output for
those statements too.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
onfigured PostgreSQL, you won't need a lot of swap space.
> 2. If the RAM is 4GB and 8GB, the formula (RAM - swap) / RAM * 100 result
> will become to 0,
> how could we configure overcommit_ratio please?
You have to use floating point arithmetic.
The result will only
ned that 80 or 90 configuration in their env
> So is it OK just to configure vm.overcommit_ratio to 90 please?
It depends on the size of RAM and swap space:
overcommit_ratio < (RAM - swap) / RAM * 100
Here, RAM is the RAM available to PostgreSQL.
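For example (hypothetical sizes): with 64GB of RAM and 4GB of swap,
(64 - 4) / 64 * 100 = 93.75, so an overcommit_ratio of 90 satisfies
the inequality.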
Yours,
Laurenz Albe
--
Cybertec
using the same generic plan may and will lead to very long execution times.
AI can go wrong too, and I personally would be worried that such cases
are very hard to debug...
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
emented
as waiting for a lock on the transaction ID.
There must be a long running transaction that locks a row that is needed for
the INSERT. It could be a row in a different table that is referenced by a
foreign key.
Make that long running transaction go away. Transactions should never l
g that table
directly.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
uting the EXISTS portion of the
> query with an explicit PLATE_384_ID yields the execution plan we want as
> well (https://explain.depesz.com/s/3QAK). unnesting the EXISTS and adding
> a DISTINCT on the result also yields a better plan.
Great! Then use one of the rewritten queries.
n the database level.
Unless you can benefit from that, your queries will become slower.
Yours,
Laurenz Albe
ain.depesz.com/s/7HFJ
If your transaction modifies the data significantly (which it does if the
table is empty before you start), you should throw in an ANALYZE on the
affected tables occasionally.
Normally, autovacuum takes care of that, but it cannot see your data
until the transaction is co
stgreSQL performs VACUUM in batches of "maintenance_work_mem" size
of tuple identifiers. If that parameter is small, the indexes have
to be scanned often.
Try increasing "maintenance_work_mem" to 1GB (if you have enough RAM),
that will make it faster.
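For example (the table name is a placeholder; pick a size that fits your RAM):
SET maintenance_work_mem = '1GB';
VACUUM (VERBOSE) my_table;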
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
traints have been added
> recently or if any foreign key indexes may have inadvertently been removed.
> Indexing the foreign keys resolved our issue.
Were these deferred foreign key constraints?
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
S Azure (*not*
> managed by them).
Unless you are using WITH HOLD cursors on large result sets, this is very likely
I/O overload. Use tools like "sar", "vmstat" and "iostat" to monitor your I/O
load.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
n_wal_size= 192 MB.
You should increase "max_wal_size" even more.
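For example (the value is illustrative; size it to your WAL throughput):
ALTER SYSTEM SET max_wal_size = '8GB';
SELECT pg_reload_conf();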
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
"sent_lsn", "replay_lsn" from "pg_stat_replication" and
pg_current_wal_lsn() on the primary.
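A sketch of such a comparison, run on the primary:
SELECT application_name,
       pg_wal_lsn_diff(pg_current_wal_lsn(), sent_lsn)   AS send_lag_bytes,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;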
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
would result in
unnecessary delay and I/O.
You'd have to hack the code, but I wonder what leads you to this interesting
requirement.
Yours,
Laurenz Albe
--
+43-670-6056265
CYBERTEC PostgreSQL International GmbH
Gröhrmühlgasse 26, A-2700 Wiener Neustadt
Web: https://www.cybertec-postgresql.com
Hash (cost=4.00..4.00 rows=51 width=4)
> -> Seq Scan on dim (cost=0.00..4.00 rows=51 width=4)
> Filter: ((part_key >= 110) AND (part_key <= 160))
One thing you could try is to partition "dim" just like "fact" and
set "enable_partitionwise_join = on".
I didn't test it, but that might do the trick.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
'THPKN1','THSBI1','THUTG1','THLRI1','THSRI1','THSUR1','THSKM1')
and j.jobstartdatetime between '2020-08-01 00:00:00' and '2020-09-30
00:00:00'
and
... WHERE j.countrycode = 'TH'
and j.facilitycode in
('THNPM1','THPRK1','THCNT1','THSPN1','THKRI1','THPKN1','THSBI1','THUTG1','THLRI1','THSRI1','THSUR1','THSKM1')
and j.jobstartdatetime IS NULL
These indexes could speed up the resulting query:
CREATE INDEX ON job (countrycode, facilitycode);
CREATE INDEX ON job (countrycode, jobstartdatetime);
CREATE INDEX ON job (countrycode, facilitycode) WHERE jobstartdatetime IS NULL;
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
atefully received.
Yes, the query plan for the query that defines the materialized view
is the interesting data point. Run an EXPLAIN (ANALYZE, BUFFERS) on
that query.
If your statistics are off because the data have just been imported a
second ago, run an explicit ANALYZE on the affected tables
l other sessions.
Make sure that the transaction in this database session ends, e.g. by
SELECT pg_cancel_backend(6423);
Either there is a session that did not close its transaction (coding bug),
or a database statement ran inordinately long.
Yours,
Laurenz Albe
--
+43-670-6056265
CYBERTEC PostgreSQL In
ns
> as this does seem to be the root of the problem?
You should
- check with "pgstattuple" if the table is bloated (see the sketch after this list).
- use "perf" to see where the CPU time is spent.
- look at "pg_stat_activity" for wait events (unlikely if the CPU is busy).
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
think they will receive 1301316 rows from the
> seqscan, when in fact they will probably only see a hand full, which the
> planner could have (easily?) deduced by taking the
> greater of the two subplan row estimates.
>
> What am I missing, or is this perhaps a shortfall of the planner?
The subplans are executed *for each row* found in "transactions",
and the estimate on the subplans is *per execution*.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
>
>
> Any suggestions or help would be highly appreciated.
The only potential improvement I can see is to strive for an
"index only scan" on "prospect".
For that, you'd have to add an INCLUDE clause to "pk_prospect"
so that "physical_address_sid" and "individual_entity_proxy_id"
are included and VACUUM the table.
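A hedged sketch of what that could look like (assuming the key column is
"individual_entity_proxy_id", as the thread suggests; note that recreating
a primary key takes a strong lock on the table):
ALTER TABLE prospect DROP CONSTRAINT pk_prospect;
ALTER TABLE prospect ADD CONSTRAINT pk_prospect
  PRIMARY KEY (individual_entity_proxy_id) INCLUDE (physical_address_sid);
VACUUM prospect;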
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
be used if the expression is exactly the same as on one side of an operator
in a WHERE condition.
So your only chance with that query is to hope for a bitmap OR with an index
on "parental_path".
Two things to try:
1) CREATE INDEX ON table_name (parental_path text_pattern_ops);
2) CREATE EXTENSION pg_trgm;
CREATE INDEX ON table_name USING GIN (parental_path gin_trgm_ops);
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
as below:
Is there nothing in the PostgreSQL log?
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
ny more, you have to scale, which
is easily done by adding more machines if you have many databases.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com