;off".
>
> We need help to understand this unexpected behaviour.
>
Would be good to have some sort of reproducer - ideally a script that
sets up an instance + replication, and demonstrates the issue. Or at
least sufficiently detailed steps to reproduce it without having to
guess what exactly you did.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
3) Did you transfer the configuration, or did you just create a new
cluster with the default values?
4) What exactly is slower? Queries? Inserts?
5) Can you quantify the impact? Is it 2x slower? 100x slower?
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
e? The filesystem page? The storage block/sector size?
FWIW I think for SSDs this matters way more than for HDDs, because SSDs
have to erase the space before a rewrite, which makes it much more
expensive. But that's not just about the alignment, but about the page
size (with smaller pages being better).
>
Hard to say. The best thing would be to have a reproducer script, ofc.
If that's not possible, the information already requested seems like a
good start.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
(especially the restart_lsn
value). Also, maybe show the contents of pg_wal (especially for the
segment referenced in the error message).
Can you reproduce this outside Google cloud environment?
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
then the function can't do any writes either.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
It's always going to be slow with the COUNT(DISTINCT), I'm afraid.
Not sure how much you can modify the query / database, and how accurate
results you need. If you're OK with estimates, you can try postgres-hll
extension [2], which estimates count(distinct). For exact results, I
wrote the count_distinct extension [2], which uses a hashtable. Might be worth a
try, I guess.
Another option is to precalculate stuff, and do queries on that. That
is, you're not going to add data with old timestamps, so you can
pre-aggregate that.
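For illustration, a minimal sketch of that pre-aggregation (table and
column names are made up):

  -- one-time backfill of per-day distinct counts
  CREATE TABLE daily_distinct AS
    SELECT day, COUNT(DISTINCT user_id) AS distinct_users
      FROM events
     GROUP BY day;

  -- append each day once it's complete, then query daily_distinct
  INSERT INTO daily_distinct
    SELECT day, COUNT(DISTINCT user_id)
      FROM events
     WHERE day = current_date - 1
     GROUP BY day;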
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
process to generate a "normalized" query.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
what practical issues you ran into.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
>
> I know that page. Obviously, as I have to kill the request, I cannot
> provide an EXPLAIN ANALYZE...
>
It's a bit weird that the "victor" table is joined seemingly without any join
conditions, leading to a cross join (which massively inflates the cost
for joins above it). Maybe the anonymized plan mangles it somehow.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
ing:
> Buffers: shared hit=3
> Planning Time: 2.394 ms
> Execution Time: 2273.256 ms
> (13 rows)
>
>
> Thank you for any sort of insight into this.
I don't think indexing the number of trigrams like this can help, and
I'm not sure how to improve this
On 8/7/22 19:28, Tom Lane wrote:
> Tomas Vondra writes:
>> Yeah, this seems like a clear bug - we should not fail queries like
>> this. It's a sign that statext_is_compatible_clause() and the MCV code
>> disagree about which clauses are compatible.
>
> Indeed. I attempt
control the build flags
>
Yeah, this seems like a clear bug - we should not fail queries like
this. It's a sign that statext_is_compatible_clause() and the MCV code
disagree about which clauses are compatible.
Can you share an example triggering this?
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
to get memory context stats from the
processes consuming a lot of memory. If you know which processes are
suspect (and it seems you know, because if a reconnect helps it's the
backend handling the connection), you can attach a debugger and do
$ gdb -p $PID
call MemoryContextStats(TopMemoryContext)
which will log info about memory contexts, just like autovacuum.
Hopefully that tells us which memory context is bloated, and that might
point us to a particular part of the code.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
=183) ...
So it seems *more* expensive than the first plan. Taken to the extreme
the planner could theoretically have chosen to use the first plan (and
delay the condition until after the join).
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
k like? Is it using a merge join or a
nested loop? I wonder if this might be formulated as a costing issue,
pushing the planner to use the nested loop.
[1]
https://www.postgresql.org/message-id/flat/CAFQUnFhqkWuPCwQ1NmHYrisHJhYx4DoJak-dV%2BFcjyY6scooYA%40mail.gmail.com
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
ta t (cost=0.57..21427806.53 rows=58785023
width=131) (actual time=0.024..0.482 rows=854 loops=1)
Index Cond: (id > 205284974)
It'd be interesting to see the plans without the LIMIT, as that makes
the "actual" values low simply by terminating early.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
what's available.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 1/5/22 14:17, Shaozhong SHI wrote:
Any examples in Postgres to create a dictionary type to store and access
key value pairs?
I'd say JSONB can be used as an associative array, and in Python, for
example, it can map to the dict data type.
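A quick sketch of what that looks like (names made up):

  CREATE TABLE t (attrs jsonb);
  INSERT INTO t VALUES ('{"color": "red", "size": 10}');

  SELECT attrs->>'color' FROM t;                 -- read a key
  UPDATE t SET attrs = attrs || '{"size": 12}';  -- add/replace a key
  UPDATE t SET attrs = attrs - 'color';          -- remove a key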
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
e
disappears. The simpler the query, the easier it is to investigate.
And yes, replacing parts of a query with a temporary table is a common
solution, because it's possible to collect statistics on it, build
indexes etc. That usually solves estimation issues in multi-tenancy.
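For illustration, that pattern might look like this (names made up):

  CREATE TEMP TABLE tenant_rows AS
    SELECT * FROM big_table WHERE tenant_id = 42;

  ANALYZE tenant_rows;                       -- gives the planner real stats
  CREATE INDEX ON tenant_rows (created_at);  -- index whatever the query needs

  -- then run the rest of the original query against tenant_rows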
Sometimes ev
4) Don't post backtraces as screenshots. Copy them as text, attach them
as a text file.
5) Continue in the thread you already started, otherwise it's impossible
to follow the discussion.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
1GB limit is going to cause all sorts of unpredictable and rather
painful issues :-(
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
or are you
looking for a faster / more efficient way to transfer the data?
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
(gdb) c
3) run the CREATE INDEX query
4) get backtrace from the debugger
(gdb) bt
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
in the error message
(gdb) break nodeFuncs.c:263
(gdb) continue
4) run the query, the breakpoint should be triggered
5) extract full backtrace
(gdb) bt full
6) print the expression
(gdb) p nodeToString(expr)
That should give us some hints about what might be wrong.
the error is printed, and then set a breakpoint at that place.
Makes it easier to generate the backtrace.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
this:
1) lo_get (loid, offset, length)
https://www.postgresql.org/docs/13/lo-funcs.html
2) lo_seek() + lo_read()
https://www.postgresql.org/docs/13/lo-interfaces.html
Obviously, you can't do "loread(fd, size)" because that's going to
attempt building one large bytea, failing because of the alloc limit.
You have to stream the data into the hash.
Doing that in plpgsql is possible, although possibly somewhat slow.
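A minimal sketch of the chunked-read loop (function name made up; note that
pgcrypto's digest() is one-shot, so the per-chunk processing would need an
incremental hash, e.g. on the client side):

  CREATE OR REPLACE FUNCTION lo_read_in_chunks(loid oid) RETURNS bigint AS $$
  DECLARE
    fd    integer;
    chunk bytea;
    total bigint := 0;
  BEGIN
    fd := lo_open(loid, 262144);          -- 262144 = INV_READ
    LOOP
      chunk := loread(fd, 1024 * 1024);   -- read 1MB at a time
      EXIT WHEN octet_length(chunk) = 0;
      total := total + octet_length(chunk);
      -- process the chunk here instead of accumulating it
    END LOOP;
    PERFORM lo_close(fd);
    RETURN total;
  END;
  $$ LANGUAGE plpgsql;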
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 5/5/21 3:23 PM, Pavel Luzanov wrote:
Hello,
It is very likely that the date_trunc function in the following example
is executed for each row of the query, although it is marked STABLE
and could be called only once.
It could, but that's just an option - the database may do that,
how many groups we actually matched, etc.
TBH I'm not sure how much of those implementation details we want to put
into the user docs - it may be a bit too much, and we may need to change
some of it.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
because the whole idea is
about eliminating large blocks of data (e.g. 1MB). But with random data
that's not going to happen, because each such range will match anything.
Which is why a seqscan is a bit faster than using the BRIN index.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
schema
versioning tools available - we have a list on the wiki:
https://wiki.postgresql.org/wiki/Change_management_tools_and_techniques
I'm sure it's incomplete, but it's helpful nevertheless.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
s generated per day, or something like that.
Maybe there was some I/O issue in that time period and a couple writes
got lost, or something like that.
Are there any other corrupted indexes on the table?
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
consider the column-level correlation even
when the combination of values is not in the MCV. It might make the
"good" estimate worse, but that's an inherent trade-off.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
t null and b = true) and (c is not null)
Or something like that.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
(gdb) p MemoryContextStats(TopMemoryContext)
(gdb) q
and show us the output printed into server log. If it's a backend
running a query, it'd help knowing the execution plan.
It would also help knowing the non-default configuration, i.e. stuff
tweaked in postgresql.conf.
regards
--
Tomas Vondra
y bad idea to post questions about three very
different topics into a single pgsql-hackers thread. That'll just lead
to a lot of confusion.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
the root cause - you'll see the query plan,
which should give you enough context.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
without any problems
I'll mark this patch as ready for committer
Thank you for your work
Thanks for the review. I propose to commit this shortly.
Now that this was committed, I've updated the patch status accordingly.
Thanks!
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
say "portable option". Can you explain?
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
significantly less
efficient than COPY. So if you have a lot of these parquet files, I'd
probably use the COPY. But maybe the ability to query the parquet files
directly (with FDW) is useful for you.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
in the barman docs, i.e.
pg_basebackup (backup_method = postgres) and WAL streaming.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Mon, Oct 21, 2019 at 08:06:46AM -0700, Adrian Klaver wrote:
On 10/20/19 11:07 PM, Tomas Vondra wrote:
On Sun, Oct 20, 2019 at 06:51:05PM -0400, Andrew Dunstan wrote:
True. And AFAIK catching exceptions is not really possible in some code,
e.g. in stored procedures (because we can't do
execution is using CPU, so yes - it can cause a spike in CPU
usage. The question is whether the query plan is efficient, or whether
it's using the CPU unnecessarily. You need to show us EXPLAIN ANALYZE
and perhaps explain why you think it shouldn't use this much CPU.
regards
--
Tomas Vondra
ches. I'd say no, but I have no idea how
many people are affected by this but don't know about it ...
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
upgrade.
FWIW Julie, please don't top post - it just makes it harder to follow
the discussion. Also, this seems like a completely separate question,
unrelated to the DROP DATABASE one. It might be better to start a new
thread instead of repurposing an existing one.
regards
--
Tomas Vondra
On Sat, Oct 19, 2019 at 12:47:39PM -0400, Andrew Dunstan wrote:
On 10/19/19 12:32 PM, David G. Johnston wrote:
On Sat, Oct 19, 2019 at 9:19 AM Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
>
>We invented jsonb_set() (credit to Dmitry Dolgov). And we've had i
, that people who are not aware of this caveat with jsonb_set and NULL
values will most likely use it due to the slightly simpler syntax (more
similar to some popular programming languages).
This seems like an entirely independent thing ...
Right. Useful, but entirely separate feature.
regards
--
Tomas
"...", 'null'::jsonb)
I'm not entirely surprised it's what MySQL does ;-) but I'd say treating
it as a deletion of the key (just like MSSQL) is somewhat more sensible.
But I admit it's quite subjective.
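For anyone following along, the two cases look like this in PostgreSQL:

  SELECT jsonb_set('{"a": 1}'::jsonb, '{a}', NULL::jsonb);
  -- returns SQL NULL (the whole document is lost - the caveat discussed above)

  SELECT jsonb_set('{"a": 1}'::jsonb, '{a}', 'null'::jsonb);
  -- returns {"a": null} (an explicit JSON null, key preserved)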
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
- we could
use the table directly. Yes, that would be slower, but maybe it'd be
sufficient.
But I think the idea is ultimately that we'd implement a new dict type
in core, and people would just specify which table to load data from.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
xes behind. You might want to consider
upgrading ...
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
d, etc.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
the timescaleDB
extension on the master side?
AFAIK timescale is "just" a regular extension, in which case yes, all
you need to do is to install it on the master side.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA
. That seems as if a single query was executed
repeatedly. Maybe try looking into what query that is.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
y way to distinguish them.
I'm no psycopg2 expert, but it does have some infrastructure for casting
PostgreSQL types to Python types, and I guess that could be used for the
encryption.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
interesting idea - I wonder if it could be done e.g. in
psycopg as an extension, or something like that.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
not a project/page managed by the PostgreSQL
community, you need to report the issues to the authors (most likely
through github issues).
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
thing is to
send it to pgsql-general. Likewise, it makes little sense to send
questions to the "owner". I've removed the other lists from CC.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
That being said, we also have RELOPT_KIND_BRIN, and that seems to be
missing from RELOPT_KIND_INDEX too (and AFAICS the optimization works
for all index types).
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
cussion, it
does not seem very polished / resilient.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
ery, making it visible in pg_stat_activity, various
logs etc.
Which is why people generally use FDE for the whole disk, which is
transparent and provides the same level of protection.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
plan rows are 50, but actual rows it got 1. In the second node
> type of “Bitmap Index Scan” on “lower_state_users_idx”, I see the plan
> rows are 211, and actual rows 211. Both are same. Based on what
> conditions planner estimated the planned and actual rows count?
>
>
about this part.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
e per second. Increasing the
number of workers is akin to allowing more cars on a highway, but also
lowering the speed limit.
You need to increase the limit on the amount of work, and lowering naptime
is one way to do that.
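For example (values purely illustrative):

  ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 2000;  -- more work per round
  ALTER SYSTEM SET autovacuum_naptime = '15s';           -- wake up more often
  SELECT pg_reload_conf();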
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
/ more often, so it may
have an adverse impact, of course. It's hard to say if it's going to be a
win overall, because we don't know how serious the bloat is.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
workers being constantly busy, unable to clean up
all tables in a timely manner. In that case lowering the threshold is
not going to help, on the contrary.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
d to others.)
>
It might be better to note the current values of the counters somewhere,
and compute a delta later (and use that to compute the cache hit ratio).
The issue is that pg_stat_reset() throws away all sorts of interesting
and important stats, including those driving autovacuum/autoanalyze.
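A sketch of that approach:

  CREATE TABLE stats_snapshot AS
    SELECT now() AS taken_at, datname, blks_hit, blks_read
      FROM pg_stat_database;

  -- later, compute the cache hit ratio from the delta
  SELECT d.datname,
         (d.blks_hit - s.blks_hit)::numeric
           / nullif((d.blks_hit - s.blks_hit)
                    + (d.blks_read - s.blks_read), 0) AS hit_ratio
    FROM pg_stat_database d
    JOIN stats_snapshot s USING (datname);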
es to rather different estimates. That is fixed now
and should be in the next minor release.
Hard to say if this is a case of that, but judging by the number of dead
tuples chances are it is.
It's probably better to use n_live_tup instead, though. I'd say that's
closer to the "live tuples" definition.
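For example:

  SELECT relname, n_live_tup, n_dead_tup
    FROM pg_stat_user_tables
   ORDER BY n_dead_tup DESC;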
row)
>
>
>
> Time: 2245.073 ms (00:02.245)
>
>
>
>
> On Fri, Mar 23, 2018 at 9:31 AM, Alessandro Aste
> <alessandro.a...@gmail.com> wrote:
>
> Tomas, I'm attaching a 4MB file with the perf
On 03/22/2018 11:29 PM, Alessandro Aste wrote:
> Thanks Tomas. We're currently building postgres from source. In order to
> enable symbols, you want me to re-configure postgres with --enable-debug
> then run perf?
>
Yes.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
with us (well, if it's very large you may need
to only cut the first 1000 lines or so).
That should tell us in which functions most of the time is spent. That
will give us some hints, hopefully.
>
> For what concerns the self-contained test case - I'll do my best to
> prepa
reproduce the issue? For
example by dumping the relevant part of the tables?
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
sg.id AND cl.ordinal
> = 1
> JOIN circuit c ON c.id = cl.circuit_id
> JOIN vendor v ON v.id = c.pop_support_vendor_id
> ) seg ON seg.circuit_id = c.id
> JOIN vendor_gtt_pop on vendor_gtt_pop.gtt_pop_id = seg.circuit_id
> ) foo wh
ytes and 1 GB for vm.dirty_bytes sounds fine.
>
It should be the other way around: dirty_background_bytes < dirty_bytes
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
don't know
which columns will be referenced by WHERE conditions, and data ingestion
has lower priority than queries). But that doesn't seem to be the case
here - you know the WHERE conditions, and people are likely sending a
lot of inserts (and expecting low latency responses).
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
useful information. It would especially be good to know if you've
> added an index on product (establishment_id), for example.
>
There's actually a wiki page with overview of what should be included in
"slow query" questions:
https://wiki.postgresql.org/wiki/Slow_Query_Questions
regards
for serving queries)
>
It depends on how large the active part of the data set is. If it
fits into 200GB but not to smaller shared buffers (say, 100GB), then
using 200GB may be a win.
If the active set is much larger than RAM, smaller shared_buffers values
work better in my experience.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
additional shared space provides any real benefit.
>
Gradual increases are a good approach in general. And yes, having
buffers_checkpoint > buffers_clean > buffers_backend
is a good idea too. Together with the cache hit ratio it's probably a
more sensible metric than looking at usagecount directly.
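Those counters are in the pg_stat_bgwriter view (in the releases discussed
here), e.g.:

  SELECT buffers_checkpoint, buffers_clean, buffers_backend
    FROM pg_stat_bgwriter;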
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
> BINARY to AVRO, for example.
>
> Here, I am talking about native format exports/imports for
> performance considerations.
>
That is true, but the question is how significant the overhead is. If
it's 50% then reducing it would make perfect sense. If it's 1% then no
one is going to be bothered by it.
Without these numbers it's hard to make any judgments.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
e hard-coded in core). But I agree that if we are to
add multiple new formats, it'd be nice to allow doing that in an extension.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
this is a plain COPY to stdout ... I am wondering what
> could possibly be in that path that wants to allocate 1GB. Or is this
> not so plain but rather a COPY ... SELECT ...?
>
That's what I've been guessing, and why I was asking for a query plan.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
at file
depending on the log level.
> * What measures could be done to prevent this in future, or at least
> make the situation easier to recover from (e.g. without data loss)?
>
The jury is still out on the root cause, so it's too soon to say.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
/updating/writing the page again)?
>
Checkpoint only deals with 8kB chunks of data. Hint bits are not set on
a page, but on individual items (rows), so it's not up to the checkpoint
process to tweak that - that's up to queries accessing the data.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
extrapolating it in some way) and sizing the
hardware accordingly. Or you can squeeze as much money from the
management as possible, and buy the shiniest stuff possible.
> Oh yeah, apparently we're married to HP or something… At least, IT
> management told me to look at their offer
single process, it might be a poor plan choice (e.g. hash
aggregate may easily cause that).
If it's all processes combined, then perhaps it's due to work_mem being too high.
> what should I do to deal with this problem?
>
Hard to say, until you provide enough information.
regards
--
Tomas Vondra
ar why the directory got so large.
You could try setting autovacuum_multixact_freeze_max_age=0 and
vacuuming the database, but it's hard to say if that will help.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 11/28/2017 07:26 PM, Ted Toth wrote:
> On Tue, Nov 28, 2017 at 12:01 PM, Tomas Vondra
> <tomas.von...@2ndquadrant.com> wrote:
>>
>> ...
>>
>> That is, most of the memory is allocated for SPI (2.4GB) and PL/pgSQL
>> procedure (500MB). How do you do the
> files each with a limited
> number of INSERTs to get around this issue but maybe there's
> another/better way?
>
The question is what exactly runs out of memory, and how you modified
the configuration (particularly settings related to memory).
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
debuginfo first, so that the backtrace makes sense.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services