On Tue, Oct 8, 2019 at 12:44 PM Merlin Moncure wrote:
> On Sun, Apr 14, 2019 at 3:51 PM Gunther wrote:
> >
> > For weeks now, I am banging my head at an "out of memory" situation. There
> > is only one query I am running on an 8 GB system; whatever I try, I get
> > knocked out on this out of me
On Sun, Apr 14, 2019 at 3:51 PM Gunther wrote:
>
> For weeks now, I am banging my head at an "out of memory" situation. There is
> only one query I am running on an 8 GB system; whatever I try, I get knocked
> out on this out of memory. It is extremely impenetrable to understand and fix
> this
On Fri, Aug 23, 2019 at 09:17:38AM -0400, Gunther wrote:
Hi all, I am connecting to a discussion back from April this year. My
data has grown and now I am running into new out of memory situations.
Meanwhile the world turned from 11.2 to 11.5 which I just installed
only to find the same out of
On Sat, Aug 24, 2019 at 11:40:09AM -0400, Gunther wrote:
Thanks Tom, yes I'd say it's using a lot of memory, but wouldn't call
it "leak" as it doesn't grow during the 30 min or so that this query
runs. It explodes to 4GB and then stays flat until done.
Well, the memory context stats you've s
Thanks Tom, yes I'd say it's using a lot of memory, but wouldn't call it
"leak" as it doesn't grow during the 30 min or so that this query runs.
It explodes to 4GB and then stays flat until done.
Yes, and this time the query is super complicated with many joins and
tables involved. The query p
OK, I went back through that old thread, and I noticed an early opinion
by a certain Peter who said that I should provision
some swap space. Since I had plenty of disk and no other option I tried
that. And it did some magic. Here this is a steady state now:
top - 14:07:32 up 103 days, 9:57,
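For anyone wanting to reproduce that setup, a minimal sketch of provisioning a
swap file on a Linux host like this one (run as root; the 8 GB size and the
/swapfile path are assumptions, not values taken from the thread):
$ dd if=/dev/zero of=/swapfile bs=1M count=8192   # 8 GB swap file (example size)
$ chmod 600 /swapfile
$ mkswap /swapfile
$ swapon /swapfile
$ swapon --show                                   # confirm the new swap is active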
Gunther writes:
> Hi all, I am connecting to a discussion back from April this year. My
> data has grown and now I am running into new out of memory situations.
It doesn't look like this has much of anything to do with the hash-table
discussion. The big hog is an ExprContext:
> ExprContext: 1
Hi all, I am connecting to a discussion back from April this year. My
data has grown and now I am running into new out of memory situations.
Meanwhile the world turned from 11.2 to 11.5 which I just installed only
to find the same out of memory error.
Have any of the things discussed and propo
On Wed, Apr 24, 2019 at 02:36:33AM +0200, Tomas Vondra wrote:
...
I still think the idea with an "overflow batch" is worth considering,
because it'd allow us to keep the memory usage within work_mem. And
after getting familiar with the hash join code again (haven't messed
with it since 9.5 or s
On Tue, Apr 23, 2019 at 07:09:00PM -0400, Gunther wrote:
On 4/23/2019 16:43, Justin Pryzby wrote:
It wrote 40GB tempfiles - perhaps you can increase work_mem now to improve the
query time.
I now upped my shared_buffers back from 1 to 2GB and work_mem from 4 to
16MB. Need to set vm.overcom
On 4/23/2019 16:43, Justin Pryzby wrote:
It wrote 40GB tempfiles - perhaps you can increase work_mem now to improve the
query time.
I now upped my shared_buffers back from 1 to 2GB and work_mem from 4 to
16MB. Need to set vm.overcommit_ratio from 50 to 75 (percent, with
vm.overcommit_memory =
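Spelled out, the overcommit settings being described would look roughly like
this (run as root; vm.overcommit_memory = 2 is the strict-accounting mode the
thread assumes, and persisting the values in /etc/sysctl.conf is an assumption):
$ sysctl vm.overcommit_memory vm.overcommit_ratio   # inspect current values
$ sysctl -w vm.overcommit_memory=2                  # strict accounting, no overcommit
$ sysctl -w vm.overcommit_ratio=75                  # commit limit = swap + 75% of RAM
$ echo 'vm.overcommit_memory = 2' >> /etc/sysctl.conf
$ echo 'vm.overcommit_ratio = 75' >> /etc/sysctl.conf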
On Tue, Apr 23, 2019 at 04:37:50PM -0400, Gunther wrote:
On 4/21/2019 23:09, Tomas Vondra wrote:
What I think might work better is the attached v2 of the patch, with a
single top-level condition, comparing the combined memory usage
(spaceUsed + BufFile) against spaceAllowed. But it
On Tue, Apr 23, 2019 at 03:43:48PM -0500, Justin Pryzby wrote:
On Tue, Apr 23, 2019 at 04:37:50PM -0400, Gunther wrote:
On 4/21/2019 23:09, Tomas Vondra wrote:
>What I think might work better is the attached v2 of the patch, with a
Thanks for this, and I am trying this now.
...
Aand, it's
On Tue, Apr 23, 2019 at 04:37:50PM -0400, Gunther wrote:
> On 4/21/2019 23:09, Tomas Vondra wrote:
> >What I think might work better is the attached v2 of the patch, with a
> Thanks for this, and I am trying this now.
...
> Aand, it's a winner!
>
> Unique (cost=5551524.36..5554207.33 rows=3461
On 4/21/2019 23:09, Tomas Vondra wrote:
What I think might work better is the attached v2 of the patch, with a
single top-level condition, comparing the combined memory usage
(spaceUsed + BufFile) against spaceAllowed. But it also tweaks
spaceAllowed once the size needed for BufFile gets over wor
Jeff Janes writes:
> Is there a reason to not just elog the HJDEBUG stuff?
Yes --- it'd be expensive (a "no op" elog is far from free) and
useless to ~ 99.999% of users.
Almost all the conditionally-compiled debug support in the PG executor
is legacy leftovers from Berkeley days. If it were use
On Sat, Apr 20, 2019 at 4:48 PM Tom Lane wrote:
> Gunther writes:
> > and checked my log file and there was nothing before the call
> > MemoryContextStats(TopPortalContext) so I don't understand where this
> > printf stuff is ending up.
>
> It's going to stdout, which is likely block-buffered wh
On Mon, Apr 22, 2019 at 10:07:52AM +0200, Gaetano Mendola wrote:
Batch splitting shouldn't be followed by a hash function change?
What would be the value? That can help with hash collisions, but that's
not the issue with the data sets discussed in this thread. The issue
reported originally i
Batch splitting shouldn't be followed by a hash function change?
On Mon, Apr 22, 2019, 05:09 Tomas Vondra wrote:
> On Sun, Apr 21, 2019 at 11:40:22AM -0500, Justin Pryzby wrote:
> >On Sun, Apr 21, 2019 at 10:36:43AM -0400, Tom Lane wrote:
> >> Jeff Janes writes:
> >> > The growEnabled stuff onl
On Sun, Apr 21, 2019 at 11:40:22AM -0500, Justin Pryzby wrote:
On Sun, Apr 21, 2019 at 10:36:43AM -0400, Tom Lane wrote:
Jeff Janes writes:
> The growEnabled stuff only prevents infinite loops. It doesn't prevent
> extreme silliness.
> If a single 32 bit hash value has enough tuples by itself
Now to Justin's patch. First undo Tomas' patch and apply:
$ mv src/include/executor/hashjoin.h.orig src/include/executor/hashjoin.h
$ mv src/backend/executor/nodeHash.c.orig src/backend/executor/nodeHash.c
$ mv src/backend/executor/nodeHashjoin.c.orig
src/backend/executor/nodeHashjoin.c
$ patch
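The preview cuts off at the patch command itself; a sketch of the usual
invocation from the top of the source tree (the source path and patch file
name are hypothetical, and -p1 assumes git-style a/ and b/ path prefixes):
$ cd postgresql-11.2                            # hypothetical source tree location
$ patch -p1 < /tmp/hashjoin-justin-v1.patch     # hypothetical patch file name
$ make && make install                          # rebuild, reinstall, restart server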
After applying Tomas' patch 0002 as corrected, over 0001, same thing:
integrator=# set enable_nestloop to off;
SET
integrator=# explain analyze select * from reports.v_BusinessOperation;
WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 16 to 32
WARNING: ExecHashIncreaseNum
On Sun, Apr 21, 2019 at 07:25:15PM -0400, Gunther wrote:
After applying Tomas' corrected patch 0001, and routing HJDEBUG messages
to stderr:
integrator=# set enable_nestloop to off;
SET
integrator=# explain analyze select * from reports.v_BusinessOperation;
...
WARNING: ExecHashIncreaseNum
After applying Tomas' corrected patch 0001, and routing HJDEBUG messages
to stderr:
integrator=# set enable_nestloop to off;
SET
integrator=# explain analyze select * from reports.v_BusinessOperation;
WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 16 to 32
WARNING: Exec
Tomas Vondra writes:
> On Sun, Apr 21, 2019 at 10:36:43AM -0400, Tom Lane wrote:
>> Jeff Janes writes:
>>> If a single 32 bit hash value has enough tuples by itself to not fit in
>>> work_mem, then it will keep splitting until that value is in a batch by
>>> itself before shutting off
>> Right,
On Sun, Apr 21, 2019 at 10:36:43AM -0400, Tom Lane wrote:
> Jeff Janes writes:
> > The growEnabled stuff only prevents infinite loops. It doesn't prevent
> > extreme silliness.
>
> > If a single 32 bit hash value has enough tuples by itself to not fit in
> > work_mem, then it will keep splitting
On Sun, Apr 21, 2019 at 10:36:43AM -0400, Tom Lane wrote:
Jeff Janes writes:
The growEnabled stuff only prevents infinite loops. It doesn't prevent
extreme silliness.
If a single 32 bit hash value has enough tuples by itself to not fit in
work_mem, then it will keep splitting until that val
Jeff Janes writes:
> The growEnabled stuff only prevents infinite loops. It doesn't prevent
> extreme silliness.
> If a single 32 bit hash value has enough tuples by itself to not fit in
> work_mem, then it will keep splitting until that value is in a batch by
> itself before shutting off
Right
On Sat, Apr 20, 2019 at 4:26 PM Tom Lane wrote:
> Tomas Vondra writes:
> > Considering how rare this issue likely is, we need to be looking for a
> > solution that does not break the common case.
>
> Agreed. What I think we need to focus on next is why the code keeps
> increasing the number of
On Sun, Apr 21, 2019 at 03:08:22AM -0500, Justin Pryzby wrote:
On Sun, Apr 21, 2019 at 01:03:50AM -0400, Gunther wrote:
On 4/20/2019 21:14, Tomas Vondra wrote:
>Maybe. But before wasting any more time on the memory leak investigation,
>I suggest you first try the patch moving the BufFile allocat
I was able to reproduce in a somewhat contrived way:
sh -c 'ulimit -v 1024000 ; /usr/local/pgsql/bin/postgres -D ./pg12dev5
-cport=1234' &
postgres=# SET work_mem='64kB';SET client_min_messages =debug1;SET
log_statement_stats=on;explain(analyze) WITH v AS MATERIALIZED (SELECT * FROM
generate_s
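The recipe above is truncated; a sketch of the same idea, with the
memory-hungry query filled in only as a hypothetical placeholder (the
generate_series size and the self-join are guesses, not the statement Justin
actually ran):
# Start a throwaway server with its virtual memory capped at about 1 GB.
$ sh -c 'ulimit -v 1024000 ; /usr/local/pgsql/bin/postgres -D ./pg12dev5 -cport=1234' &
# Then run a deliberately memory-hungry hash join with a tiny work_mem.
$ psql -p 1234 postgres <<'SQL'
SET work_mem = '64kB';
SET client_min_messages = debug1;
SET log_statement_stats = on;
EXPLAIN (ANALYZE)
WITH v AS MATERIALIZED (SELECT i FROM generate_series(1, 9999999) i)
SELECT * FROM v a JOIN v b USING (i);
SQL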
On Sun, Apr 21, 2019 at 01:03:50AM -0400, Gunther wrote:
> On 4/20/2019 21:14, Tomas Vondra wrote:
> >Maybe. But before wasting any more time on the memory leak investigation,
> >I suggest you first try the patch moving the BufFile allocations to a
> >separate context. That'll either confirm or dis
On 4/20/2019 21:14, Tomas Vondra wrote:
Maybe. But before wasting any more time on the memory leak investigation,
I suggest you first try the patch moving the BufFile allocations to a
separate context. That'll either confirm or disprove the theory.
OK, fair enough. So, first patch 0001-* applie
I am now running Justin's patch after undoing Tomas' patches and any of
my own hacks (which would not have interfered with Tomas' patch)
On 4/20/2019 15:30, Justin Pryzby wrote:
With my patch, the number of batches is nonlinear WRT work_mem, and reaches a
maximum for moderately small work_mem.
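One way to watch that effect directly, assuming the same view used throughout
this thread: EXPLAIN ANALYZE reports the planned and actual batch counts under
each Hash node, so re-running the query at a few work_mem settings shows where
the count peaks.
$ psql integrator <<'SQL'
SET work_mem = '16MB';
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM reports.v_BusinessOperation;
SQL
# Look for lines like "Buckets: ... Batches: N (originally M) Memory Usage: ..."
# under the Hash nodes, then repeat with different work_mem values.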
On Sat, Apr 20, 2019 at 08:33:46PM -0400, Gunther wrote:
On 4/20/2019 16:01, Tomas Vondra wrote:
For me, this did the trick:
update pg_class set (relpages, reltuples) = (100, 1) where
relname = 'tmp_r';
update pg_class set (relpages, reltuples) = (1, 100) where
relname = 'tmp_q';
On 4/20/2019 16:01, Tomas Vondra wrote:
For me, this did the trick:
update pg_class set (relpages, reltuples) = (100, 1) where
relname = 'tmp_r';
update pg_class set (relpages, reltuples) = (1, 100) where
relname = 'tmp_q';
YES! For me too. My EXPLAIN ANALYZE actually succeeded.
On Sat, Apr 20, 2019 at 06:20:15PM -0400, Tom Lane wrote:
Justin Pryzby writes:
On Sat, Apr 20, 2019 at 04:46:03PM -0400, Tom Lane wrote:
Maybe we just need to account for the per-batch buffers while estimating
the amount of memory used during planning. That would force this case
into a merge
Justin Pryzby writes:
> On Sat, Apr 20, 2019 at 04:46:03PM -0400, Tom Lane wrote:
>> Maybe we just need to account for the per-batch buffers while estimating
>> the amount of memory used during planning. That would force this case
>> into a mergejoin instead, given that work_mem is set so small.
On Sat, Apr 20, 2019 at 04:46:03PM -0400, Tom Lane wrote:
> Tomas Vondra writes:
> > I think it's really a matter of underestimate, which convinces the planner
> > to hash the larger table. In this case, the table is 42GB, so it's
> > possible it actually works as expected. With work_mem = 4MB I'v
On Sat, Apr 20, 2019 at 10:36:50PM +0200, Tomas Vondra wrote:
On Sat, Apr 20, 2019 at 04:26:34PM -0400, Tom Lane wrote:
Tomas Vondra writes:
Considering how rare this issue likely is, we need to be looking for a
solution that does not break the common case.
Agreed. What I think we need to f
On Sat, Apr 20, 2019 at 04:46:03PM -0400, Tom Lane wrote:
Tomas Vondra writes:
I think it's really a matter of underestimate, which convinces the planner
to hash the larger table. In this case, the table is 42GB, so it's
possible it actually works as expected. With work_mem = 4MB I've seen 32k
Gunther writes:
> and checked my log file and there was nothing before the call
> MemoryContextStats(TopPortalContext) so I don't understand where this
> printf stuff is ending up.
It's going to stdout, which is likely block-buffered whereas stderr
is line-buffered, so data from the latter will
Tomas Vondra writes:
> I think it's really a matter of underestimate, which convinces the planner
> to hash the larger table. In this case, the table is 42GB, so it's
> possible it actually works as expected. With work_mem = 4MB I've seen 32k
> batches, and that's not that far off, I'd say. Maybe
On Sat, Apr 20, 2019 at 04:26:34PM -0400, Tom Lane wrote:
Tomas Vondra writes:
Considering how rare this issue likely is, we need to be looking for a
solution that does not break the common case.
Agreed. What I think we need to focus on next is why the code keeps
increasing the number of bat
Tomas Vondra writes:
> Considering how rare this issue likely is, we need to be looking for a
> solution that does not break the common case.
Agreed. What I think we need to focus on next is why the code keeps
increasing the number of batches. It seems like there must be an undue
amount of data
On Tue, Apr 16, 2019 at 11:46:51PM -0500, Justin Pryzby wrote:
I wonder if it'd be useful to compile with
./configure CFLAGS=-DHJDEBUG=1
Could you try this, too ?
OK, doing it now, here is what I'm getting in the log file now. I am
surprised I get so few rows here when there
2019-04-20 17:
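For anyone following along, a sketch of the rebuild step being discussed (the
source path and --prefix are assumptions; HJDEBUG is the hash-join debug flag
referenced above):
$ cd postgresql-11.2                          # hypothetical source tree location
$ ./configure CFLAGS=-DHJDEBUG=1 --prefix=/usr/local/pgsql
$ make -j4 && make install                    # then restart the server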
On Sat, Apr 20, 2019 at 02:30:09PM -0500, Justin Pryzby wrote:
On Sun, Apr 14, 2019 at 11:24:59PM -0400, Tom Lane wrote:
Gunther writes:
>ExecutorState: 2234123384 total in 266261 blocks; 3782328 free (17244
chunks); 2230341056 used
Oooh, that looks like a memory leak right enough. T
On Fri, Apr 19, 2019 at 11:34:54PM -0400, Gunther wrote:
...
It would be so nice if there was a way to force a specific plan for
purposes of the testing. I tried giving false data in pg_class
reltuples and relpages:
foo=# analyze tmp_q;
ANALYZE
foo=# analyze tmp_r;
ANALYZE
foo=# select rel
On Sun, Apr 14, 2019 at 11:24:59PM -0400, Tom Lane wrote:
> Gunther writes:
> >ExecutorState: 2234123384 total in 266261 blocks; 3782328 free
> > (17244 chunks); 2230341056 used
>
> Oooh, that looks like a memory leak right enough. The ExecutorState
> should not get that big for any rea
> The only problem is that I can't test that this actually would trigger the
> memory problem, because I can't force the plan to use the right join, it
> always reverts to the left join hashing the tmp_q:
I think the table on the "OUTER" side is the one which needs to be iterated
over (not hashed)
On Fri, Apr 19, 2019 at 11:34:54PM -0400, Gunther wrote:
> On 4/19/2019 17:01, Justin Pryzby wrote:
> >Were you able to reproduce the issue in some minimized way ? Like after
> >joining fewer tables or changing to join with fewer join conditions ?
> >
> >On Thu, Apr 18, 2019 at 05:21:28PM +0200, T
pg_hint_plan extension might be able to force a plan.
Also, perf probes & perf record/script might be useful for creating a log of
all the calls that do memory allocation, along with the unwound call stacks,
and then analyzing that file. At least this can be done for a single
process,
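A sketch of what that perf-based approach could look like (the binary path and
PID are placeholders, and DWARF-based unwinding is assumed to be available;
recording every allocation call this way is heavy, so only a short window is
sampled):
# Define a dynamic probe on the allocator entry point in the backend binary.
$ perf probe -x /usr/local/pgsql/bin/postgres AllocSetAlloc
# Record every hit for one backend, with unwound call stacks, for 60 seconds.
$ perf record -e probe_postgres:AllocSetAlloc --call-graph dwarf -p <backend_pid> -- sleep 60
# Dump the stacks to a text file for offline analysis.
$ perf script > alloc-stacks.txt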
On 4/19/2019 17:01, Justin Pryzby wrote:
Were you able to reproduce the issue in some minimized way ? Like after
joining fewer tables or changing to join with fewer join conditions ?
On Thu, Apr 18, 2019 at 05:21:28PM +0200, Tomas Vondra wrote:
It would be possible to do at least one of these
On Wed, Apr 17, 2019 at 11:52:44PM -0400, Gunther wrote:
> Hi guys. I don't want to be pushy, but I found it strange that after so much
Were you able to reproduce the issue in some minimized way ? Like after
joining fewer tables or changing to join with fewer join conditions ?
On Thu, Apr 18, 20
On Thu, Apr 18, 2019 at 6:01 AM Gunther wrote:
> Hi guys. I don't want to be pushy, but I found it strange that after so
> much lively back and forth getting to the bottom of this, suddenly my last
> night's follow-up remained completely without reply. I wonder if it even got
> received. For those
On Wed, Apr 17, 2019 at 11:52:44PM -0400, Gunther wrote:
Hi guys. I don't want to be pushy, but I found it strange that after
so much lively back and forth getting to the bottom of this, suddenly
my last night's follow-up remained completely without reply. I wonder
if it even got received. For t
Hi guys. I don't want to be pushy, but I found it strange that after so
much lively back and forth getting to the bottom of this, suddenly my
last night's follow-up remained completely without reply. I wonder if it
even got received. For those who read their emails with modern readers
(I know I
On 17/04/2019 18:01, Jean-David Beyer wrote:
On 4/16/19 6:39 PM, Gavin Flower wrote:
I suspect that most things will run a little better with some swap space.
Not always.
$ free
total used free shared buffers cached
Mem: 16254616 13120960 3133656 20
On 4/16/19 6:39 PM, Gavin Flower wrote:
> I suspect that most things will run a little better with some swap space.
Not always.
$ free
             total       used       free     shared    buffers     cached
Mem:      16254616   13120960    3133656      20820     646676   10765380
-/+ buffers/cache:
I wonder if it'd be useful to compile with
./configure CFLAGS=-DHJDEBUG=1
On 4/16/2019 11:30, Tom Lane wrote:
Breakpoint 6, AllocSetAlloc (context=0x29a6450, size=8) at aset.c:718
718 {
(gdb) bt 8
#0 AllocSetAlloc (context=0x29a6450, size=8) at aset.c:718
#1 0x0084e8ad in palloc0 (size=size@entry=8) at mcxt.c:969
#2 0x00702b63 in makeBufFileCommo
On 15/04/2019 08:23, Gunther wrote:
For weeks now, I am banging my head at an "out of memory" situation.
There is only one query I am running on an 8 GB system; whatever I
try, I get knocked out on this out of memory. It is extremely
impenetrable to understand and fix this error. I guess I co
Jeff Janes writes:
> On Mon, Apr 15, 2019 at 9:49 PM Gunther wrote:
>> Isn't there some other way?
> I wonder if valgrind or something like that could be of use. I don't know
> enough about those tools to know. One problem is that this is not really a
> leak. If the query completely successfu
On Mon, Apr 15, 2019 at 9:49 PM Gunther wrote:
> Jeff Janes had more
>
> Breakpoint 2, AllocSetAlloc (context=0x1168230, size=8272) at aset.c:715
>> 715 {
>> (gdb) p context->name
>> $8 = 0x96ce5b "ExecutorState"
>>
>>
> I think that the above one might have been the one you wanted.
>
> Not s
Gunther writes:
> And there we go:
> Breakpoint 6, AllocSetAlloc (context=0x29a6450, size=8) at aset.c:718
> 718 {
> (gdb) bt 8
> #0 AllocSetAlloc (context=0x29a6450, size=8) at aset.c:718
> #1 0x0084e8ad in palloc0 (size=size@entry=8) at mcxt.c:969
> #2 0x00702b63 in makeB
It is confirmed: these two call paths are the only ones, or at least
probably the only ones that occur with any significant frequency.
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28576 postgres 20 0 2695304 1.0g 200764 R 11.3 13.8 4:20.13 postgres:
postgres integr
I saw your replies. If there were a way of using gdb commands to have a
conditional breakpoint which only fires if the n-th caller in the
chain is not a certain source location, then one could exclude the bulk
of these allocations and focus better.
But I decided to try to re-factor this quer
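Something close to this wish does exist in reasonably recent gdb: the
convenience functions $_caller_is / $_any_caller_is can test the calling
frames from a breakpoint condition. A sketch, with the excluded function name
and frame depth chosen only as an example (this needs a gdb built with Python
support, and <backend_pid> is a placeholder):
$ gdb -batch -p <backend_pid> \
    -ex 'break AllocSetAlloc if !$_any_caller_is("ExecHashJoinGetSavedTuple", 8)' \
    -ex 'continue' \
    -ex 'bt 8'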
Gunther writes:
> So what I am wondering now is that there seems to be an EXPLOSION of memory
> consumption near the time of the crash. That ExecutorState has
> 2,238,648,944 but just until the very last second(s) the RES memory as
> per top was 1.5 GB I swear.
That's not hugely surprising really,
Gunther writes:
> Is there a way of dumping that memory map info during normal runtime, by
> calling a function with the debugger?
Sure, "call MemoryContextStats(TopMemoryContext)"
(or actually, since you know which context is the problematic one,
just print that one context)
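Done non-interactively from the shell, that amounts to something like the
following; the output goes to the backend's stderr, i.e. the server log (the
PID is a placeholder):
$ gdb -batch -p <backend_pid> -ex 'call MemoryContextStats(TopMemoryContext)'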
On 4/15/2019 21:49, Gunther wrote:
I'm going to try without that DISTINCT step, or perhaps by dismantling
this query until it works without this excessive memory growth.
It also failed. Out of memory. The resident memory size of the backend
was 1.5 GB before it crashed.
TopMemoryContext: 4
On 2019-Apr-15, Tom Lane wrote:
> It's barely conceivable that in your particular query, there's something
> acting to break that which doesn't manifest typically; but I think it's
> much more likely that you simply haven't found the culprit allocation.
> It's quite feasible that many many ExecHas
Gunther writes:
> Tom (BTW, your mail server rejects my direct mail,
[ raised eyebrow ] It's coming through fine AFAICS.
>> I'm pretty sure that's not the droid we're looking for.
>> ExecHashJoinGetSavedTuple does palloc a new tuple, but it immediately
>> sticks it into a TupleTableSlot that wi
Wow, we are getting somewhere.
Tom (BTW, your mail server rejects my direct mail, but I'm glad you got
it through the list), you say:
I'm pretty sure that's not the droid we're looking for.
ExecHashJoinGetSavedTuple does palloc a new tuple, but it immediately
sticks it into a TupleTableSlot t
On Mon, Apr 15, 2019 at 12:34 PM Gunther wrote:
> Breakpoint 2, AllocSetAlloc (context=0x1168230, size=8272) at aset.c:715
> 715 {
> (gdb) p context->name
> $8 = 0x96ce5b "ExecutorState"
>
>
I think that the above one might have been the one you wanted.
> I guess I should run this for a lit
Alvaro Herrera writes:
> Seems that ExecHashJoinGetSavedTuple stores a minimalTuple and sets the
> shouldFree flag to "true", and then in ExecHashJoinNewBatch, callee
> ExecFetchSlotMinimalTuple sets shouldFree to false unconditionally when
> the slot uses minimal tuple ops. Maybe that's correct,
On 2019-Apr-15, Gunther wrote:
> #0 AllocSetAlloc (context=0x1168230, size=385) at aset.c:715
> #1 0x0084e6cd in palloc (size=385) at mcxt.c:938
> #2 0x0061019c in ExecHashJoinGetSavedTuple
> (file=file@entry=0x8bbc528, hashvalue=hashvalue@entry=0x7fff2e4ca76c,
> tupleSlot=
Gunther writes:
> Now I give you a bt so we have something to look at:
> #0 AllocSetAlloc (context=0x1168230, size=385) at aset.c:715
> #1 0x0084e6cd in palloc (size=385) at mcxt.c:938
> #2 0x0061019c in ExecHashJoinGetSavedTuple
> (file=file@entry=0x8bbc528, hashvalue=hashval
OK Guys, you are very kind to continue taking an interest in this matter.
I will try what I can to help squish the bug.
Tomas Vondra just added a good idea that explains why I get the out of
memory error while still having so much cache available:
# sysctl vm.overcommit_memory
vm.overcommit_memory =
On Mon, Apr 15, 2019 at 11:28 AM Tom Lane wrote:
> Jeff Janes writes:
> > To get it to happen faster, maybe you could run the server with a small
> > setting of "ulimit -v"? Or, you could try to capture it live in gdb.
> > Unfortunately I don't know how to set a breakpoint for allocations into
I wrote:
> If you can use gdb at all, it's not that hard to break on allocations
> into a specific context; I've done it many times. The strategy is
> basically
> 1. Let query run long enough for memory usage to start increasing,
> then attach to backend with gdb.
BTW, just to clarify that strate
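Spelled out as a transcript, the strategy reads roughly like this; the
0x1168230 address is only an example and stands in for whatever ExecutorState
address the context dump reveals on a given run (it matches the backtraces
quoted elsewhere in this thread):
# Attach once the backend's resident size has started to climb.
$ gdb -p <backend_pid>
# Dump the context tree to the server log, note the address of the bloated
# "ExecutorState" context, then stop only on allocations made in it.
(gdb) call MemoryContextStats(TopPortalContext)
(gdb) break AllocSetAlloc if context == (MemoryContext) 0x1168230
(gdb) continue
(gdb) bt 8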
On Sun, Apr 14, 2019 at 05:19:50PM -0400, Jeff Janes wrote:
On Sun, Apr 14, 2019 at 4:51 PM Gunther wrote:
For weeks now, I am banging my head at an "out of memory" situation.
There is only one query I am running on an 8 GB system; whatever I try,
I get knocked out on this out of
Jeff Janes writes:
> To get it to happen faster, maybe you could run the server with a small
> setting of "ulimit -v"? Or, you could try to capture it live in gdb.
> Unfortunately I don't know how to set a breakpoint for allocations into a
> specific context, and setting a breakpoint for any memo
On Sun, Apr 14, 2019 at 11:59:45PM -0400, Gunther wrote:
On 4/14/2019 23:24, Tom Lane wrote:
ExecutorState: 2234123384 total in 266261 blocks; 3782328 free (17244
chunks); 2230341056 used
Oooh, that looks like a memory leak right enough. The ExecutorState
should not get that big for an
On Sun, Apr 14, 2019 at 11:59 PM Gunther wrote:
> Is there any doubt that this might be a problem with Linux? Because if
> you want, I can whip out a FreeBSD machine, compile pgsql, and attach
> the same disk, and try it there. I am longing to have a reason to move
> back to FreeBSD anyway. But
On Sun, Apr 14, 2019 at 11:04 PM Gunther wrote:
> Could you rerun the query with \set VERBOSITY verbose to show the file/line
> that's failing ?
>
> Here goes:
>
> integrator=# \set VERBOSITY verbose
> integrator=# SET ENABLE_NESTLOOP TO OFF;
> SET
> integrator=# INSERT INTO reports.BusinessOpera
On Sun, Apr 14, 2019 at 11:59:45PM -0400, Gunther wrote:
> On 4/14/2019 23:24, Tom Lane wrote:
> >Any chance of extracting a self-contained test case that reproduces this?
> With 18 million rows involved in the base tables, hardly.
Were you able to reproduce the problem with SELECT (without INSERT
On 4/14/2019 23:24, Tom Lane wrote:
ExecutorState: 2234123384 total in 266261 blocks; 3782328 free (17244
chunks); 2230341056 used
Oooh, that looks like a memory leak right enough. The ExecutorState
should not get that big for any reasonable query.
2.2 GB is massive yes.
Your error a
Gunther writes:
>ExecutorState: 2234123384 total in 266261 blocks; 3782328 free (17244
> chunks); 2230341056 used
Oooh, that looks like a memory leak right enough. The ExecutorState
should not get that big for any reasonable query.
Your error and stack trace show a failure in HashBatch
Thanks Justin Pryzby too, and Jeff Janes, responding to both of you for
efficiency. Answers and more logs and the gdb backtrace below.
The version is 10.2 latest.
v10.7 is available; could you upgrade ?
Sorry, I meant 11.2, actually the latest.
What are these set to ? shared_buffers? work_mem?
sh
On Sun, Apr 14, 2019 at 9:06 PM Gunther wrote:
> Thanks for looking at my problem Tom Lane and Jeff Janes. Sorry for not
> having given enough detail.
>
> The version is 10.2 latest. The database was originally built with 10.1
> and then just started with 10.2.
>
Do you mean 11.2? The latest in
On Sun, Apr 14, 2019 at 05:19:11PM -0400, Tom Lane wrote:
! Gunther writes:
! > For weeks now, I am banging my head at an "out of memory" situation.
! > There is only one query I am running on an 8 GB system; whatever I try,
! > I get knocked out on this out of memory. It is extremely impenetrab
On Sun, Apr 14, 2019 at 09:05:48PM -0400, Gunther wrote:
> Thanks for looking at my problem Tom Lane and Jeff Janes. Sorry for not
> having given enough detail.
>
> The version is 10.2 latest.
v10.7 is available; could you upgrade ?
What are these set to ? shared_buffers? work_mem?
Was postgre
Thanks for looking at my problem Tom Lane and Jeff Janes. Sorry for not
having given enough detail.
The version is 10.2 latest. The database was originally built with 10.1
and then just started with 10.2. No dump and reload or pg_upgrade.
Underlying system is 64bit Amazon Linux (CentOS like) r
On Sun, Apr 14, 2019 at 4:51 PM Gunther wrote:
> For weeks now, I am banging my head at an "out of memory" situation. There
> is only one query I am running on an 8 GB system; whatever I try, I get
> knocked out on this out of memory.
>
Is PostgreSQL throwing an error with OOM, or is it getting kill
Gunther writes:
> For weeks now, I am banging my head at an "out of memory" situation.
> There is only one query I am running on an 8 GB system; whatever I try,
> I get knocked out on this out of memory. It is extremely impenetrable to
> understand and fix this error. I guess I could add a swap