You are right. When I added that code, it gave off a really bad smell.
It's not worth the effort.
Thanks for your reply and suggestions.
--
Sincerely
Fan Yang
fan yang writes:
> - src/port/quotes.c
> At line 38, in the function "escape_single_quotes_ascii",
> "malloc" is used to get some memory, and the pointer
> returned by "malloc" is returned to the caller.
> So, any caller that uses this function should free this memory.
> -
Hi all,
While reading the code, I found some code that causes a memory leak:
- src/port/quotes.c
At line 38, in the function "escape_single_quotes_ascii",
"malloc" is used to get some memory, and the pointer
returned by "malloc" is returned to the caller.
So, any caller that uses this function should free
On 10/11/2016 12:56 AM, Peter Geoghegan wrote:
Also, something about trace_sort here:
+#ifdef TRACE_SORT
+	if (trace_sort)
+		elog(LOG, "using " INT64_FORMAT " KB of memory for read buffers among %d input tapes",
+			 (state->availMem) / 1024, numInputTapes);
+#endif
+
+
On Mon, Oct 10, 2016 at 5:21 AM, Heikki Linnakangas wrote:
> Admittedly that's confusing. Thinking about this some more, I came up with
> the attached. I removed the separate LogicalTapeAssignReadBufferSize() call
> altogether - the read buffer size is now passed as argument to
On 10/06/2016 06:44 PM, Peter Geoghegan wrote:
While the fix you pushed was probably a good idea anyway, I still
think you should not use state->maxTapes to exhaustively call
LogicalTapeAssignReadBufferSize() on every tape, even non-active
tapes. That's the confusing part.
Admittedly
On Thu, Oct 6, 2016 at 8:44 AM, Peter Geoghegan wrote:
> Besides, what I propose to do is really exactly the same as what you
> also want to do, except it avoids actually changing state->maxTapes.
> We'd just pass down what you propose to assign to state->maxTapes
> directly,
On Thu, Oct 6, 2016 at 12:00 AM, Heikki Linnakangas wrote:
> This is related to earlier the discussion with Peter G, on whether we should
> change state->maxTapes to reflect the actual number of tapes that were used,
> when that's less than maxTapes. I think his confusion about
On 10/06/2016 07:50 AM, Tomas Vondra wrote:
it seems e94568ecc10 has a pretty bad memory leak. A simple
Oops, fixed, thanks for the report!
To be precise, this wasn't a memory leak, just a gross overallocation of
memory. The new code in tuplesort.c assumes that it's harmless to call
Hi,
it seems e94568ecc10 has a pretty bad memory leak. A simple
pgbench -i -s 300
allocates ~32GB of memory before it fails
vacuum...
set primary keys...
ERROR: out of memory
DETAIL: Failed on request of size 134184960.
The relevant bit from the memory context stats:
Andrey Zhidenkov writes:
> It's very strange, but when I use the expression 'update test set test =
> 'test' where id = 1' as the argument of plpy.execute(), memory does not
> grow at all...
Well, that suggests it's not particularly plpython's fault at all, but
a leak
It's very strange, but when I use the expression 'update test set test =
'test' where id = 1' as the argument of plpy.execute(), memory does not
grow at all...
On Sun, Jun 26, 2016 at 9:05 PM, Andrey Zhidenkov
wrote:
> Thank you for your answer, Tom.
>
> I've tried code in
Thank you for your answer, Tom.
I've tried the code in your example and I still see always-growing
memory consumption (1 MB per second). As before, I do not see
growing memory if
I use a 'select 1' query as the argument of plpy.execute(). Table test does
not have any triggers or foreign keys, I
Andrey Zhidenkov writes:
> I see memory consumption in htop and pg_activity tools.
"top" can be pretty misleading if you don't know how to interpret its
output, specifically that you have to discount whatever it shows as
SHR space. That just represents the amount of
I found commit, that fixes some memory leaks in 9.6 beta 2:
https://github.com/postgres/postgres/commit/8c75ad436f75fc629b61f601ba884c8f9313c9af#diff-4d0cb76412a1c4ee5d9c7f76ee489507
I'm interested in how Tom Lane checked that there are no more leaks in plpython.
On Sat, Jun 25, 2016 at 4:54 AM, Andrey
For testing I wrote a script in Python, which calls a test function via psycopg2:
#!/usr/bin/env python2
import psycopg2
conn = psycopg2.connect('xxx')
cursor = conn.cursor()
cursor.execute('set application_name to \'TEST\'')
for i in range(1, 100):
    cursor.execute('select test()')
On Fri, Jun 24, 2016 at 6:41 PM, Andrey Zhidenkov <
andrey.zhiden...@gmail.com> wrote:
> For example, when I call this procedure
> many times,
Call how? Specifically, how are you handling transactions in the calling
client? And what/how are you measuring memory consumption?
David J.
I have postgresql 9.4.8 on my server and I've noticed always-growing
memory when I use plpython. I've made some tests and found a few
situations where memory leaks. For example, when I call this procedure
many times, I can see always-growing memory:
create or replace
function test() returns
I had the opportunity to perform tests on 9.5, and can confirm that the
memory leak I was seeing is solved with the patch (and that's great :) )
Regards
Marc
On 18/04/2016 17:53, Julien Rouhaud wrote:
> On 18/04/2016 16:33, Tom Lane wrote:
>> I poked at this over the weekend, and got more unhappy
On 18/04/2016 16:33, Tom Lane wrote:
>
> I poked at this over the weekend, and got more unhappy the more I poked.
> Aside from the memory leakage issue, there are multiple coding-rule
> violations besides the one you noted about scope of the critical sections.
> One example is that in the
Julien Rouhaud writes:
> On 16/04/2016 20:45, Tom Lane wrote:
>> I think this needs to be redesigned so that the critical section and WAL
>> insertion calls all happen within a single straight-line piece of code.
>>
>> We could try making that place be
On 16/04/2016 20:45, Tom Lane wrote:
> Julien Rouhaud writes:
>
>> Also, in dataPlaceToPageLeaf() and ginVacuumPostingTreeLeaf(), shouldn't
>> the START_CRIT_SECTION() calls be placed before the xlog code?
>
> Yeah, they should. Evidently somebody kluged it to avoid
Julien Rouhaud writes:
> After some digging, the leak comes from walbufbegin palloc in
> registerLeafRecompressWALData().
> IIUC, walbufbegin isn't pfree-d and can't be before XLogInsert() is
> called, which happens in ginPlaceToPage().
Hmm.
> I don't see a simple way
Hello,
Another colleague provided a report of a memory leak during a GIN index
build. A test case to reproduce is attached (you need to create a gin index
on the val column after loading). Sorry, it generates a 24GB table, and
memory starts leaking with a 1GB maintenance_work_mem after reaching 8 or
9
Julien Rouhaud writes:
> My colleague Adrien reported a memory leak to me in GIN indexes while doing
> some benchmarks on several AMs.
> ...
> I'm not at all familiar with GIN code, but naive attached patch seems to
> fix the issue and not break anything. I can reproduce
Hello,
My colleague Adrien reported a memory leak to me in GIN indexes while doing
some benchmarks on several AMs.
Here is a test case to reproduce the issue:
CREATE TABLE test AS (
SELECT t
FROM generate_series(now(), now() + interval '10 day', '1 second')
AS d(t)
CROSS JOIN
Jeff Janes writes:
> I bisected it down to:
> d88976cfa1302e8dccdcbfe55e9e29faee8c0cdf is the first bad commit
> commit d88976cfa1302e8dccdcbfe55e9e29faee8c0cdf
> Author: Heikki Linnakangas
> Date: Wed Feb 4 17:40:25 2015 +0200
> Use a
On Fri, Mar 11, 2016 at 11:40 PM, Jaime Casanova
wrote:
> Hi,
>
> On the spanish list, Felipe de Jesús Molina Bravo, reported a few days
> back that a query that worked well in 9.4 consume all memory in 9.5.
> With the self contained test he provided us i
On Fri, Jul 3, 2015 at 3:14 AM, Heikki Linnakangas hlinn...@iki.fi wrote:
I committed some of these that seemed like improvements on readability
grounds, but please just mark the rest as ignore in coverity.
Done. Thanks.
--
Michael
--
Sent via pgsql-hackers mailing list
On 06/08/2015 09:48 AM, Michael Paquier wrote:
Hi all,
Please find attached a set of fixes for a couple of things in src/bin:
- pg_dump/pg_dumpall:
-- getFormattedTypeName, convertTSFunction and myFormatType return
strdup'd results that are never free'd.
-- convertTSFunction returns const char.
On Tue, Jun 9, 2015 at 10:09 AM, Michael Paquier
michael.paqu...@gmail.com wrote:
On Tue, Jun 9, 2015 at 6:25 AM, Heikki Linnakangas wrote:
I'm still not sure if I should've just reverted that refactoring, to make
XLogFileCopy() look the same in master and back-branches, which makes
On Wed, Jul 1, 2015 at 10:58 AM, Fujii Masao wrote:
On Tue, Jun 9, 2015 at 10:09 AM, Michael Paquier wrote:
That's a valid concern. What about the attached then? I think that it
is still good to keep upto to copy only data up to the switch point at
recovery exit. InstallXLogFileSegment()
On 06/08/2015 09:04 PM, Fujii Masao wrote:
On Mon, Jun 8, 2015 at 11:52 AM, Michael Paquier
michael.paqu...@gmail.com wrote:
On Fri, Jun 5, 2015 at 10:45 PM, Fujii Masao wrote:
Why don't we call InstallXLogFileSegment() at the end of XLogFileCopy()?
If we do that, the risk of memory leak
On Tue, Jun 9, 2015 at 6:25 AM, Heikki Linnakangas wrote:
I'm still not sure if I should've just reverted that refactoring, to make
XLogFileCopy() look the same in master and back-branches, which makes
back-patching easier, or keep the refactoring, because it makes the code
slightly nicer. But
On Tue, Jun 9, 2015 at 6:25 AM, Heikki Linnakangas hlinn...@iki.fi wrote:
On 06/08/2015 09:04 PM, Fujii Masao wrote:
On Mon, Jun 8, 2015 at 11:52 AM, Michael Paquier
michael.paqu...@gmail.com wrote:
On Fri, Jun 5, 2015 at 10:45 PM, Fujii Masao wrote:
Why don't we call
Hi all,
Please find attached a set of fixes for a couple of things in src/bin:
- pg_dump/pg_dumpall:
-- getFormattedTypeName, convertTSFunction and myFormatType return
strdup'd results that are never free'd.
-- convertTSFunction returns const char. I fail to see the point of
that... In my opinion
On Mon, Jun 8, 2015 at 3:48 PM, Michael Paquier
michael.paqu...@gmail.com wrote:
Hi all,
Please find attached a set of fixes for a couple of things in src/bin:
- pg_dump/pg_dumpall:
-- getFormattedTypeName, convertTSFunction and myFormatType return
strdup'd results that are never free'd.
--
On Mon, Jun 8, 2015 at 10:26 PM, Michael Paquier
michael.paqu...@gmail.com wrote:
On Mon, Jun 8, 2015 at 3:48 PM, Michael Paquier
michael.paqu...@gmail.com wrote:
Hi all,
Please find attached a set of fixes for a couple of things in src/bin:
- pg_dump/pg_dumpall:
-- getFormattedTypeName,
On Fri, Jun 5, 2015 at 10:45 PM, Fujii Masao wrote:
Why don't we call InstallXLogFileSegment() at the end of XLogFileCopy()?
If we do that, the risk of memory leak you're worried will disappear at all.
Yes, that looks fine, XLogFileCopy() would copy to a temporary file,
then install it
On Fri, Jun 5, 2015 at 12:39 PM, Michael Paquier
michael.paqu...@gmail.com wrote:
On Thu, Jun 4, 2015 at 10:40 PM, Fujii Masao masao.fu...@gmail.com wrote:
On Mon, Jun 1, 2015 at 4:24 PM, Michael Paquier
michael.paqu...@gmail.com wrote:
On Thu, May 28, 2015 at 9:09 PM, Michael Paquier
On Thu, Jun 4, 2015 at 10:40 PM, Fujii Masao masao.fu...@gmail.com wrote:
On Mon, Jun 1, 2015 at 4:24 PM, Michael Paquier
michael.paqu...@gmail.com wrote:
On Thu, May 28, 2015 at 9:09 PM, Michael Paquier
michael.paqu...@gmail.com wrote:
Since commit de768844, XLogFileCopy of xlog.c
On Mon, Jun 1, 2015 at 4:24 PM, Michael Paquier
michael.paqu...@gmail.com wrote:
On Thu, May 28, 2015 at 9:09 PM, Michael Paquier
michael.paqu...@gmail.com wrote:
Since commit de768844, XLogFileCopy of xlog.c returns to caller a
pstrdup'd string that can be used afterwards for other things.
On Thu, May 28, 2015 at 9:09 PM, Michael Paquier
michael.paqu...@gmail.com wrote:
Since commit de768844, XLogFileCopy of xlog.c returns to caller a
pstrdup'd string that can be used afterwards for other things.
XLogFileCopy is used in only one place, and it happens that the result
string is
Hi all,
Since commit de768844, XLogFileCopy of xlog.c returns to caller a
pstrdup'd string that can be used afterwards for other things.
XLogFileCopy is used in only one place, and it happens that the result
string is never freed at all, leaking memory.
Attached is a patch to fix the problem.
Heikki Linnakangas hlinnakan...@vmware.com writes:
[ assorted GIN leaks ]
I think we need a more wholesale approach. I'm thinking of adding a new
memory context to contain everything related to the scan keys, which can
then be destroyed in whole.
We haven't heard any complaints about
While looking at the segfault that Olaf Gawenda reported (bug #12694), I
realized that the GIN fast scan patch introduced a small memory leak to
re-scanning a GIN index. In a nutshell, freeScanKeys() needs to pfree()
the two new arrays, requiredEntries and additionalEntries.
After fixing
On Tue, Jan 13, 2015 at 5:45 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
This looks like a false positive to me. PQgetCopyData() will only return a
buffer if its return value is 0
Right. Sorry for the noise.
--
Michael
On 01/25/2014 11:36 PM, Bruce Momjian wrote:
On Tue, Jun 18, 2013 at 09:07:59PM +0300, Heikki Linnakangas wrote:
Hmm. I could repeat this, and it seems that the catcache for
pg_statistic accumulates negative cache entries. Those slowly take up
the memory.
Digging a bit deeper, this is a
On Tue, Jun 18, 2013 at 09:07:59PM +0300, Heikki Linnakangas wrote:
Hmm. I could repeat this, and it seems that the catcache for
pg_statistic accumulates negative cache entries. Those slowly take up
the memory.
Digging a bit deeper, this is a rather common problem with negative
catcache
From: Jeff Janes jeff.ja...@gmail.com
On Tue, Jun 18, 2013 at 3:40 PM, MauMau maumau...@gmail.com wrote:
Really? Would the catcache be polluted with entries for nonexistent
tables? I'm surprised at this. I don't think it is necessary to speed up
the query that fails with nonexistent tables,
Hello,
I've encountered a memory leak problem when I use a PL/pgsql function which
creates and drops a temporary table. I couldn't find any similar problem in
the mailing list. I'd like to ask you whether this is a PostgreSQL's bug.
Maybe I should post this to pgsql-bugs or pgsql-general,
On 18.06.2013 14:27, MauMau wrote:
The cause of the memory increase appears to be CacheMemoryContext. When
I attached to postgres with gdb and ran call
MemoryContextStats(TopMemoryContext) several times, the size of
CacheMemoryContext kept increasing.
Hmm. I could repeat this, and it seems
On 18.06.2013 15:48, Heikki Linnakangas wrote:
On 18.06.2013 14:27, MauMau wrote:
The cause of the memory increase appears to be CacheMemoryContext. When
I attached to postgres with gdb and ran call
MemoryContextStats(TopMemoryContext) several times, the size of
CacheMemoryContext kept
From: Heikki Linnakangas hlinnakan...@vmware.com
On 18.06.2013 15:48, Heikki Linnakangas wrote:
Hmm. I could repeat this, and it seems that the catcache for
pg_statistic accumulates negative cache entries. Those slowly take up
the memory.
Digging a bit deeper, this is a rather common problem
On Tue, Jun 18, 2013 at 3:40 PM, MauMau maumau...@gmail.com wrote:
From: Heikki Linnakangas hlinnakan...@vmware.com
On 18.06.2013 15:48, Heikki Linnakangas wrote:
Hmm. I could repeat this, and it seems that the catcache for
pg_statistic accumulates negative cache entries. Those slowly take
I'm working on an upgrade of PostgreSQL embedded in a product from
version 8.1.x to 9.1.x. One particular PL/pgSQL function is giving us an
issue as there seems to be a rather severe regression in memory usage --
a query that finishes in 8.1 causes an out of memory exception on 9.1.
Using the
Joe Conway m...@joeconway.com writes:
I'm working on an upgrade of PostgreSQL embedded in a product from
version 8.1.x to 9.1.x. One particular PL/pgSQL function is giving us an
issue as there seems to be a rather severe regression in memory usage --
a query that finishes in 8.1 causes an out
On 05/09/2012 03:08 PM, Tom Lane wrote:
I see no memory leak at all in this example, either in HEAD or 9.1
branch tip. Perhaps whatever you're seeing is an already-fixed bug?
Another likely theory is that you've changed settings from the 8.1
installation. I would expect this example to eat
On 05/09/2012 03:36 PM, Joe Conway wrote:
Good call -- of course that just means my contrived example fails to
duplicate the real issue :-(
In the real example, even with work_mem = 1 MB I see the same behavior
on 9.1.
OK, new script. This more faithfully represents the real life scenario,
On 05/09/2012 05:06 PM, Joe Conway wrote:
OK, new script. This more faithfully represents the real life scenario,
and reproduces the issue on HEAD with out-of-the-box config settings,
versus 8.1 which completes the query having never exceeded a very modest
memory usage:
---
On
Joe Conway m...@joeconway.com writes:
The attached one-liner seems to plug up the majority (although not quite
all) of the leakage.
Looks sane to me. Are you planning to look for the remaining leakage?
regards, tom lane
On 05/09/2012 10:01 PM, Tom Lane wrote:
Joe Conway m...@joeconway.com writes:
The attached one-liner seems to plug up the majority (although not quite
all) of the leakage.
Looks sane to me. Are you planning to look for the remaining leakage?
Actually, now I'm not so sure there really are
On 27.04.2011 04:19, Heikki Linnakangas wrote:
On 26.04.2011 21:30, Tom Lane wrote:
Heikki Linnakangasheikki.linnakan...@enterprisedb.com writes:
The trivial fix is to reset the per-tuple memory context between
iterations.
Have you tested this with SRFs?
ForeignNext seems like quite the
On 26.04.2011 21:30, Tom Lane wrote:
Heikki Linnakangasheikki.linnakan...@enterprisedb.com writes:
The trivial fix is to reset the per-tuple memory context between
iterations.
Have you tested this with SRFs?
ForeignNext seems like quite the wrong place for resetting
exprcontext in any case
Foreign data wrapper's IterateForeignScan() function is supposed to be
called in a short-lived memory context, but the memory context is
actually not reset during query execution. That's a pretty bad memory
leak. I've been testing this with file_fdw and a large file, and SELECT
COUNT(*) FROM
Excerpts from Heikki Linnakangas's message of Tue Apr 26 15:06:51 -0300 2011:
I tried to look around for other executor nodes that might
have the same problem. I didn't see any obvious leaks, although index
scan node seems to call AM's getnext without resetting the memory
context in
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
The trivial fix is to reset the per-tuple memory context between
iterations.
Have you tested this with SRFs?
ForeignNext seems like quite the wrong place for resetting
exprcontext in any case ...
regards,
On Tue, Apr 26, 2011 at 7:15 PM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
Excerpts from Heikki Linnakangas's message of Tue Apr 26 15:06:51 -0300 2011:
I tried to look around for other executor nodes that might
have the same problem. I didn't see any obvious leaks, although index
scan
Hello
our customer showed a very significant memory leak in xml2.
try to
select xpath_number('<data>' || generate_series || '</data>','/data')
from generate_series(1,50);
Attention! It takes all memory very fast.
It never releases the memory allocated for the context and doctree.
Regards
Pavel Stehule
On Fri, Nov 26, 2010 at 17:59, Pavel Stehule pavel.steh...@gmail.com wrote:
our customer showed a very significant memory leak in xml2.
It never releases the memory allocated for the context and doctree.
Why did you change doctree and ctxt to global variables?
I'm not sure why /* xmlFreeDoc(doctree);
2010/11/26 Itagaki Takahiro itagaki.takah...@gmail.com:
On Fri, Nov 26, 2010 at 17:59, Pavel Stehule pavel.steh...@gmail.com wrote:
our customer showed a very significant memory leak in xml2.
It never releases the memory allocated for the context and doctree.
Why did you change doctree and ctxt to
Pavel Stehule pavel.steh...@gmail.com writes:
2010/11/26 Itagaki Takahiro itagaki.takah...@gmail.com:
Why did you change doctree and ctxt to global variables?
I'm not sure why /* xmlFreeDoc(doctree); */ is commented out
at the end of pgxml_xpath(), but is it enough to enable the code?
I am
Excerpts from Tom Lane's message of Fri Nov 26 16:14:08 -0300 2010:
Those static variables are really ugly, and what's more this patch only
stops some of the leakage. Per experimentation, the result object from
pgxml_xpath has to be freed too, once it's been safely converted to
whatever the
Alvaro Herrera alvhe...@commandprompt.com writes:
This looks great. As this fixes a problem that was reported to us two
days ago, I'm interested in backpatching it. Are you going to do it?
Yeah, I'm on it. It's a bit tedious because the back branches are
different ...
2010/11/26 Tom Lane t...@sss.pgh.pa.us:
Pavel Stehule pavel.steh...@gmail.com writes:
2010/11/26 Itagaki Takahiro itagaki.takah...@gmail.com:
Why did you change doctree and ctxt to global variables?
I'm not sure why /* xmlFreeDoc(doctree); */ is commented out
at the end of pgxml_xpath(), but
Oops, my fault. The list returned by ExecInsertIndexTuples() needs to be
freed otherwise lots of lists (one per row) will build up and not be freed
until the end of the query. This actually accounts for even more memory
than the after-trigger event queue. Patch attached.
Of course the
Dean Rasheed dean.a.rash...@googlemail.com writes:
Oops, my fault. The list returned by ExecInsertIndexTuples() needs to be
freed otherwise lots of lists (one per row) will build up and not be freed
until the end of the query. This actually accounts for even more memory
than the after-trigger
On 31 January 2010 16:03, Tom Lane t...@sss.pgh.pa.us wrote:
Dean Rasheed dean.a.rash...@googlemail.com writes:
Oops, my fault. The list returned by ExecInsertIndexTuples() needs to be
freed otherwise lots of lists (one per row) will build up and not be freed
until the end of the query. This
Dean Rasheed dean.a.rash...@googlemail.com writes:
On 31 January 2010 16:03, Tom Lane t...@sss.pgh.pa.us wrote:
It seems a bit unlikely that this would be the largest memory leak in
that area. Can you show a test case that demonstrates this is worth
worrying about?
create table foo(a int
Neil Conway [EMAIL PROTECTED] writes:
I noticed a minor leak in the per-query context when ExecReScanAgg()
is called for a hashed aggregate. During rescan, build_hash_table() is
called to create a new empty hash table in the aggcontext. However,
build_hash_table() also constructs the
On Thu, Oct 16, 2008 at 5:26 AM, Tom Lane [EMAIL PROTECTED] wrote:
It would probably be cleaner to take that logic out of build_hash_table
altogether, and put it in a separate function to be called by
ExecInitAgg.
Yeah, I considered that -- makes sense. Attached is the patch I
applied to HEAD,
I noticed a minor leak in the per-query context when ExecReScanAgg()
is called for a hashed aggregate. During rescan, build_hash_table() is
called to create a new empty hash table in the aggcontext. However,
build_hash_table() also constructs the hash_needed column list in
the per-query context,
Tom Lane wrote:
Gregory Stark [EMAIL PROTECTED] writes:
It seems like the impact of this is self-limiting though. The worst-case is
going to be something which executes an extra pfree for every tuple. Or
perhaps one for every expression in a complex query involving lots of
expressions. Saving
Are we leaking memory in vac_update_relstats?
/* Fetch a copy of the tuple to scribble on */
ctup = SearchSysCacheCopy(RELOID,
                          ObjectIdGetDatum(relid),
                          0, 0, 0);
This copy is not subsequently freed in the function.
Thanks,
Pavan
--
Pavan Deolasee wrote:
Are we leaking memory in vac_update_relstats ?
/* Fetch a copy of the tuple to scribble on */
ctup = SearchSysCacheCopy(RELOID,
                          ObjectIdGetDatum(relid),
                          0, 0, 0);
This copy is not subsequently freed in the
Hi,
It's palloc'd in the current memory context, so it's not serious. It'll
be freed at the end of the transaction, if not before that. That's the
beauty of memory contexts; no need to worry about small allocations like
that.
That's the beauty of memory contexts for small allocations. But
Hi,
That's the beauty of memory contexts for small allocations. But because of
the 'convenience' of memory contexts we sometimes tend to not pay attention
to doing explicit pfrees. As a general rule I think allocations in
TopMemoryContext should be critically examined. I was bitten by this
NikhilS [EMAIL PROTECTED] writes:
One specific case I want to mention here is hash_create(). For local hash
tables if HASH_CONTEXT is not specified, they get created in a context which
becomes a direct child of TopMemoryContext. Wouldn't it be a better idea to
create the table in
On 7/20/07, Heikki Linnakangas [EMAIL PROTECTED] wrote:
Pavan Deolasee wrote:
Are we leaking memory in vac_update_relstats ?
/* Fetch a copy of the tuple to scribble on */
ctup = SearchSysCacheCopy(RELOID,
ObjectIdGetDatum(relid),
Pavan Deolasee [EMAIL PROTECTED] writes:
On 7/20/07, Heikki Linnakangas [EMAIL PROTECTED] wrote:
It's palloc'd in the current memory context, so it's not serious.
Right. But maybe for code completeness, we should add that
missing heap_freetuple.
Personally I've been thinking of mounting an
Tom Lane [EMAIL PROTECTED] writes:
Personally I've been thinking of mounting an effort to get rid of
unnecessary pfree's wherever possible. Particularly in user-defined
functions, cleaning up at the end is a waste of code space and
cycles too, because they're typically called in contexts
Gregory Stark [EMAIL PROTECTED] writes:
It seems like the impact of this is self-limiting though. The worst-case is
going to be something which executes an extra pfree for every tuple. Or
perhaps one for every expression in a complex query involving lots of
expressions. Saving a few extra
Tom Lane wrote:
I've actually thought about making short-term memory
contexts use a variant MemoryContext type in which pfree was a no-op and
palloc was simplified by not worrying at all about recycling space.
That sounds like a good idea to me.
cheers
andrew