On 21.2.2015 19:57, Peter Geoghegan wrote:
> On Fri, Feb 20, 2015 at 9:18 PM, Tomas Vondra
> wrote:
>> The gains for text are also very nice, although in this case that only
>> happens for the smallest scale (1M rows), and for larger scales it's
>> actual
Hi Gavin,
On 21.2.2015 06:35, Gavin Flower wrote:
> On 21/02/15 18:18, Tomas Vondra wrote:
>>
>> OK, so I've repeated the benchmarks with both patches applied, and I
>> think the results are interesting. I extended the benchmark a bit - see
>> the SQL script attach
Hi,
On 21.2.2015 02:06, Tomas Vondra wrote:
> On 21.2.2015 02:00, Andrew Gierth wrote:
>>>>>>> "Tomas" == Tomas Vondra writes:
>>
>> >> Right...so don't test a datum sort case, since that isn't supported
>> >> at
On 21.2.2015 02:00, Andrew Gierth wrote:
>>>>>> "Tomas" == Tomas Vondra writes:
>
> >> Right...so don't test a datum sort case, since that isn't supported
> >> at all in the master branch. Your test case is invalid for that
>
On 21.2.2015 01:45, Peter Geoghegan wrote:
> On Fri, Feb 20, 2015 at 4:42 PM, Tomas Vondra
> wrote:
>> Isn't this patch about adding abbreviated keys for Numeric data type?
>> That's how I understood it, and looking into numeric_sortsup.patch seems
>> to confirm
On 21.2.2015 01:17, Peter Geoghegan wrote:
> On Fri, Feb 20, 2015 at 4:11 PM, Tomas Vondra
> wrote:
>>> So you're testing both the patches (numeric + datum tuplesort) at the
>>> same time?
>>
>> No, I was just testing two similar patches separately.
On 21.2.2015 00:20, Kevin Grittner wrote:
> Tomas Vondra wrote:
>
>> I share the view that this would be very valuable, but the scope
>> far exceeds what can be done within a single GSoC project. But
>> maybe we could split that into multiple pieces, and Eric would
>
On 21.2.2015 00:14, Peter Geoghegan wrote:
> On Fri, Feb 20, 2015 at 1:33 PM, Tomas Vondra
> wrote:
>> For example with the same percentile_disc() test as in the other
>> thread:
>>
>> create table stuff as select random()::numeric as randnum from
>> genera
eful, making
it possible to refresh only the MVs that actually need a refresh.
--
Tomas Vondra   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
create table stuff as select random()::numeric as randnum
from generate_series(1,100);
analyze stuff;
select percentile_disc(0) within group (order by randnum) from stuff;
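As an aside, percentile_disc(0) in the query above simply returns the smallest value: the discrete percentile picks the first sorted value whose cumulative fraction reaches the argument. A minimal pure-Python sketch of that definition (names are mine, purely illustrative, not from any patch):

```python
import math

def percentile_disc(fraction, values):
    """Discrete percentile: the first sorted value whose cumulative
    distribution reaches `fraction`, i.e. 1-based row ceil(fraction * n)."""
    if not 0 <= fraction <= 1:
        raise ValueError("fraction must be in [0, 1]")
    ordered = sorted(values)
    k = max(1, math.ceil(fraction * len(ordered)))
    return ordered[k - 1]

print(percentile_disc(0, [0.42, 0.07, 0.99]))  # the minimum, 0.07
```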
I get pretty much no difference in runtimes (not even for the smallest
dataset, where the Datum patch speedup was significan
000) 9.2 9.8 0.93
generate_series(1,300) 14.5 15.3 0.95
so for a small dataset the speedup is very nice, but for larger sets
there's ~5% slowdown. Is this expected?
On 20.2.2015 21:23, Peter Eisentraut wrote:
> On 2/20/15 3:09 PM, Tomas Vondra wrote:
>> On 20.2.2015 21:01, Peter Eisentraut wrote:
>>> Is there a case where the combining function is different from the
>>> transition function, other than for count?
>>
>> I
and
stddev() aggregates keep state which is equal to
{count(X), sum(X), sum(X*X)}
The 'combine' function gets two such 'state' values, while transition
gets 'state' + next value.
I'm inclined to say that 'combinefn == transfn' is a mi
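To make the distinction concrete, here is a small Python sketch (illustrative only, not PostgreSQL's C implementation) of an avg/stddev-style aggregate with that {count(X), sum(X), sum(X*X)} state:

```python
def transfn(state, x):
    # transition: fold the next input value into one state
    n, s, s2 = state
    return (n + 1, s + x, s2 + x * x)

def combinefn(a, b):
    # combine: merge two partial states (element-wise sums)
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

data = [1.0, 2.0, 3.0, 4.0]

# Aggregate the whole input with the transition function alone...
whole = (0, 0.0, 0.0)
for x in data:
    whole = transfn(whole, x)

# ...or aggregate two halves independently and merge with combine.
left = right = (0, 0.0, 0.0)
for x in data[:2]:
    left = transfn(left, x)
for x in data[2:]:
    right = transfn(right, x)

assert combinefn(left, right) == whole  # same final state either way
```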
This seems to happen because ordered_set_startup() calls
tuplesort_begin_datum() when (use_tuples == true), which only sets
'onlyKey' and leaves (sortKeys == NULL). So 'mergeruns' fails because it
does not expect that.
urrently stuck because of difficulty with implementing
memory accounting). So that's yet another use case for this (both the
'combine' function and the 'serialize/deserialize').
regards
On 20.2.2015 02:58, Tom Lane wrote:
> Tomas Vondra writes:
>> I see the patch only works with the top-level snapshot timestamp,
>> stored in globalStats, but since 9.3 (when the stats were split
>> into per-db files) we track per-database timestamps too.
>
>> Shou
On 23.12.2014 11:28, Heikki Linnakangas wrote:
> On 12/07/2014 03:54 AM, Tomas Vondra wrote:
>> The one interesting case is the 'step skew' with statistics_target=10,
>> i.e. estimates based on mere 3000 rows. In that case, the adaptive
>> estimator significantly o
On 19.2.2015 03:14, Tomas Vondra wrote:
>
> I've noticed two unrelated files
Meh, should be "I noticed the patch removes two unrelated files" ...
>
> ../src/test/modules/dummy_seclabel/expected/dummy_seclabel.out
> ../src/test/modules/dummy_seclabel/sql/dum
tests in 1MB patch ;-)
I've noticed two unrelated files
../src/test/modules/dummy_seclabel/expected/dummy_seclabel.out
../src/test/modules/dummy_seclabel/sql/dummy_seclabel.sql
I suppose that's not intentional, right?
e that's not necessary, because to query database stats you have
to be connected to that particular database and that should write fresh
stats, so the timestamps should not be very different.
regards
On 16.2.2015 03:38, Andrew Gierth wrote:
>>>>>> "Tomas" == Tomas Vondra
>>>>>> writes:
>
> Tomas> Improving the estimates is always good, but it's not going
> to Tomas> fix the case of non-NULL values (it shouldn't be all
ws=50 loops=1)
Planning time: 0.172 ms
Execution time: 514.663 ms
(10 rows)
Without the patch this runs in ~240 seconds and the number of batches
explodes to ~131k.
In theory it might happen that there are just a few hash values and all of
them are exactly the same within the first N bits (t
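A tiny illustration (hypothetical, not the actual nodeHash.c logic) of why that degenerate case defeats batching: if the batch number is taken from N bits of the hash and every hash value agrees on those bits, doubling the batch count never spreads the tuples:

```python
from collections import Counter

def batch_of(hashval, nbatch):
    # assume nbatch is a power of two; use the low bits as the batch number
    return hashval & (nbatch - 1)

# Well-distributed hashes: doubling nbatch shrinks the largest batch.
hashes = range(1000)
c4 = Counter(batch_of(h, 4) for h in hashes)
c8 = Counter(batch_of(h, 8) for h in hashes)
assert max(c8.values()) < max(c4.values())

# Degenerate case: all hash values identical in the batching bits,
# so every tuple lands in the same batch no matter how far we split.
stuck = [(k << 16) | 0x5 for k in range(1000)]
for nbatch in (4, 8, 16, 1024):
    assert len({batch_of(h, nbatch) for h in stuck}) == 1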
On 15.2.2015 21:38, Tom Lane wrote:
> Andres Freund writes:
>> On 2015-02-15 21:07:13 +0100, Tomas Vondra wrote:
>>> On 15.2.2015 20:56, Heikki Linnakangas wrote:
>>>> glibc's malloc() also uses mmap() for larger allocations. Precisely
>>>> because t
On 15.2.2015 21:13, Andres Freund wrote:
> On 2015-02-15 21:07:13 +0100, Tomas Vondra wrote:
>
>> malloc() does that only for allocations over M_MMAP_THRESHOLD, which
>> is 128kB by default. The vast majority of blocks we allocate are <=
>> 8kB, so mmap() almost never happens
On 15.2.2015 20:56, Heikki Linnakangas wrote:
> On 02/15/2015 08:57 PM, Tomas Vondra wrote:
>> One of the wilder ideas (I mentioned beer was involved!) was a memory
>> allocator based on mmap [2], bypassing the libc malloc implementation
>> altogether. mmap() has some nice fea
ent allocator starts with 1kB, then 2kB and
finally 4kB).
Ideas, opinions?
[1] http://linux.die.net/man/2/sbrk
[2] http://linux.die.net/man/2/mmap
diff --git a/sr
That's 1880 vs 1882 on average, so pretty much no difference. Would be
nice if someone else could try this on their machine(s).
regards
diff --git a/src/backen
MemoryContextDelete(astate->mcontext);
> + }
>
> Seems a lot more understandable, and less code too.
Yeah, I agree it's easier to understand.
> I concur with the concerns that the comments could do with more
> work, but haven't attempted to improve them myself.
There were a few comments about this, after the v8 patch, with
recommended comment changes.
regards
Hi,
seems the CF app uses an invalid e-mail address when sending messages to
pgsql-hackers - I've added a comment to one of the patches and got this:
pgsql-hackers-testing@localhost
Unrouteable address
Maybe that's expected as the CF app is new, but I haven't seen it
mentioned in this thre
- feel
> free to ask.
I'll take a look. Can you share the patches etc. - either here, or maybe
send it to me directly?
regards
Tomas
Hi,
On 21.1.2015 09:01, Jeff Davis wrote:
> On Tue, 2015-01-20 at 23:37 +0100, Tomas Vondra wrote:
>> Tom's message where he points that out is here:
>> http://www.postgresql.org/message-id/20707.1396372...@sss.pgh.pa.us
>
> That message also says:
>
> "I
On 21.1.2015 00:38, Michael Paquier wrote:
> On Wed, Jan 21, 2015 at 1:08 AM, Tomas Vondra
>
>> I've tried to reproduce this on my Raspberry PI 'machine' and it's not
>> very difficult to trigger this. About 7 out of 10 'make check' runs fail
l that ugly. I
actually modified both APIs initially, but I think Ali is right that not
breaking the existing API (and keeping the original behavior in that
case) is better. We can break it any time we want in the future, but
it's impossible to "unbreak it" ;-)
regards
On 25.12.2014 22:28, Tomas Vondra wrote:
> On 25.12.2014 21:14, Andres Freund wrote:
>
>> That's indeed odd. Seems to have been lost when the statsfile was
>> split into multiple files. Alvaro, Tomas?
>
> The goal was to keep the logic as close to the original a
ate. I'll wait a bit and then
post an updated version of the patch (unless it gets committed with the
comment fixes before that).
g when it's better to use a
single memory context, and when it's more efficient to use a
separate memory context for each array build state
2) before makeArrayResult() - explaining that it won't free memory
when allocated in a single memory context (and that a pfree()
On 11.12.2014 23:46, Tomas Vondra wrote:
> On 11.12.2014 22:16, Robert Haas wrote:
>> On Thu, Dec 11, 2014 at 2:51 PM, Tomas Vondra wrote:
>>
>>> The idea was that if we could increase the load a bit (e.g. using 2
>>> tuples per bucket instead of 1), we will sti
ns_parsed == NULL;
+ fcinfo.argnull[2] = false;
+ fcinfo.argnull[3] = false;
+ fcinfo.argnull[3] = false;
(8) check_default_seqam without a transaction
* If we aren't inside a transaction, we cannot do database access so
* cannot verify the name. Must accept the v
to
initArrayResultArr(), including the get_element_type() call, and remove
the element_type from the signature. This means initArrayResultAny()
will call get_element_type() twice, but I guess that's negligible.
And everyone who calls initArrayResultArr() will get the error handling
for free
Hi,
On 23.12.2014 10:16, Jeff Davis wrote:
> It seems that these two patches are being reviewed together. Should
> I just combine them into one? My understanding was that some wanted
> to review the memory accounting patch separately.
>
> On Sun, 2014-12-21 at 20:19 +0100, Tom
On 31.12.2014 17:29, Andrew Dunstan wrote:
>
> Sorry, I should have tested it. This seems to work:
>
>if ($branch eq 'REL9_0_STABLE')
>{
> $PGBuild::Options::skip_steps .= ' pl-install-check';
> $main::skip_steps{'pl-install-check'} = 1;
>}
>
> cheers
Meh, no problem
On 28.12.2014 00:46, Noah Misch wrote:
> On Tue, Dec 23, 2014 at 03:32:59PM +0100, Tomas Vondra wrote:
>> On 23.12.2014 15:21, Andrew Dunstan wrote:
>>>
>>> No, config_opts is what's passed to configure. Try something like:
>>
On 26.12.2014 02:59, Tom Lane wrote:
> Tomas Vondra writes:
>> On 25.12.2014 22:40, Tom Lane wrote:
>>> I think that hamster has basically got a tin can and string for an I/O
>>> subsystem. It's not real clear to me whether there's actually been an
On 25.12.2014 22:40, Tom Lane wrote:
> Tomas Vondra writes:
>> The strange thing is that the split happened ~2 years ago, which is
>> inconsistent with the sudden increase of this kind of issues. So maybe
>> something changed on that particular animal (a failing SD card
On 25.12.2014 22:16, Tom Lane wrote:
> Tomas Vondra writes:
>> On 25.12.2014 20:36, Tom Lane wrote:
>>> BTW, I notice that in the current state of pgstat.c, all the logic
>>> for keeping track of request arrival times is dead code, because
>>
On 25.12.2014 21:14, Andres Freund wrote:
> On 2014-12-25 14:36:42 -0500, Tom Lane wrote:
>
> My guess is that a checkpoint happened at that time. Maybe it'd be a
> good idea to make pg_regress start postgres with log_checkpoints
> enabled? My guess is that we'd find horrendous 'sync' times.
>
>
On 25.12.2014 20:36, Tom Lane wrote:
>
> Yeah, I've been getting more annoyed by that too lately. I keep
> wondering though whether there's an actual bug underneath that
> behavior that we're failing to see. PGSTAT_MAX_WAIT_TIME is already
> 10 seconds; it's hard to credit that increasing it still
On 23.12.2014 15:21, Andrew Dunstan wrote:
>
> No, config_opts is what's passed to configure. Try something like:
>
> if ($branch eq 'REL9_0_STABLE')
> {
> $skip_steps{'pl-install-check'} = 1;
> }
Applied to all three animals.
Tomas
o patches is
trivial, splitting them not so much.
> On Sun, 2014-12-21 at 20:19 +0100, Tomas Vondra wrote:
>> That's the only conflict, and after fixing it it compiles OK.
>> However, I got a segfault on the very first query I tried :-(
>
> If lookup_hash_entry doesn'
On 23.12.2014 09:19, Noah Misch wrote:
> On Sat, Dec 20, 2014 at 07:28:33PM +0100, Tomas Vondra wrote:
>> On 20.12.2014 19:05, Tom Lane wrote:
>>> Locale cs_CZ.WIN-1250 is evidently marked with a codeset property of
>>> "ANSI_X3.4-1968" (which means ol
On 22.12.2014 18:41, Andres Freund wrote:
> On 2014-12-22 18:17:56 +0100, Tomas Vondra wrote:
>> On 22.12.2014 17:47, Alvaro Herrera wrote:
>>> Tomas Vondra wrote:
>>>> On 22.12.2014 07:36, Tatsuo Ishii wrote:
>>>>> On 22.12.2014 00:28, Tomas Vondra
On 22.12.2014 17:47, Alvaro Herrera wrote:
> Tomas Vondra wrote:
>> On 22.12.2014 07:36, Tatsuo Ishii wrote:
>>> On 22.12.2014 00:28, Tomas Vondra wrote:
>
>>>> (8) Also, I think it's not necessary to define function prototypes for
>>>> exec
On 22.12.2014 07:36, Tatsuo Ishii wrote:
> On 22.12.2014 00:28, Tomas Vondra wrote:
>>
>> (2) The 'executeStatement2' API is a bit awkward as the signature
>>
>> executeStatement2(PGconn *con, const char *sql, const char *table);
>>
>>
On 22.12.2014 10:07, Petr Jelinek wrote:
> On 21/12/14 18:38, Tomas Vondra wrote:
>>
>> (1) The patch adds a new catalog, but does not bump CATVERSION.
>>
>
> I thought this was always done by committer?
Right. Sorry for the noise.
>
>> (2) The catalog namin
Hi,
On 21.12.2014 15:58, Tatsuo Ishii wrote:
>> On Sun, Dec 14, 2014 at 11:43 AM, Tatsuo Ishii wrote:
If we care enough about that case to attempt the vacuum anyway
then we need to do something about the error message; either
squelch it or check for the existence of the tables befo
On 21.12.2014 20:19, Tomas Vondra wrote:
>
> However, I got a segfault on the very first query I tried :-(
>
> create table test_hash_agg as select i AS a, i AS b, i AS c, i AS d
> from generate_series(1,1000) s(i);
>
> analyze test_hash_agg;
>
>
On 2.12.2014 06:14, Jeff Davis wrote:
> On Sun, 2014-11-30 at 17:49 -0800, Peter Geoghegan wrote:
>> On Mon, Nov 17, 2014 at 11:39 PM, Jeff Davis wrote:
>>> I can also just move isReset there, and keep mem_allocated as a uint64.
>>> That way, if I find later that I want to track the aggregated val
Hi,
On 18.12.2014 13:14, Petr Jelinek wrote:
> Hi,
>
> v2 version of this patch is attached.
I did a review of this v2 patch today. I plan to do a bit more testing,
but these are my comments/questions so far:
(0) There's a TABLESAMPLE page at the wiki, not updated since 2012:
https://wiki.
On 21.12.2014 02:54, Alvaro Herrera wrote:
> Tomas Vondra wrote:
>> Attached is v5 of the patch, fixing an error with releasing a shared
>> memory context (invalid flag values in a few calls).
>
> The functions that gain a new argument should get their comment updated,
>
Attached is v5 of the patch, fixing an error with releasing a shared
memory context (invalid flag values in a few calls).
kind regards
Tomas Vondra
diff --git a/src/backend/executor/nodeSubplan.c b/src/backend/executor/nodeSubplan.c
index d9faf20..9c97755 100644
--- a/src/backend/executor
i Akbar <mailto:the.ap...@gmail.com>>:
>
>
> 2014-12-16 6:27 GMT+07:00 Tomas Vondra <mailto:t...@fuzzy.cz>>:
> Just fast-viewing the patch.
>
> The patch is not implementing the checking for not creating new
> context in
On 20.12.2014 19:35, Tom Lane wrote:
> Tomas Vondra writes:
>> On 20.12.2014 19:05, Tom Lane wrote:
>>> I am betting that you recreated them differently from before.
>
>> And you're probably right. Apparently, I recreated them like this:
>
>>
On 20.12.2014 19:05, Tom Lane wrote:
> Tomas Vondra writes:
>> I believe the locale system (at the OS level) works just like before. I
>> remember I had to manually create the locales while initially setting up
>> the animals. Then, ~2 months ago something happened (I assume
On 20.12.2014 18:32, Tom Lane wrote:
> Tomas Vondra writes:
>> On 20.12.2014 18:13, Pavel Stehule wrote:
>>> It is Microsoft encoding, - it is not available on Linux
>
>> Not true. It is available on Linux, and the regression tests were
>> running with it for
On 20.12.2014 17:48, Noah Misch wrote:
> On Sat, Dec 20, 2014 at 05:14:03PM +0100, CSPUG wrote:
>> On 20.12.2014 07:39, Noah Misch wrote:
>>> Buildfarm members magpie, treepie and fulmar went absent on
>>> 2014-10-29. Since returning on 2014-11-16, they have consistently
>>> failed with 'initdb: in
On 20.12.2014 18:13, Pavel Stehule wrote:
>
> 2014-12-20 17:48 GMT+01:00 Noah Misch
> $ LANG=cs_CZ.WIN-1250 locale LC_NUMERIC
>
>
> It is Microsoft encoding, - it is not available on Linux
Not true. It is available on Linux, and the regression tests were
running with it for a long time (es
On 15.12.2014 22:35, Jeff Janes wrote:
> On Sat, Nov 29, 2014 at 8:57 AM, Tomas Vondra <mailto:t...@fuzzy.cz>> wrote:
>
> Hi,
>
> Attached is v2 of the patch lowering array_agg memory requirements.
> Hopefully it addresses the issues mentioned by T
On 12.12.2014 22:13, Robert Haas wrote:
> On Fri, Dec 12, 2014 at 11:50 AM, Tomas Vondra wrote:
>> On 12.12.2014 14:19, Robert Haas wrote:
>>> On Thu, Dec 11, 2014 at 5:46 PM, Tomas Vondra wrote:
>>>
>>>> Regarding the "sufficiently small" -
On 11.12.2014 16:06, Bruce Momjian wrote:
> On Wed, Dec 10, 2014 at 11:00:21PM -0800, Josh Berkus wrote:
>>
>> I will add:
>>
>> 4. commitfest managers have burned out and refuse to do it again
>
> Agreed. The "fun", if it was ever there, has left the commitfest
> process.
I've never been a CFM,
On 12.12.2014 19:07, Bruce Momjian wrote:
> On Fri, Dec 12, 2014 at 10:50:56AM -0500, Tom Lane wrote:
>> Also, one part of the point of the review mechanism is that it's
>> supposed to provide an opportunity for less-senior reviewers to
>> look at parts of the code that they maybe don't know so wel
On 12.12.2014 14:19, Robert Haas wrote:
> On Thu, Dec 11, 2014 at 5:46 PM, Tomas Vondra wrote:
>
>> Regarding the "sufficiently small" - considering today's hardware, we're
>> probably talking about gigabytes. On machines with significant memory
>> p
On 11.12.2014 22:16, Robert Haas wrote:
> On Thu, Dec 11, 2014 at 2:51 PM, Tomas Vondra wrote:
>> No, it's not rescanned. It's scanned only once (for the batch #0), and
>> tuples belonging to the other batches are stored in files. If the number
>> of batches needs t
other transition values). I also
> don't think my patch would interfere with a fix there in the future.
>
> Tomas Vondra suggested an alternative design that more closely resembles
> HashJoin: instead of filling up the hash table and then spilling any new
> groups, the
On 11.12.2014 17:53, Heikki Linnakangas wrote:
> On 10/13/2014 01:00 AM, Tomas Vondra wrote:
>> Hi,
>>
>> attached is a WIP patch implementing multivariate statistics.
>
> Great! Really glad to see you working on this.
>
>> + * FIXME This sample sizing i
On 11.12.2014 20:00, Robert Haas wrote:
> On Thu, Dec 11, 2014 at 12:29 PM, Kevin Grittner wrote:
>>
>> Under what conditions do you see the inner side get loaded into the
>> hash table multiple times?
>
> Huh, interesting. I guess I was thinking that the inner side got
> rescanned for each new
On 8.12.2014 02:01, Michael Paquier wrote:
> On Sun, Nov 16, 2014 at 3:35 AM, Tomas Vondra wrote:
>> Thanks for the link. I've been looking for a good dataset with such
>> data, and this one is by far the best one.
>>
>> The current version of the patch supports on
Hi,
back when we were discussing the hashjoin patches (now committed),
Robert proposed that maybe it'd be a good idea to sometimes increase the
number of tuples per bucket instead of batching.
That is, while initially sizing the hash table - if the hash table with
enough buckets to satisfy NTUP_P
Hi!
This was initially posted to pgsql-performance in this thread:
http://www.postgresql.org/message-id/5472416c.3080...@fuzzy.cz
but pgsql-hackers seems like a more appropriate place for further
discussion.
Anyways, attached is v3 of the patch implementing the adaptive ndistinct
estimator. J
On 2.12.2014 02:52, Tom Lane wrote:
> Tomas Vondra writes:
>> On 2.12.2014 01:33, Tom Lane wrote:
>>> What I suspect you're looking at here is the detritus of creation
>>> of a huge number of memory contexts. mcxt.c keeps its own state
>>> about existin
On 2 December 2014, 10:59, Tomas Vondra wrote:
> On 2014-12-02 02:52, Tom Lane wrote:
>> Tomas Vondra writes:
>>
>>> Also, this explains the TopMemoryContext size, but not the RSS size
>>> (or am I missing something)?
>>
>> Very possibly you
On 2014-12-02 02:52, Tom Lane wrote:
Tomas Vondra writes:
On 2.12.2014 01:33, Tom Lane wrote:
What I suspect you're looking at here is the detritus of creation of
a huge number of memory contexts. mcxt.c keeps its own state about
existing contents in TopMemoryContext. So, if we posit
On 2.12.2014 01:33, Tom Lane wrote:
> Tomas Vondra writes:
>> On 2.12.2014 00:31, Andrew Dunstan wrote:
>>> Doesn't this line:
>>> TopMemoryContext: 136614192 total in 16678 blocks; 136005936 free
>>> (500017 chunks); 608256 used
>>> look prett
On 2.12.2014 00:31, Andrew Dunstan wrote:
>
> On 12/01/2014 05:39 PM, Tomas Vondra wrote:
>> Hi all,
>>
>> while working on the patch decreasing the amount of memory consumed by
>> array_agg [1], I've run into some strange OOM issues. Reproducing them
>> using
Hi all,
while working on the patch decreasing the amount of memory consumed by
array_agg [1], I've run into some strange OOM issues. Reproducing them
using the attached SQL script is rather simple.
[1] https://commitfest.postgresql.org/action/patch_view?id=1652
At first I thought there's some rare
Hi,
Attached is v2 of the patch lowering array_agg memory requirements.
Hopefully it addresses the issues mentioned by TL in this thread
(not handling some of the callers appropriately etc.).
The v2 of the patch does this:
* adds 'subcontext' flag to initArrayResult* methods
If it's 't
Moving to pgsql-hackers, as that's a more appropriate place for this
discussion.
On 27.11.2014 11:26, Maxim Boguk wrote:
>
>
> FWIW, I got curious and checked why we decided not to implement this
> while reworking the stats in 9.3, as keeping an is_dirty flag seems like a
> rather stra
On 26.11.2014 23:26, Peter Geoghegan wrote:
> On Wed, Nov 26, 2014 at 2:00 PM, Andrew Dunstan wrote:
>> The client's question is whether this is not a bug. It certainly seems like
>> it should be possible to plan a query without chewing up this much memory,
>> or at least to be able to limit the a
On 25.11.2014 18:11, Heikki Linnakangas wrote:
> On 11/25/2014 06:06 PM, Christoph Berg wrote:
>
>> db1 is registered in pg_database, but the directory is missing on
>> disk.
>
> Yeah, DROP DATABASE cheats. It deletes all the files first, and commits
> the transaction only after that. There's this
On 21.11.2014 00:03, Andres Freund wrote:
> On 2014-11-17 21:03:07 +0100, Tomas Vondra wrote:
>> On 17.11.2014 19:46, Andres Freund wrote:
>>
>>> The MemoryContextData struct is embedded into AllocSetContext.
>>
>> Oh, right. That makes is slight
On 17.11.2014 19:46, Andres Freund wrote:
> On 2014-11-17 19:42:25 +0100, Tomas Vondra wrote:
>> On 17.11.2014 18:04, Andres Freund wrote:
>>> Hi,
>>>
>>> On 2014-11-16 23:31:51 -0800, Jeff Davis wrote:
>>>> *** a/src/include/nodes/me
On 17.11.2014 18:04, Andres Freund wrote:
> Hi,
>
> On 2014-11-16 23:31:51 -0800, Jeff Davis wrote:
>> *** a/src/include/nodes/memnodes.h
>> --- b/src/include/nodes/memnodes.h
>> ***
>> *** 60,65 typedef struct MemoryContextData
>> --- 60,66
>> MemoryContext nextchild;
On 17.11.2014 08:31, Jeff Davis wrote:
> On Sat, 2014-11-15 at 21:36 +, Simon Riggs wrote:
>> Do I understand correctly that we are trying to account for exact
>> memory usage at palloc/pfree time? Why??
>
> Not palloc chunks, only tracking at the level of allocated blocks
> (that we allocate
On 15.11.2014 22:36, Simon Riggs wrote:
> On 16 October 2014 02:26, Jeff Davis wrote:
>
>> The inheritance is awkward anyway, though. If you create a tracked
>> context as a child of an already-tracked context, allocations in
>> the newer one won't count against the original. I don't see a way
On 15.11.2014 18:49, Kevin Grittner
> If you eliminate the quals besides the zipcode column you get 61
> rows and it gets much stranger, with legal municipalities that are
> completely surrounded by Madison that the postal service would
> rather you didn't use in addressing your envelopes, but they
On 13 November 2014, 16:51, Katharina Büchse wrote:
> On 13.11.2014 14:11, Tomas Vondra wrote:
>
>> The only place where I think this might work are the associative rules.
>> It's simple to specify rules like ("ZIP code" implies "city") and we
>
On 13 November 2014, 12:31, Simon Riggs wrote:
> On 12 October 2014 23:00, Tomas Vondra wrote:
>
>> It however seems to be working sufficiently well at this point, enough
>> to get some useful feedback. So here we go.
>
> This looks interesting and useful.
>
> W
, I see you've responded to me directly (not through the
pgsql-hackers list). I assume that's not on purpose, so I'm adding the
list back into the loop ...
> On 07.11.2014 20:37, Tomas Vondra wrote:
>> On 7.11.2014 13:19, Katharina Büchse wrote:
>>> On 06.11.2014 11:56, T
On 7.11.2014 13:19, Katharina Büchse wrote:
> On 06.11.2014 11:56, Tomas Vondra wrote:
>> On 6 November 2014, 11:15, Katharina Büchse wrote:
>>>
>>> because correlations might occur only in parts of the data. In this case
>>> a histogram based on a sample
On 6 November 2014, 12:05, Gavin Flower wrote:
> On 06/11/14 23:57, Tomas Vondra wrote:
>> On 6 November 2014, 11:50, Gavin Flower wrote:
>>> Could you store a 2 dimensional histogram in a one dimensional array:
>>> A[z] = value, where z = col * rowSize
On 6 November 2014, 11:50, Gavin Flower wrote:
>
> Could you store a 2 dimensional histogram in a one dimensional array:
> A[z] = value, where z = col * rowSize + row (zero starting index)?
How would that work for columns with different data types?
Tomas
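Gavin's mapping itself is straightforward to sketch in Python (names here are made up for illustration; this says nothing about the mixed-data-type problem raised above):

```python
def flatten(col, row, row_size):
    # z = col * rowSize + row, zero-based, as suggested above
    return col * row_size + row

def unflatten(z, row_size):
    return divmod(z, row_size)  # back to (col, row)

ROWS, COLS = 4, 3
histogram = [0] * (ROWS * COLS)

# Count a few (col, row) observations in the flat array.
for col, row in [(0, 1), (2, 3), (2, 3), (1, 0)]:
    histogram[flatten(col, row, ROWS)] += 1

assert histogram[flatten(2, 3, ROWS)] == 2
assert unflatten(flatten(2, 3, ROWS), ROWS) == (2, 3)
```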