On Apr 9, 2011, at 9:23 PM, Stephen Frost wrote:
Actually, Tom has a point in that work_mem can be set above 1GB (which
is where I had it set previously..). I didn't think it'd actually do
anything given the MaxAlloc limit, but surprisingly, it does (at least,
under 8.4). I'm currently trying
Stephen Frost sfr...@snowman.net writes:
* Tom Lane (t...@sss.pgh.pa.us) wrote:
If we were actually trying to support such large allocations,
what I'd be inclined to do is introduce a separate call along the lines
of MemoryContextAllocLarge() that lacks the safety check.
This sounds like
On Sat, Apr 9, 2011 at 6:20 PM, Tom Lane t...@sss.pgh.pa.us wrote:
BTW, it sounded like your argument had to do with whether it would use
HashAgg or not -- that is *not* dependent on the per-palloc limit, and
never has been.
His point was he wanted to be allowed to set work_mem above 1GB. This is
On Sun, 2011-04-10 at 03:05 +0100, Greg Stark wrote:
On Sat, Apr 9, 2011 at 6:20 PM, Tom Lane t...@sss.pgh.pa.us wrote:
BTW, it sounded like your argument had to do with whether it would use
HashAgg or not -- that is *not* dependent on the per-palloc limit, and
never has been.
His
* Greg Stark (gsst...@mit.edu) wrote:
On Sat, Apr 9, 2011 at 6:20 PM, Tom Lane t...@sss.pgh.pa.us wrote:
BTW, it sounded like your argument had to do with whether it would use
HashAgg or not -- that is *not* dependent on the per-palloc limit, and
never has been.
His point was he wanted
Tom, all,
Having run into issues caused by small work_mem, again, I felt the need
to respond to this.
* Tom Lane (t...@sss.pgh.pa.us) wrote:
You would break countless things. It might be okay anyway in a trusted
environment, ie, one without users trying to crash the system, but there
are a
On Wed, 2011-02-16 at 23:24 +0200, Peter Eisentraut wrote:
But before expending time on that, I'd want to see some evidence
that
it's actually helpful for production situations. I'm a bit dubious
that you're going to gain much here.
If you want to build an index on a 500GB table and
Devrim GÜNDÜZ wrote:
On Wed, 2011-02-16 at 23:24 +0200, Peter Eisentraut wrote:
But before expending time on that, I'd want to see some evidence
that
it's actually helpful for production situations. I'm a bit dubious
that you're going to gain much here.
If you want to build an
--On 20 February 2011 09:32:02 -0500 Bruce Momjian br...@momjian.us wrote:
Well, I figure it will be hard to allow larger maximums, but can we make
the GUC variable maximums be more realistic? Right now it is
MAX_KILOBYTES (INT_MAX).
This is something I proposed some time ago, too. At
--On 20 February 2011 15:48:06 +0100 Bernd Helmle maili...@oopsware.de
wrote:
Well, I figure it will be hard to allow larger maximums, but can we make
the GUC variable maximums be more realistic? Right now it is
MAX_KILOBYTES (INT_MAX).
This is something I proposed some time ago, too. At
Bruce Momjian br...@momjian.us writes:
Well, I figure it will be hard to allow larger maximums, but can we make
the GUC variable maximums be more realistic? Right now it is
MAX_KILOBYTES (INT_MAX).
You seem to be confusing one limitation in one code path with the
overall meaning of
Bruce Momjian br...@momjian.us writes:
Tom Lane wrote:
You seem to be confusing one limitation in one code path with the
overall meaning of maintenance_work_mem.
Oh, OK, so sorts are limited, but not hash sizes? Are there any other
uses? Should this be documented somehow? What is the
Tom Lane wrote:
Bruce Momjian br...@momjian.us writes:
Well, I figure it will be hard to allow larger maximums, but can we make
the GUC variable maximums be more realistic? Right now it is
MAX_KILOBYTES (INT_MAX).
You seem to be confusing one limitation in one code path with the
Tom Lane wrote:
Bruce Momjian br...@momjian.us writes:
Tom Lane wrote:
You seem to be confusing one limitation in one code path with the
overall meaning of maintenance_work_mem.
Oh, OK, so sorts are limited, but not hash sizes? Are there any other
uses? Should this be documented
Tom & Kevin,
thank you for your replies. Kevin, I had already employed all the
tricks you mention, except using temporary tables, which would be hard
for me due to the structure of my application (though I could try
something like pgbouncer), but thanks a lot for sharing the ideas.
Frederik Ramm frede...@remote.org writes:
The single query where pg9.0 beat pg8.3 by a country mile was a CREATE
INDEX statement on a BIGINT column to a table with about 500 million
records - this cost 2679 seconds on normal 8.3, 2443 seconds on
large-memory 8.3, and around 1650 seconds on
On Mon, 2011-02-14 at 10:11 -0500, Tom Lane wrote:
But before expending time on that, I'd want to see some evidence that
it's actually helpful for production situations. I'm a bit dubious
that you're going to gain much here.
If you want to build an index on a 500GB table and you have 1TB RAM,
Frederik Ramm frede...@remote.org writes:
Now I assume that there are reasons that you're doing this. memutils.h
has the (for me) cryptic comment about MaxAllocSize: XXX This is
deliberately chosen to correspond to the limiting size of varlena
objects under TOAST. See VARATT_MASK_SIZE in
Frederik Ramm frede...@remote.org wrote:
I am (ab)using a PostgreSQL database (with PostGIS extension) in
a large data processing job - each day, I load several GB of data,
run a lot of analyses on it, and then throw everything away again.
Loading, running, and dumping the results takes