On Thu, Jan 15, 2015 at 5:02 AM, Tomas Vondra tomas.von...@2ndquadrant.com wrote:
Maybe we can try later again, but there's no point in keeping this in the
current CF.
Any objections?
None, marked as rejected.
--
Michael
On Thu, Dec 11, 2014 at 5:46 PM, Tomas Vondra t...@fuzzy.cz wrote:
The idea was that if we could increase the load a bit (e.g. using 2
tuples per bucket instead of 1), we will still use a single batch in
some cases (when we miss the work_mem threshold by just a bit). The
lookups will be
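The trade-off described above (tolerate a denser hash table to stay in a single batch when the estimate misses work_mem by only a little) can be sketched roughly as follows. This is an illustrative model only, not PostgreSQL's actual sizing code; the function name, the 2x tolerance window, and the size arithmetic are all assumptions made for the example.

```python
# Illustrative sketch, NOT PostgreSQL source: decide between staying
# single-batch with a higher bucket load, or splitting into batches.
# The 2x tolerance and all names here are hypothetical.

def choose_strategy(ntuples, tuple_width, work_mem, ntup_per_bucket=1):
    """Return ('single-batch', load) or ('multi-batch', nbatch)."""
    table_size = ntuples * tuple_width
    if table_size <= work_mem:
        # Fits comfortably: keep the target load (e.g. 1 tuple/bucket).
        return ('single-batch', ntup_per_bucket)
    if table_size <= 2 * work_mem:
        # Missed the threshold by just a bit: accept 2 tuples per bucket
        # (slightly slower lookups) instead of spilling to temp files.
        return ('single-batch', ntup_per_bucket * 2)
    # Far over the limit: double the batch count until each batch fits.
    nbatch = 2
    while table_size / nbatch > work_mem:
        nbatch *= 2
    return ('multi-batch', nbatch)

print(choose_strategy(1000, 100, 90_000))   # slightly over: denser table
print(choose_strategy(1000, 100, 10_000))   # far over: batching
```

The point of the example is only the shape of the decision: batching buys memory at the cost of temp-file I/O, while a higher load factor buys memory at the cost of longer bucket chains.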
On 12.12.2014 14:19, Robert Haas wrote:
On Thu, Dec 11, 2014 at 5:46 PM, Tomas Vondra t...@fuzzy.cz wrote:
Regarding the sufficiently small - considering today's hardware, we're
probably talking about gigabytes. On machines with significant memory
pressure (forcing the temporary files to
On Fri, Dec 12, 2014 at 11:50 AM, Tomas Vondra t...@fuzzy.cz wrote:
On 12.12.2014 14:19, Robert Haas wrote:
On Thu, Dec 11, 2014 at 5:46 PM, Tomas Vondra t...@fuzzy.cz wrote:
Regarding the sufficiently small - considering today's hardware, we're
probably talking about gigabytes. On machines
On 12.12.2014 22:13, Robert Haas wrote:
On Fri, Dec 12, 2014 at 11:50 AM, Tomas Vondra t...@fuzzy.cz wrote:
On 12.12.2014 14:19, Robert Haas wrote:
On Thu, Dec 11, 2014 at 5:46 PM, Tomas Vondra t...@fuzzy.cz wrote:
Regarding the sufficiently small - considering today's hardware, we're
On Fri, Dec 12, 2014 at 4:54 PM, Tomas Vondra t...@fuzzy.cz wrote:
Well, this is sort of one of the problems with work_mem. When we
switch to a tape sort, or a tape-based materialize, we're probably far
from out of memory. But trying to set work_mem to the amount of
memory we have can easily
On Fri, Dec 12, 2014 at 5:19 AM, Robert Haas robertmh...@gmail.com wrote:
Well, this is sort of one of the problems with work_mem. When we
switch to a tape sort, or a tape-based materialize, we're probably far
from out of memory. But trying to set work_mem to the amount of
memory we have can
Robert Haas robertmh...@gmail.com wrote:
On Sat, Dec 6, 2014 at 10:08 PM, Tomas Vondra t...@fuzzy.cz wrote:
select a.i, b.i from a join b on (a.i = b.i);
I think the concern is that the inner side might be something more
elaborate than a plain table scan, like an aggregate or join. I
On Thu, Dec 11, 2014 at 12:29 PM, Kevin Grittner kgri...@ymail.com wrote:
Robert Haas robertmh...@gmail.com wrote:
On Sat, Dec 6, 2014 at 10:08 PM, Tomas Vondra t...@fuzzy.cz wrote:
select a.i, b.i from a join b on (a.i = b.i);
I think the concern is that the inner side might be something
On 11.12.2014 20:00, Robert Haas wrote:
On Thu, Dec 11, 2014 at 12:29 PM, Kevin Grittner kgri...@ymail.com wrote:
Under what conditions do you see the inner side get loaded into the
hash table multiple times?
Huh, interesting. I guess I was thinking that the inner side got
rescanned for
On Thu, Dec 11, 2014 at 2:51 PM, Tomas Vondra t...@fuzzy.cz wrote:
No, it's not rescanned. It's scanned only once (for the batch #0), and
tuples belonging to the other batches are stored in files. If the number
of batches needs to be increased (e.g. because of incorrect estimate of
the inner
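The single-scan behavior described in that message can be sketched in miniature: each inner tuple is routed by its hash value, batch 0 is built in memory immediately, and every other batch is written to a spill file for later processing. This is a toy model, not PostgreSQL's implementation; the function and variable names are invented for illustration, and Python lists stand in for the per-batch temp files.

```python
# Toy sketch of hybrid hash join partitioning (NOT PostgreSQL source).
# The inner side is scanned exactly once: tuples hashing to batch 0 go
# straight into the in-memory hash table; all others are appended to
# per-batch spill "files" (lists here) and joined in later passes.

def partition_inner(tuples, hash_fn, nbatch):
    in_memory = []                                 # batch 0, built now
    spill_files = [[] for _ in range(nbatch - 1)]  # batches 1..nbatch-1
    for t in tuples:
        b = hash_fn(t) % nbatch
        if b == 0:
            in_memory.append(t)
        else:
            spill_files[b - 1].append(t)
    return in_memory, spill_files

mem, spill = partition_inner(range(8), hash, 4)
# Every tuple lands in exactly one place; nothing is rescanned.
assert len(mem) + sum(len(s) for s in spill) == 8
```

Increasing the number of batches mid-build (when the estimate turns out too low) amounts to re-partitioning with a larger modulus, moving some already-hashed tuples out of memory into the new spill files.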
On 11.12.2014 22:16, Robert Haas wrote:
On Thu, Dec 11, 2014 at 2:51 PM, Tomas Vondra t...@fuzzy.cz wrote:
No, it's not rescanned. It's scanned only once (for the batch #0), and
tuples belonging to the other batches are stored in files. If the number
of batches needs to be increased (e.g.
Tomas Vondra t...@fuzzy.cz wrote:
back when we were discussing the hashjoin patches (now committed),
Robert proposed that maybe it'd be a good idea to sometimes increase the
number of tuples per bucket instead of batching.
That is, while initially sizing the hash table - if the hash table
On Sat, Dec 6, 2014 at 10:08 PM, Tomas Vondra t...@fuzzy.cz wrote:
select a.i, b.i from a join b on (a.i = b.i);
I think the concern is that the inner side might be something more
elaborate than a plain table scan, like an aggregate or join. I might
be all wet, but my impression is that you
Hi,
back when we were discussing the hashjoin patches (now committed),
Robert proposed that maybe it'd be a good idea to sometimes increase the
number of tuples per bucket instead of batching.
That is, while initially sizing the hash table - if the hash table with
enough buckets to satisfy
15 matches