On 17 December 2015 at 16:00, Tomas Vondra <tomas.von...@2ndquadrant.com> wrote:

> On 12/17/2015 11:44 AM, Simon Riggs wrote:
>> My understanding is that the bloom filter would be ineffective in any of
>> these cases
>> * Hash table is too small
> Yes, although it depends what you mean by "too small".
> Essentially if we can do with a single batch, then it's cheaper to do a
> single lookup in the hash table instead of multiple lookups in the bloom
> filter. The bloom filter might still win if it fits into L3 cache, but that
> seems rather unlikely.
>> * Bloom filter too large
> Too large with respect to what?
> One obvious problem is that the bloom filter is built for all batches at
> once, i.e. for all tuples, so it may be so big it won't fit into work_mem
> (or takes a significant part of it). Currently it's not accounted for, but
> that'll need to change.

The benefit seems to be related to caching, or at least memory speed seems
to be critical. If the hash table is too small, or the Bloom filter too
large, then there is no benefit from performing (Lookup Bloom, then maybe
Lookup Hash) compared with just doing (Lookup Hash).
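To make the trade-off concrete, here is a minimal sketch (not PostgreSQL's actual executor code) of the (Lookup Bloom, then maybe Lookup Hash) probe path. The `BloomFilter` class and `probe` helper are illustrative names; the point is that the filter covers all batches and can reject a probe key cheaply before the hash table, which may be batched and partly on disk, is consulted:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash functions over an m-bit array."""
    def __init__(self, m_bits, k_hashes):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray((m_bits + 7) // 8)

    def _positions(self, key):
        # Derive k bit positions from two hash halves (double hashing).
        digest = hashlib.sha256(repr(key).encode()).digest()
        h1 = int.from_bytes(digest[:8], "little")
        h2 = int.from_bytes(digest[8:16], "little")
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, key):
        # False negatives are impossible; false positives are possible.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(key))

# Build side: hash table plus a filter covering *all* batches at once.
build_keys = [2, 4, 6, 8]
hash_table = {k: ("row", k) for k in build_keys}
bloom = BloomFilter(m_bits=1024, k_hashes=3)
for k in build_keys:
    bloom.add(k)

def probe(key):
    if not bloom.might_contain(key):   # cheap, cache-resident check
        return None                    # definite miss: skip the hash lookup
    return hash_table.get(key)         # may still miss (false positive)
```

The extra lookup only pays off when the filter's rejections save enough hash-table probes (and batch spills) to cover its own cost; in the single-batch case a direct hash lookup is already one cheap memory access, which is Tomas's point above.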

So the objective must be to get a Bloom filter that is small enough that it
lives in a higher/faster level of cache than the main hash table. Or
possibly we separate this into a two-stage process, so that the first stage
can be applied by a GPU and the survivors then checked against the hash
table outside the GPU.
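Whether the filter can live in a faster cache level is a matter of arithmetic. For an optimally sized Bloom filter the standard formula gives m = -n * ln(p) / (ln 2)^2 bits for n keys at false-positive rate p, with k = (m/n) * ln 2 hash functions. A rough sketch (the numbers below are illustrative, not measurements):

```python
import math

def bloom_size_bytes(n_keys, fp_rate):
    """Optimal Bloom filter size: m = -n * ln(p) / (ln 2)^2 bits."""
    m_bits = -n_keys * math.log(fp_rate) / (math.log(2) ** 2)
    return math.ceil(m_bits) / 8

def optimal_hash_count(n_keys, fp_rate):
    """k = (m/n) * ln 2 hash functions minimise the false-positive rate."""
    m_bits = -n_keys * math.log(fp_rate) / (math.log(2) ** 2)
    return max(1, round(m_bits / n_keys * math.log(2)))

# 10M distinct build-side keys at a 1% false-positive rate:
size_mb = bloom_size_bytes(10_000_000, 0.01) / (1024 * 1024)
# ~11.4 MB: roughly the size of a large L3 cache, and far smaller
# than the multi-batch hash table it would front.
```

At ~9.6 bits per key the filter is an order of magnitude smaller than a hash table holding full tuples, which is exactly the regime where the two structures can sit in different cache levels.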

I think you also need to consider whether we use a hashed Bloom filter or
simply apply an additional range predicate. The latter idea is similar to
my earlier thoughts here
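The range-predicate alternative can be sketched as tracking the min/max of the join key on the build side and filtering probe rows with a plain comparison, no hashing at all. This is an illustrative sketch, not a reference to any existing executor node:

```python
# Build side: track min/max of the join key while building the hash table.
build_keys = [105, 220, 310, 497]
lo, hi = min(build_keys), max(build_keys)

# Probe side: a plain range predicate screens rows before any hashing.
# Keys outside [lo, hi] can never join; keys inside still need the
# hash lookup, so this only helps when probe keys cluster outside
# the build side's range.
probe_keys = [12, 150, 310, 640, 220]
candidates = [k for k in probe_keys if lo <= k <= hi]
# candidates == [150, 310, 220]
```

Compared with a Bloom filter this needs only two values of state and no hash computation, but it filters nothing when the probe-side key range overlaps the build side's range entirely.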

Simon Riggs                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
