It would be useful to have a kind of relation for which all dirtied buffers
get written out even for failed transactions (barring a crash), and for which
read-any-undeleted is easy to do, despite the non-ACIDity. The overhead of a
side transaction seems like overkill for such things as logs or advisory
relations.
> Then your union operation is just to bitwise-OR the two bloom filters.

Keep in mind that when performing this sort of union between two
comparably-sized sets, your false-positive rate will increase by about an
order of magnitude. You need to size your bloom filters accordingly, or
query the two filters separately instead of merging them.
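
For intuition (a back-of-the-envelope sketch, not from the thread): with a
fraction f of bits set and k hash functions, the false-positive rate is
roughly f^k. OR-ing two comparably loaded filters roughly doubles f, so the
rate grows by about 2^k -- an order of magnitude for typical k of 3 or 4.
The union itself is trivial; a minimal C sketch, with illustrative names
(bloom_union is not any real API):

    #include <stddef.h>
    #include <stdint.h>

    /* OR filter b into filter a in place; the two filters must have been
     * built with the same size and the same hash functions. */
    static void
    bloom_union(uint8_t *a, const uint8_t *b, size_t nbytes)
    {
        for (size_t i = 0; i < nbytes; i++)
            a[i] |= b[i];
    }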
>> PARTITION BY RANGE ( a_expr )
>> ...
>> PARTITION BY HASH ( a_expr )
>> PARTITIONS num_partitions;
> Unless someone comes up with a maintenance plan for stable hash functions,
> we should probably not dare look into this yet.

What would cover the common use case of per-day quals and drops over date
ranges?
In a context using normalization, wouldn't you typically want to store a
normalized-text type that could perhaps (depending on locale) take advantage
of simpler, more-efficient comparison functions? Whether you're doing
INSERT/UPDATE or importing a flat text file, if you canonicalize characters
once on the way in, every later comparison can use the cheaper function.
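
A minimal sketch of the canonicalize-once idea, using only the standard C
library (strxfrm() is the stock example of this pattern, not anything
proposed here): strxfrm() transforms a string so that a plain byte
comparison of the transformed forms agrees with strcoll() on the originals,
so a normalized-text type could store the transformed key and compare with
memcmp()/strcmp().

    #include <locale.h>
    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
        char xa[64], xb[64];

        setlocale(LC_COLLATE, "");  /* collate per the environment's locale */

        /* transform once, e.g. at INSERT or COPY time; real code would
         * check strxfrm()'s return value against the buffer size */
        strxfrm(xa, "cote", sizeof(xa));
        strxfrm(xb, "côte", sizeof(xb));

        /* every later comparison is a cheap byte comparison */
        printf("strcmp on transformed keys: %d\n", strcmp(xa, xb));
        return 0;
    }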
Normally I'd try a small lookup table (1-byte index to 1-byte value) in this
case. But if the bitscan instruction were even close in performance, it'd be
preferable, due to its more-reliable caching behavior; it should be possible
to capture this at code-configuration time (with the table aligned so as to
provide predictable cache behavior).
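
The two candidates side by side, as a sketch (assuming the task is "index of
the lowest set bit in a nonzero byte"; the names are illustrative): the
256-entry table costs a dependent load and some cache footprint, while GCC's
__builtin_ctz() compiles to the bitscan instruction (BSF/TZCNT on x86) and
touches no memory at all.

    #include <stdint.h>

    static uint8_t lowest_bit_table[256];

    /* fill table[b] with the index of the lowest set bit of b, for b > 0 */
    static void
    init_lowest_bit_table(void)
    {
        for (int b = 1; b < 256; b++)
        {
            int i = 0;

            while (((b >> i) & 1) == 0)
                i++;
            lowest_bit_table[b] = (uint8_t) i;
        }
    }

    static inline int
    lowest_bit_lookup(uint8_t b)
    {
        return lowest_bit_table[b];     /* one load; competes for cache */
    }

    static inline int
    lowest_bit_scan(uint8_t b)
    {
        return __builtin_ctz(b);        /* undefined for b == 0, like BSF */
    }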
(Grrr, declension, not declination.)
> "Plural-Forms: nplurals=3; plural=n%10==1 && n%100!=11 ? 0 :n%10>=2 &&
> n%10<=4 && (n%100<10 ||n%100>=20) ? 1 : 2;\n"
Thanks. The above (ignoring the backslash-newline) is the form recommended
for Russian (inter alia) in the Texinfo manual for gettext ("info gettext").
> Russian plural forms for 100, 101, 102, etc. are different, as for 0, 1, 2.
True. The rule, IIRC, is that except for 11-14 and for collective numerals,
declension follows the last digit.

It would be possible to generalize declension via a language-specific
message-selector function, especially for languages whose rules don't fit
gettext's single C expression.
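
For concreteness, here is the quoted Plural-Forms expression written out as
a C function (a direct transcription, nothing beyond the expression itself);
it returns the index of the message form to use for a count n:

    /* Russian: 1, 21, 31, ... -> form 0; 2-4, 22-24, ... -> form 1;
     * everything else, including 11-14, -> form 2. */
    static int
    russian_plural_index(unsigned long n)
    {
        if (n % 10 == 1 && n % 100 != 11)
            return 0;
        if (n % 10 >= 2 && n % 10 <= 4 && (n % 100 < 10 || n % 100 >= 20))
            return 1;
        return 2;
    }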
>>> So at least transiently we use 3x the size of the actual array.

>> I was conjecturing, prior to investigation. Are you saying you know
>> this/have seen this already?

> Well, I'm just saying that if you realloc an x-kilobyte block into a 2x
> block and the allocator can't expand it in place and has to copy the
> contents, then both blocks are live until the copy finishes -- 3x in total.
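
A sketch of that arithmetic (illustrative code, not a claim about any
particular allocator): when in-place expansion fails, realloc() degenerates
to malloc + memcpy + free, and between the malloc and the free the old x and
the new 2x allocation coexist.

    #include <stdlib.h>
    #include <string.h>

    /* what realloc(old, 2 * oldsz) amounts to when it cannot grow in place */
    static void *
    grow_by_copy(void *old, size_t oldsz)
    {
        void *newblock = malloc(2 * oldsz); /* x + 2x = 3x live right here */

        if (newblock == NULL)
            return NULL;
        memcpy(newblock, old, oldsz);       /* still 3x during the copy */
        free(old);                          /* back down to 2x */
        return newblock;
    }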
We use PostgreSQL 7.4 running on a modified Red Hat Linux system as our
database to store network-related data. The tables have millions of rows,
and several joins on these tables are typically done in response to user
queries. The database itself takes about 40 GB of disk space. Our
application uses