Matthew wrote:
On Fri, 22 Feb 2008, Moritz Onken wrote:
I thought of doing all the inserts without having an index and without
doing the check whether the row is already there. After that I'd do a
"group by" and count(*) on that table. Is this a good idea?
That sounds like the fastest way to do it.
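In SQL terms, that plan could look something like this (an untested sketch;
the table, column and file names below are placeholders, not anything from
the original mail):

  -- load the raw 3-tuples with no index and no uniqueness check
  CREATE TABLE triples_raw (w1 text, w2 text, w3 text);
  COPY triples_raw FROM '/path/to/tuples.txt';

  -- then collapse duplicates and compute the counts in a single pass
  CREATE TABLE triples AS
      SELECT w1, w2, w3, count(*) AS count
      FROM triples_raw
      GROUP BY w1, w2, w3;

A unique index on (w1, w2, w3) can then be built once, after the
aggregation, instead of being maintained row by row during the load.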
On 2008-02-22 12:49, Kynn Jones wrote:
Of course, I expect that using views V and V... would
result in a loss in performance relative to a version that used bona
fide tables T and T. My question is, how can I minimize
this performance loss?
Those used to be my thoughts too, but I have found otherwise.
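For what it's worth, a view that is a plain single-table SELECT is simply
expanded by the rewriter, so by itself it should not cost anything. A rough
illustration (the table and view names here are invented, not taken from the
poster's schema):

  -- hypothetical base table and view, only to show the inlining
  CREATE TABLE t (id int, kind text, payload text);
  CREATE VIEW v_kind_x AS
      SELECT id, payload FROM t WHERE kind = 'x';

  -- EXPLAIN should show the same plan for both of these
  EXPLAIN SELECT * FROM v_kind_x WHERE id = 42;
  EXPLAIN SELECT id, payload FROM t WHERE kind = 'x' AND id = 42;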
Hi. I'm trying to optimize the performance of a database whose main purpose
is to support two (rather similar) kinds of queries. The first kind, which
is expected to be the most common (I estimate it will account for about 90%
of all the queries performed on this DB), has the following general
structure:
Hi -
I'm wondering if anyone has had success doing a simultaneous
load of one Pg dump to two different servers? The load command
is actually run from two different workstations, but reading the
same pgdump-file.
We use this command from the command line (Solaris-10 OS):
uncompress -c pgdump-file
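Presumably the full commands look roughly like the following, run once on
each workstation against its own target server (the file name, database name
and options here are illustrative guesses, not the poster's actual commands):

  # on workstation 1, loading into the first server
  uncompress -c pgdump-file | psql -h pgserver-A -d mydb

  # on workstation 2, loading into the second server, reading the same dump
  uncompress -c pgdump-file | psql -h pgserver-B -d mydb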
SORRY -
these are the commands (i.e. pgserver-A and pgserver-B)
==
On Fri, 22 Feb 2008, Moritz Onken wrote:
I need to store a lot of 3-tuples of words (e.g. "he", "can", "drink"), order
matters!
The source is about 4 GB of these 3-tuples.
I need to store them in a table and check whether one of them is already
stored, and if that's the case to increment a column named "count" (or something).
Hi,
I need to store a lot of 3-tuples of words (e.g. "he", "can",
"drink"), order matters!
The source is about 4 GB of these 3-tuples.
I need to store them in a table and check whether one of them is
already stored, and if that's the case to increment a column named
"count" (or something).
Tom Lane writes:
> Guillaume Cottenceau <[EMAIL PROTECTED]> writes:
>> I have made a comparison restoring a production dump with default
>> and a large maintenance_work_mem. The speedup here is only about
>> 5% (12'30 => 11'50).
>
>> Apparently, on the restored database, the data is 1337 MB[1]
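For reference, one low-impact way to try a larger maintenance_work_mem for
just the restore session, without editing postgresql.conf, is to pass the
setting through libpq (the 1GB value and the file and database names here are
only examples):

  PGOPTIONS="-c maintenance_work_mem=1GB" psql -d mydb -f dump.sql

The same PGOPTIONS trick works for pg_restore, since it also connects through
libpq.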