Interested in doing a case study for the website?
On Thu, Apr 20, 2006 at 09:36:25AM -0400, Jim Buttafuoco wrote:
>
> Simon,
>
> I have many databases over 1T, with the largest being ~6T. All of my
> databases store telecom data, such as call detail records. The access
> is very fast when looking for a small subset of the data.
Markus,
On 4/20/06 8:11 AM, "Markus Schaber" <[EMAIL PROTECTED]> wrote:
> Are they capable of indexing custom datatypes like the PostGIS geometries
> that use the GiST mechanism? This could probably speed up our geo
> databases for map rendering, containing static data that is updated
> approx. 2 ti […]
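For reference, a GiST index over a PostGIS geometry column is created like any other index; a minimal sketch, with the table and column names invented for illustration:

```sql
-- Hypothetical map-rendering table; geom is a PostGIS geometry column.
CREATE TABLE map_features (
    id    serial PRIMARY KEY,
    name  text,
    geom  geometry
);

-- GiST index on the geometry column; PostGIS relies on this for fast
-- bounding-box searches.
CREATE INDEX map_features_geom_idx ON map_features USING gist (geom);

-- Typical rendering query: the && (bounding-box overlap) operator can
-- use the GiST index to fetch only the features inside the viewport.
SELECT id, name
FROM map_features
WHERE geom && 'BOX(0 0,1000 1000)'::box2d;
```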
Hi, Luke,
Luke Lonergan wrote:
> The current drawback to the bitmap index is that it isn't very maintainable
> under insert/update, although it is safe for those operations. For now, you
> have to drop the index, do the inserts/updates, and rebuild the index.
So they effectively turn the table into a read-only table […]
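The maintenance cycle Luke describes would look roughly like the following. Note this concerns the on-disk bitmap index shipped with Bizgres, not stock PostgreSQL, and all table, index, column, and file names here are invented for illustration:

```sql
-- Sketch only: the "bitmap" access method is from Bizgres, not stock
-- PostgreSQL; all names are invented.
DROP INDEX cdr_caller_bmidx;                         -- 1. drop the bitmap index

COPY call_detail_records FROM '/data/new_cdrs.dat';  -- 2. do the bulk load

CREATE INDEX cdr_caller_bmidx
    ON call_detail_records USING bitmap (caller_id); -- 3. rebuild the index
```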
From: "Luke Lonergan" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED],
pgsql-performance@postgresql.org
Sent: Thu, 20 Apr 2006 08:03:10 -0700
Subject: Re: [PERFORM] Quick Performance Poll
> Jim,
>
> On 4/20/06 7:40 AM, "Jim Buttafuoco" <[EMAIL PROTECTED]> wrote:
>
> > First of all this is NOT a single table and yes I am using partitioning
> > and the constraint exclusion stuff. The largest set of tables is over 2T.
> > I have not had to rebuild the biggest database yet, but for a smaller one […]
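The setup Jim describes ("partitioning and the constraint exclusion stuff") corresponds to PostgreSQL 8.1-style inheritance partitioning; a minimal sketch with invented table and column names:

```sql
-- Parent table plus date-ranged child partitions (names invented).
CREATE TABLE cdr (
    call_time  timestamptz NOT NULL,
    caller     text,
    duration   interval
);

CREATE TABLE cdr_2006_04 (
    CHECK (call_time >= '2006-04-01' AND call_time < '2006-05-01')
) INHERITS (cdr);

CREATE TABLE cdr_2006_05 (
    CHECK (call_time >= '2006-05-01' AND call_time < '2006-06-01')
) INHERITS (cdr);

-- With constraint exclusion on, the planner skips any partition whose
-- CHECK constraint cannot match the query's date range.
SET constraint_exclusion = on;

SELECT count(*)
FROM cdr
WHERE call_time >= '2006-04-10' AND call_time < '2006-04-11';
```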
[…] and small tables.
Jim
-- Original Message ---
From: "Luke Lonergan" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED], "Simon Dale" <[EMAIL PROTECTED]>,
pgsql-performance@postgresql.org
Sent: Thu, 20 Apr 2006 07:31:33 -0700
Subject: Re: [PERFORM] Quick Performance Poll
Jim,
On 4/20/06 6:36 AM, "Jim Buttafuoco" <[EMAIL PROTECTED]> wrote:
> The access is very fast when looking for a small subset of the data.
I guess you are not using indexes, because building a (non-bitmap) index on
6TB on a single machine would take days, if not weeks.
So if you are using table […]
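Whether such queries are served by an index or by sequential scans can be checked directly with EXPLAIN; a hypothetical example against the kind of partitioned date-range table discussed above:

```sql
-- EXPLAIN shows the chosen plan without running the query; with
-- constraint exclusion, only the matching partitions should appear
-- in the plan (table name invented).
EXPLAIN
SELECT *
FROM cdr
WHERE call_time >= '2006-04-10' AND call_time < '2006-04-11';
```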
[…] memory. SCSI is out of our price range, but if I had unlimited $ I would
go with SCSI/SCSI RAID instead.
Jim
-- Original Message ---
From: "Simon Dale" <[EMAIL PROTECTED]>
To:
Sent: Thu, 20 Apr 2006 14:18:58 +0100
Subject: [PERFORM] Quick Performance Poll
Hi,
I was just wondering whether anyone has had success with storing more than
1TB of data with PostgreSQL, and how they have found the performance.
We need a database that can store in excess of this amount and still show
good performance. We will probably be implementing several ta […]