Ok, thanks for the limits info, but I already have that from the manual.

But what I really want to know is this:

1) All large objects of all tables inside one DATABASE are kept in only one table. True or false?

Thanks =o)
Rodrigo

On 10/25/05, Nörder-Tuitje, Marcus <[EMAIL PROTECTED]> wrote:
oh, btw, no harm, but:
 
having 5000 tables only to gain access via city name is a major design flaw.
 
you might consider putting everything into one table and working with a composite index over it (city, loc_text, blobfield), or creating a partitioned index over city.
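(A minimal sketch of that consolidated layout; the table name and the use of oid for the photo column are assumptions, not from the thread:

    -- one table instead of 5000 per-city tables
    CREATE TABLE city_photos (
        city      text NOT NULL,
        loc_text  text,
        blobfield oid        -- reference to the photo stored as a large object
    );

    -- composite index so lookups by city (and location) stay selective
    CREATE INDEX city_photos_city_idx ON city_photos (city, loc_text);

Queries for one city then go through the index instead of a per-city table.)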
 
best regards
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Rodrigo Madera
Sent: Monday, October 24, 2005 21:12
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Inefficient escape codes.

Now this interests me a lot.

Please clarify this:

I have 5000 tables, one for each city:

City1_Photos, City2_Photos, ... City5000_Photos.

Each of these tables is: CREATE TABLE CityN_Photos (location text, lo_id oid) -- oid being the large object reference type I couldn't recall
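(For concreteness, a minimal sketch of how one of those tables might be filled, using the server-side lo_import function; the file path and location value are made up:

    CREATE TABLE City1_Photos (location text, lo_id oid);

    -- lo_import reads a file on the server, creates a large object,
    -- and returns its oid, which is stored in lo_id
    INSERT INTO City1_Photos (location, lo_id)
    VALUES ('Main Square', lo_import('/tmp/main_square.jpg'));

)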

So, what's the limit for these large objects? I heard I could only have 4 billion records for the whole database (not per table). Is this true? If it isn't, would Postgres manage to create all the large objects I ask it to?

Also, this would carry a performance penalty, wouldn't it?

Much thanks for the knowledge shared,
Rodrigo



