full and reindex, it's at about 75
megs.
If I gzip compress that 25 meg file it's only 6.3 megs, so I'd think if I could
make a toastable type it'd benefit.
Need to look into it now, I may be completely off my rocker.
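Something like the following is what I have in mind for checking it; the table
and column names here are made up:

    -- How big is the table on disk, with and without indexes/TOAST?
    SELECT pg_size_pretty(pg_relation_size('readings'));
    SELECT pg_size_pretty(pg_total_relation_size('readings'));

    -- TOAST compression only applies to variable-length types (text, bytea,
    -- etc.); plain int/timestamp columns are never compressed. For a varlena
    -- column you can ask for compressed, out-of-line storage explicitly:
    ALTER TABLE readings ALTER COLUMN payload SET STORAGE EXTENDED;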
Thank you
Shane Ambler <[EMAIL PROTECTED]> wrote:
Zoolin Lin wrote:
> 2,500 rows per hour, with duplicate date columns, seems like it could
> add up though.
>>>Well, let's see 2500*24*365 = 21,900,000 * 4 bytes extra = 83MB
>>>additional storage over a year. Not sure it's worth worrying about.
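The 4 bytes extra is presumably an 8-byte timestamp replacing a 4-byte integer
key. The arithmetic checks out, modulo rounding:

    -- 2500 rows/hour * 24 hours * 365 days * 4 extra bytes
    SELECT pg_size_pretty(2500::bigint * 24 * 365 * 4);  -- about 84 MB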
Ahh yes, probably better to make it a date w. timestamp column. I don't know
if it takes that much more space, or if there's a significant performance
penalty in using it.
2,500 rows per hour, with duplicate date columns, seems like it could add up
though.
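Roughly, the switch I'm weighing is from the lookup-table layout to something
like this (names invented):

    CREATE TABLE readings (
        ts     timestamp NOT NULL,  -- 8 bytes, stored in every row
        value  integer   NOT NULL
    );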
thanks
Richard Huxton wrote:
Zoolin Lin wrote:
Hi,

I have a database with a huge amount of data, so I'm trying to make it as fast
as possible and minimize space.

One thing I've done is join on a prepopulated date lookup table to prevent a
bunch of rows with duplicate date columns. Without this I'd have about 2,500
rows per hour with the exact same date in them.
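For reference, the layout I mean is roughly this (table and column names made
up):

    -- One row per distinct timestamp, shared by all rows in that hour
    CREATE TABLE date_lookup (
        date_id  integer   PRIMARY KEY,
        ts       timestamp NOT NULL UNIQUE
    );

    CREATE TABLE readings (
        date_id  integer NOT NULL REFERENCES date_lookup (date_id),
        value    integer NOT NULL
    );

    -- Queries join back to recover the timestamp
    SELECT d.ts, r.value
    FROM readings r
    JOIN date_lookup d USING (date_id);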