I thought I'd post a follow-up to the question of storage bloat. I
tried approaching the problem by playing a little loose with database
normalization and using arrays instead of flattening everything out in
the tables.
So my original table,
CREATE TABLE original (
dbIndex integer,
index1 smallint,
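For context, a minimal sketch of what the array approach might look like
(this is only my reading of it, not the actual revised schema; the column
names are borrowed from the SegmentValues table quoted later in the thread,
and SegmentValuesArr is a made-up name):

-- Hypothetical array-based variant: one row per (dbIndex, dwunitid) segment,
-- with the per-sample timestamps and values packed into parallel arrays.
-- This trades the ~28 bytes of per-row overhead in the flattened layout
-- for a much smaller number of wide rows.
CREATE TABLE SegmentValuesArr (
    dbIndex     integer,
    dwunitid    smallint,
    dtimestamps float4[],   -- one element per sample
    dvalues     float4[],   -- parallel to dtimestamps
    PRIMARY KEY (dbIndex, dwunitid)
);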
> I don't see orders-of-magnitude bloat here though. You've got 16 bytes
> of useful data per row (which I suppose was 12 bytes in the flat file?).
> There will be 28 bytes of overhead per table row. In addition the index
> will require 12 data bytes + 12 overhead bytes per entry; allowing for
>
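To make that arithmetic concrete (a rough sketch; the 12-byte flat-file
figure is the guess from the quote above, and the 16-byte row width matches
the SegmentValues columns quoted below):

-- Back-of-the-envelope, using the numbers above:
--   16 data bytes + 28 bytes heap overhead         = 44 bytes per table row
--   12 index data bytes + 12 index overhead bytes  = 24 bytes per index entry
-- i.e. roughly 68 bytes per logical row versus ~12-16 bytes in the flat
-- file, so about a 4-6x expansion before free space and padding are counted.
SELECT 16 + 28             AS heap_bytes_per_row,
       12 + 12             AS index_bytes_per_row,
       (16 + 28 + 12 + 12) AS total_bytes_per_row;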
"Tony and Bryn Reina" <[EMAIL PROTECTED]> writes:
> CREATE TABLE SegmentValues (
> dbIndex integer REFERENCES EntityFile (dbIndex),
> dwunitid smallint,
> dwsampleindex smallint,
> dtimestamp float4,
> dvalue float4,
> PRIMARY KEY (dbIndex, dtimestamp
On Thursday 08 April 2004 5:51 am, Tony and Bryn Reina wrote:
> Yep. That's after a 'vacuum verbose analyze'.
No, he asked if you had run a "vacuum full". A "standard" vacuum just
marks space available for reuse - it does not shrink file sizes. A
"vacuum full" will shrink the files on disk.
Are
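Concretely (using the SegmentValues table from the thread as the example;
substitute your own table names):

-- Plain VACUUM only marks dead-row space as reusable; the files on disk
-- keep their size.
VACUUM SegmentValues;
-- VACUUM FULL compacts the table and returns space to the operating
-- system, at the cost of a much stronger lock while it runs.
VACUUM FULL SegmentValues;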
"Tony and Bryn Reina" <[EMAIL PROTECTED]> writes:
> There's only about 11 tables in the DB. I included them at the bottom
> in case you're interested.
What are the actual sizes of the tables? Probably the most useful way
you could present that info is "vacuum verbose" output for each table.
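For instance, per table:

-- VACUUM VERBOSE reports, for the table and each of its indexes, how many
-- pages and tuples were found; the page counts show where the space went.
VACUUM VERBOSE SegmentValues;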
> Well, an important question is where that space is going. It'd be
> interesting to get a breakdown by directory and then by file (using
> contrib/oid2name to work out which tables/indexes/etc. they are).
>
> At least 16MB of that is probably going into the transaction log (IIRC
> that's
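If contrib/oid2name isn't handy, a rough catalog-based equivalent (my own
suggestion, not something from the thread) is to map the numeric file names
in the database directory back to relations via pg_class:

-- Each relation's on-disk file is named after pg_class.relfilenode, so this
-- maps file names back to tables/indexes and shows their size in 8 kB pages.
SELECT relfilenode, relname, relkind, relpages
FROM pg_class
ORDER BY relpages DESC;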
[EMAIL PROTECTED] (Tony Reina) writes:
> However, a 65X bloat in space seems excessive.
Without concrete details about your schema, it's hard to answer that.
I suspect you made some inefficient choices, but I have no data to go on.
For starters you should do a database-wide VACUUM to ensure
pg_class.relpages is up to date.
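Something along these lines keeps relpages current and then lists the
biggest relations (relpages counts 8 kB blocks at the default block size):

-- Database-wide VACUUM so that pg_class.relpages is reasonably current,
-- then list the largest tables and indexes.
VACUUM;
SELECT relname, relkind, relpages, reltuples
FROM pg_class
WHERE relkind IN ('r', 'i')
ORDER BY relpages DESC;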
On Thu, 8 Apr 2004, Tony Reina wrote:
> I'm developing a database for scientific recordings. These recordings
> are traditionally saved as binary flat files for simplicity and
> compact storage. Although I think ultimately having a database is
> better than 1,000s of flat files in terms of data ac
On Thu, 8 Apr 2004, Tony Reina wrote:
> Has anyone run across similar storage concerns? I'd be interested in
> knowing if I just have really poorly designed tables, or if something
> else is going on here. I figure a bloat of 3-4X would be permissible
> (and possibly expected). But this bloat just
dtimestamp;
- Original Message -
From: "Douglas Trainor" <[EMAIL PROTECTED]>
To: "Tony Reina" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Thursday, April 08, 2004 1:41 PM
Subject: Re: [ADMIN] Database storage bloat
> Saying
Yep. That's after a 'vacuum verbose analyze'.
-Tony
- Original Message -
From: "Uwe C. Schroeder" <[EMAIL PROTECTED]>
To: "Tony Reina" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Thursday, April 08, 2004 11:57 AM
Subject: Re: [
Saying "we've set field sizes to their theoretical skinniness" makes me
think that
you may have the wrong data types. For example, you may have used CHAR
and not VARCHAR.
douglas
Tony Reina wrote:
I'm developing a database for scientific recordings. These recordings
are traditionally saved
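To illustrate the CHAR vs. VARCHAR point above, a small self-contained demo
(pad_demo is a made-up table, not part of Tony's schema):

-- char(n) is always blank-padded to n characters; varchar(n) stores only
-- what you put in it.
CREATE TABLE pad_demo (c_fixed char(40), c_varying varchar(40));
INSERT INTO pad_demo VALUES ('abc', 'abc');
-- octet_length shows the stored string size: 40 bytes for the char column
-- (padded with trailing spaces), 3 bytes for the varchar column.
SELECT octet_length(c_fixed)   AS char_bytes,
       octet_length(c_varying) AS varchar_bytes
FROM pad_demo;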
Did you run vacuum full after your import?
On Thursday 08 April 2004 02:15 am, Tony Reina wrote:
> I'm developing a database for scientific recordings. These recordings
> are traditionally saved as binary flat files for simplicity and
> compact storage