I have read somewhere that text fields are "compressed". What I am curious about is how PostgreSQL's compression of text fields might be affecting the performance of my software. I currently store entire copies of documents in a table called "fulltext", like so:
    CREATE TABLE fulltext (
        uid        serial8 PRIMARY KEY,
        content    text NOT NULL,
        contentidx txtidx
    );

As you can see, I am using contrib/tsearch to find documents for display, and then I dump 'content' out to the user.

When I first created this table, I was concerned about the size of 'content', so I linked my program against zlib and deflate the content field before storing it in the table. This means that every time someone views a document I have to inflate it. Also, if what I have read about text fields is correct, PostgreSQL tries to compress the field again when it stores it, so I am duplicating effort. This probably slows down my software, and because compressed documents are often larger when re-compressed (at least with older algorithms like LZW), I might even be using extra space to store my data.

So:

1) Do text fields get compressed?
2) What compression method is used?
3) Is there a way to view the actual storage space used, versus the value I get from length()?
4) Can I disable the compression to improve storage speed if the compression algorithm is not as good as deflate?

- brian

Wm. Brian McCane                     | Life is full of doors that won't open
Search  http://recall.maxbaud.net/   | when you knock, equally spaced amid those
Usenet  http://freenews.maxbaud.net/ | that open when you don't want them to.
Auction http://www.sellit-here.com/  | - Roger Zelazny "Blood of Amber"