Andrew Dunstan wrote:

Does it mean the maximum field size will grow beyond 1Gb?

No, because it is limited by the varlena size. See

Or give better performance?

Yes. The list of chunks (segment files) is stored as a linked list, and for some operations (e.g. expanding the relation) all chunks are opened and their sizes checked. On big tables this takes noticeable time. For example, if you have a 1TB table and you want to add a new block, you must go and open all 1024 segment files.

By the way, the ./configure script performs a check for __LARGE_FILE_ support, but it looks like the result is not used anywhere.
There could be a small time penalty from 64-bit arithmetic, but that happens only if large file support is enabled on a 32-bit OS.

