[GENERAL] Partitioning into thousands of tables?

2010-08-05 Thread Data Growth Pty Ltd
I have a table of around 200 million rows, occupying around 50G of disk. It is slow to write, so I would like to partition it better. The table is roughly:

id : integer # unique, from sequence
external_id : varchar(255) # unique, used to interface with external systems, not updated (only selec
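One way such a table might be partitioned is by hashing a key column so each partition (and its indexes) stays small. A minimal sketch, assuming a hypothetical table name and PostgreSQL 10 or later for declarative partitioning (at the time of this thread, the same effect required inheritance tables plus an insert trigger):

```sql
-- Hypothetical schema based on the columns described in the thread.
CREATE TABLE big_table (
    id          integer NOT NULL,       -- from a sequence
    external_id varchar(255) NOT NULL   -- key used by external systems
) PARTITION BY HASH (external_id);

-- Spread rows across, say, 16 partitions so each index stays small.
CREATE TABLE big_table_p0 PARTITION OF big_table
    FOR VALUES WITH (MODULUS 16, REMAINDER 0);
-- ... repeat for remainders 1 through 15 ...
```

Hash partitioning suits a workload like this one, where lookups come in by a single key and there is no natural range (such as a date) to split on.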

[GENERAL] Enforcing unique column with triggers and hash

2010-05-27 Thread Data Growth Pty Ltd
I have a large table (200 million rows) with a column ( 'url' character varying(255)) that I need to be unique. Currently I do this via a UNIQUE btree index on (lower(url::text)) The index is huge, and I would like to make it much smaller. Accesses to the table via this key are a tiny portion of
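The trigger-and-hash idea from the subject line can be sketched as follows: index a fixed-size integer hash of the URL instead of the 255-byte string, and have a trigger recheck real duplicates, since the hash alone cannot guarantee uniqueness (collisions). The table name `pages` and function names here are hypothetical; `hashtext()` is PostgreSQL's built-in text hash. Note this sketch is not safe against concurrent inserts of the same URL without additional locking:

```sql
-- Much smaller than a btree over lower(url::text): 4-byte hash keys.
CREATE INDEX url_hash_idx ON pages (hashtext(lower(url)));

CREATE OR REPLACE FUNCTION enforce_url_unique() RETURNS trigger AS $$
BEGIN
    -- The hash index narrows the search; the equality test weeds out
    -- hash collisions before declaring a duplicate.
    PERFORM 1 FROM pages
     WHERE hashtext(lower(url)) = hashtext(lower(NEW.url))
       AND lower(url) = lower(NEW.url);
    IF FOUND THEN
        RAISE EXCEPTION 'duplicate url: %', NEW.url;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER url_unique_check
    BEFORE INSERT OR UPDATE OF url ON pages
    FOR EACH ROW EXECUTE PROCEDURE enforce_url_unique();
```

The trade-off is a cheaper index at the cost of trigger overhead on every write, which fits the thread's stated access pattern of rare lookups by this key.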

[GENERAL] Could not read directory "pg_xlog": Invalid argument (on SSD Raid)

2009-11-04 Thread Data Growth Pty Ltd
I'm frequently getting these errors in my console:

4/11/09 2:25:04 PM org.postgresql.postgres[192] ERROR: could not read directory "pg_xlog": Invalid argument
4/11/09 2:25:56 PM org.postgresql.postgres[192] ERROR: could not read directory "pg_xlog": Invalid argument
4/11/09 2:36:03 P