On Fri, Oct 9, 2009 at 10:50 PM, Scott Carey sc...@richrelevance.com wrote:
On 10/9/09 2:02 PM, Merlin Moncure mmonc...@gmail.com wrote:
On Fri, Oct 9, 2009 at 1:46 PM, Scott Otis scott.o...@intand.com wrote:
Over the next couple of months we will be creating an instance of our
solution for
On Fri, Oct 9, 2009 at 11:11 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Scott Carey sc...@richrelevance.com writes:
I've got 200,000 tables in one db (8.4), and some tools barely work. The
system catalogs become inefficient when they grow that large, and psql
in particular has trouble.
Tab completion takes
On 10/10/2009 01:14 AM, tsuraan wrote:
The most significant impact is that it takes up twice as much space,
including in the primary key index. That means fewer entries per block,
so scans are slower and/or there are more blocks to traverse. Still,
compared to the rest of the overhead of an
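The doubling is easy to verify directly in any psql session; a minimal
sketch using the built-in pg_column_size() function:

```sql
-- An int occupies 4 bytes, a bigint 8 bytes, so a bigint key
-- roughly doubles key storage in both the heap and the
-- primary-key index.
SELECT pg_column_size(1::int)    AS int_bytes,     -- 4
       pg_column_size(1::bigint) AS bigint_bytes;  -- 8
```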
Hello All --
I have implemented table partitioning in order to increase performance
in my database-backed queuing system. My queue is partitioned by
job_id into separate tables that all inherit from a base queue table.
Things were working swimmingly until my system started managing
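A sketch of the kind of setup described above, with hypothetical table
and column names (in the 8.x series there is no declarative
partitioning, so this uses INHERITS plus CHECK constraints):

```sql
-- Base queue table; all partitions inherit its columns.
CREATE TABLE queue (
    item_id  bigserial,
    job_id   integer NOT NULL,
    payload  text
);

-- One child table per job_id. The CHECK constraint lets the
-- planner (with constraint_exclusion = on) skip partitions
-- that cannot match a query's job_id filter.
CREATE TABLE queue_job_42 (
    CHECK (job_id = 42)
) INHERITS (queue);
```

Note that indexes and constraints are not inherited, so each child
table needs its own.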
On Sat, Oct 10, 2009 at 8:44 AM, Chris Kratz chris.kr...@vistashare.com wrote:
alter function pg_table_is_visible(oid) cost 10;
(You'll need to do it as superuser --- if it makes things worse, just
set the cost back to 1.)
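For reference, the planner reads that estimate from pg_proc.procost,
so the current setting can be inspected, and the change reverted, like
so:

```sql
-- Inspect the function's current planner cost estimate.
SELECT proname, procost
FROM pg_proc
WHERE proname = 'pg_table_is_visible';

-- Revert to the default if the higher cost makes things worse.
ALTER FUNCTION pg_table_is_visible(oid) COST 1;
```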
Sometimes it does not match
valid tables at all, and sometimes
Brian Karlak zen...@metaweb.com writes:
out of shared memory HINT: You might need to increase
max_locks_per_transaction
You want to do what it says ...
1) We've already tuned postgres to use ~2GB of shared memory -- which
is SHMMAX for our kernel. If I try to increase
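For context: the lock table is sized at server start as
max_locks_per_transaction * (max_connections + max_prepared_transactions)
entries, all allocated from shared memory, so raising it can push the
total past the kernel's SHMMAX limit. An illustrative fragment (the
values here are examples, not recommendations):

```
# postgresql.conf -- illustrative values only
max_locks_per_transaction = 256    # default is 64; each slot
                                   # consumes shared memory

# If startup then fails with a shared-memory error, SHMMAX must
# be raised at the kernel level (Linux: sysctl kernel.shmmax)
# before the larger lock table will fit.
```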