On Tue, Feb 8, 2011 at 3:23 PM, Shaun Thomas <stho...@peak6.com> wrote:

>
> With 300k rows, count(*) isn't a good test, really. That's just on the edge
> of big enough that it could be > 1 second to fetch from the disk controller,
>


1 second, you say? Excellent, sign me up.

70 seconds is way out of bounds.

I don't want a more efficient query to test with; I want the shitty query
that performs badly, since it isolates an obvious problem.
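
For reference, the test is just a bare count(*), something like the below
with EXPLAIN wrapped around it to show whether the time goes to disk reads
(the table name is a stand-in, and the BUFFERS option needs 9.0+):

    -- stand-in table name; on pre-9.0 use plain EXPLAIN ANALYZE
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*) FROM big_table;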

> The default settings are not going to cut it for a database of your size,
> with the volume you say it's getting.
>

not to mention the map-reduce jobs I'm hammering it with all night :)

but I did pause those until this is solved.

> But you need to put in those kernel parameters I suggested. And I know this
> sucks, but you also have to raise your shared_buffers and possibly your
> work_mem and then restart the DB. But this time, pg_ctl to invoke a fast
> stop, and then use the init script in /etc/init.d to restart it.
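
So, if I'm reading that right, the sequence is roughly the below. The values,
data directory, and script name are guesses for illustration; I'll size them
for the actual box:

    # kernel shared-memory settings (as root) -- example sizes only;
    # shmmax must be at least as large as shared_buffers
    sysctl -w kernel.shmmax=4294967296   # max single segment, bytes (4GB)
    sysctl -w kernel.shmall=1048576      # total shm, in 4kB pages (4GB)

    # postgresql.conf -- example values, tuned to the machine:
    shared_buffers = 1GB
    work_mem = 32MB

    # fast stop, then restart via the init script
    # (data dir and script name vary by install):
    pg_ctl -D /var/lib/postgresql/8.4/main stop -m fast
    /etc/init.d/postgresql-8.4 start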


I'm getting another Slicehost slice. Hopefully I can clone the whole thing
over without doing a full install, and go screw around with it there.

It's a fairly complicated install, even with buildout doing most of the
configuration.


=felix
