On Sat, Jul 25, 2015 at 8:50 AM, Craig James wrote:
> The canonical advice here is to avoid more connections than you have CPUs,
> and to use something like pgpool or PgBouncer to achieve that under heavy load.
>
> We are considering using the Apache mod_perl "FastCGI" system and Perl's
> Apache::DBI module …
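For what it's worth, a rough way to see the gap that a pooler such as pgpool-II or PgBouncer is meant to close, i.e. how many backends the server will allow versus how many are actually in use (plain SQL, just for illustration):

  -- Configured backend limit vs. connections currently open.
  SHOW max_connections;
  SELECT count(*) FROM pg_stat_activity;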
On Wed, May 25, 2011 at 10:59, Reuven M. Lerner wrote:
> Hi, everyone. I'm working on a project using PostgreSQL 8.3 that requires
> me to translate strings of octal digits into strings of characters -- so
> '141142143' should become 'abc', although the database column containing
> this data …
On Wed, May 25, 2011 at 12:45, Reuven M. Lerner wrote:
> Hi, Alex. You wrote:
>> I think select E'\XXX' is what you are looking for (per the fine
>> manual:
>> http://www.postgresql.org/docs/current/static/datatype-binary.html)
>
> I didn't think that I could (easily) build a string like that from …
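To make that concrete: the escape form works directly for literals, and for data already stored as octal digit strings you can decode it in SQL. A minimal sketch, assuming 3-digit groups and a reasonably recent PostgreSQL (string_agg, WITH ORDINALITY), with a literal standing in for the real column:

  -- Octal escapes in an escape string literal: each \NNN is one character.
  SELECT E'\141\142\143';   -- 'abc'

  -- Decoding an octal digit string stored as text (sketch, hypothetical input):
  SELECT string_agg(chr( substr(g[1], 1, 1)::int * 64
                       + substr(g[1], 2, 1)::int * 8
                       + substr(g[1], 3, 1)::int ), '' ORDER BY ord)
  FROM regexp_matches('141142143', '...', 'g') WITH ORDINALITY AS t(g, ord);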
On Thu, Nov 11, 2010 at 06:41, Marc Mamin wrote:
> There are a few places in our data flow where we have to wait for index
> creation before being able to distribute the process on multiple threads
> again.
Would CREATE INDEX CONCURRENTLY help here?
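Something along these lines (hypothetical names); it builds the index without blocking concurrent writes, at the cost of two table scans, and it cannot run inside a transaction block:

  -- Allows INSERT/UPDATE/DELETE on the table while the index is built.
  CREATE INDEX CONCURRENTLY idx_events_created_at ON events (created_at);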
On Wed, Oct 27, 2010 at 21:08, Divakar Singh wrote:
> So another question pops up: What method in PostgreSQL does the stored proc
> use when I issue multiple insert (for loop for 100 thousand records) in the
> stored proc?
It uses prepared statements (unless you are using EXECUTE). There is also …
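Roughly speaking (hypothetical table, not the original code): a static INSERT inside PL/pgSQL is planned once and the plan is reused across loop iterations, whereas EXECUTE re-plans the query string every time.

  DO $$
  BEGIN
    FOR i IN 1..100000 LOOP
      -- Static SQL: parsed and planned once, then reused like a prepared statement.
      INSERT INTO test_tbl (id, val) VALUES (i, 'row ' || i);
      -- By contrast, EXECUTE 'INSERT INTO test_tbl ...' would be re-planned
      -- on every iteration.
    END LOOP;
  END
  $$;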
On Wed, Oct 27, 2010 at 08:00, Divakar Singh wrote:
> I am attaching my code below.
> Is any optimization possible in this?
> Do prepared statements help in cutting down the insert time to half for this
> kind of inserts?
In half? Not for me. Optimization possible? Sure, using the code you
pasted …
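For comparison, a sketch of the options (hypothetical table): a server-side prepared statement saves the per-row parse/plan but still costs one round trip per EXECUTE; batching rows per statement or using COPY usually helps far more for bulk loads.

  -- Prepared statement: parse/plan once, execute many times.
  PREPARE ins (int, text) AS INSERT INTO test_tbl (id, val) VALUES ($1, $2);
  EXECUTE ins(1, 'row 1');
  EXECUTE ins(2, 'row 2');
  DEALLOCATE ins;

  -- Usually a bigger win for bulk loads: multi-row VALUES, or COPY from a file.
  INSERT INTO test_tbl (id, val) VALUES (3, 'row 3'), (4, 'row 4'), (5, 'row 5');
  COPY test_tbl (id, val) FROM '/tmp/rows.tsv';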
On Wed, Oct 13, 2010 at 02:38, Neil Whelchel wrote:
> And the cache helps...
> So, we are right back to within 10ms of where we started after INSERTing the
> data, but it took a VACUUM FULL to accomplish this (by making the table fit in
> RAM).
> This is a big problem on a production machine as t
On Wed, Oct 13, 2010 at 07:49, Tom Lane wrote:
> Neil Whelchel writes:
> I concur with Mark's question about whether your UPDATE pushed the table
> size across the limit of what would fit in RAM.
Yeah, you said you have ~2GB of RAM; just counting the bytes and the
number of rows (not including …
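Rather than estimating by hand, the on-disk size is easy to check directly (hypothetical table name):

  SELECT pg_size_pretty(pg_relation_size('test_tbl'));        -- heap only
  SELECT pg_size_pretty(pg_total_relation_size('test_tbl'));  -- heap + indexes + TOAST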
On Mon, Sep 8, 2008 at 8:19 PM, Rainer Mager wrote:
> 1. Move some of the databases to the new drive. If this is a good idea, is
> there a way to do this without a dump/restore? I'd prefer to move the folder
> if possible since that would be much faster.
What, like tablespaces?
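Roughly (hypothetical paths and names): create a tablespace on the new drive and move objects onto it. ALTER ... SET TABLESPACE copies the files and holds a lock while it does, so it is much faster than dump/restore but not free:

  CREATE TABLESPACE fastdisk LOCATION '/mnt/newdrive/pgdata';
  ALTER TABLE big_table SET TABLESPACE fastdisk;
  -- Later releases (8.4+) can also move a whole database's default tablespace,
  -- provided nobody else is connected to it:
  ALTER DATABASE mydb SET TABLESPACE fastdisk;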