From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Genc, Ömer
Sent: Friday, August 21, 2015 8:49 AM
To: pgsql-performance@postgresql.org
Subject: [PERFORM] Performance bottleneck due to array manipulation
Hey,
I have a very long-running stored procedure; the bottleneck is the array
manipulation inside it. The following procedure takes 13 seconds to finish.
BEGIN
point_ids_older_than_one_hour := '{}';
object_ids_to_be_invalidated := '{}';
select ARRAY(SELECT
point_id
Genc, Ömer oemer.g...@iais.fraunhofer.de writes:
i have a very long running stored procedure, due to array manipulation in a
stored procedure. The following procedure takes 13 seconds to finish.
BEGIN
point_ids_older_than_one_hour := '{}';
Hello,
On Fri, Aug 21, 2015 at 2:48 PM, Genc, Ömer oemer.g...@iais.fraunhofer.de
wrote:
Now I want to delete all entries from ims_point where the timestamp is
older than one hour. The ids currently referenced by the table
ims_object_header should be excluded from this deletion.
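Assuming the arrays exist only to feed a later DELETE, a single set-based
statement avoids building them entirely. This is only a sketch: the original
procedure is truncated in the archive, so the timestamp column name and the
join column of ims_object_header are assumptions, not the poster's actual
schema.

```sql
-- Sketch, not the original procedure. Assumed column names:
-- ims_point.point_id, ims_point.created_at, ims_object_header.point_id.
DELETE FROM ims_point p
WHERE p.created_at < now() - interval '1 hour'
  AND NOT EXISTS (SELECT 1
                  FROM ims_object_header h
                  WHERE h.point_id = p.point_id);
```

A statement like this lets the planner use an anti-join instead of two array
round-trips through PL/pgSQL variables.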
Squid also takes away the work of doing SSL (presuming you're running it
on a different machine). Unfortunately it doesn't support HTTP/1.1, which
means that most generated pages (those that don't set Content-Length) end
up forcing Squid to close and then reopen the connection to the web
On Fri, 2004-08-06 at 23:18 +, Martin Foster wrote:
Mike Benoit wrote:
On Wed, 2004-08-04 at 17:25 +0200, Gaetano Mendola wrote:
The queries themselves are simple, normally drawing information from one
table with few conditions or in the most complex cases using joins on
two
On Aug 8, 2004, at 1:29 AM, Martin Foster wrote:
I am currently making use of Apache::DBI, which overrides the
DBI::disconnect call and keeps a pool of active connections for use
when need be. Since it offloads the pooling to the webserver, it
seems more advantageous than pgpool, which while
On 8/8/2004 8:10 AM, Jeff wrote:
On Aug 8, 2004, at 1:29 AM, Martin Foster wrote:
I am currently making use of Apache::DBI which overrides the
DBI::disconnect call and keeps a pool of active connections for use
when need be. Since it offloads the pooling to the webserver, it
seems more
And this is exactly where the pgpool advantage lies. Especially with
the TPC-W, the Apache is serving a mix of PHP (or whatever CGI
technique is used) and static content like images. Since the 200+
Apache kids serve any of that content by random and the emulated
browsers very much
On 8-8-2004 16:29, Matt Clark wrote:
There are two well-worn and very mature techniques for dealing with the
issue of web apps using one DB connection per apache process, both of which
work extremely well and attack the issue at its source.
1) Use a front-end caching proxy like Squid as an
Tom Lane wrote:
Martin Foster [EMAIL PROTECTED] writes:
Gaetano Mendola wrote:
change this values in:
shared_buffers = 5
sort_mem = 16084
wal_buffers = 1500
This value of wal_buffers is simply ridiculous.
Instead I think it is ridiculous to have wal_buffers = 8 ( 64KB ) by default.
There isn't any
This value of wal_buffers is simply ridiculous.
Instead I think is ridiculous a wal_buffers = 8 ( 64KB ) by default.
There is no point making WAL buffers higher than 8. I have done much
testing of this and it makes not the slightest difference to performance
that I could measure.
Chris
On 8/3/2004 2:05 PM, Martin Foster wrote:
I run a Perl/CGI driven website that makes extensive use of PostgreSQL
(7.4.3) for everything from user information to formatting and display
of specific sections of the site. The server itself, is a dual
processor AMD Opteron 1.4Ghz w/ 2GB Ram and 2
Gaetano Mendola wrote:
Let's start from your postgres configuration:
shared_buffers = 8192          This is really too small for your configuration
sort_mem = 2048
wal_buffers = 128              This is really too small for your configuration
effective_cache_size = 16000
change these values to:
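The suggested replacement values are cut off in the archive. Purely as an
illustrative sketch (these numbers are my assumptions for a 2 GB RAM,
PostgreSQL 7.4-era box, not the figures from the truncated post), the shape
of the advice would be:

```
# postgresql.conf sketch -- assumed values for a 2 GB RAM machine on 7.4,
# NOT the figures from the truncated post.
shared_buffers = 16384          # 128 MB (units of 8 kB pages)
sort_mem = 8192                 # 8 MB per sort operation (in kB)
wal_buffers = 8                 # the default; see the wal_buffers discussion
effective_cache_size = 131072   # ~1 GB of OS cache (units of 8 kB pages)
```

On 7.4 these settings are in pages or kB rather than the memory-unit strings
later releases accept, which is why the raw numbers look unintuitive.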
Martin Foster [EMAIL PROTECTED] writes:
Gaetano Mendola wrote:
change this values in:
shared_buffers = 5
sort_mem = 16084
wal_buffers = 1500
This value of wal_buffers is simply ridiculous.
There isn't any reason to set wal_buffers higher than the amount of
WAL log data that will be
Apache processes running for 30 minutes?
My advice: use frames and Javascript!
In your webpage, you have two frames: content and refresh.
content starts empty (say, just a title on top of the page).
refresh is refreshed every five seconds from a script on your
On Thu, Aug 05, 2004 at 08:40:35AM +0200, Pierre-Frédéric Caillaud wrote:
Apache processes running for 30 minutes ?.
My advice : use frames and Javascript !
My advice: Stay out of frames and Javascript if you can avoid it. The first
is severely outdated technology, and the
The queries themselves are simple, normally drawing information from one
table with few conditions or in the most complex cases using joins on
two table or sub queries. These behave very well and always have, the
problem is that these queries take place in rather large amounts due to
the dumb
I run a Perl/CGI driven website that makes extensive use of PostgreSQL
(7.4.3) for everything from user information to formatting and display
of specific sections of the site. The server itself is a dual
processor AMD Opteron 1.4 GHz w/ 2 GB RAM and 2 x 120 GB hard drives
mirrored for
On Tue, 3 Aug 2004, Martin Foster wrote:
to roughly 175 or more. Essentially, the machine seems to struggle
to keep up with continual requests and slows down respectively as
resources are tied down.
I suggest you try to find queries that are slow and check to see if the
plans are
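Checking a slow query's plan, as suggested, can be done with EXPLAIN
ANALYZE. A minimal sketch; the query and table here are made-up
illustrations, not ones from Martin's site:

```sql
-- Hypothetical query for illustration; substitute one of the site's own.
EXPLAIN ANALYZE
SELECT *
FROM user_info
WHERE user_name = 'foo';
```

A Seq Scan node with a large actual time in the output would suggest a
missing index on the filtered column.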
Hello,
It sounds to me like you are IO bound. 2x120GB hard drives just isn't
going to cut it with that many connections (as a general rule). Are you
swapping ?
Sincerely,
Joshua D. Drake