Thanks,
So I've been able to find what's causing my postgres process memory
to grow, but I don't know why it happens.
My software is updating 6 rows/second on my main database.
Rubyrep is running on my server with the backup database, doing a replicate
The huge TopMemoryContext problem
Vincent Dautremont vinc...@searidgetech.com writes:
I've found out that when my software does these updates, the memory of the
postgres process grows constantly at 24 MB/hour. When I stop my software from
updating these rows, the memory of the process stops growing.
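For a sense of scale, a back-of-the-envelope calculation (mine, not from the original thread) using the two figures above:

```sql
-- 24 MB/hour of growth at 6 updates/second works out to roughly
-- 1.2 kB retained per update:
SELECT (24 * 1024.0 * 1024.0) / (6 * 3600) AS approx_bytes_per_update;
```

A steady per-statement leak of that size is consistent with some small object being allocated in TopMemoryContext on every round trip and never freed.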
also I've noticed that when I
Hi,
you were right,
I do see those CREATE OR REPLACE FUNCTION a bit more than 1 per second
(approx. 12 times for 10 seconds)
2012-05-23 21:15:45 WET LOG: execute unnamed: CREATE OR
REPLACE FUNCTION rr_ptz_lock() RETURNS TRIGGER AS $change_trigger$
BEGIN
Vincent Dautremont vinc...@searidgetech.com writes:
you were right,
I do see those CREATE OR REPLACE FUNCTION a bit more than 1 per second
(approx. 12 times for 10 seconds)
Hah. Complain to the rubyrep people. It's most likely just a thinko
about where they should issue that command. If
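One way a client could avoid re-issuing the definition on every replication cycle (a sketch, not rubyrep's actual code; `rr_ptz_lock` is the function name from the log above) is to check the catalog first, since 8.3 has no CREATE FUNCTION IF NOT EXISTS:

```sql
-- Issue CREATE OR REPLACE FUNCTION only when the function is missing,
-- instead of once per replication cycle:
SELECT count(*) AS already_defined
FROM pg_proc
WHERE proname = 'rr_ptz_lock';
-- if already_defined = 0, run the CREATE OR REPLACE FUNCTION statement;
-- otherwise skip it and reuse the existing definition
```

This moves the DDL to setup time, so the per-second replication loop stops redefining the trigger function.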
Well,
Thank you very much for your help, it's greatly appreciated.
At least I can now pinpoint the problem and search for a solution, or for
another reason to upgrade to 9.1!
Regards,
Vincent.
On Wed, May 23, 2012 at 5:33 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Vincent Dautremont
Hi,
after a few days, I'm seeing the following logs in a database (PostgreSQL
8.3.15 on Windows)
running with rubyrep 1.2.0 for syncing a few small tables that have frequent
update/insert/delete traffic.
I don't understand it, and I'd like to know what happens, why, and how to get
rid of it.
I've seen in
Vincent Dautremont vinc...@searidgetech.com writes:
after a few days, I'm seeing the following logs in a database (PostgreSQL
8.3.15 on Windows)
running with rubyrep 1.2.0 for syncing a few small tables that have frequent
update/insert/delete traffic.
I don't understand it and I'd like to know
Well,
I think I'm using the database for pretty basic stuff.
It's mostly used with stored procedures to update/insert/select a row of
each table.
On 3 tables (fewer than 10 rows each), clients do updates/selects at 2 Hz to
keep pseudo-real-time data up to date.
I've got a total of 6
Vincent Dautremont vinc...@searidgetech.com writes:
I think I'm using the database for pretty basic stuff.
It's mostly used with stored procedures to update/insert/select a row of
each table.
On 3 tables (fewer than 10 rows each), clients do updates/selects at 2 Hz to
have pseudo
Thanks Tom,
when you say,
An entirely blue-sky guess as
to what your code might be doing to trigger such a problem is if you
were constantly replacing the same function's definition via CREATE OR
REPLACE FUNCTION.
Do you mean that what would happen is that when we call the plpgsql
function,
Vincent Dautremont vinc...@searidgetech.com writes:
An entirely blue-sky guess as
to what your code might be doing to trigger such a problem is if you
were constantly replacing the same function's definition via CREATE OR
REPLACE FUNCTION.
Do you mean that what would happen is that when we
Tom Lane wrote:
Silvio Brandani silvio.brand...@tech.sdb.it writes:
Tom Lane wrote:
Is it really the *exact* same query both ways, or are you doing
something like parameterizing the query in the application?
It is exactly the same; the query text is from the
Still having Out of Memory problems:
the query is the following; it works fine when I run it from psql,
but from the application I get an error:
SELECT MAX(oec.ctnr_nr)::char(13) AS Ctnr_nr, MAX(oec.file_ref)::char(7) AS File_Ref,
MAX(oec.move_type)::char(5) AS Ctnr_type, MAX(oec.ct_feet)
Silvio Brandani wrote:
Still having Out of Memory problems:
the query is the following; it works fine when I run it from psql,
but from the application I get an error:
SELECT MAX(oec.ctnr_nr)::char(13) AS Ctnr_nr, MAX(oec.file_ref)::char(7) AS File_Ref,
MAX(oec.move_type)::char(5) AS
Silvio Brandani silvio.brand...@tech.sdb.it writes:
Still having Out of Memory problems:
the query is the following; it works fine when I run it from psql,
but from the application I get an error:
Is it really the *exact* same query both ways, or are you doing
something like parameterizing the query
Tom Lane wrote:
Silvio Brandani silvio.brand...@tech.sdb.it writes:
Still having Out of Memory problems:
the query is the following; it works fine when I run it from psql,
but from the application I get an error:
Is it really the *exact* same query both ways, or are you doing
Silvio Brandani silvio.brand...@tech.sdb.it writes:
Tom Lane wrote:
Is it really the *exact* same query both ways, or are you doing
something like parameterizing the query in the application?
It is exactly the same; the query text is from the postgres log.
I just tried it in test
From: Silvio Brandani silvio.brand...@tech.sdb.it
Subject: [ADMIN] out of memory error
To: pgsql-admin@postgresql.org
Date: Thursday, August 5, 2010, 9:01 AM
Hi,
a query on our production database gives the following error:
2010-08-05 10:52:40 CEST [12106]: [278-1] ERROR: out
of memory
2010-08-05 10:52:40 CEST
Excerpts from Silvio Brandani's message of Fri Aug 06 07:56:53 -0400 2010:
it seems the execution plan is different for this query when run from
the application versus psql. How can I check the execution plan of
a query run by a user?
I can run EXPLAIN ANALYZE for the query via psql
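One way to approximate from psql the plan the application gets is to prepare the statement the way a driver would, then EXPLAIN the prepared execution; a prepared statement's generic plan is often what ODBC sees, and is a common reason psql and the application behave differently. This is a sketch: the table name `oe_containers` and the parameter are hypothetical, only the `oec` column names appear in the thread.

```sql
-- Approximate the application's (driver-prepared) plan from psql:
PREPARE app_query(char) AS
    SELECT max(ctnr_nr) FROM oe_containers WHERE file_ref = $1;
EXPLAIN EXECUTE app_query('ABC1234');
DEALLOCATE app_query;
```

If the plan shown here differs from a plain EXPLAIN of the literal query, the application and psql really are running different plans.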
Hi,
a query on our production database gives the following error:
2010-08-05 10:52:40 CEST [12106]: [278-1] ERROR: out of memory
2010-08-05 10:52:40 CEST [12106]: [279-1] DETAIL: Failed on request of
size 48.
Any suggestions?
--
Silvio Brandani
Infrastructure Administrator
SDB Information
Hi Silvio,
I don't know if this is relevant. But work_mem and some other
parameters inside postgresql.conf are left at their defaults. Here is a
portion of the file:
shared_buffers = 32MB
temp_buffers = 8MB
max_prepared_transactions = 5
work_mem = 1MB
maintenance_work_mem = 16MB
max_stack_depth = 2MB
[]´s
Silvio Brandani silvio.brand...@tech.sdb.it wrote:
a query on our production database gives the following error:
2010-08-05 10:52:40 CEST [12106]: [278-1] ERROR: out of memory
2010-08-05 10:52:40 CEST [12106]: [279-1] DETAIL: Failed on
request of size 48.
What query? On what OS? Is this
Victor Hugo wrote:
Hi Silvio,
I don't know if this is relevant. But work_mem and some other
parameters inside postgresql.conf are left at their defaults. Here is a
portion of the file:
shared_buffers = 32MB
temp_buffers = 8MB
max_prepared_transactions = 5
work_mem = 1MB
maintenance_work_mem = 16MB
Kevin Grittner kevin.gritt...@wicourts.gov writes:
Silvio Brandani silvio.brand...@tech.sdb.it wrote:
a query on our production database gives the following error:
2010-08-05 10:52:40 CEST [12106]: [278-1] ERROR: out of memory
2010-08-05 10:52:40 CEST [12106]: [279-1] DETAIL: Failed on
Tom Lane wrote:
Kevin Grittner kevin.gritt...@wicourts.gov writes:
Silvio Brandani silvio.brand...@tech.sdb.it wrote:
a query on our production database gives the following error:
2010-08-05 10:52:40 CEST [12106]: [278-1] ERROR: out of memory
2010-08-05 10:52:40 CEST [12106]:
Silvio Brandani silvio.brand...@tech.sdb.it writes:
Kevin Grittner kevin.gritt...@wicourts.gov writes:
What query?
[ query with aggregates and GROUP BY ]
Does EXPLAIN show that it's trying to use a hash aggregation plan?
If so, try turning off enable_hashagg. I think the hash table might
be
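Applied to this case, the suggestion above looks like the following (the table name `oe_containers` and the GROUP BY column are hypothetical; only the `oec` alias and column names appear in the truncated query):

```sql
-- Check whether the planner picks HashAggregate, then disable it for
-- this session so it falls back to GroupAggregate (sort-based):
EXPLAIN SELECT max(oec.ctnr_nr)
FROM oe_containers oec       -- hypothetical table name
GROUP BY oec.file_ref;

SET enable_hashagg = off;    -- session-local; does not touch postgresql.conf
-- re-run the EXPLAIN to confirm the plan changed
```

The sort-based plan spills to disk instead of building a large in-memory hash table, which is why it can avoid the out-of-memory failure.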
2010/8/5 Silvio Brandani silvio.brand...@tech.sdb.it:
I have tried to increase the parameters but it still fails. What is strange is
that with psql the query works fine and gives results immediately; with the
application, through ODBC, the query fails.
That's usually the opposite of what you want to do
Hello,
I am running the following query:
SELECT
indiv_fkey,
MAX(emp_ind),
MAX(prizm_cd_indiv),
MAX(CASE
WHEN div IS NULL THEN NULL
WHEN store_loyal_loc_cd IS NULL THEN NULL
ELSE div || '-' ||
Abu Mushayeed [EMAIL PROTECTED] writes:
AFTER A WHILE THE SYSTEM COMES BACK AND SAYS IN THE LOG FILE:
Please turn off your caps lock key :-(
AggContext: -1501569024 total in 351 blocks; 69904 free (507 chunks);
-1501638928 used
DynaHashTable: 302047256 total in 46 blocks; 275720 free (66
Hi
On a large transaction involving an insert of 8 million
rows, after a while Postgres complains of an out of memory error.
Failed on request of size 32
I get no other message.
Shmmax is set to 1 GB
Shared_buffers set to 5
Max memory on box is 4 GB. Postgres is the only
Sriram Dandapani [EMAIL PROTECTED] writes:
On a large transaction involving an insert of 8 million rows, after a
while Postgres complains of an out of memory error.
If there are foreign-key checks involved, try dropping those constraints
and re-creating them afterwards. Probably faster than
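A sketch of that approach (constraint, table, and column names here are hypothetical, not from the post):

```sql
-- Drop the FK before the bulk load, re-add it afterwards; re-adding
-- validates all rows in one pass instead of queuing millions of
-- per-row AFTER-trigger checks inside the big transaction:
ALTER TABLE target_table DROP CONSTRAINT target_table_ref_fkey;

INSERT INTO target_table SELECT * FROM source_table;  -- the 8M-row insert

ALTER TABLE target_table
    ADD CONSTRAINT target_table_ref_fkey
    FOREIGN KEY (ref_id) REFERENCES ref_table (id);
```

The pending per-row trigger events are what accumulate in memory during a huge insert, so eliminating them sidesteps the out-of-memory failure as well as being faster.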
, March 21, 2006 2:38 PM
To: Sriram Dandapani
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] out of memory error with large insert
Sriram Dandapani [EMAIL PROTECTED] writes:
On a large transaction involving an insert of 8 million rows, after a
while Postgres complains of an out of memory
Hi,
I'm trying to do an insert into myTable select on pg 8.0.1 and FreeBSD 5.3.
After ~10 sec. pg returns out of memory.
In the log file I found:
Portal hash: 8192 total in 1 blocks; 2008 free (0 chunks); 6184 used
Relcache by OID: 8192 total in 1 blocks; 4040 free (0 chunks); 4152 used
Relcache by
pg 7.4.2 on RH 7.3
Hi,
I'm getting an Out of memory error and pgSql is getting killed when running the
following statement in psql:
insert into tableA
select * from tableB
where testdate >= '2000-01-01' and testdate <= '2002-12-31'
TableB contains about 120 million records. The insert/select affects
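One common workaround for a huge single-statement copy like this is to split it into smaller batches (table and column names are from the post; the monthly window is just an example):

```sql
-- Copy one month per statement instead of three years at once,
-- keeping each transaction's trigger/memory footprint small:
insert into tableA
select * from tableB
where testdate >= '2000-01-01' and testdate < '2000-02-01';
-- repeat, advancing the window month by month through 2002-12-31
```

Each statement then commits a bounded amount of work, so memory use stays flat instead of growing with the full 120 million rows.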
ow [EMAIL PROTECTED] writes:
insert into tableA
select * from tableB
where testdate >= '2000-01-01' and testdate <= '2002-12-31'
What's happening is that pgSql gradually takes all (well, almost) physical and
swap memory and then, I think, is getting killed by the kernel.
1) Is this normal?
Hi,
we are using Postgres with a J2EE application (JBoss) and get
intermittent out of memory errors on the Postgres database. We are
running on a fairly large Linux server (Dual 3GHz, 2GB Ram) with the
following parameters:
shared_buffers = 8192
sort_mem = 8192
effective_cache_size = 234881024
Jie Liang wrote:
Does 7.3* support this? Can you tell me a bit more about it, please?
Hash aggregate..?
I had a similar problem after upgrading to 7.4.2.
Try:
SET enable_hashagg = false;
before you execute that SELECT statement,
if you don't want to disable it in postgresql.conf.
Jie Liang
Greetings,
During testing of our application we ran into a very odd error:
very randomly during the test, the postgresql log file showed ERROR:
53200: out of memory.
We changed the logging configuration to log statements causing errors
and found the error to be caused by a SELECT;
here is