Well,
thank you very much for your help, it's greatly appreciated.
At least I can now pinpoint the problem and search for a solution, or
another reason to upgrade to 9.1!
Regards,
Vincent.
On Wed, May 23, 2012 at 5:33 PM, Tom Lane wrote:
> Vincent Dautremont writes:
> > you were right,
> > I do see those CREATE OR REPLACE FUNCTION a bit more than 1 per second ...
Vincent Dautremont writes:
> you were right,
> I do see those CREATE OR REPLACE FUNCTION a bit more than 1 per second
> (approx. 12 times for 10 seconds)
Hah. Complain to the rubyrep people. It's most likely just a thinko
about where they should issue that command. If they actually are
changing ...
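[ For illustration, a sketch of the kind of guard that would avoid re-issuing
the command every cycle: probe pg_proc and only create the trigger function
when it is missing. The function name is taken from the log below; the body
here is a placeholder, not rubyrep's real trigger logic. ]

-- does the trigger function already exist?
SELECT count(*) FROM pg_proc WHERE proname = 'rr_ptz_lock';

-- only if the count above is 0:
CREATE FUNCTION "rr_ptz_lock"() RETURNS trigger AS $change_trigger$
BEGIN
    RETURN NEW;  -- placeholder body
END;
$change_trigger$ LANGUAGE plpgsql;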
Hi,
you were right,
I do see those CREATE OR REPLACE FUNCTION a bit more than 1 per second
(approx. 12 times for 10 seconds)
2012-05-23 21:15:45 WET LOG: execute : CREATE OR
REPLACE FUNCTION "rr_ptz_lock"() RETURNS TRIGGER AS $change_trigger$
BEGIN
...
Vincent Dautremont writes:
> I've found out that when my software does these updates, the memory of the
> postgres process grows constantly at 24 MB/hour. When I stop my software
> updating these rows, the memory of the process stops growing.
> Also, I've noticed that when I stop rubyrep, this postgres ...
Thanks,
so I've been able to find what's causing my postgres process's memory
to grow, but I don't know why it happens.
My software is updating 6 rows/second on my main database.
Rubyrep is running on my server with the backup database, doing a "replicate".
The huge TopMemoryContext problem ...
Vincent Dautremont writes:
>> An entirely blue-sky guess as
>> to what your code might be doing to trigger such a problem is if you
>> were constantly replacing the same function's definition via CREATE OR
>> REPLACE FUNCTION.
> Do you mean that what would happen is that when we call the plpgsql ...
Thanks Tom,
when you say,
> An entirely blue-sky guess as
> to what your code might be doing to trigger such a problem is if you
> were constantly replacing the same function's definition via CREATE OR
> REPLACE FUNCTION.
>
Do you mean that what would happen is that when we call the plpgsql
function ...
Vincent Dautremont writes:
> I think that I'm using the database for pretty basic stuff.
> It's mostly used with stored procedures to update/insert/select a row of
> each table.
> On 3 tables (less than 10 rows each), clients do updates/selects at 2Hz to
> have pseudo real-time data up to date ...
Well,
I think that I'm using the database for pretty basic stuff.
It's mostly used with stored procedures to update/insert/select a row of
each table.
On 3 tables (less than 10 rows each), clients do updates/selects at 2Hz to
have pseudo real-time data up to date.
I've got a total of 6 clients ...
Vincent Dautremont writes:
> after a few days, I'm seeing the following logs in a database (postgresql
> 8.3.15 on Windows)
> running with rubyrep 1.2.0 for syncing a few small tables that have frequent
> update/insert/delete.
> I don't understand it and I'd like to know what happens and why, and how to
> get rid of it.
Hi,
after a few days, I'm seeing the following logs in a database (postgresql
8.3.15 on Windows)
running with rubyrep 1.2.0 for syncing a few small tables that have frequent
update/insert/delete.
I don't understand it and I'd like to know what happens and why, and how to
get rid of it.
I've seen in ...
Tom Lane wrote:
> Silvio Brandani writes:
>> Tom Lane wrote:
>>> Is it really the *exact* same query both ways, or are you doing
>>> something like parameterizing the query in the application?
>> It is exactly the same; the query text is from the postgres log.
>> I just tried it in the test environment ...
Silvio Brandani writes:
> Tom Lane wrote:
>> Is it really the *exact* same query both ways, or are you doing
>> something like parameterizing the query in the application?
> It is exactly the same; the query text is from the postgres log.
> I just tried it in the test environment and we have the same ...
Silvio Brandani writes:
>> Still problems of Out of Memory:
>> the query is the following, and if I run it from psql it works fine,
>> but from the application I get an error:
Is it really the *exact* same query both ways, or are you doing
something like parameterizing the query in the application?
Silvio Brandani wrote:
Still problems of Out of Memory:
the query is the following, and if I run it from psql it works fine,
but from the application I get an error:
SELECT MAX(oec.ctnr_nr) ::char(13) as Ctnr_nr,MAX(oec.file_ref)
::char(7) as File_Ref,MAX(oec.move_type) ::char(5)
as Ctnr_type,MAX(oec.ct_feet) ::char(3) ...
Still problems of Out of Memory:
the query is the following, and if I run it from psql it works fine,
but from the application I get an error:
SELECT MAX(oec.ctnr_nr) ::char(13) as Ctnr_nr,MAX(oec.file_ref)
::char(7) as File_Ref,MAX(oec.move_type) ::char(5)
as Ctnr_type,MAX(oec.ct_feet) ::char(3) ...
Excerpts from Silvio Brandani's message of Fri Aug 06 07:56:53 -0400 2010:
> it seems the execution plan is different for this query when run from
> the application versus psql. How can I check the execution plan of
> a query run by a user?
> I can set explain analyze for the query via psql ...
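[ One way to approximate from psql the plan a parameterized (e.g. ODBC)
statement gets: PREPARE it with parameter markers and EXPLAIN the EXECUTE.
The table, column, and parameter value here are illustrative stand-ins for
the real query. ]

-- prepare a parameterized version of the failing query (names illustrative):
PREPARE q(varchar) AS
    SELECT MAX(ctnr_nr)::char(13) AS ctnr_nr
    FROM oec
    WHERE file_ref = $1;

-- show the plan the server picks for the prepared statement:
EXPLAIN EXECUTE q('ABC1234');

DEALLOCATE q;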
From: Silvio Brandani
Subject: [ADMIN] out of memory error
To: pgsql-admin@postgresql.org
Date: Thursday, August 5, 2010, 9:01 AM
Hi,
a query on our production database gives the following error:
2010-08-05 10:52:40 CEST [12106]: [278-1] ERROR: out of memory
2010-08-05 10:52:40 CEST [12106]: [279-1] DETAIL: Failed on request of size 48.
Silvio,
I had a similar problem when starting the database from an account that didn't
have the appropriate ulimits set. Check the ulimit values using ulimit -a.
HTH,
Bob Lunney
--- On Thu, 8/5/10, Silvio Brandani wrote:
> From: Silvio Brandani
> Subject: [ADMIN] out of memory error ...
2010/8/5 Silvio Brandani :
>
> I have tried to increase the parameters but it still fails. What is strange
> is that with psql the query works fine and gives results immediately; with
> the application through ODBC the query fails.
That's usually the opposite of what you want to do here.
Silvio Brandani writes:
>> "Kevin Grittner" writes:
>>> What query?
[ query with aggregates and GROUP BY ]
Does EXPLAIN show that it's trying to use a hash aggregation plan?
If so, try turning off enable_hashagg. I think the hash table might
be ballooning far past the number of entries the planner ...
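[ The check-and-workaround Tom describes, spelled out; the GROUP BY query
itself is a placeholder for whatever the application actually runs. ]

-- look for a "HashAggregate" node in the plan of the failing query:
EXPLAIN SELECT some_col, count(*) FROM some_table GROUP BY some_col;

-- if hash aggregation is chosen and is blowing out memory,
-- disable it for this session only and retry:
SET enable_hashagg = false;
SELECT some_col, count(*) FROM some_table GROUP BY some_col;
RESET enable_hashagg;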
Tom Lane wrote:
> "Kevin Grittner" writes:
>> Silvio Brandani wrote:
>>> a query on our production database gives the following error:
>>> 2010-08-05 10:52:40 CEST [12106]: [278-1] ERROR: out of memory
>>> 2010-08-05 10:52:40 CEST [12106]: [279-1] DETAIL: Failed on
>>> request of size 48.
"Kevin Grittner" writes:
> Silvio Brandani wrote:
>> a query on our production database give following errror:
>>
>> 2010-08-05 10:52:40 CEST [12106]: [278-1] ERROR: out of memory
>> 2010-08-05 10:52:40 CEST [12106]: [279-1] DETAIL: Failed on
>> request of size 48.
> What query? On what OS?
Silvio Brandani wrote:
> a query on our production database gives the following error:
>
> 2010-08-05 10:52:40 CEST [12106]: [278-1] ERROR: out of memory
> 2010-08-05 10:52:40 CEST [12106]: [279-1] DETAIL: Failed on
> request of size 48.
What query? On what OS? Is this a 32-bit or 64-bit build ...
Hi Silvio,
I don't know if this is relevant, but work_mem and some other
parameters inside postgresql.conf are not set. Here is a portion of
the file:
shared_buffers = 32MB
temp_buffers = 8MB
max_prepared_transactions = 5
work_mem = 1MB
maintenance_work_mem = 16MB
max_stack_depth = 2MB
[]'s
Victor
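[ The values the running server actually uses (as opposed to what the
postgresql.conf excerpt above says) can be checked with a query like: ]

-- show the live values of the memory-related settings quoted above:
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'temp_buffers', 'work_mem',
               'maintenance_work_mem', 'max_stack_depth',
               'max_prepared_transactions');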
Hi,
a query on our production database gives the following error:
2010-08-05 10:52:40 CEST [12106]: [278-1] ERROR: out of memory
2010-08-05 10:52:40 CEST [12106]: [279-1] DETAIL: Failed on request of
size 48.
Any suggestion?
--
Silvio Brandani
Infrastructure Administrator
SDB Information
"Abu Mushayeed" <[EMAIL PROTECTED]> writes:
> AFTER A WHILE THE SYSTEM COMES BACK AND SAYS IN THE LOG FILE:
Please turn off your caps lock key :-(
> AggContext: -1501569024 total in 351 blocks; 69904 free (507 chunks);
> -1501638928 used
> DynaHashTable: 302047256 total in 46 blocks; 275720 free ...
Hello,
I am running the following query:
SELECT
    indiv_fkey,
    MAX(emp_ind),
    MAX(prizm_cd_indiv),
    MAX(CASE
        WHEN div IS NULL THEN NULL
        WHEN store_loyal_loc_cd IS NULL THEN NULL
        ELSE div || '-' || ...
... Tuesday, March 21, 2006 2:38 PM
To: Sriram Dandapani
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] out of memory error with large insert
"Sriram Dandapani" <[EMAIL PROTECTED]> writes:
> On a large transaction involving an insert of 8 million rows, after a
> while Postgres ...
"Sriram Dandapani" <[EMAIL PROTECTED]> writes:
> On a large transaction involving an insert of 8 million rows, after a
> while Postgres complains of an out of memory error.
If there are foreign-key checks involved, try dropping those constraints
and re-creating them afterwards. Probably faster than ...
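[ A sketch of that approach; all table, column, and constraint names here are
hypothetical, since the real schema isn't shown in the thread. ]

-- drop the FK so the 8-million-row insert doesn't queue a per-row
-- after-trigger check, then re-create it (one revalidation pass):
BEGIN;
ALTER TABLE big_table DROP CONSTRAINT big_table_parent_fk;

INSERT INTO big_table SELECT * FROM staging_table;

ALTER TABLE big_table
    ADD CONSTRAINT big_table_parent_fk
    FOREIGN KEY (parent_id) REFERENCES parent_table (id);
COMMIT;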
Hi,
on a large transaction involving an insert of 8 million
rows, after a while Postgres complains of an out of memory error:
Failed on request of size 32
I get no other message.
Shmmax is set to 1 GB
Shared_buffers set to 5
Max memory on the box is 4 GB. Postgres is the ...
Hi,
I'm sending some additional info.
pginfo wrote:
sklad05=# explain analyze select
S.IDS_NUM,S.IDS_SKLAD,SUM(S.KOL),S.sernum FROM A_GAR_PROD_R S where ids
< 9742465 GROUP BY S.IDS_NUM,S.IDS_SKLAD ,s.sernum limit 10;
QUERY PLAN ...
Hi,
I'm trying to run an INSERT INTO myTable SELECT on pg 8.0.1 and FreeBSD 5.3.
After ~10 sec pg returns "out of memory".
In the log file I found:
Portal hash: 8192 total in 1 blocks; 2008 free (0 chunks); 6184 used
Relcache by OID: 8192 total in 1 blocks; 4040 free (0 chunks); 4152 used
Relcache by name ...
ow <[EMAIL PROTECTED]> writes:
> insert into tableA
> select * from tableB
> where testdate >= '2000-01-01' and testdate <= '2002-12-31'
> What's happening is that pgSql gradually takes all (well, almost) physical and
> swap memory and then, I think, is getting killed by the kernel.
> 1) Is this ...
pg 7.4.2 on RH 7.3
Hi,
I'm getting an "out of memory" error and pgSql is getting killed when running
the following statement in psql:
insert into tableA
select * from tableB
where testdate >= '2000-01-01' and testdate <= '2002-12-31'
TableB contains about 120 million records. The insert/select affects ...
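[ If the memory growth comes from state accumulated per row across one huge
transaction (e.g. deferred trigger events for FK checks), a common workaround
is to split the copy into smaller slices. A sketch using the statement above,
with illustrative yearly boundaries; under autocommit each slice is its own
transaction: ]

INSERT INTO tableA SELECT * FROM tableB
    WHERE testdate >= '2000-01-01' AND testdate < '2001-01-01';

INSERT INTO tableA SELECT * FROM tableB
    WHERE testdate >= '2001-01-01' AND testdate < '2002-01-01';

INSERT INTO tableA SELECT * FROM tableB
    WHERE testdate >= '2002-01-01' AND testdate <= '2002-12-31';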
Hi,
we are using Postgres with a J2EE application (JBoss) and get
intermittent "out of memory" errors on the Postgres database. We are
running on a fairly large Linux server (dual 3 GHz, 2 GB RAM) with the
following parameters:
shared_buffers = 8192
sort_mem = 8192
effective_cache_size = 23488102
Jie Liang wrote:
Does 7.3* support this? Can you tell me a bit more about it, please?
Hash aggregate...?
> I had a similar problem after upgrading to 7.4.2.
> Try:
> SET enable_hashagg = false;
> before you execute that SELECT stmt,
> if you don't want to disable it in postgresql.conf.
>
> Jie Liang
Greetings,
during testing of our application we ran into a very odd error:
very randomly during the test, the postgresql log file showed an "ERROR:
53200: out of memory".
We changed the logging configuration to log statements causing errors
and found the error to be caused by a SELECT; here is the ...