Given the aggressive nature of the statement, it is not something
I want to do on a high-transaction db unless I absolutely must.
Thanks in advance,
--
Joe Maldonado
OUTPUT:
db=# vacuum verbose test_table;
INFO: vacuuming "public.test_table"
INFO: "test_table": removed 13
(actual time=0.002..0.002 rows=0 loops=1)
  One-Time Filter: NULL::boolean
  ->  Seq Scan on test  (cost=1.00..10020.38 rows=826 width=0) (never executed)
        Filter: (("key")::text <> 'something'::text)
Total runtime: 0.110 ms
(6 rows)
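For reference, a plan like the one above (a One-Time Filter of NULL::boolean and a "(never executed)" scan) typically appears when part of the WHERE clause constant-folds to NULL. A minimal, illustrative sketch; the table and column names are assumed, not taken from the original query:

```sql
-- Illustrative only: when a top-level AND arm folds to NULL, the planner
-- emits a Result node with "One-Time Filter: NULL::boolean" and the
-- underlying scan is reported as (never executed).
EXPLAIN ANALYZE
SELECT * FROM test
WHERE key <> 'something'
  AND NULL::boolean;
```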
-- Joe Maldonado
Tom Lane wrote:
Joe Maldonado <[EMAIL PROTECTED]> writes:
While researching this locking issue I got some of the logs and found
that in one of the cases there was a SELECT running for a long time,
about 2 hours. This select statement does not usually take more than a
few seconds
- Joe Maldonado
Joe Maldonado wrote:
Thanks...I just wanted to verify that it was the intended behaviour
prior to going in and changing code :)
- Joe Maldonado
Tom Lane wrote:
Joe Maldonado <[EMAIL PROTECTED]> writes:
It seems that TRUNCATE is first posting a lock on the table and then
waiting for other transactions to finish before truncating the table,
thus blocking all other operations.
Is this what is actually going on, or am I missing something else? And is
there a way to prevent this condition from happening?
Thanks in advance,
- Joe Maldonado
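The queueing behaviour described above can be watched directly from another session. A sketch, assuming a table named test_table and three concurrent psql sessions (pg_locks has these columns in the 7.3/7.4 line as well):

```sql
-- Session 1: a long-running reader holds an AccessShareLock.
BEGIN;
SELECT count(*) FROM test_table;
-- (transaction deliberately left open)

-- Session 2: TRUNCATE requests an AccessExclusiveLock and queues
-- behind session 1; every later statement on the table then queues
-- behind the TRUNCATE.
TRUNCATE test_table;

-- Session 3: see who holds and who waits.
SELECT l.relation::regclass, l.pid, l.mode, l.granted
FROM pg_locks l
WHERE l.relation = 'test_table'::regclass;
```

If the exclusive lock is the problem, DELETE FROM takes only a row-level-compatible lock and does not block readers, at the cost of leaving dead tuples for VACUUM to reclaim.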
---(end of broadcast
What's at the top and bottom of that?
>PG prints out a memory stats dump like this when it runs out of memory.
>The dump itself isn't much use to anyone but a developer; what you want
>to look into is what triggered it. The error message appearing just
>after (or maybe just before, I forget) shou
Tom Lane wrote:
>Joe Maldonado <[EMAIL PROTECTED]> writes:
>
>
>>I have these messages on my 7.4.7 database log...
>>TopMemoryContext: 87494704 total in 10676 blocks; 179400 free (61
>>chunks); 87315304 used
>>TopTransactionContext: 57344 total in 3 bl
Hello,
I have these messages on my 7.4.7 database log...
TopMemoryContext: 87494704 total in 10676 blocks; 179400 free (61
chunks); 87315304 used
TopTransactionContext: 57344 total in 3 blocks; 648 free (5 chunks);
56696 used
DeferredTriggerXact: 0 total in 0 blocks; 0 free (0 chunks); 0 used
SPI
Hello all,
I am in a position where I'm torn between using ext2 vs ext3 to keep the
pg_data, pg_xlog, and pg_clog contents.
The main concern is that ext2 may not recover well from an improper
shutdown, such as a power loss.
My question is: what is the preferred filesystem to keep this data to be
Hello all,
I frequently find that TRUNCATE TABLE and CREATE OR REPLACE FUNCTION
are both very slow, taking 50 secs or more to complete. We have to run
both commands every minute, so this makes our application
non-functional. And it is not a slow deterioration over time.
Sometimes they run under a se
Hello,
After a VACUUM FULL I saw that the pg_attribute table's indexes haven't
been shrunk, as reported by a subsequent vacuum analyze. But the pages
corresponding to just the table have been reduced to 196 pages from
about 181557 pages. Are all system tables affected by this? How can
we reclaim thi
Tom Lane wrote:
Joe Maldonado <[EMAIL PROTECTED]> writes:
After a create or replace view, the new view definition is not being
used by plpgsql functions that use the view. Is this a known bug? Is
there a workaround for it?
Start a fresh backend session. The old query plan is pres
After a create or replace view, the new view definition is not being
used by plpgsql functions that use the view. Is this a known bug? Is
there a workaround for it?
For instance, selecting from afunc() still returns the old view's results.
create table c ( a int );
create or replace view a as select
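In the 7.x releases, plpgsql caches query plans for the life of the backend session, which is why the function keeps seeing the old view. The usual workaround is to run the query through EXECUTE so it is re-planned on every call. A sketch, reusing the view name `a` and function name `afunc()` from above (7.4-era quoted function body):

```sql
CREATE OR REPLACE FUNCTION afunc() RETURNS integer AS '
DECLARE
    r RECORD;
BEGIN
    -- EXECUTE plans the query at call time, so it always sees the
    -- current definition of view "a" instead of a plan cached the
    -- first time the function ran in this session.
    FOR r IN EXECUTE ''SELECT a FROM a'' LOOP
        RETURN r.a;
    END LOOP;
    RETURN NULL;
END;
' LANGUAGE plpgsql;
```

The other option, as noted in the reply, is simply to start a fresh backend session after redefining the view.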
Tom Lane wrote:
Joe Maldonado <[EMAIL PROTECTED]> writes:
Can concurrent updates/deletes slow down vacuum while it is in progress? I
mean to ask if vacuum would have to redo or stall its work because of the
updates/deletes. Is it even possible that it goes into a long loop while
such u
Can concurrent updates/deletes slow down vacuum while it is in progress? I
mean to ask if vacuum would have to redo or stall its work because of the
updates/deletes. Is it even possible that it goes into a long loop while
such updates occur?
The reason for my question is that I'm seeing vacuuming
Hello all,
I have a few somewhat simple questions:
Does the postmaster vacuum its internal (pg_*) tables?
If not,
what is the best way to vacuum them without having to vacuum the
entire db?
And how often is this recommended to be done?
Thanks,
-Joe
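For what it's worth, in the 7.x line there is no built-in autovacuum, so the system catalogs are only vacuumed when you vacuum them; they can be named individually just like user tables. A sketch of the catalogs that usually churn most under heavy temp-table and DDL activity (run as superuser; frequency depends entirely on your DDL rate):

```sql
-- Vacuum only the busiest system catalogs, leaving user tables alone:
VACUUM ANALYZE pg_catalog.pg_class;
VACUUM ANALYZE pg_catalog.pg_attribute;
VACUUM ANALYZE pg_catalog.pg_index;
```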
Hello,
How come within a create schema block I cannot create a sequence?
I have entered in:
CREATE SCHEMA joe
CREATE SEQUENCE joe_seq start 1
CREATE TABLE joe_table (id int, name varchar)
;
and I get a syntax error at SEQUENCE, though if it is just tables I do not get an error.
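As far as I know, the element list of CREATE SCHEMA in the 7.x releases only accepts CREATE TABLE, CREATE VIEW, CREATE INDEX, CREATE TRIGGER, and GRANT, so the sequence has to be created in a separate, schema-qualified statement. A sketch:

```sql
-- Tables may appear inside the CREATE SCHEMA element list:
CREATE SCHEMA joe
    CREATE TABLE joe_table (id int, name varchar);

-- CREATE SEQUENCE is not accepted there in 7.x, so issue it
-- afterwards, qualified with the schema name:
CREATE SEQUENCE joe.joe_seq START 1;
```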
-Joe
--
;[SQL] How to avoid "Out of memory" using aggregate functions?". Is this fixed? Why is the
postmaster exceeding its 102MB sort mem size when doing these queries and not paging out the data?
-Joe Maldonado
--
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
Hello all,
I am in the process of planning disk utilization for postgres and was
wondering what was the storage size was for btree, rtree and hash indexes.
Thanks,
-Joe
--
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
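For planning purposes, index size can be read back from the catalogs once data is loaded. On 7.x there is no pg_relation_size() (that arrived in later releases), but pg_class.relpages gives the size in blocks as of the last VACUUM/ANALYZE. A sketch, assuming the default 8 kB block size:

```sql
-- Approximate on-disk size of every index, in kilobytes:
SELECT c.relname,
       c.relpages,
       c.relpages * 8 AS approx_kb
FROM pg_class c
WHERE c.relkind = 'i'      -- 'i' = indexes only
ORDER BY c.relpages DESC;
```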
, Richard Huxton wrote:
> On Tuesday 15 Jul 2003 9:09 pm, Joe Maldonado wrote:
> > Hello,
> > Vacuum analyze is taking a really long time on a fairly small table and
> > during the time the vacuum is running all "select * from ;"
> > seems to hang until
Sorry, forgot to mention we are running postgres 7.2.3.
-Joe
On Tue, 2003-07-15 at 16:15, scott.marlowe wrote:
> On 15 Jul 2003, Joe Maldonado wrote:
>
> > Hello,
> > Vacuum analyze is taking a really long time on a fairly small table and
> > during the time the vacuu