NH Lau <[EMAIL PROTECTED]> writes:
> I would like to know
> whether we should vacuum the template1 and template0
> database as well, in addition to the normal database
> we created.
template0 no --- in fact you wouldn't even be able to connect to it to
do so (and I do not recommend overriding the
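The answer above is cut off, but template1 does accept connections, so it can be vacuumed like any other database. A minimal sketch (assuming a superuser connection and the standard template database names):

```sql
-- Connect to template1 first, e.g.:  psql template1
VACUUM;   -- vacuums every table in the current database (template1)

-- template0 is normally marked datallowconn = false, so you cannot
-- connect to it, and it does not need vacuuming.
```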
Dear List,
I am using postgresql 7.2.1. I would like to know
whether we should vacuum the template1 and template0
database as well, in addition to the normal database
we created.
Your reply is very much appreciated.
Regards
Lau NH
Hi all!
Another problem :
a web application working under apache2 and using postgresql 7.3.4
is getting stuck while most of the postmaster processes are in the state
SELECT waiting (according to ps ax).
after stopping the httpd the postmaster processes remain!
that eats a lot of CPU!
and i can see
If the record you are inserting does not depend on data in the record you
want to delete, why not simply use a trigger?
Before insert, delete the record with the same key!
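A minimal sketch of such a trigger in plpgsql, using the schedule table posted elsewhere in the thread (assumes plpgsql is installed in the database, and that studentid is the key, which is a guess; adjust to the real key):

```sql
-- Hypothetical helper: before each insert, remove any existing row
-- with the same key, so the new row replaces it instead of
-- duplicating it.  (7.x-style quoted function body.)
CREATE FUNCTION schedule_replace_row() RETURNS trigger AS '
BEGIN
    DELETE FROM schedule WHERE studentid = NEW.studentid;
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER schedule_replace
    BEFORE INSERT ON schedule
    FOR EACH ROW EXECUTE PROCEDURE schedule_replace_row();
```

Note that the trigger fires once per inserted row, so the DELETE inside it only helps if the key really identifies the rows you mean to replace.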
Michael Paesold wrote:
Tsirkin Evgeny wrote:
one more question, how did you test it?
I have tested this here. I don't really know if this is just the case with
Hi to all,
I have a database where the tables have
around 100 constraints (links to other tables) that either don't have a name
("") or have a name like "$1" or "$2". Now, I have a module which
is based on the same structure, but I get some query errors from a
"" constraint. I really don't know
Tsirkin Evgeny wrote:
> one more question, how did you test it?
>
> > I have tested this here. I don't really know if this is just the case with
> > Best Regards,
> > Michael Paesold
First I created your schedule table. Then I opened two psql sessions...
Session A                           Session B
one more question, how did you test it?
> I have tested this here. I don't really know if this is just the case with
> Best Regards,
> Michael Paesold
--
Evgeny.
---
You are great, Michael!
Thanks.
On Mon, 6 Sep 2004, Michael Paesold wrote:
> I wrote:
>
> > BEGIN;
> > SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
> >
> > DELETE FROM schedule WHERE studentid = ... ;
> > INSERT INTO schedule (studentid, ...) VALUES (... );
> > INSERT INTO schedule (studentid, ...) VALUES (... );
> >
> > COMMIT;
I wrote:
> BEGIN;
> SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
>
> DELETE FROM schedule WHERE studentid = ... ;
> INSERT INTO schedule (studentid, ...) VALUES (... );
> INSERT INTO schedule (studentid, ...) VALUES (... );
>
> COMMIT;
>
> If you do it as in the SQL code above, there is still a
Tsirkin Evgeny wrote:
> On Mon, 6 Sep 2004, Michael Paesold wrote:
> Doesn't the Serializable isolation level ensure that?
> what i thought is that while using this level i am getting
> the BEGIN and COMMIT to behave the same as the code you wrote!
> since the second concurrent transaction
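It doesn't, at least not in 7.x: PostgreSQL implements SERIALIZABLE as snapshot isolation, so neither transaction sees the other's uncommitted work. A sketch of the race (hypothetical key value; both sessions run concurrently):

```sql
-- Two concurrent SERIALIZABLE transactions; no unique constraint
-- exists and no row with studentid = 1 exists yet.
--
-- Session A:                        Session B:
-- BEGIN;                            BEGIN;
-- SET TRANSACTION ISOLATION         SET TRANSACTION ISOLATION
--     LEVEL SERIALIZABLE;               LEVEL SERIALIZABLE;
-- DELETE FROM schedule              DELETE FROM schedule
--     WHERE studentid = 1;              WHERE studentid = 1;
--     (deletes nothing)                 (deletes nothing, blocks on nothing)
-- INSERT INTO schedule              INSERT INTO schedule
--     (studentid) VALUES (1);           (studentid) VALUES (1);
-- COMMIT;                           COMMIT;
--
-- Result: two rows with studentid = 1.  Each DELETE found no row to
-- lock, so the isolation level never had a conflict to detect; only
-- a unique constraint can stop this.
```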
- Original Message -
From: "Tsirkin Evgeny" <[EMAIL PROTECTED]>
To: "Andrei Bintintan" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Monday, September 06, 2004 12:19 PM
Subject: Re: [ADMIN] duplicates
> yes i understand that i can create a primary key/unique etc...
> however my question is that i have to understand if and why
> i got duplicate rows inserted.
yes i understand that i can create a primary key/unique etc...
however my question is that i have to understand if and why
i got duplicate rows inserted.
so here is the picture:
an application is deleting rows and, right after that, inserting
new ones, some of which are the same as the old ones, and i
- Original Message -
From: "Tsirkin Evgeny" <[EMAIL PROTECTED]>
To: "Andrei Bintintan" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Monday, September 06, 2004 10:57 AM
Subject: Re: [ADMIN] duplicates
> On Mon, 6 Sep 2004, Andrei Bintintan wrote:
>
> > If you still have problems, please post some queries, be more specific.
On Mon, 6 Sep 2004, Andrei Bintintan wrote:
> If you still have problems, please post some queries, be more specific.
>
> Best regards,
> Andy.
>
Ok i will try:
CREATE TABLE schedule (
    studentid    decimal(9),
    groupid      decimal(10),
    maslulsignid decimal(7),
    tfusot
I am not sure that I understand your problem clearly. Are you sure that your
queries are written correctly?
For duplicates you can create a unique index, so this will avoid any
duplicates in your table:
CREATE UNIQUE INDEX table_column_uniqueidx ON table(column);
If the rows are disappearing, please
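Applied to the schedule table posted earlier (assuming studentid plus groupid is the intended key, which is a guess):

```sql
CREATE UNIQUE INDEX schedule_student_group_uniqueidx
    ON schedule (studentid, groupid);

-- A second INSERT with the same (studentid, groupid) pair now fails
-- with a duplicate-key error instead of silently creating a duplicate.
```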