Re: [ADMIN] "DELETE FROM" protection

2004-02-20 Thread Matt Clark
BEGIN;
DELETE FROM mytable;
!!! OOOPS 
ROLLBACK;


> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] Behalf Of Jeremy Smith
> Sent: 20 February 2004 06:06
> To: [EMAIL PROTECTED]
> Subject: [ADMIN] "DELETE FROM" protection
> 
> 
> 
> This may be an all-time idiotic question, but when I used phpmysql, when I
> would type a "DELETE FROM" query into the SQL window, it would make me
> confirm it before it would go through.  I don't think that in all of the
> presumably thousands of times I used it I ever canceled out of the
> statement, but I always liked that it was there.
> 
> So now with pgsql, when I am typing "DELETE FROM" until I get to the
> "WHERE" part of the statement, I get a little nervous because I know hitting
> Enter by mistake will wipe out that table.  Of course, I have backups, but
> it is a live site with a lot of traffic and I would hate to have to shut
> things down while I restored the DB.
> 
> Anyway, this may all seem silly, but is there a setting in pgsql to do this?
> 
> Thanks,
> Jeremy
> 
> 
> ---(end of broadcast)---
> TIP 7: don't forget to increase your free space map settings




Re: [ADMIN] "DELETE FROM" protection

2004-02-20 Thread Dave Ewart
On Friday, 20.02.2004 at 10:12 +, Matt Clark wrote:

> > So now with pgsql, when I am typing "DELETE FROM" until I get to
> > the "WHERE" part of the statement, I get a little nervous because I
> > know hitting Enter by mistake will wipe out that table.  [...]

How about typing the "WHERE" part of the statement first, then
'left-arrowing' back to the start of the statement and do "DELETE FROM
..." *last*?

Dave.
-- 
Dave Ewart
[EMAIL PROTECTED]
Computing Manager, Epidemiology Unit, Oxford
Cancer Research UK
PGP: CC70 1883 BD92 E665 B840 118B 6E94 2CFD 694D E370




Re: [ADMIN] ANALYZE crashes randomly

2004-02-20 Thread Olivier Hubaut
Tom Lane wrote:

Olivier Hubaut <[EMAIL PROTECTED]> writes:

PANIC:  could not open transaction-commit log directory 
(/usr/local/pgsql/annot/pg_clog): Too many open files


Try reducing max_files_per_process (in postgresql.conf) and/or
increasing the kernel's limit on number of open files (I think
you set this via sysctl in /etc/rc, but it may vary depending on
OS X version).
			regards, tom lane

Thank you, it works fine by reducing the max_files_per_process to 200 
(instead of the default '1000' value).

But it's amazing to me that I had to reduce it so much, as the postmaster 
is almost the only application running on this server. There are only 3 or 4 
simultaneous connections, and the kernel max files is set to 12500!

Is that behavior normal or not?
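For reference, a sketch of the relevant knob from psql (the value 200 is just what happened to work here, not a general recommendation):

```sql
-- Check the per-backend file budget currently in effect
SHOW max_files_per_process;

-- To change it, edit postgresql.conf and restart the postmaster:
--     max_files_per_process = 200
```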

--
Signature under maintenance,
please wait...


Re: [ADMIN] "DELETE FROM" protection

2004-02-20 Thread Yuji Shinozaki

I've gotten myself into the habit of always writing out a

SELECT ... FROM ... WHERE ...;

first, and then command-line editing it to

DELETE FROM ... WHERE ...;

Putting it in a transaction (BEGIN, COMMIT or ROLLBACK) is probably the
best practice.
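Combining the two habits, a session might look like this (the table name and WHERE clause are made up for illustration):

```sql
BEGIN;

-- Look before you leap: confirm how many rows the predicate matches
SELECT COUNT(*) FROM mytable WHERE id = 42;

DELETE FROM mytable WHERE id = 42;

-- If the count (or anything else) looked wrong, ROLLBACK instead
COMMIT;
```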

yuji



On Fri, 20 Feb 2004, Dave Ewart wrote:

> On Friday, 20.02.2004 at 10:12 +, Matt Clark wrote:
>
> > > So now with pgsql, when I am typing "DELETE FROM" until I get to
> > > the "WHERE" part of the statement, I get a little nervous because I
> > > know hitting Enter by mistake will wipe out that table.  [...]
>
> How about typing the "WHERE" part of the statement first, then
> 'left-arrowing' back to the start of the statement and do "DELETE FROM
> ..." *last*?
>
> Dave.
> --
> Dave Ewart
> [EMAIL PROTECTED]
> Computing Manager, Epidemiology Unit, Oxford
> Cancer Research UK
> PGP: CC70 1883 BD92 E665 B840 118B 6E94 2CFD 694D E370
>
>

Yuji Shinozaki  Computer Systems Senior Engineer
[EMAIL PROTECTED]   Advanced Technologies Group
(434)924-7171   Information Technology & Communication
http://www.people.virginia.edu/~ys2nUniversity of Virginia




Re: [ADMIN] ANALYZE crashes randomly

2004-02-20 Thread Tom Lane
Olivier Hubaut <[EMAIL PROTECTED]> writes:
> But it's amazing to me that i had to reduce it so much as the postmaster 
> is almost the only application running on this server. There only 3 or 4 
> simultaneous connections and the kernel max files is set to 12500!

But how many open files are needed by the rest of an OS X system?
(I have no idea, but you could probably find out using lsof or a
similar tool.)

regards, tom lane



Re: [ADMIN] ANALYZE crashes randomly

2004-02-20 Thread Tom Lane
Olivier Hubaut <[EMAIL PROTECTED]> writes:
>>> PANIC:  could not open transaction-commit log directory 
>>> (/usr/local/pgsql/annot/pg_clog): Too many open files

> But it's amazing to me that i had to reduce it so much as the postmaster 
> is almost the only application running on this server.

Oh, wait, I bet you are running into the per-process open file limit not
the kernel limit.  The per-process limit is usually pretty low on
Darwin, and checking the code I see

    xldir = opendir(XLogDir);
    if (xldir == NULL)
        ereport(PANIC,
                (errcode_for_file_access(),
                 errmsg("could not open transaction log directory \"%s\": %m",
                        XLogDir)));

That is, we don't have a fallback path to recover when the error is
ENFILE or EMFILE.  Looks to me like all the opendir() calls in the
backend ought to be handled by fd.c with code to release other open
files at need.

In the meantime, though, it seems fishy that Postgres wouldn't have
detected and allowed for the per-process file limit.  Which version of
OS X did you say you were using?

regards, tom lane



Re: [ADMIN] ANALYZE crashes randomly

2004-02-20 Thread Olivier Hubaut
Tom Lane wrote:

Olivier Hubaut <[EMAIL PROTECTED]> writes:

PANIC:  could not open transaction-commit log directory 
(/usr/local/pgsql/annot/pg_clog): Too many open files


But it's amazing to me that i had to reduce it so much as the postmaster 
is almost the only application running on this server.


Oh, wait, I bet you are running into the per-process open file limit not
the kernel limit.  The per-process limit is usually pretty low on
Darwin, and checking the code I see
    xldir = opendir(XLogDir);
    if (xldir == NULL)
        ereport(PANIC,
                (errcode_for_file_access(),
                 errmsg("could not open transaction log directory \"%s\": %m",
                        XLogDir)));
That is, we don't have a fallback path to recover when the error is
ENFILE or EMFILE.  Looks to me like all the opendir() calls in the
backend ought to be handled by fd.c with code to release other open
files at need.
In the meantime, though, it seems fishy that Postgres wouldn't have
detected and allowed for the per-process file limit.  Which version of
OS X did you say you were using?
			regards, tom lane


Once again, thanks for your help.

For the moment we are running OS X 10.2.8, but we'll be moving to OS X 10.3 
soon.

--
Signature under maintenance,
please wait...


[ADMIN] Panic involving a LC_COLLATE issue

2004-02-20 Thread Marco Gaiani
Hi all,

I have experienced a distressing problem lately with my hosting provider 
(they run RedHat): they had to downgrade their version of PostgreSQL 
from 7.3.4 to 7.3.3 due to 7.3.4 "crashing" constantly. Instead of doing 
a pg_dump of my databases they copied the data directory somewhere else 
as data_old and went on with the downgrade of Postgres. They have not 
been able to restore my databases but sent me a copy of the data directory.

I have tried to mount it with Mandrake 9.2 and Postgres 7.3.3 but I get 
this error:

DEBUG:  FindExec: searching PATH ...
DEBUG:  ValidateBinary: can't stat "/bin/postgres"
DEBUG:  FindExec: found "/usr/bin/postgres" using PATH
DEBUG:  invoking IpcMemoryCreate(size=983040)
PANIC:  The database cluster was initialized with LC_COLLATE 
'en_US.iso885915',
   which is not recognized by setlocale().
   It looks like you need to initdb.
Aborted

Any ideas on how to solve this? Desperately yours,

--
Marco Gaiani
Unidad de Promoción y Relaciones
FUNDACITE ARAGUA
mailto: [EMAIL PROTECTED]


Re: [ADMIN] Panic involving a LC_COLLATE issue

2004-02-20 Thread Andrew Sullivan
On Fri, Feb 20, 2004 at 01:45:40PM -0400, Marco Gaiani wrote:
> Hi all,
> 
> I have experienced a distressing problem lately with my hosting provider 
> (they run RedHat), they had to downgrade their version of Postgresql 
> from 7.3.4 to 7.3.3 due to 7.3.4 "crashing" constantly. Instead of doing 

First, this sounds pretty dodgy to me.  I have a feeling that they're
overlooking some other problem.  But that won't help you.

> a pg_dump of my databases they copied the data directory somewhere else 
> as data_old and went on with the downgrade of Postgres. They have not 
> been able to restore my databases but sent me a copy of the data directory.

This is OK -- the on-disk format is compatible across minor releases, so a
7.3.4 data tree should work with any 7.3.x.

> 'en_US.iso885915',
>which is not recognized by setlocale().
>It looks like you need to initdb.
> Aborted

You need the locale support offered by the Red Hat system.  It seems you
need ISO 8859-15.  My bet is that Mandrake's locale support doesn't
include it, you don't have the right libs, or the binary you've
installed wasn't compiled with the right support.  Can you actually
set your LANG to iso885915?

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
I remember when computers were frustrating because they *did* exactly what 
you told them to.  That actually seems sort of quaint now.
--J.D. Baldwin



Re: [ADMIN] "DELETE FROM" protection

2004-02-20 Thread Bruce Momjian
Yuji Shinozaki wrote:
> 
> I've gotten myself into the habit of always writing out a
> 
>   SELECT ... FROM ... WHERE ...;
> 
> first, and then command-line editing it to
> 
>   DELETE FROM ... WHERE ...;
> 
> Putting it in a transaction (BEGIN, COMMIT or ROLLBACK) is probably the
> best practice.

I used to do this with Informix before a DELETE:

>   SELECT COUNT(*) FROM ... WHERE ...;

and Informix had that "Are you sure" check in dbaccess too, but after I
did the COUNT(*), the prompt was just annoying.

Also, what interfaces allow you to just press ENTER to send a command? 
With psql, you have to terminate it with a semicolon or nothing happens.

I think there is justification for an "Are you sure" only if a single
keystroke sends the command.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



[ADMIN] PostgreSQL hogging resources?

2004-02-20 Thread Jeremy Smith

I have newly installed PostgreSQL onto my server, the server's main function
is to serve up a fantasy football site that has a tremendous number of
queries per page.  Right now with very low traffic I am seeing a server load
of 2.0+.  That got me a little concerned, so I looked at "top" and noticed
that postgres is taking anywhere from 60 - 100 percent of my CPU at any
given time.  There are also 116 sleeping processes out of 123.  This all
seems very bad, do you guys have any idea what might be causing it or how it
can be addressed?  How do I go about cleaning out the sleeping processes?

Thanks a lot,
Jeremy




Re: [ADMIN] PostgreSQL hogging resources?

2004-02-20 Thread Lamar Owen
On Friday 20 February 2004 01:39 pm, Jeremy Smith wrote:
> I have newly installed PostgreSQL onto my server, the server's main
> function is to serve up a fantasy football site that has a tremendous
> number of queries per page.  Right now with very low traffic I am seeing a
> server load of 2.0+.

Not too bad.

>  That got me a little concerned, so I looked at "top"
> and noticed that postgres is taking anywhere from 60 - 100 percent of my
> CPU at any given time.

That would be good.

>  There are also 116 sleeping processes out of 123. 
> This all seems very bad, do you guys have any idea what might be causing it
> or how it can be addressed?  

>How do I go about cleaning out the sleeping
> processes?

You don't.  Sleeping processes is a normal thing with a multitasking OS.

How fast does the page load?  That would be the big question.  Run apache 
bench (ab) against the page and see how many pages per second you can get.  A 
load of 2.0, an average CPU of 60-100%, and 7 running processes is not bad at 
all.  It just means your server is working.
-- 
Lamar Owen
Director of Information Technology
Pisgah Astronomical Research Institute
1 PARI Drive
Rosman, NC  28772
(828)862-5554
www.pari.edu




Re: [ADMIN] PostgreSQL hogging resources?

2004-02-20 Thread scott.marlowe
On Fri, 20 Feb 2004, Jeremy Smith wrote:

> 
> I have newly installed PostgreSQL onto my server, the server's main function
> is to serve up a fantasy football site that has a tremendous number of
> queries per page.  Right now with very low traffic I am seeing a server load
> of 2.0+.  That got me a little concerned, so I looked at "top" and noticed
> that postgres is taking anywhere from 60 - 100 percent of my CPU at any
> given time.  There are also 116 sleeping processes out of 123.  This all
> seems very bad, do you guys have any idea what might be causing it or how it
> can be addressed?  How do I go about cleaning out the sleeping processes?

Don't worry about sleeping processes, you should have a good hundred 
sleeping on any unix box.  My workstation has 152, my server has 173, and 
the response time on both is way sub second.

Now, about postgresql, what is it doing when it's chewing up 100% cpu? 
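One way to answer that from psql, assuming the stats collector is running with stats_command_string enabled (column names as in the 7.x-era statistics views):

```sql
-- Show what each backend is currently executing
SELECT procpid, usename, current_query
FROM pg_stat_activity;
```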




Re: [ADMIN] PostgreSQL hogging resources?

2004-02-20 Thread Mitch Pirtle
Jeremy Smith wrote:

I have newly installed PostgreSQL onto my server, the server's main function
is to serve up a fantasy football site that has a tremendous number of
queries per page.  Right now with very low traffic I am seeing a server load
of 2.0+.  That got me a little concerned, so I looked at "top" and noticed
that postgres is taking anywhere from 60 - 100 percent of my CPU at any
given time.  There are also 116 sleeping processes out of 123.  This all
seems very bad, do you guys have any idea what might be causing it or how it
can be addressed?  How do I go about cleaning out the sleeping processes?
I agree with Lamar's comments, as well as wondering if it is really 
needed to run a 'tremendous number of queries' for each page view...  
Some quick solutions could be to determine if you could:

1) make changes to your design to require fewer hits to the database per 
page,
2) make a view that provided the information without running so many 
separate queries, and/or
3) consider using a caching library like ADOdb to limit the number of 
trips to your database

Any combination of these three could significantly reduce the load on 
your DB box, as well as provide some huge performance gains.  How hard 
is your webserver working?  Are they running on the same box?
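As a hypothetical illustration of suggestion (2), several per-page lookups could be folded into one view (table and column names are invented here):

```sql
CREATE VIEW team_summary AS
SELECT t.team_id,
       t.name,
       COUNT(p.player_id) AS roster_size
FROM teams t
LEFT JOIN players p ON p.team_id = t.team_id
GROUP BY t.team_id, t.name;

-- One query per page instead of several:
SELECT * FROM team_summary WHERE team_id = 7;
```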

-- Mitch



Re: [ADMIN] PostgreSQL hogging resources?

2004-02-20 Thread scott.marlowe
On Fri, 20 Feb 2004, Lamar Owen wrote:

> How fast does the page load?  That would be the big question.  Run apache 
> bench (ab) against the page and see how many pages per second you can get.  A 
> load of 2.0, an average CPU of 60-100%, and 7 running processes is not bad at 
> all.  It just means your server is working.

That really depends on the server.  If it's a PII-266 then it's about 
right; if it's a dual AMD Athlon 2800 with 2 gigs of ram, something's 
horribly wrong.  So, what kind of hardware is this, Jer?




[ADMIN] database clustering

2004-02-20 Thread Joe Maldonado
Hello,
I am planning a postgres database cluster.  Each database will be on its own machine, 
and I wanted to enable one as the frontend to the rest, so that applications do not need 
to know which database contains what data.  Is there a way to configure postgres to 
communicate with other dbs via SQL so as to be transparent to the user?  I know this 
can be done in Oracle, though I'd rather not go through that experience if I do not 
need to.
Thanks,

-Joe

--
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/


[ADMIN] User Privileges on large object

2004-02-20 Thread Eric
I would like to ask how to set user privileges for accessing large
objects. I find that the documentation on the "GRANT" command does not say
much about this.

I have been trying to use Tcl to create large objects in the database.
However, I can only create empty large objects; I fail to add content to
any large object created, because "pg_lo_open" always fails to return a
valid file descriptor.

Therefore, I wonder if this is related to user privileges.

Thank you.



[ADMIN] Index called with Union but not with OR clause

2004-02-20 Thread V Chitra



Hi All,

I have a select statement:

select * from v_func_actual_costs
where parent_project='10478' or proj_pk = '10478'

Both fields, parent_project and proj_pk, have indexes on them, but when I 
ran EXPLAIN on this statement I found that none of the indexes are being 
used. However, if I split it into two separate statements and combine them 
with UNION ALL, the indexes are used. The select statement in this case is:

select * from ct_admin.v_func_actual_costs
where parent_project='10478'
union all
select * from ct_admin.v_func_actual_costs
where proj_pk = '10478'

Can anybody help me find the reason for this? This is just part of a larger 
query, so I cannot use the UNION ALL form.

Thanks in advance,
Chitra


Re: [ADMIN] PostgreSQL hogging resources?

2004-02-20 Thread Jeremy Smith
I agree that my site is a bit bloated; it has more than 2500 total queries,
but it is a more complex application than might be readily apparent.  For
the curious, this is my site: http://www.xpertleagues.com.
But the issue is that with mysql, at my peak levels last year I had a server
load of 30+ (I know this is horrendous, I am looking into either upgrading
my P4 2.4gig 1gig ram server this year, or distributing across more than one
server) but the site itself never performed as slowly as it is now.  And
amazingly considering the server load last year, the server never crashed.
But now I am actually getting complaints on the lagtime, and I only have one
league actively drafting, last year I had 70+ at peak.

I will look into some of the suggestions you have made, the problem is that
I can't do large scale optimization at the moment because I am still adding
features to the site.  I just wonder if the best mode of attack would be
switching back to mysql until I have added all of the necessary features,
optimizing the queries and code there, and then switching back to pg at a
later date.

Jeremy



-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Mitch Pirtle
Sent: Friday, February 20, 2004 1:57 PM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: [ADMIN] PostgreSQL hogging resources?


Jeremy Smith wrote:

>I have newly installed PostgreSQL onto my server, the server's main function
>is to serve up a fantasy football site that has a tremendous number of
>queries per page.  Right now with very low traffic I am seeing a server load
>of 2.0+.  That got me a little concerned, so I looked at "top" and noticed
>that postgres is taking anywhere from 60 - 100 percent of my CPU at any
>given time.  There are also 116 sleeping processes out of 123.  This all
>seems very bad, do you guys have any idea what might be causing it or how it
>can be addressed?  How do I go about cleaning out the sleeping processes?
>
>
I agree with Lamar's comments, as well as wondering if it is really
needed to run a 'tremendous number of queries' for each page view...
Some quick solutions could be to determine if you could:

1) make changes to your design to require fewer hits to the database per
page,
2) make a view that provided the information without running so many
separate queries, and/or
3) consider using a caching library like ADOdb to limit the number of
trips to your database

Any combination of these three could significantly reduce the load on
your DB box, as well as provide some huge performance gains.  How hard
is your webserver working?  Are they running on the same box?

-- Mitch







[ADMIN] problems with regressive tests on rh8.0

2004-02-20 Thread Andrew Kornilov
I have a Linux box running Red Hat Linux 8.0 with the original 2.4.20 kernel.
On this box I successfully built PostgreSQL 7.3.4.
After upgrading the kernel to 2.4.24 I decided to upgrade PostgreSQL as well.
I built 7.4.1 without any difficulties, but it fails all of the
regression tests.
I tried building 7.3.4 again, with the same result as with 7.4.1.
Is this a normal state of affairs?
Is "make install" safe with failed tests?

-- 
Andrew Kornilov




Re: [ADMIN] pg_user does not exist

2004-02-20 Thread Bethany A . Benzur
Argg... I guess I won't be rebuilding template1 from template0 seeing 
as I have no template0!! :(

template1=> UPDATE pg_database SET datallowconn = TRUE
template1-> WHERE datname = 'template0';
UPDATE 0
I'm running out of ideas here... how will I get my databases out and 
back in without pg_dump?

On Feb 18, 2004, at 9:10 AM, Bethany A.Benzur wrote:

> Hmmm ... I'm wondering about version mismatches between your psql and
> your backend.  Does "psql -V" agree with the backend version?
[EMAIL PROTECTED] ]$ psql -V
psql (PostgreSQL) 7.2.4
> However, I am not totally convinced that that's where your problem is.
> Do you see failures when you are connected to other databases besides
> template1?
Yes, the same "pg_user does not exist" occurs with each database 
(hence my not being able to dump + restore). I assume that is the case 
because each database is based off of template1 when it is created.

I will try reconstructing template1 from template0 - thanks for that 
link!

this a great start - thanks!
B.
--
Bethany A. Benzur
Computer Support Specialist IV
School of Literature, Communication, and Culture at Georgia Institute 
of Technology
phone: 404.894.7632
helpdesk: [EMAIL PROTECTED]






Re: [ADMIN] Mac OS 10.3 Panther make questions

2004-02-20 Thread Perez
In article <[EMAIL PROTECTED]>,
 [EMAIL PROTECTED] (Tom Lane) wrote:

> Alternatively you can just configure postgres --without-readline.
> psql is a lot less pleasant to use without it, but if you don't
> use psql much you may not care.
> 
>   regards, tom lane

That's what I did.  Only occasionally do I miss it.  The cut&paste
in the Terminal suffices for most things.



[ADMIN] problem with install config

2004-02-20 Thread Wendell
I recently installed (from source) and set up PostgreSQL 7.4.1 
successfully on a PC.

However, when I attempted a similar install on another PC (with the same 
Debian OS), BUT with the "--with-java" option, the PostgreSQL "./configure" 
could not find the ANT build tool, even though ANT (version 1.6.0) is 
installed and worked for the 'Apache Cocoon-lenya' CMS application.

The configure script in PostgreSQL was looking for "ant.sh", which is 
absent in this version, which has only the 'ant' and 'AntRun' commands.

Can you help?

Wendell Anderson
[EMAIL PROTECTED]




Re: [ADMIN] database clustering

2004-02-20 Thread Mitch Pirtle
Joe Maldonado wrote:

Hello,
I am planning a postgres database cluster.  Each database will be on 
it's own machine and I wanted to enable one as the frontend to the 
rest.  This is so that applications do not need to know which database 
contains what data.  Is there a way to configure postgres to 
communicate to other dbs via sql so as to be transparent to the user?  
I know this can be done in Oracle though I'd rather not go through 
that experience if I do not need to.
Perhaps this is something that would benefit from the use of SQL Relay:

   http://sqlrelay.sourceforge.net/

Very neat ideas, and might be another approach for your problem.

-- Mitch



Re: [ADMIN] "DELETE FROM" protection

2004-02-20 Thread Jeremy Smith
Great point Bruce,

I hadn't really thought of the semi-colon as a safety mechanism, but I guess
it is.

Jeremy

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Bruce Momjian
Sent: Friday, February 20, 2004 1:27 PM
To: Yuji Shinozaki
Cc: Dave Ewart; [EMAIL PROTECTED]
Subject: Re: [ADMIN] "DELETE FROM" protection


Yuji Shinozaki wrote:
>
> I've gotten myself into the habit of always writing out a
>
>   SELECT ... FROM ... WHERE ...;
>
> first, and then command-line editing it to
>
>   DELETE FROM ... WHERE ...;
>
> Putting it in a transaction (BEGIN, COMMIT or ROLLBACK) is probably the
> best practice.

I used to do this with Informix before a DELETE:

>   SELECT COUNT(*) FROM ... WHERE ...;

and Informix had that "Are you sure" check in dbaccess too, but after I
did the COUNT(*), the prompt was just annoying.

Also, what interfaces allow you to just press ENTER to send a command?
With psql, you have to terminate it with a semicolon or nothing happens.

I think there is justification for an "Are you sure" only if a single
keystroke sends the command.

--
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073






Re: [ADMIN] PostgreSQL hogging resources?

2004-02-20 Thread Mitch Pirtle
Jeremy Smith wrote:

I will look into some of the suggestions you have made, the problem is that
I can't do large scale optimization at the moment because I am still adding
features to the site.  I just wonder if the best mode of attack would be
switching back to mysql until I have added all of the necessary features,
optimizing the queries and code there, and then switching back to pg at a
later date.
When you switch to pg, you will be able to move some/lots? of your code 
into the database as views, stored procedures, triggers etc...  So keep 
that in mind while working in MySQL, as you will definitely need a 
different approach.

I inherited a site that had very database-hungry pages, and started 
using stored procs and triggers to automate some of the jobs (like 
updates and such) in the database, instead of making the webserver 
manually send the instructions across the wire.  If you plan on taking 
this approach then you should get your app into pg sooner than later...
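A minimal sketch of that idea, with hypothetical table and column names (7.x-era plpgsql, which requires the function body in single quotes and the plpgsql language to be installed):

```sql
-- Keep a denormalized counter up to date inside the database,
-- instead of issuing a second UPDATE from the webserver
CREATE FUNCTION bump_roster_count() RETURNS trigger AS '
BEGIN
    UPDATE teams SET roster_size = roster_size + 1
     WHERE team_id = NEW.team_id;
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER players_after_insert
    AFTER INSERT ON players
    FOR EACH ROW EXECUTE PROCEDURE bump_roster_count();
```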

-- Mitch, wondering about 12 packs?



Re: [ADMIN] WAL logs and segment files

2004-02-20 Thread Bruce Momjian
Eduardo Leva wrote:
> Hi, guys... I'm reading the version 7.3.4 documentation and I found this: "
> 
> "WAL logs are stored in the directory $PGDATA/pg_xlog, as a set of segment
>  files, each 16 MB in size. Each segment is divided into 8 kB pages. The
>  log record headers are described in access/xlog.h; record content is
>  dependent on the type of event that is being logged. Segment files are
>  given ever-increasing numbers as names, starting at . The
>  numbers do not wrap, at present, but it should take a very long time to
>  exhaust the available stock of numbers. "
> 
> in item 12.2 Implementation. The question is: How do I solve this 
> situation, the remarked situation? The answer is not in the docs. Thanks.

What situation do you need to solve?  The wrapping?  The documentation
says "a very long time", but it is more of a joke.  The number would have
to wrap around a huge range, and you would have updated your PostgreSQL
version long before it ever wrapped.

The number is effectively an unsigned 64-bit quantity, giving 2^64
possible segment names:

	18446744073709551616

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



Re: [ADMIN] problems with regressive tests on rh8.0

2004-02-20 Thread V i s h a l Kashyap @ [Sai Hertz And Control Systems]
Dear Andrew Kornilov,

> I built 7.4.1 without any difficulties, but it fails all of the
> regression tests

Check your disk space, especially the partition on which PostgreSQL is 
being built. Re-run your regression tests and you may get a "93 passed" 
result.

Do let me know if this helps.

--
Best Regards,
Vishal Kashyap
Director / Lead Developer,
Sai Hertz And Control Systems Pvt Ltd,
http://saihertz.rediffblogs.com
Jabber IM: [EMAIL PROTECTED]
ICQ :  264360076
---
You Guys start coding I will take care of what 
this customer needs.
---
I am usually called as Vishal Kashyap
and my Girlfriend calls me Vishal CASH UP.
Because everyone loves me as Vishal Kashyap
and my Girlfriend loves me as CASH.

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [ADMIN] WAL logs and segment files

2004-02-20 Thread kaolin fire
If you know where I can get a 1.5e+11 terabyte drive, I'd love to hear 
it. ;)

-kaolin fire
-http://erif.org/code/fallingup/
On Feb 20, 2004, at 12:29 PM, Bruce Momjian wrote:

> Eduardo Leva wrote:
> > Hi, guys... I'm reading the version 7.3.4 documentation and I found this: "
> >
> > "WAL logs are stored in the directory $PGDATA/pg_xlog, as a set of segment
> >  files, each 16 MB in size. Each segment is divided into 8 kB pages. The
> >  log record headers are described in access/xlog.h; record content is
> >  dependent on the type of event that is being logged. Segment files are
> >  given ever-increasing numbers as names, starting at . The
> >  numbers do not wrap, at present, but it should take a very long time to
> >  exhaust the available stock of numbers. "
> >
> > in item 12.2 Implementation. The question is: How do I solve this
> > situation, the remarked situation? The answer is not in the docs. Thanks.
>
> What situation do you need to solve?  The wrapping?  The documentation
> says "a very long time", but that is something of a joke.  The number
> would have to grow enormously before wrapping around, and you would have
> upgraded your PostgreSQL version long before it ever wrapped.
>
> The number is actually an int8; a full 64-bit counter runs through
> 2^64 = 18446744073709551616 values before it could wrap.
>
> --
>   Bruce Momjian|  http://candle.pha.pa.us
>   [EMAIL PROTECTED]   |  (610) 359-1001
>   +  If your life is a hard drive, |  13 Roberts Road
>   +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073
>
> ---(end of broadcast)---
> TIP 9: the planner will ignore your desire to choose an index scan if your
>   joining column's datatypes do not match


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [ADMIN] PosgreSQL hogging resources?

2004-02-20 Thread scott.marlowe
Are you seeing problems such as crashing backends showing up in
PostgreSQL?  I'm wondering if you have bad memory or something like that.

In my experience, Linux/apache/php/postgresql never crashes, it just goes 
unresponsive when you get into severe overload.

Is your database vacuumed / analyzed often?

Do you have indexes that are being used?
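Both checks can be run straight from psql (a sketch only; `mytable` and
the `user_id` predicate are hypothetical placeholders for your own schema):

```sql
-- Reclaim dead rows and refresh the planner's statistics:
VACUUM ANALYZE mytable;

-- Verify the planner actually uses an index for a hot query;
-- look for "Index Scan" rather than "Seq Scan" in the output:
EXPLAIN ANALYZE SELECT * FROM mytable WHERE user_id = 42;
```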

On Fri, 20 Feb 2004, Jeremy Smith wrote:

> I agree that my site is a bit bloated, it has more than 2500 total queries,
> but it is a bit more complex an application than might be readily
> apparent.  For the curious, this is my site: http://www.xpertleagues.com.
> But the issue is that with mysql, at my peak levels last year I had a server
> load of 30+ (I know this is horrendous, I am looking into either upgrading
> my P4 2.4gig 1gig ram server this year, or distributing across more than one
> server) but the site itself never performed as slowly as it is now.  And
> amazingly considering the server load last year, the server never crashed.
> But now I am actually getting complaints on the lagtime, and I only have one
> league actively drafting, last year I had 70+ at peak.
> 
> I will look into some of the suggestions you have made, the problem is that
> I can't do large scale optimization at the moment because I am still adding
> features to the site.  I just wonder if the best mode of attack would be
> switching back to mysql until I have added all of the necessary features,
> optimizing the queries and code there, and then switching back to pg at a
> later date.
> 
> Jeremy
> 
> 
> 
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] Behalf Of Mitch Pirtle
> Sent: Friday, February 20, 2004 1:57 PM
> To: [EMAIL PROTECTED]
> Cc: [EMAIL PROTECTED]
> Subject: Re: [ADMIN] PosgreSQL hogging resources?
> 
> 
> Jeremy Smith wrote:
> 
> > I have newly installed PostgreSQL onto my server, the server's main
> > function is to serve up a fantasy football site that has a tremendous
> > number of queries per page.  Right now with very low traffic I am
> > seeing a server load of 2.0+.  That got me a little concerned, so I
> > looked at "top" and noticed that postgres is taking anywhere from 60 -
> > 100 percent of my CPU at any given time.  There are also 116 sleeping
> > processes out of 123.  This all seems very bad, do you guys have any
> > idea what might be causing it or how it can be addressed?  How do I go
> > about cleaning out the sleeping processes?
> >
> I agree with Lamar's comments, as well as wondering if it is really
> needed to run a 'tremendous number of queries' for each page view...
> Some quick solutions could be to determine if you could:
> 
> 1) make changes to your design to require fewer hits to the database per
> page,
> 2) make a view that provided the information without running so many
> separate queries, and/or
> 3) consider using a caching library like ADOdb to limit the number of
> trips to your database
> 
> Any combination of these three could significantly reduce the load on
> your DB box, as well as provide some huge performance gains.  How hard
> is your webserver working?  Are they running on the same box?
> 
> -- Mitch
> 
> 
> ---(end of broadcast)---
> TIP 5: Have you checked our extensive FAQ?
> 
>http://www.postgresql.org/docs/faqs/FAQ.html
> 
> 
> 
> ---(end of broadcast)---
> TIP 4: Don't 'kill -9' the postmaster
> 


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [ADMIN] WAL logs and segment files

2004-02-20 Thread scott
Take me off your list! You have the wrong scott!
- Original Message - 
From: "Bruce Momjian" <[EMAIL PROTECTED]>
To: "Eduardo Leva" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Friday, February 20, 2004 12:29 PM
Subject: Re: [ADMIN] WAL logs and segment files


> Eduardo Leva wrote:
> > Hi, guys... I'm reading the version 7.3.4 documentation and I found this: "
> >
> > "WAL logs are stored in the directory $PGDATA/pg_xlog, as a set of segment
> >  files, each 16 MB in size. Each segment is divided into 8 kB pages. The
> >  log record headers are described in access/xlog.h; record content is
> >  dependent on the type of event that is being logged. Segment files are
> >  given ever-increasing numbers as names, starting at . The
> >  numbers do not wrap, at present, but it should take a very long time to
> >  exhaust the available stock of numbers. "
> >
> > in item 12.2 Implementation. The question is: How do I solve this
> > situation, the remarked situation? The answer is not in the docs. Thanks.
>
> What situation do you need to solve?  The wrapping?  The documentation
> says "a very long time", but that is something of a joke.  The number
> would have to grow enormously before wrapping around, and you would have
> upgraded your PostgreSQL version long before it ever wrapped.
>
> The number is actually an int8; a full 64-bit counter runs through
> 2^64 = 18446744073709551616 values before it could wrap.
>
> -- 
>   Bruce Momjian|  http://candle.pha.pa.us
>   [EMAIL PROTECTED]   |  (610) 359-1001
>   +  If your life is a hard drive, |  13 Roberts Road
>   +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073
>
> ---(end of broadcast)---
> TIP 9: the planner will ignore your desire to choose an index scan if your
>   joining column's datatypes do not match


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [ADMIN] Index called with Union but not with OR clause

2004-02-20 Thread Bruno Wolff III
This discussion really belongs on the performance list and I am copying
that list with mail-followup-to set.

On Fri, Feb 20, 2004 at 12:26:22 +0530,
  V Chitra <[EMAIL PROTECTED]> wrote:
> Hi All,
> 
> I have a select statement 
> 
> select * from v_func_actual_costs
> where parent_project='10478' or proj_pk = '10478'
> 
> both the fields parent_project and proj_pk have indexes based on them, but when I 
> ran explain plan on this statement I found that none of the indexes are being 
> called. But, if I make two separate statement and combine them with Union ALL, the 
> indexes are being called. The select statement in this case is
> 
> select * from ct_admin.v_func_actual_costs
> where parent_project='10478'
> union all
> select * from ct_admin.v_func_actual_costs
> where proj_pk = '10478' 
> 
> Can anybody help me to find a reason for the same. This is just a part of the query 
> so I cannot use the Union ALL clause.

Have you analyzed the databases recently?

Can you supply explain analyze output for the queries?

It isn't necessarily faster to use two index scans instead of one sequential
scan; it depends on the fraction of the table being returned and some other
factors.  If the planner is making the wrong choice in your case, you need
to supply the list with more information to get help figuring out why
the wrong choice is being made.
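A concrete way to gather that information (a sketch; the view name comes
from the earlier post, and ANALYZE on the whole database is only one option —
you could analyze just the underlying tables):

```sql
-- Refresh planner statistics first:
ANALYZE;

-- Then capture the actual plans for both forms of the query:
EXPLAIN ANALYZE
SELECT * FROM ct_admin.v_func_actual_costs
WHERE parent_project = '10478' OR proj_pk = '10478';

EXPLAIN ANALYZE
SELECT * FROM ct_admin.v_func_actual_costs
WHERE parent_project = '10478'
UNION ALL
SELECT * FROM ct_admin.v_func_actual_costs
WHERE proj_pk = '10478';
```

Posting both outputs, along with the approximate row counts involved, lets
the list see whether the sequential scan really is the slower choice.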

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])