[GENERAL] Re: numeric type and odbc from access 2000

2001-02-08 Thread Stefan Waidele jun.

At 15:47 07.02.2001 +, Richard Huxton wrote:
[...] It is treating "100.200" as float and can't find an '=' operator that
matches numeric with float. You can handle it with

where col3=100.200::numeric

Yes, we can, but Access cannot, and I have not found out how Delphi can, either.
If you quote the values, everything works as well.

So whose fault is it? Delphi/Access for not casting/quoting,
or is it a Postgres bug, not recognizing 100.200 as a valid NUMERIC(6,3)? :-)

But most important: how can we work around it from Delphi or Access?
Any ideas?
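A minimal sketch of the workarounds discussed in this thread (the column follows Richard's example; the table name is illustrative):

```sql
-- Hypothetical column: col3 NUMERIC(6,3)
-- Fails: 100.200 is resolved as float, and there is no
-- numeric = float operator to match it against col3.
--   SELECT * FROM t WHERE col3 = 100.200;

-- Workaround 1: cast the literal explicitly
SELECT * FROM t WHERE col3 = 100.200::numeric;

-- Workaround 2: quote the literal, so it is resolved
-- against the column's own type
SELECT * FROM t WHERE col3 = '100.200';
```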

Stefan




Re: [GENERAL] Auto-timestamp generator (attached)

2001-02-08 Thread Peter T Mount

Quoting Einar Karttunen [EMAIL PROTECTED]:

 I think that modules like this could be included in the distribution or
 archived on the FTP site. They'd make it easier for people new to SQL to
 start using PostgreSQL. Also, there would be no performance loss in
 the backend code, as these "modules" don't need any support.

This is what /contrib in the source is for ;-)

Peter

 
 - Einar
 
 On Thu, 8 Feb 2001, Richard Huxton wrote:
  Following the 'new type proposal' discussion recently, I decided to have a
  play at creating an automatic trigger generator. Attached is the sql and an
  example of its use.
 
  Basically you call a function:
select lastchg_addto('mytable','mycol');
  where mycol is of type timestamp. The function builds the
 
  To use it you will need plpgsql enabled (man createlang) and also
 version
  7.1
  After use, there are two functions left - you can remove these with:
drop function lastchg_addto(text,text);
drop function lastchg_remove(text,text);
 
  I've tried to lay out the plpgsql for ease of understanding - if you want
  to see how the trigger gets created, you can return exec1 or exec2 instead
  of the success message.
 
  This is just a demo - obviously it's fairly simple to put together
  triggers for this purpose, but I'd appreciate any thoughts about the
  approach.
 
  TIA people
 
  Oh - 2 questions for any of the developers/clued up
 
  1. Is there any way to parse a variable-length list of parameters in
  plpgsql?
  2. Is there any chance of a different quoting method for functions?
 e.g.
  create function ... as q[ ...body here ...];
 So we can avoid the  stuff - it's a lot of static
 
  - Richard Huxton
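
For readers without the attachment, a rough sketch of the kind of trigger such a generator might emit in 7.0/7.1-era syntax (function, trigger, and column names here are invented, not Richard's actual code):

```sql
-- Hypothetical output of: select lastchg_addto('mytable','mycol');
CREATE FUNCTION lastchg_mytable_fn() RETURNS opaque AS '
BEGIN
    -- stamp the timestamp column on every insert/update
    NEW.mycol := now();
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER lastchg_mytable BEFORE INSERT OR UPDATE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE lastchg_mytable_fn();
```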
 
 
 



-- 
Peter Mount [EMAIL PROTECTED]
PostgreSQL JDBC Driver: http://www.retep.org.uk/postgres/
RetepPDF PDF library for Java: http://www.retep.org.uk/pdf/



[GENERAL] tuples too big

2001-02-08 Thread Michelle Murrain

Hi folks,

I've got a problem - I need a little help. I'm using 6.5.3 from Debian stable.

I've got a database, which has some fields in one table that need to hold a 
fair bit of text (say, 8000-10,000+ characters). Right now, I've got those 
fields as simple text. 

When I try to enter data into those fields, I get the "Tuple too large" 
error. 

I can't chop up the fields into smaller pieces, and 8104 is not enough.

Suggestions?

Michelle
-- 

Michelle Murrain, Ph.D.
President
Norwottuck Technology Resources
[EMAIL PROTECTED]
http://www.norwottuck.com



Re: [GENERAL] monitoring running queries?

2001-02-08 Thread Brice Ruth

It's not working ... I looked at the admin docs and edited the pg_options
file appropriately ... the following is what appears in /var/log/messages:

postgres[23686]: read_pg_options: verbose=2,query=4,syslog=2

But no queries ... I sent SIGHUP to the postmaster, even restarted the
postmaster.  The file /var/log/postgresql appears to be empty as well.

-Brice

"Brett W. McCoy" wrote:
 
 On Wed, 7 Feb 2001, Brice Ruth wrote:
 
  Is there any way (besides ps) to view queries that are running on the
  PostgreSQL server?  Something that actually output the query being run
  would be great.
 
 You can set the logging option for queries to 4, I believe, and it will log
 the full query via whatever logging facility you are using (syslog,
 redirected stdout, or whatever).  Take a look in the admin docs for
 whatever version you are using.  There's a lot of stuff you can log.
 
 If you are logging queries, you can open up the log file with tail -f and
 monitor the log as queries are being made.
 
 -- Brett
  http://www.chapelperilous.net/~bmccoy/
 ---
 If you think the problem is bad now, just wait until we've solved it.
 -- Arthur Kasspe

-- 
Brice Ruth
WebProjkt, Inc.
VP, Director of Internet Technology
http://www.webprojkt.com/



[GENERAL] .mips.rpm

2001-02-08 Thread [EMAIL PROTECTED]

Hello,

Where can I find postgresql-7.0.x.mips.rpm for my Qube 2??

I tried to find them on the FTP sites, but I haven't seen anything.

please message me
thanks
__
Voila vous propose une boite aux lettres gratuite sur Voila Mail:
http://mail.voila.fr






Re: [GENERAL] monitoring running queries?

2001-02-08 Thread Brett W. McCoy

On Thu, 8 Feb 2001, Brice Ruth wrote:

 Its not working ... I looked at the admin docs and edited the pg_options
 file appropriately ... the following is what appears in /var/log/messages:

 postgres[23686]: read_pg_options: verbose=2,query=4,syslog=2

 But no queries ... I sent SIG_HUP to postmaster, even restarted
 postmaster.  The file /var/log/postgresql appears to be empty as well.

I don't know how you are starting PostgreSQL, but I just have it redirect
everything to a log file under the postgres superuser's directory:

nohup postmaster [options] > server.log 2>&1 &

Make sure you don't use the -S option -- it detaches from the tty and you
will get no output!

-- Brett
 http://www.chapelperilous.net/~bmccoy/
---
Above all else -- sky.




Re: [GENERAL] tuples too big

2001-02-08 Thread martin . chantler


Believe it or not, we have the same problem with DB2 on AS/400.
In PG you can create large objects, which is the proper answer, but
there is another way...

A memo table.

This just contains a key to link up to the master record, a sequence
number, and a long text field, e.g. 6K.
Then write a function that automatically splits large amounts of text
across rows in the memo table. Another function retrieves the data.

We expanded this to have a second key, so it can store more than one
'notebook' per record.

This works well for things like notes and memos but is probably not so
good for huge amounts of data. It's flexible, and there's no hardcoded
limit to the length of the data.

The front-end application does the splitting/joining; we use VB, but any
language should be able to do it.

OK, tell me this isn't technically elegant or correct, but it's simple
and it works!

E.G.

CUSTOMER
ACNO  NAME
1     Smith

CUSTOMER_M
ACNO  SEQ  TEXT
1     1    lots of text
1     2    more text
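
The splitting/joining the front end does can be sketched in a few lines (Python here rather than VB, purely illustrative; the 6K chunk size and column layout follow the example above):

```python
CHUNK = 6 * 1024  # max length of the TEXT column in CUSTOMER_M

def split_memo(acno, text, chunk=CHUNK):
    """Split one memo into (acno, seq, text) rows for CUSTOMER_M."""
    return [(acno, seq + 1, text[i:i + chunk])
            for seq, i in enumerate(range(0, len(text), chunk))]

def join_memo(rows):
    """Reassemble fetched rows (in any order) back into one string."""
    return ''.join(text for _acno, _seq, text in sorted(
        rows, key=lambda r: r[1]))

# round-trip check: a 15000-character memo needs three 6K rows
rows = split_memo(1, 'x' * 15000)
assert len(rows) == 3
assert join_memo(rows) == 'x' * 15000
```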

MC.







Michelle Murrain [EMAIL PROTECTED] on 08/02/2001 14:46:12

Please respond to [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
cc:(bcc: Martin Chantler/CIMG/CVG)
Subject:  [GENERAL] tuples too big








Re: [GENERAL] .mips.rpm

2001-02-08 Thread Lamar Owen

"[EMAIL PROTECTED]" wrote:
 Where can I find postgresql-7.0.x.mips.rpm for my Qube 2??

 I tried to find them on the FTP sites, but I haven't seen anything.

We have no MIPS RPM's available on our FTP site.  More specific
information on OS, libc version, kernel, etc would be required anyway. 
This is because I do not have access to a Qube to build binary RPM's on.

If you are able to rebuild from the source RPM, you can download that
and try to rebuild it. You will need a complete development environment,
including X, installed in order to rebuild from the source RPM.

The cobaltqube.org site lists a couple of archives, but neither has
PostgreSQL 7 RPMs.  The latest Qube RPM for PostgreSQL that I have
found is for 6.5.3.
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11



Re: [GENERAL] monitoring running queries?

2001-02-08 Thread Brice Ruth

Cool, that worked.  Thank you.

-Brice

"Brett W. McCoy" wrote:
 
 On Thu, 8 Feb 2001, Brice Ruth wrote:
 
  Its not working ... I looked at the admin docs and edited the pg_options
  file appropriately ... the following is what appears in /var/log/messages:
 
  postgres[23686]: read_pg_options: verbose=2,query=4,syslog=2
 
  But no queries ... I sent SIG_HUP to postmaster, even restarted
  postmaster.  The file /var/log/postgresql appears to be empty as well.
 
 I don't know how you are starting PostgreSQL, but I just have it redirect
 everything to a log file under the postgres superuser's directory:

 nohup postmaster [options] > server.log 2>&1 &

 Make sure you don't use the -S option -- it detaches from the tty and you
 will get no output!
 
 -- Brett
  http://www.chapelperilous.net/~bmccoy/
 ---
 Above all else -- sky.

-- 
Brice Ruth
WebProjkt, Inc.
VP, Director of Internet Technology
http://www.webprojkt.com/



Re: [GENERAL] tuples too big

2001-02-08 Thread Lamar Owen

[EMAIL PROTECTED] wrote:
 This works well for things like notes and memo's but probably not so good
 for
 huge amounts of data. Its flexible and there's no hardcoded limit to the
 length of data.

Sounds something like TOAST, part of PostgreSQL 7.1, to be released
soon.  TOAST, however, is in the backend itself and is fully automatic.

Hardcoded limits on the amount of data per row are gone in 7.1 (there is
still a limit on the number of _columns_ in a row, but not on the size of
each column).
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11



Re: [GENERAL] Trouble with views (7.1beta4 Debian) and PgAccess 0.98.7.

2001-02-08 Thread Peter Eisentraut

Emmanuel Charpentier writes:

 Hence a question: shouldn't the system (pg_* and pga_*) tables be somewhat
 documented, at least in the reference manual (and maybe cross-referenced in
 the programmer's manual)?  They seem not to be at this time ...

http://www.postgresql.org/devel-corner/docs/postgres/catalogs.htm

-- 
Peter Eisentraut  [EMAIL PROTECTED]   http://yi.org/peter-e/




Re: [GENERAL] tuples too big

2001-02-08 Thread Tom Lane

 I've got a problem - I need a little help. I'm using 6.5.3 from Debian
 stable.
 I've got a database, which has some fields in one table that need to hold a
 fair bit of text (say, 8000-10,000+ characters). Right now, I've got those
 fields as simple text.

[ Martin suggests breaking up the large values by hand ]

Another answer is to update to 7.0.3, and redeclare your large fields as
'lztext'.  This is a hack that is going away in 7.1 (it's superseded by
the more general TOAST feature), but it should buy you the extra couple
of K you need for now.  A rough rule of thumb is that LZ compression
will save a factor of 2 or so on chunks of text.
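
As a sketch of Tom's first suggestion (the table is hypothetical; lztext exists only in the 7.0.x series):

```sql
-- lztext stores the value LZ-compressed inside the row,
-- roughly doubling the usable text per 8K block.
CREATE TABLE notes (
    id   INT4 PRIMARY KEY,
    body LZTEXT
);
```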

A third possibility is to increase BLCKSZ from 8K to 32K, but that
requires a rebuild from source, so you might not want to mess with that
if you're accustomed to using RPMs.

These two answers are obviously hacks, so I'd agree with Martin's
approach if there were no better way on the horizon.  But with 7.1
nearly out the door, I think it's silly to expend a lot of programming
effort to split up and reassemble records; the need for that will go
away as soon as you migrate to 7.1.  What you want right now is a quick
and easy stopgap.

regards, tom lane



[SQL] Query never returns ...

2001-02-08 Thread Brice Ruth

The following query:

SELECT 
tblSIDEDrugLink.DrugID, 
tblSIDEDrugLink.MedCondID, 
tblMedCond.PatientName AS MedCondPatientName, 
tblMedCond.ProfessionalName AS MedCondProfessionalName, 
tblSIDEDrugLink.Frequency, 
tblSIDEDrugLink.SeverityLevel 
FROM 
tblSIDEDrugLink, 
tblMedCond 
WHERE 
(tblSIDEDrugLink.DrugID IN ('DG-18698')) AND 
(tblSIDEDrugLink.MedCondID = tblMedCond.MedCondID) 
ORDER BY 
tblSIDEDrugLink.DrugID, 
tblSIDEDrugLink.Frequency, 
tblSIDEDrugLink.SeverityLevel, 
tblSIDEDrugLink.MedCondID;

seems to not be liked by PostgreSQL.  Table 'tblSIDEDrugLink' has the
following structure:

CREATE TABLE TBLSIDEDRUGLINK
(
DRUGID  VARCHAR(10) NOT NULL,
MEDCONDID   VARCHAR(10) NOT NULL,
FREQUENCY   INT2,
SEVERITYLEVEL   INT2,
CONSTRAINT PK_TBLSIDEDRUGLINK PRIMARY KEY (DRUGID, MEDCONDID)
);

with the following index:
CREATE INDEX IX_TBLSIDEDRUGLINK1 ON TBLSIDEDRUGLINK (MEDCONDID);

This table has 153,288 rows.

Table 'tblMedCond' has the following structure:

CREATE TABLE TBLMEDCOND
(
MEDCONDID   VARCHAR(10) NOT NULL,
PROFESSIONALNAMEVARCHAR(58),
PATIENTNAME VARCHAR(58),
CONSTRAINT PK_TBLMEDCOND PRIMARY KEY (MEDCONDID)
);

This table has 1,730 rows.

The query above is made by a third-party API that I don't have the
source for, so I can't modify the query in the API, though the
third-party has been quite willing to help out - they may even ship me a
'special' version of the API if there's something in this query that
PostgreSQL for some reason doesn't implement efficiently enough.

If it would help anyone to see the query plan or such - I can modify the
logs to show that, just let me know.

Btw - I've let this query run for a while and I haven't seen it complete;
I don't know if it would ever complete or not.

Any help at all is as always, appreciated.

Sincerest regards,
-- 
Brice Ruth
WebProjkt, Inc.
VP, Director of Internet Technology
http://www.webprojkt.com/



[GENERAL] Re: full text searching

2001-02-08 Thread Gunnar R|nning

Lincoln Yeoh [EMAIL PROTECTED] writes:

 Sometimes I want it literal too, e.g. if I'm searching for 'car' I want
 'car' and nothing else.

Of course, you want this as well. 

 In the end it seems too complex. I'm starting to think it may be better to
 keep things literal but fast, and do the smart stuff at the client (do
 multiple searches if necessary).

There are commercial products like Verity that can do these kinds of
things. I've used Verity in conjunction with Sybase; apart from stability
problems, it was a very nice combination for doing free-text searches. I
could define which columns I wanted indexed, and then I could do joins
against a proxy table (the index) to do synonym searches, word searches,
regex searches, soundex searches, etc.

Verity runs in a separate process, and Sybase forwards the free-text
search to it when you join against the proxy table. Maybe we could have a
similar architecture in PostgreSQL some day.
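
The proxy-table idea can be sketched without Verity: keep an index of (word, row id) pairs and look words up in it. A toy Python version of that structure, showing the fast-but-literal behaviour Lincoln asks for (purely illustrative, not any product's API):

```python
import re
from collections import defaultdict

def build_index(docs):
    """docs: {row_id: text}. Returns the 'proxy table': word -> set of row ids."""
    index = defaultdict(set)
    for row_id, text in docs.items():
        for word in re.findall(r"[a-z0-9]+", text.lower()):
            index[word].add(row_id)
    return index

def search(index, word):
    """Literal word search: 'car' matches 'car', never 'carpet'."""
    return sorted(index.get(word.lower(), set()))

docs = {1: 'red car for sale', 2: 'carpet cleaning', 3: 'used car parts'}
idx = build_index(docs)
assert search(idx, 'car') == [1, 3]   # not row 2, despite 'carpet'
```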

Does anybody know how Oracle has implemented their "context" search, or
whatever it is called nowadays?

regards, 

Gunnar 







[GENERAL] timestamp goober

2001-02-08 Thread Culley Harrelson

Columns with default timestamp('now') seem to be
defaulting to the time I started postgresql!

What am I doing wrong here?  Is it an OS problem?
I need these columns to capture the current date and
time.

Culley

__
Do You Yahoo!?
Get personalized email addresses from Yahoo! Mail - only $35 
a year!  http://personal.mail.yahoo.com/



[GENERAL] Re: timestamp goober

2001-02-08 Thread Gregory Wood

 columns with default timestamp('now') see to be
 defaulting to the time I started posgresql!

I noticed that timestamps (in my case CURRENT_TIMESTAMP) are taken from the
beginning of a transaction. You didn't mention how you were accessing the
database, but if you were updating everything inside the same transaction,
everything would be timestamped with the time that transaction began.

Greg





[GENERAL] Re: Query never returns ...

2001-02-08 Thread Brice Ruth

Followup:

This query, run against the same dataset in a MS Access 2000 database,
returns immediately with the resultset.  (I truly mean immediately)  The
computer running the Access db is a laptop, PII 266 w/ 128MB RAM (an old
Dell Latitude).  The server running PostgreSQL is a dual PIII 450 w/
256MB RAM.

This is just meant as an FYI.

-Brice


-- 
Brice Ruth
WebProjkt, Inc.
VP, Director of Internet Technology
http://www.webprojkt.com/



[GENERAL] Fw: [PHP] Fooling the query optimizer

2001-02-08 Thread Adam Lang

On another list, someone posted this question.  Are they correct, old
problem, etc.?  I'll pass whatever info there is back to the originating
list.

Adam Lang
Systems Engineer
Rutgers Casualty Insurance Company
http://www.rutgersinsurance.com
- Original Message -
From: "Brent R.Matzelle" [EMAIL PROTECTED]
To: "PostgreSQL PHP" [EMAIL PROTECTED]
Sent: Thursday, February 08, 2001 10:41 AM
Subject: [PHP] Fooling the query optimizer


 Have any of you discovered a way to get around the current query optimizer
 limitation in Postgres?  For example, I have a table that has three columns
 that I want to index for frequent search duties.  In Postgres I am forced to
 create three indices: one including all three columns, one for col2 and
 col3, and one for just col3.  Databases like MySQL can use the first index
 for these types of queries: "SELECT * WHERE col2 = x AND col3 = y" and
 "SELECT * WHERE col3 = y".  Postgres can only use the index for queries that
 look like "SELECT * WHERE col1 = x AND col2 = y AND col3 = z", "SELECT *
 WHERE col1 = x AND col2 = y", etc.  However, adding extra indexes as above
 would decrease the write speed on that table, because a simple insert would
 require an update on all three indices.

 Is there a way to fool Postgres into using the first index by creating a
 query like "SELECT * WHERE col1 = * AND col3 = x"?  I know I'm grasping for
 straws here, but these issues can kill my database query performance.

 Brent




[SQL] Re: Query never returns ...

2001-02-08 Thread Brice Ruth

FYI - I let the query run for 11.5 minutes before killing it off.  It
had consumed 11 minutes, 18 seconds of CPU time (reported by ps).  The
following messages are from the server log, I'm pasting in all messages
directly following the query:

010208.10:04:29.473 [24041] ProcessQuery
010208.10:15:59.212 [24041] FATAL 1:  The system is shutting down
010208.10:15:59.213 [24041] AbortCurrentTransaction

FATAL: s_lock(4001600c) at spin.c:111, stuck spinlock. Aborting.

FATAL: s_lock(4001600c) at spin.c:111, stuck spinlock. Aborting.
Server process (pid 24041) exited with status 134 at Thu Feb  8 10:17:09 2001
Terminating any active server processes...
Server processes were terminated at Thu Feb  8 10:17:09 2001

Regards,
Brice Ruth


-- 
Brice Ruth
WebProjkt, Inc.
VP, Director of Internet Technology
http://www.webprojkt.com/



[GENERAL] TOAST

2001-02-08 Thread David Wheeler

Greetings All,

I've seen some references to TOAST in 7.1. Can anyone tell me what it is
and/or where I can read all about it, and how it will solve all my problems
using BLOBs with PostgreSQL?

Thanks!

David




Re: [GENERAL] timestamp goober

2001-02-08 Thread Manuel Sugawara

Culley Harrelson [EMAIL PROTECTED] writes:

 columns with default timestamp('now') see to be
 defaulting to the time I started posgresql!

Try using: default 'now' instead of: default timestamp('now') in your
table definitions. The latter is evaluated by postgres just once, when
the table is created, instead of each time a tuple is inserted.
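
As DDL, the difference is just where the evaluation happens (table and column names are hypothetical):

```sql
-- Evaluated once, when the table is created -- every row
-- then defaults to that same fixed moment:
--   created TIMESTAMP DEFAULT timestamp('now')

-- Evaluated at insert time, because 'now' stays an
-- unevaluated string until the default is applied:
CREATE TABLE events (
    id      INT4,
    created TIMESTAMP DEFAULT 'now'
);
```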

HTH,
Manuel.



[GENERAL] serious performance problem

2001-02-08 Thread Emmanuel Pierre

Hi,

I am running PostgreSQL 7.0.3 on Linux 2/ELF with a ReiserFS filesystem,
dual P3 800 and 2GB of RAM.

My database load jumps from 8 to 32 without any real reason or too
many requests.

I already run vacuum, even on the fly, in case that can decrease the
load, but nothing...

I've also added many indexes...

Can someone help me?

Emmanuel


-- 
EDENJOB / APR-Job

Email: [EMAIL PROTECTED]  Home:  www.apr-job.com
Phone: +33 1 47 81 02 41  Tatoo: +33 6 57 60 42 17
Fax:   +33 1 41 92 91 54  eFAX:  +44 0870-122-6748




RE: [GENERAL] Re: Query never returns ...

2001-02-08 Thread Mayers, Philip J

What does:

EXPLAIN
SELECT 
tblSIDEDrugLink.DrugID, 
tblSIDEDrugLink.MedCondID, 
tblMedCond.PatientName AS MedCondPatientName, 
tblMedCond.ProfessionalName AS MedCondProfessionalName, 
tblSIDEDrugLink.Frequency, 
tblSIDEDrugLink.SeverityLevel 
FROM 
tblSIDEDrugLink, 
tblMedCond 
WHERE 
(tblSIDEDrugLink.DrugID IN ('DG-18698')) AND 
(tblSIDEDrugLink.MedCondID = tblMedCond.MedCondID) 
ORDER BY 
tblSIDEDrugLink.DrugID, 
tblSIDEDrugLink.Frequency, 
tblSIDEDrugLink.SeverityLevel, 
tblSIDEDrugLink.MedCondID;

Give out? (From the command line psql tool.)

Regards,
Phil

+--+
| Phil Mayers, Network Support |
| Centre for Computing Services|
| Imperial College |
+--+  





RE: [GENERAL] Fw: [PHP] Fooling the query optimizer

2001-02-08 Thread Michael Ansley





Running this query:


SELECT * WHERE col3 = x


with a btree index on (col1, col2, col3) cannot be performed efficiently in any database, because you have specified the column order to be (col1, col2, col3). If somebody claims that MySQL can do this, they're misunderstanding the problem and/or the solution, or there's some fudging going on somewhere.

Imagine the index to look like this:


col1 col2 col3
1 1 1
1 1 2
1 1 3
1 1 4
1 2 1
1 2 2
1 2 3
1 2 4


and a query which says SELECT * WHERE col3 = 4. Now what order are you going to traverse the index in? Remember that you can only use col3, and have to binary search (btree index). If you binary split the index, you have one 4 in one half and another 4 in the other half, i.e.: it's not going to work.

If MySQL claims it can do this, then the only way I can think they are doing it is by creating extra or separate indices behind the scenes, which is inefficient and not particularly user-friendly. Of course, they may have used GiST to create a special index for this ;-) and gotten it working, but I doubt it. Btree indices are by far the most common with simple data.

For any btree index, the index can be used up to the point where the index columns and the filter columns diverge, IN ORDER. E.g.: if your index is over columns a, b, c, d, and you filter on a, b, d, e, then the index can be used, but only over columns a and b (not d!!). In the example cited, the first column in the index is not used in the filter, so the complete index has to be ignored, i.e.: seq scan.
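
Mike's rule -- a btree on (col1, col2, col3) serves filters on a leading prefix only -- can be illustrated with a sorted list of tuples standing in for the index (illustrative, not PostgreSQL internals):

```python
import bisect

# 'Index' on (col1, col2, col3): sorted tuples, as in a btree's leaves.
index = sorted([(1, 1, 1), (1, 1, 4), (1, 2, 3), (2, 1, 4), (2, 2, 4)])

def prefix_scan(index, prefix):
    """Binary-search a leading prefix of the key -- cheap, like an index scan."""
    lo = bisect.bisect_left(index, prefix)
    hi = bisect.bisect_left(index, prefix[:-1] + (prefix[-1] + 1,))
    return index[lo:hi]

# WHERE col1 = 1 AND col2 = 1: a contiguous run, found by binary search.
assert prefix_scan(index, (1, 1)) == [(1, 1, 1), (1, 1, 4)]

# WHERE col3 = 4: matching rows are scattered through the key order,
# so the whole 'index' must be walked (a seq scan, in effect).
assert [t for t in index if t[2] == 4] == [(1, 1, 4), (2, 1, 4), (2, 2, 4)]
```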

Cheers...



MikeA









Re: [GENERAL] Re: timestamp goober

2001-02-08 Thread Alex Pilosov

Yes, and that's a feature :)

If you want a wall clock time, use timenow()
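
A quick way to see the difference from psql (a sketch; now() is frozen at transaction start, while timenow() tracks the wall clock):

```sql
BEGIN;
SELECT now();      -- transaction start time
SELECT now();      -- same value again, even seconds later
SELECT timenow();  -- advances with the wall clock
COMMIT;
```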

-alex

On Thu, 8 Feb 2001, Gregory Wood wrote:

  columns with default timestamp('now') see to be
  defaulting to the time I started posgresql!
 
 I noticed that timestamps (in my case CURRENT_TIMESTAMP) are taken from the
 beginning of a transaction. You didn't mention how you were accessing the
 database, but if you were updating everything inside the same transaction,
 everything would be timestamped with the time that transaction began.
 
 Greg
 
 
 





Re: [GENERAL] Re: timestamp goober

2001-02-08 Thread Gregory Wood

Oh, I just made sure that I started a new transaction :)

I actually prefer that timestamps are handled that way... once I realized
*why* I had triggered INSERTs that were stamped 45 minutes earlier than I
thought they should have been.

Greg

 Yes, and that's a feature :)

 If you want a wall clock time, use timenow()

 I noticed that timestamps (in my case CURRENT_TIMESTAMP) are taken from the
 beginning of a transaction. You didn't mention how you were accessing the
 database, but if you were updating everything inside the same transaction,
 everything would be timestamped with the time that transaction began.





[GENERAL] Re: [SQL] Query never returns ...

2001-02-08 Thread Brice Ruth

Stephan,

Here is what EXPLAIN shows:

NOTICE:  QUERY PLAN:

Sort  (cost=0.02..0.02 rows=1 width=64)
  ->  Nested Loop  (cost=0.00..0.01 rows=1 width=64)
    ->  Seq Scan on tblmedcond  (cost=0.00..0.00 rows=1 width=36)
    ->  Seq Scan on tblsidedruglink  (cost=0.00..0.00 rows=1 width=28)

As for vacuum analyze - prior to running into these problems, I deleted
all data from the database (using delete from tblname) and then ran
vacuumdb -a, after which I loaded the data into the tables using 'copy
... from' - there have been no updates to the database since then -
merely selects.
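
Given the rows=1 estimates in the plan above, the statistics still describe the empty tables; a plausible first step is to re-collect them after the bulk load:

```sql
VACUUM ANALYZE tblsidedruglink;
VACUUM ANALYZE tblmedcond;
```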

-Brice

Stephan Szabo wrote:
 
 What does explain show for the query and have you run
 vacuum analyze recently on the tables?
 
 On Thu, 8 Feb 2001, Brice Ruth wrote:
 
  The following query:
 
  SELECT
tblSIDEDrugLink.DrugID,
tblSIDEDrugLink.MedCondID,
tblMedCond.PatientName AS MedCondPatientName,
tblMedCond.ProfessionalName AS MedCondProfessionalName,
tblSIDEDrugLink.Frequency,
tblSIDEDrugLink.SeverityLevel
  FROM
tblSIDEDrugLink,
tblMedCond
  WHERE
(tblSIDEDrugLink.DrugID IN ('DG-18698')) AND
(tblSIDEDrugLink.MedCondID = tblMedCond.MedCondID)
  ORDER BY
tblSIDEDrugLink.DrugID,
tblSIDEDrugLink.Frequency,
tblSIDEDrugLink.SeverityLevel,
tblSIDEDrugLink.MedCondID;
 
  seems to not be liked by PostgreSQL.  Table 'tblSIDEDrugLink' has the
  following structure:
 
  CREATE TABLE TBLSIDEDRUGLINK
  (
  DRUGID  VARCHAR(10) NOT NULL,
  MEDCONDID   VARCHAR(10) NOT NULL,
  FREQUENCY   INT2,
  SEVERITYLEVEL   INT2,
  CONSTRAINT PK_TBLSIDEDRUGLINK PRIMARY KEY (DRUGID, MEDCONDID)
  );
 
  with the following index:
  CREATE INDEX IX_TBLSIDEDRUGLINK1 ON TBLSIDEDRUGLINK (MEDCONDID);
 
  This table has 153,288 rows.
 
  Table 'tblMedCond' has the following structure:
 
  CREATE TABLE TBLMEDCOND
  (
  MEDCONDID   VARCHAR(10) NOT NULL,
  PROFESSIONALNAMEVARCHAR(58),
  PATIENTNAME VARCHAR(58),
  CONSTRAINT PK_TBLMEDCOND PRIMARY KEY (MEDCONDID)
  );
 
  This table has 1,730 rows.
 
  The query above is made by a third-party API that I don't have the
  source for, so I can't modify the query in the API, though the
  third-party has been quite willing to help out - they may even ship me a
  'special' version of the API if there's something in this query that
  PostgreSQL for some reason doesn't implement efficiently enough.
 
  If it would help anyone to see the query plan or such - I can modify the
  logs to show that, just let me know.
 
  Btw - I've let this query run for a while and I haven't seen it complete
  ... I don't know if it would ever complete or not.
 
  Any help at all is as always, appreciated.
 
  Sincerest regards,
  --
  Brice Ruth
  WebProjkt, Inc.
  VP, Director of Internet Technology
  http://www.webprojkt.com/
 

-- 
Brice Ruth
WebProjkt, Inc.
VP, Director of Internet Technology
http://www.webprojkt.com/



[GENERAL] Re: [SQL] Query never returns ...

2001-02-08 Thread Brice Ruth

All right ... after reading up on the documentation for vacuum, I
understand why that's necessary.  I've run vacuum analyze on all the
tables, now.  Here are the more realistic results from explain:

NOTICE:  QUERY PLAN:

Sort  (cost=62.46..62.46 rows=14 width=64)
  ->  Nested Loop  (cost=0.00..62.19 rows=14 width=64)
        ->  Index Scan using pk_tblsidedruglink on tblsidedruglink  (cost=0.00..33.82 rows=14 width=28)
        ->  Index Scan using pk_tblmedcond on tblmedcond  (cost=0.00..2.01 rows=1 width=36)

The query runs lightning fast now ... THANK YOU!!! :)

-Brice

Stephan Szabo wrote:
 
 After you load the data, you need to run vacuum analyze.  That'll
 get statistics on the current data in the table.  Of course, I'm
 not sure that'll help in this case.
 
 On Thu, 8 Feb 2001, Brice Ruth wrote:
 
  Stephan,
 
  Here is what EXPLAIN shows:
 
  NOTICE:  QUERY PLAN:
 
   Sort  (cost=0.02..0.02 rows=1 width=64)
     ->  Nested Loop  (cost=0.00..0.01 rows=1 width=64)
           ->  Seq Scan on tblmedcond  (cost=0.00..0.00 rows=1 width=36)
           ->  Seq Scan on tblsidedruglink  (cost=0.00..0.00 rows=1 width=28)
 
  As for vacuum analyze - prior to running into these problems, I deleted
  all data from the database (using delete from tblname) and then ran
  vacuumdb -a, after which I loaded the data into the tables using 'copy
  ... from' - there have been no updates to the database since then -
  merely selects.

-- 
Brice Ruth
WebProjkt, Inc.
VP, Director of Internet Technology
http://www.webprojkt.com/



Re: [GENERAL] Fw: [PHP] Fooling the query optimizer

2001-02-08 Thread Tom Lane

"Adam Lang" [EMAIL PROTECTED] forwards:
 In Postgres I am forced to create three indicies: one including all
 three columns, one for col2 and col3, and one for just col3.

Depending on what his queries actually are, perhaps it's sufficient
to create one index on (col3,col2,col1), rather than on (col1,col2,col3)
as I presume his first index currently is.  As Mike Ansley points out,
Postgres can use the first N columns of an index if all N are
constrained by a query's WHERE clause; but there is no point in looking
at index columns beyond an unconstrained column, because if you did
you'd be fighting the index order instead of being helped by it.
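
For example (hypothetical table and column names — a sketch of the prefix-matching rule described above):

```sql
-- The column order of the index determines which WHERE clauses can use it.
CREATE INDEX idx_cols ON mytable (col3, col2, col1);

SELECT * FROM mytable WHERE col3 = 1;                         -- can use the index
SELECT * FROM mytable WHERE col3 = 1 AND col2 = 2;            -- can use the index
SELECT * FROM mytable WHERE col3 = 1 AND col2 = 2 AND col1 = 3; -- can use all three columns
SELECT * FROM mytable WHERE col1 = 3;                         -- cannot: col3 is unconstrained
```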

I think that the planner used to have some bugs that might interfere
with recognition of these partial-index-match cases, but it's been okay
with them since 7.0 for sure.  To say more, we'd need to know exactly
which PG version he's running and exactly what his queries look like.

regards, tom lane



[GENERAL] Varchar Indexing

2001-02-08 Thread mitch

Is there a size limit on indexable varying character fields?

I need a semi-large varchar field (say 2048 characters) but really need
the speed of an index in searching, I'm left wondering at what point
indexing varchar fields becomes useless (if it ever does)..

Thanks!

-Mitch




[GENERAL] Indicies and write performance

2001-02-08 Thread Brent R. Matzelle

Hello everyone, I just joined the list.  Adam Lang convinced me when he got 
some answers for me from your list and sent them to the PHP PG list.  

Thanks for all of your input.  Let me refine my question somewhat to clear up 
my concern.  How much of a write-performance hit will I take if I create, say, 
3 or 4 different indices on one large table (max of 100K rows)?  I read 
Bruce Momjian's book (not a bad book) and he alluded to these problems but 
was not specific.  I just want to know if any of you have hit this issue with a 
certain number of tables and indices so that I might be able to avoid it 
myself.  Thanks.

Brent


Depending on what his queries actually are, perhaps it's sufficient
to create one index on (col3,col2,col1), rather than on (col1,col2,col3)
as I presume his first index currently is.  As Mike Ansley points out,
Postgres can use the first N columns of an index if all N are
constrained by a query's WHERE clause; but there is no point in looking
at index columns beyond an unconstrained column, because if you did
you'd be fighting the index order instead of being helped by it.

I think that the planner used to have some bugs that might interfere
with recognition of these partial-index-match cases, but it's been okay
with them since 7.0 for sure.  To say more, we'd need to know exactly
which PG version he's running and exactly what his queries look like.

regards, tom lane

"First, they ignore you. Then they laugh at you. Then they fight you. Then 
you win."
-Mahatma Gandhi





[GENERAL] About SP's and parameters

2001-02-08 Thread Alfonso Peniche

Hi all:

Is there a way to pass a table name as a parameter to an SP?

I'll try to explain myself. I want to create an SP that can, for
instance, do a delete from .. so that all any particular application
has to do is call the SP, passing the table to be deleted.

Right now I have:

CREATE FUNCTION deleteanytable (bpchar)
returns integer
as '
begin
delete from $1;

return 1;
end;'
language 'plpgsql';

If I call it:
select deleteanytable ( mytable );

I get a message saying

ERROR:  Attribute 'mytable' not found

Thanks for your help.

P.S. It would be very interesting if a column name could also be sent as
a parameter to the Stored Procedure.
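
One possible workaround (a sketch, assuming PL/pgSQL's EXECUTE statement for dynamic SQL, available as of 7.1; the function name is the one from the message above): build the statement as a string, since $1 cannot stand in for an identifier directly.

```sql
-- Sketch of a dynamic-SQL version; quotes inside the single-quoted
-- function body must be doubled.
CREATE FUNCTION deleteanytable (text)
RETURNS integer AS '
BEGIN
    -- $1 is spliced into the statement text rather than parsed
    -- as a column reference, avoiding the "Attribute not found" error
    EXECUTE ''DELETE FROM '' || $1;
    RETURN 1;
END;
' LANGUAGE 'plpgsql';

-- Called with the table name quoted as a string literal:
-- SELECT deleteanytable('mytable');
```

The same EXECUTE approach would also let a column name be passed in as a parameter.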






Re: [GENERAL] Varchar Indexing

2001-02-08 Thread Tom Lane

mitch [EMAIL PROTECTED] writes:
 Is there a size limit on indexable varying character fields?

1/3rd of a page, about 2700 bytes, if you're using a btree index.

Presently, TOAST does not help any :-(

regards, tom lane