[SQL] FTI, paged, ranked searching and efficiency.

2000-11-14 Thread Paul

Hello,

This is going to be a bit long, I hope some of you will take the
trouble to read it :)

I am building a search engine for a section of a (PHP based) website.
I wish the user to be able to enter a number of words in the search, and the
search results to be ranked by the number of times words occur (both
different words and the same word occurring multiple times are good).

My (simplified) table structure is this:
==
 Table "entry_fti"
 Attribute |Type | Modifier
---+-+--
 string| varchar(25) |
 id| oid |
Index: entry_fti_string_idx

   Table "entry"
   Attribute   | Type  |Modifier
---+---+--
-
 entry_id  |integer| not null default 
nextval('entry_id_seq'::text)
 entry_name|text   |
 entry_description_html|text   |
 entry_image_id|integer| not null default 0
 entry_tn_image_id |integer| not null default 0
 entry_live|boolean| not null default 't'
Index: entry_pkey

   Table "image"
 Attribute  |Type |Modifier
+-+
 image_id   | integer | not null default nextval('image_id_seq'::text)
 image_name | varchar(32) |
 height | integer | not null
 width  | integer | not null
Indices:  image_pkey
==

And my (simplified) query looks like this:
==
SELECT   COUNT(entry_fti.id) AS rating,
 entry.entry_name AS name,
 entry.entry_id AS id,
 entry.entry_description_html AS description_html,
 image.image_name AS thumb1_name,
 image.height AS thumb1_height,
 image.width AS thumb1_width
FROM entry, entry_fti, image
WHERE    entry_fti.id=entry.oid
  AND    entry.entry_tn_image_id=image.image_id
  AND    entry.entry_live = 't'::bool
  AND    (
           entry_fti.string ~'^word1'
           OR
           entry_fti.string ~'^word2'
           OR
            .
            .
           OR
           entry_fti.string ~'^wordn'
         )
GROUP BY entry.entry_id,
 entry.entry_name,
 entry.entry_description_html,
 image.image_name,
 image.height,
 image.width
ORDER BY rating DESC 
==

Now this all works, which is good. My problem now is that I want to 
limit the number of results shown on a page to 20 and show the number
of pages of extra results, much like you'd see on any search engine site.
Naturally I immediately thought of the LIMIT and OFFSET clauses, but then:
a) I'd need to do an extra query, to find out the total number of results
   to show the number of pages on the webpage.
b) I have no idea how to write that query. It'd be a COUNT of 'rating'
   in the above, which would be a COUNT(COUNT(entry_fti.id)) which
   would probably require some hideous (and not legal?) double GROUP
   BY construct. Ouch.

So basically, LIMIT/OFFSET looks like a no go. This leaves me with just
doing the above query, and using PHP to jump to a particular row in the
results depending on what page you are on and pg_numrows() to 
calculate the number of pages.

Would that be particularly inefficient? 
Should I be looking harder for a LIMIT/OFFSET based solution?

Perhaps I'd be better off splitting it into two queries, one to just
get the entry_id list in order, then another query to pull out the
rest of the information for the 20 of those entry_ids that are on the results 
page I wish to show?
That would stop Postgres from gathering so much information that I am just
going to throw away without ever looking at it.
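
A rough sketch of that two-query idea, untested and using the column names
above (the id list in the second query stands in for whatever PHP slices
out for the page being shown; the values there are just placeholders):
==
SELECT   COUNT(entry_fti.id) AS rating,
         entry_fti.id
FROM     entry_fti
WHERE    entry_fti.string ~'^word1'
   OR    entry_fti.string ~'^word2'
GROUP BY entry_fti.id
ORDER BY rating DESC;

-- then, for the (up to) 20 entries on the current page:
SELECT   entry.entry_name,
         entry.entry_description_html,
         image.image_name,
         image.height,
         image.width
FROM     entry, image
WHERE    entry.oid IN (1001, 1002, 1003)  -- placeholder ids for this page
  AND    entry.entry_tn_image_id=image.image_id
  AND    entry.entry_live = 't'::bool;
==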

Any ideas? Have I missed something obvious that will help me? Or better yet, 
can someone who has done this sort of thing before tell me whether I am on the 
right track?

Paul




Re: [SQL] variables in SQL??

2000-08-16 Thread Volker Paul

> what im trying to do is have a Sum of a colum.. as it goes forwards with the
> cursor..
> like so:
> 
> Price|Sum
> 5|5
> 4|9
> 10|19
> 2|21
> 7|28

I think what you mean is called a running sum. I had the same problem
before, and I found no other solution than creating a column for it
and calculating its values with a function.
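
Another option (just a sketch; it assumes the table is called prices and
has an ordering column id, neither of which appears in your example) is a
correlated subquery:

select t1.id,
       t1.price,
       (select sum(t2.price) from prices t2 where t2.id <= t1.id) as running_sum
from   prices t1
order by t1.id;

It recomputes the sum for every row, so it will crawl on large tables, but
it avoids the extra column and the function that maintains it.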

Yours,

V.Paul



Re: [SQL] how to store a query, that results in a table

2000-09-23 Thread Paul Wehr

Summary:  Proposed solution, and question on efficiency of technique

I don't know if this is what you are looking for, but I have a database where
I needed a relatively complex view (which I can do thanks to the expanded view
buffer in 7.0! - it didn't fit in 6.5.3), but I need to pass an "effective
date" to the view that needs to fall within a range (so I couldn't just use a
column in one of the source tables) to get the results I want.  My "solution"
was to come up with an "effective dates" table with one column (primary keyed)
that I can put the dates in.  For example:

create table effective_date (date date, primary key (date) );

create view complex_view as select blah, ..., effective_date.date
from tablea, tableb, tablec, effective_date
where tablea.foo=tableb.foo
  and effective_date.date between tablec.start_date and tablec.end_date
;

then, when I want to select rows from the view, I have to INSERT the date I
want into the "effective_date" table (which is effectively my "parameter"),
then I can select it from the view.  i.e.:

insert into effective_date values ('09/23/2000');
-- may "fail" if the date is already in the table, but if it is, who cares?

select *
from complex_view
where date='09/23/2000';

Now it would certainly be nicer if I could set some kind of global system
variable to the date, then reference that system variable in the view, but I
couldn't figure out any way to do it.  If anyone in the know is screaming out
loud at this technique, please point me in the right direction, I would love
to be able to skip the "Insert into effective_date..." step.
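
If a much newer PostgreSQL is ever an option, the custom-setting functions
(set_config/current_setting) look like they could stand in for that global
variable; a sketch, where myapp.effective_date is just a name I invented
(older releases may also need it declared as a custom variable class in
postgresql.conf):

-- set the "parameter" for the current session
SELECT set_config('myapp.effective_date', '2000-09-23', false);

-- and in the view, instead of joining to effective_date:
--   ... WHERE current_setting('myapp.effective_date')::date
--             BETWEEN tablec.start_date AND tablec.end_date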

-paul



Keith Wong wrote:

> This is not really possible with postgresql at the moment.
> Better off trying to work around, perhaps using a view. That way you have a
> way to change the select
> statement without actually modifying your client code.
>
> Keith.
>
> At 06:09 PM 22/09/2000 -0400, Nelson wrote:
> >thank you jie Liang for your response, but my problems are:
> >1. How to store a query in the database.
> >2. How to give a parameter from outside of database, for example:
> >select * from table1 where row1 = my_parameter_outside.
> >Give me an example please.
> >
> >




[SQL] Create table doesn't work in plpgsql

2000-12-19 Thread Volker Paul

Hi,

can I do some table manipulation in plpgsql?
Look at only the "create table" line and the error message:

create function plural (text) returns text as '
   begin 
   create table tmp (num int4); 
   return $1 || ''s''; 
   end;' language 'plpgsql'; 
select plural('test'); 
CREATE
ERROR:  copyObject: don't know how to copy 611

What does the error message mean? Where can I read more about it?

Cheers, Volker



Re: [SQL] Create table doesn't work in plpgsql

2000-12-21 Thread Volker Paul

Hi,

> I don't think you can use DDL (data definition language) in PL/SQL.
> create table is not DML (data manipulation language); instead
> it's DDL.
Thanks, but that leaves me with a problem.
What I really want to do is something like
select str from person where id=1234;
where str is a string that contains an expression like 
famname || ', ' || givname
i.e. the final select is 
select famname || ', ' || givname from person where id=1234;
I know it's possible by building the select e.g. in bash
and calling psql with it as an argument, but do you see a possibility
that is closer to Postgres, e.g. in plpgsql?
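
The closest I have found so far is plpgsql's EXECUTE (in releases that have
it), which can run a statement built as a string; an untested sketch,
assuming str holds an expression such as famname || ', ' || givname:

create function person_display(int4) returns text as '
declare
    expr  text;
    query text;
    r     record;
begin
    -- fetch the stored expression, then run it dynamically
    select into expr str from person where id = $1;
    query := ''select '' || expr || '' as val from person where id = '' || $1;
    for r in execute query loop
        return r.val;
    end loop;
    return null;
end;' language 'plpgsql';

select person_display(1234);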


Volker Paul



Re: [SQL] Invoice number

2000-12-21 Thread Volker Paul

Hi,

> I'm wondering how people create guaranteed sequential numbers - in my case
> for invoice numbers.

See the PostgreSQL book p. 85 and 250, and the online doc about serials:
CREATE TABLE person ( id SERIAL, name TEXT );
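
If the numbers must also be gap-free (a sequence can skip values when a
transaction is rolled back), a common alternative is a single-row counter
table updated inside the same transaction that creates the invoice; a sketch:

CREATE TABLE invoice_counter (last_number integer NOT NULL);
INSERT INTO invoice_counter VALUES (0);

-- in the transaction that creates the invoice; the UPDATE locks the
-- row, so concurrent transactions wait their turn
BEGIN;
UPDATE invoice_counter SET last_number = last_number + 1;
SELECT last_number FROM invoice_counter;  -- use this as the invoice number
-- ... INSERT the invoice here ...
COMMIT;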


Volker Paul



Re: [SQL] Create table doesn't work in plpgsql

2000-12-22 Thread Volker Paul

> Can this be done using tcl or perl?

I'll try them and report what I find out.

V.Paul



Re: [SQL] select returns no line

2001-01-23 Thread Volker Paul

A space or something like that is also what I was thinking of.
I'd suggest running:
select *, length(user_login) from users where user_id=4;
before and after the update. 

V.Paul



Re: [SQL] postgres's users take on onlamp

2001-02-12 Thread Volker Paul

Well, maybe if Postgres' name was Mostgres, the "M"
would stand for Mostgres instead of MySQL ...

V.Paul

clayton cottingham wrote:
> 
> heya:
> just wondering if anyone has any comments on this
> 
> onlamp is o'reilly's new ideal
> that, really, has been in use for quite a while
> 
> its acronym stands for
> linux apache mysql and {php/perl/python}
> 
> more info here:
> 
> http://www.onlamp.com/



[SQL] C/C++ interface

2001-02-13 Thread Volker Paul

Hello,

in the C interface documentation there is an example using:

res = PQexec(conn, "DECLARE mycursor CURSOR FOR select * from pg_database");
if (!res || PQresultStatus(res) != PGRES_COMMAND_OK)
{
fprintf(stderr, "DECLARE CURSOR command failed\n");
PQclear(res);
exit_nicely(conn);
}
PQclear(res);
res = PQexec(conn, "FETCH ALL in mycursor");

...etc. So the statements are:

DECLARE mycursor CURSOR FOR select * from pg_database;
FETCH ALL in mycursor;

What's the difference between this and simply doing:
select * from pg_database;

I tried this in psql, the result seemed the same.
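
From the docs, the only difference I can see is that the cursor lets the
rows come back piecemeal rather than all at once (which presumably matters
for large results); in psql that would be something like:

BEGIN;
DECLARE mycursor CURSOR FOR SELECT * FROM pg_database;
FETCH 10 FROM mycursor;   -- first ten rows
FETCH 10 FROM mycursor;   -- next ten
CLOSE mycursor;
COMMIT;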

What I'm really using, however, is the C++ interface.
Its documentation is not yet complete.
There, too, I tried a version with and without cursor.
The result seems to be the same, but the returned int is always 0
for the version without a cursor, so I get no information about whether
the query succeeded.

Is someone maintaining the C++ interface and its documentation?


Thanks,

Volker Paul



Re: [SQL] C/C++ interface

2001-02-15 Thread Volker Paul

Tom Lane wrote:
> 
> Volker Paul <[EMAIL PROTECTED]> writes:
> > Is someone maintaining the C++ interface and its documentation?
> 
> Not really.  Feel free to step up and lend a hand ...

I found some functions of the C++ binding library
that are not or scarcely documented, namely:
  int GetIsNull(int tup_num, int field_num);
  int GetIsNull(int tup_num, const char* field_name);
  int GetLine(char* string, int length);
  void PutLine(const char* string);
  const char* OidStatus();
  int EndCopy();

I would like to complete the documentation at these points,
and maybe add some small example C++ programs.

In which form should I write the documentation and where
should I send it?
(I have no possibility at the moment to test SGML documents,
i.e. convert them to HTML.)

Regards,

Volker Paul



[SQL] Temp tables being written to disk. Avoidable?

2001-08-14 Thread Paul McGarry

Hi,

I have an application where I am using fulltextindex to
create a searchable index for a table and wish to return
pages of results, ranked in order of relevance. So for any
particular page view I need to know:
a) Total number of results relevant to search (so I can display
page [1] 2 3 4 5 6)
b) The results to be ranked by 'relevance' to search terms.
c) The details for a part of that range (ie 15 or so entries
per page).

Database structure is something like:

==
txdev43=# \d entry_fti
  Table "entry_fti"
 Attribute | Type  | Modifier 
---+---+--
 string| character varying(25) | 
 id| oid   | 
Index: entry_fti_idx

txdev43=# \d entry
  Table "entry"
Attribute |   Type   |   Modifier
--+--+
 entry_id | integer  | default nextval('ent_id_seq'::text)
 name | text | 
 description_html | text | 
 fti_text | text | 
Indices: entry_oid_idx,
 entry_pkey
==
(The entry table is simplified here, real one has 24 columns).

My original plan was to do two/three queries, ie:
==
SELECT COUNT (entry_fti.id) as rating , id 
INTO TEMP TABLE searchtemp 
FROM entry_fti  
WHERE (entry_fti.string ~'^word1' OR entry_fti.string ~'^word2') 
GROUP BY id

SELECT entry.*, searchtemp.rating 
FROM searchtemp, entry 
WHERE entry.oid=searchtemp.id 
ORDER BY rating DESC 
LIMIT 15 OFFSET 0

SELECT count(*) FROM searchtemp; 
(optional, depending on number of rows returned by previous query)
==

This seemed to be the cheapest way to get all the information I need.
However, I noticed a disk access each time I did the search. I fiddled
with the SHARED_BUFFERS and SORT_MEM and discovered they weren't the
problem but that the SELECT INTO was causing the disk access. If I have
multiple searches occurring concurrently I imagine this becoming a 
problem. Can I avoid having the temp table written to disk (unless
required by running out of memory)? The temp table has a tiny lifespan
and needs to be accessed for each subsequent query so there's no
point in writing it to disk unless completely necessary.

For the moment I have moved to a query like:
==
SELECT COUNT (entry_fti.id) as rating , entry_id, name, description_html
FROM entry_fti, entry 
WHERE (entry_fti.string ~'^word1' OR entry_fti.string ~'^word2') 
AND entry_fti.id=entry.oid 
GROUP BY id, entry_id, name, description_html
ORDER BY rating DESC
==
which seems significantly less elegant (I need to return all rows and
do my 'OFFSET' and 'LIMIT' manually, also bearing in mind that I'm
pulling 20+ cols from the entry table, not the 3 I have here) but 
doesn't cause any disk access as the entry_fti and entry tables sit
in memory.

Hmm, I should do some proper benchmarking on this rather than
worrying about the noises coming from the disks..

However, even if the original method does turn out to be faster,
I imagine it could be faster still without the disk writes (though
I don't know anywhere near enough about the PostgreSQL internals
for that to be anything other than a gut feeling).

--
Paul McGarry        mailto:[EMAIL PROTECTED] 
Systems Integrator  http://www.opentec.com.au 
Opentec Pty Ltd http://www.iebusiness.com.au
6 Lyon Park RoadPhone: (02) 9870 4718 
North Ryde NSW 2113 Fax:   (02) 9878 1755 

This document and any attachments are intended solely for
the named addressee(s), are confidential, and may be subject to
legal professional privilege. Please notify us (on +61-2 9878 1744)
as soon as possible if you have received this document in error.
Any confidentiality or privilege is not waived or lost because this
email has been sent to you by mistake. This document and any
attachments are subject to copyright.  No part of them should be
reproduced or distributed by any means whatsoever without the
prior consent of the copyright owner.  Opentec does not warrant
that this email and any attachments are error or virus free.


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



RE: [SQL] Temp tables being written to disk. Avoidable?

2001-08-14 Thread Paul McGarry

Hi Tom,

Thanks for your response, enlightening as always.

> Not at present --- temp tables are not different from real tables,
> except for some naming shenanigans.  So creation of a temp table will
> involve some disk hits.

Ok, would it be a good idea to modify that for the future? Given that
temp tables:-
a) can't be seen by other connections.
b) are likely to be selected upon heavily close to creation time.
c) are likely to be short lived.
is there any reason to move them out to disk unless strictly
necessary (aside from that it may take a fair bit of 
re-engineering and the core developers have other more important 
and/or more interesting things to be getting on with)?

> Do you really *need* a temp table, as opposed to writing a
> sub-SELECT-in-the-FROM-clause?  ISTM that that feature takes care
> of most of the simple notational reasons for wanting a temp table.

I have rewritten the query with such a subselect and it looks
much more elegant than my previous solution:


SELECT rating , entry.*
FROM  entry , 
(
SELECT COUNT(entry_fti.id) as rating, id
FROM entry_fti
WHERE (entry_fti.string ~'^blu' OR entry_fti.string ~'^word2') 
GROUP BY id
[LIMIT X OFFSET Y]
)  vtable
WHERE vtable.id=entry.oid 
ORDER BY rating DESC


The only thing I'm missing now is the bit equivalent to the
SELECT COUNT(*) FROM searchtemp.

With the temporary table I was effectively trying to
cache that subselect (without the LIMIT) and do two
queries on it, one returning the total number of
rows and one returning information relating to a
LIMITed number of those rows.

Without a temporary table (unless there's a cunning
trick I haven't thought of) it seems I have
a choice of:

a) Not knowing how many rows match the full text query.
b) Returning the entry details for all the rows even though
   I only want a small number of them.
c) Doing two queries on the entry_fti table

It'll be interesting to see which of b), c) or using a temporary
table is most expensive. I imagine that could vary heavily
with the contents of the tables and the number of concurrent
searches that are occurring.
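
For what it's worth, the counting query for option c) would presumably just
wrap the same sub-select again, something like:

SELECT count(*)
FROM (
    SELECT id
    FROM entry_fti
    WHERE entry_fti.string ~'^blu' OR entry_fti.string ~'^word2'
    GROUP BY id
) AS matches;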

Thanks again.

--
Paul McGarry        mailto:[EMAIL PROTECTED] 
Systems Integrator  http://www.opentec.com.au 
Opentec Pty Ltd http://www.iebusiness.com.au
6 Lyon Park RoadPhone: (02) 9870 4718 
North Ryde NSW 2113 Fax:   (02) 9878 1755  

This document and any attachments are intended solely for
the named addressee(s), are confidential, and may be subject to
legal professional privilege. Please notify us (on +61-2 9878 1744)
as soon as possible if you have received this document in error.
Any confidentiality or privilege is not waived or lost because this
email has been sent to you by mistake. This document and any
attachments are subject to copyright.  No part of them should be
reproduced or distributed by any means whatsoever without the
prior consent of the copyright owner.  Opentec does not warrant
that this email and any attachments are error or virus free.


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



RE: [SQL] Temp tables being written to disk. Avoidable?

2001-08-16 Thread Paul McGarry

Howdy,

> If the temp table doesn't fit in memory, we will have to put it in
> backing store somewhere, and a disk is the logical place, right?  I
> don't see a huge advantage of putting it in memory.  We could prevent
> WAL writes for temp tables.  That would help.

Yes, if it won't fit in memory then clearly it needs to be put out 
to disk. I just thought if it could stay in memory then there is no
real reason to bother the disk at all. 

I do realise that it's probably vastly easier said than done though,
the more you start treating temp tables as a special case the more 
complex looking after them will get.

Just idle speculation on my part really, I just happen to be sitting
next to a server with a very noisy disk which remains idle most of
the time as selects are far more common in my app than inserts/updates.
The only exception to that is this temp table, which is being used
fairly frequently and makes the thing grind away.


--
Paul McGarry        mailto:[EMAIL PROTECTED] 
Systems Integrator  http://www.opentec.com.au 
Opentec Pty Ltd http://www.iebusiness.com.au
6 Lyon Park RoadPhone: (02) 9870 4718 
North Ryde NSW 2113 Fax:   (02) 9878 1755  


This document and any attachments are intended solely for
the named addressee(s), are confidential, and may be subject to
legal professional privilege. Please notify us (on +61-2 9878 1744)
as soon as possible if you have received this document in error.
Any confidentiality or privilege is not waived or lost because this
email has been sent to you by mistake. This document and any
attachments are subject to copyright.  No part of them should be
reproduced or distributed by any means whatsoever without the
prior consent of the copyright owner.  Opentec does not warrant
that this email and any attachments are error or virus free.


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



[SQL] Help needed

2002-08-28 Thread Mowat, Paul



Dear postgresql,

I am just starting to learn SQL.

I have attached a script that allows me to create some tables and indexes.
I have rows that are inserted into the tables.

I need some help with creating multiple subqueries and table joins (six or
seven table joins).
Can you help?

Paul Mowat
Software QA Engineer, Enterprise Data Management
BMC Software, Abingdon, UK
Direct: +44 1235 827408
Fax:    +44 1235 827430
Email: [EMAIL PROTECTED]
www.bmc.com


BaseSetofObjects.sql
Description: Binary data


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



[SQL] Inconsistent or incomplete behavior observed in where clause

2002-11-12 Thread Paul Ogden
Hello,
Our application development group has observed what we 
feel is inconsistent behavior when comparing numeric 
column references to constant/literal values in SQL.  

I would appreciate comments on the best approach to 
this problem that will allow for the highest
portability of our application code.  I have searched
the archives and online docs, but so far have not found 
anyone addressing the problem quite this way.

Assume wuActive is a numeric field ( with scale but no
precision ) in the table WU:
  select count(wuid) from WU where wuActive = 0 --works fine
  select count(wuid) from WU where wuActive = '0' --works fine
  select count(wuid) from WU where wuActive = '0.0' --works fine
  select count(wuid) from WU where wuActive = 0.0 --throws the 
following exception:

"Unable to identify an operator '=' for types 'numeric' and 'double
precision' You will have to retype this query using an explicit cast"

Second, assume tPct is a numeric field ( having precision of 4 and
scale of 1 ) in the table T
  select count(tid) from T where tPct > 77 --works fine
  select count(tid) from T where tPct > '77' --works fine
  select count(tid) from T where tPct > '77.5' --works fine
  select count(tid) from T where tPct > 77.5 -- again throws 
the exception:

"Unable to identify an operator '>' for types 'numeric' and 'double
precision' You will have to retype this query using an explicit cast"

This seems to occur regardless of connectivity drivers used 
(ODBC, JDBC, etc..)

I am aware of the use of type casting to force the desired 
behavior in these situations.  I have also started to go down 
the road of creating functions and operators to force numeric 
to numeric comparison operations when comparing numeric to float, 
but realize that this approach is fraught with pitfalls.  In fact, 
it is interesting to us to note that with an operator in place 
to force numeric = float comparisons to parse as numeric = numeric, 
we started getting the opposite behavior: queries with 'column 
reference' = 0.0 worked fine, but queries with 'column reference' = 0 
threw a variant of the previous exception:

"Unable to identify an operator '=' for types 'numeric' and 'integer'"

Overall, this behavior appears to be inconsistent and is not 
the same behavior I have experienced with many other DBMS's.
Specifically, it seems strange that the parser does not treat 
values 0.0 or 77.5 as numeric(s[,p]) when comparing the values
to a column reference known to be of type numeric (s,[p]).  

Is an unquoted number in the form of NN.N always treated as a 
float?  If the planner could somehow recognize that the constant/
literal value was being compared to a column reference of the
type numeric (s,p) and treat the value accordingly, then would
operator identification no longer be a problem?

We are looking to maintain a high degree of portability in our 
application code, and while "CAST ( expression AS type )" is 
fairly portable, no one here feels that it is as portable as
column reference = literal/constant value.   If someone knows
of a better approach, or can point us to documentation of build- or
run-time configuration that affects the query planner where this 
issue is concerned, it would be much appreciated.
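
For reference, the explicit-cast workarounds we are using in the meantime
look like this (same columns as above; CAST is the portable spelling, :: is
PostgreSQL-specific):

select count(wuid) from WU where wuActive = CAST(0.0 AS numeric);
select count(tid) from T where tPct > 77.5::numeric;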

Thanks,

Paul Ogden
Database Administrator/Programmer
Claresco Corporation
(510) 549-2290   

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[SQL] RE: [SQL] System's database table

2002-11-13 Thread Paul Ogden
It's not an ERD but I've found the information in the Developer's Guide
regarding system catalogs to be useful in the past.

This http://www.postgresql.org/idocs/index.php?catalogs.html will
get you started.
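
For the specific table the original question was about, a quick example:

SELECT datname FROM pg_database;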

Thanks,

Paul Ogden
Claresco Corporation

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:pgsql-sql-owner@;postgresql.org]On Behalf Of Jean-Luc Lachance
> Sent: Wednesday, November 13, 2002 12:37
> Cc: [EMAIL PROTECTED]
> Subject: Re: [SQL] System's database table
>
>
> While we are on the subject,
> is there any ERD of the system's table somewhere?
>
> JLL
>
>
> Josh Berkus wrote:
> >
> > Pedro,
> >
> > > I'm looking for the name of the table that contains all
> > > databases in my system. I already saw this in the postgres manual,
> > > but I forgot where.
> >
> > pg_database
> >
> > --
> > -Josh Berkus
> >  Aglio Database Solutions
> >  San Francisco
> >
> > ---(end of broadcast)---
> > TIP 4: Don't 'kill -9' the postmaster
>
> ---(end of broadcast)---
> TIP 4: Don't 'kill -9' the postmaster


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [SQL] Inconsistent or incomplete behavior observed in where

2002-11-12 Thread Paul Ogden
Josh,
Thanks for the reply.  Much of what you say is as we expected.  
I see that 7.3 has addressed the "Unable to identify an operator 
'=' for types 'numeric' and 'double precision'" problem, but 
I'm not sure how.  Context-sensitive approach? Overloaded operator
approach? Something else ( is there )?

If 7.3 is released soon, perhaps we can get by with the 
band-aid approach of overloading the comparison operators 
until such time as the new version is available.  Production
for us is next spring, so maybe we'll be okay on this one.
This approach would certainly allow our development team to
write their code one way.

> 
> 
> Paul,
> 
> > "Unable to identify an operator '=' for types 'numeric' and 'double
> > precision' You will have to retype this query using an explicit cast"
> 
> This is due, as you surmised, to decimal values defaulting to floats.
>  While there is little problem with an = operator for numeric and
> float, you would not want an implicit cast for a / operator with
> numeric and float.   As a result, I believe that all numeric and float
> operators have been left undefined.
> 
> > I am aware of the use of type casting to force the desired 
> > behavior in these situations.  I have also started to go down 
> > the road of creating functions and operators to force numeric 
> > to numeric comparison operations when comparing numeric to float, 
> > but realize that this approach is fraught with pitfalls, in fact 
> > it is interesting to us to note that with an operator in place 
> > to force numeric = float comparisons to parse as numeric = numeric, 
> > we started getting the opposite behavior.  Queries with 'column 
> > reference' = 0.0 worked fine, but queries with 'column reference' = 0
> > 
> > threw a variant of the previous exception:
> > 
> > "Unable to identify an operator '=' for types 'numeric' and
> > 'integer'"
> 
> Now, that's interesting.   Why would defining a "numeric = float" have
> broken "numeric = integer"?   There's no reason I can think of.
>   Perhaps I will try this myself and see if I encounter the same
> problem, or if your team modified the numeric = integer operator by
> mistake.
> 

No, we made no modifications to numeric = integer.  In fact, issuing
DROP OPERATOR (numeric,float8); 
cleared that problem right up.  And brought us back to square one.

> > Overall, this behavior appears to be inconsistent and is not 
> > the same behavior I have experienced with many other DBMS's.
> > Specifically, it seems strange that the parser does not treat 
> > values 0.0 or 77.5 as numeric(s[,p]) when comparing the values
> > to a column reference known to be of type numeric (s,[p]).  
> > 
> > Is an unquoted number in the form of NN.N always treated as a 
> > float?  
> 
> Yes.   I believe that this is from the SQL 92 spec; hopefully someone
> on this list with a copy of the Guide to the SQL Standard can quote it
> for you.
> 
> > If the planner could somehow recognize that the constant/
> > literal value was being compared to a column reference of the
> > type numeric (s,p) and treat the value accordingly, then would
> > operator identification no longer be a problem?
> 
> It's an interesting idea, and would be wonderful if it could be made to
> work.  However, the challenge of getting the program to correctly
> recognize the context for all literal values *without* making any wrong
> assumptions that would affect the data could be substantial.
> 
> Most other RDBMSs deal with this, not by any kind of data type
> context-sensitivity, but simply by supporting a large number of
> implicit casts.  This approach can have its own perils, as I have
> experienced with MS SQL Server, where the average of splits for 120,000
> transactions is significantly different if you accidentally let the
> database implicitly cast the values as Float instead of Numeric.
> 
> As such, there was talk on the Hackers list at one time of *reducing*
> the number of implicit casts instead of increasing them.   This would
> obviously make your particular problem even worse, but the proponents
> of reduction point out that implicit casts can get you into real
> trouble if you're not aware of them, wheras forcing explicit casts just
> gets you error messages.
> 
> Hmmm ... in fact, I'd think the perfect solution would be a
> compile-time option or contrib package which allows you to
> enable/disable implicit casts for many data types.
> 

I think this is a great idea.  We're more 

Re: [SQL] numeric problems

2002-11-14 Thread Paul Ogden
Gee, this sounds familiar.

See the question (
http://archives.postgresql.org/pgsql-sql/2002-11/msg00191.php ) I posted of
a similar nature a couple of days ago.

We decided to wait for 7.3/7.4 and in the mean time we're using CAST (
'constant value' as numeric ).
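
Applied to the query quoted below, that comes out as something like:

select itemNo, listprice from itmt_info where listprice > CAST(50.99 AS numeric);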

Thanks,

Paul Ogden
Claresco Corporation
(510) 549-2290

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:pgsql-sql-owner@;postgresql.org]On Behalf Of jack
> Sent: Thursday, November 14, 2002 18:22
> To: [EMAIL PROTECTED]
> Subject: [SQL] numeric problems
>
>
> I'm using pgAdmin II version 1.3.82 , psqlODBC 7.2.3 ,
>  postgreSQl 7.2.1. There is a problem when I do an SQL query
>  with a field of numeric, for example NUMERIC(10,2). The following
>  statement causes an error such as "Unable to identify an
>  operator '>' for types 'numeric' and 'double precision'..."
>
>  select itemNo, listprice from itmt_info
> where listprice  > 50.99;
>
>  And this one works.
>
>  select itemNo, listprice from itmt_info
> where listprice  > '50.99';
>
> It seems some numeric operators are not available on NUMERIC
> fields. Has anyone done something about this problem? Or should I
> change the NUMERIC type to FLOAT type?  Please advise, thank you in
> advance.
>
> Jack
>
>
>
>
> ---(end of broadcast)---
> TIP 5: Have you checked our extensive FAQ?
>
> http://www.postgresql.org/users-lounge/docs/faq.html


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [SQL] Slow self-join on a 100 million record table

2003-01-02 Thread Paul Thornett
I've played with this quite a lot on a fairly similar machine to
yours - i.e. dual Pentium III 700s, 5 x 4.5GB 10k SCSI disks, hardware
RAID0, 1GB RAM, Windows 2000 Professional (SP3), SQL Server 2000
Desktop (SP2). As expected my elapsed times are almost exactly twice as
long as yours.

I created my 100 million rows using an 8k file of words which I then
copied and appended until it contained 100 million rows. Then I DTS'ed
it into my table:
CREATE TABLE test
  (wid  int IDENTITY (1, 1) NOT NULL,
  word varchar(20) NOT NULL) ON [PRIMARY]

I went through numerous sequences, finally ending up with the data and
clustered index in the Primary Filegroup on my Raid0 disks, and the
nonclustered primary key (wid) in a secondary Filegroup on a separate
Firewire disk.

I also simplified your Sql (I don't see any need for subqueries), as
follows:

SELECT Refs=COUNT(*), t1.word, t2.word, t3.word
FROM test t1
JOIN test t2
ON t2.wid=t1.wid+1
AND t1.word in ('any','of','a')
AND t2.word in ('corel','woman','person')
JOIN test t3
ON t3.wid=t1.wid+2
AND t3.word in ('ventura','that','which')
GROUP BY t1.word, t2.word, t3.word

This gives me:

Refs    word   word    word
-----   ----   -----   -------
23086   a      Corel   VENTURA
57715   of     Corel   VENTURA

All the permutations I tried gave an identical result - it always
takes 30 seconds to run the above query on my machine, or your version
of the same query, or any number of other ideas I tried. Just to
select the first word with no joins takes 5 seconds. I was surprised
the secondary Filegroup didn't improve speed at all.

I feel like I've been on a long journey, and ended up exactly where I
started!!

-- 
Paul Thornett


"Mark Davies" <[EMAIL PROTECTED]> wrote in message
[EMAIL PROTECTED]">news:[EMAIL PROTECTED]...
> I have a database containing 100 million records, in which each record
> contains (in sequence) all of the words in a 100 million word
> collection of texts.  There are two columns: TheID (offset value) and
> TheWord (sequential words), e.g.:
>
> TheID  TheWord
> -----  -------
> 1      I
> 2      saw
> 3      the
> 4      man
> 5      that
> 6      came
>  . . .
> 1 xxx
>
> To extract strings, I then use self-joins on this one table, in which
> [ID], [ID-1], [ID+1] etc are used to find preceding and following
> words, e.g.:
> select count(*),w1.w1,w2.w1,w3.w1 from
> ((select w1, ID+1 as ID from seq where w1 in ('the','that','this')) w1
> inner join
> (select w1, ID as ID from seq where w1 in ('man','woman','person')) w2
> on w2.ID = w1.ID)
> inner join
> (select w1, ID-1 as ID from seq where w1 in ('who','that','which')) w3
> on w3.ID=w1.ID
> group by w1.w1,w2.w1,w3.w1
>
> This would yield results like "the man that" (words 3-5 above), "that
> woman who", "this man which", etc.
>
> The problem is, the self-join solution is extremely slow.  I have a
> SQL Server 7.0 database with a clustered index on TheWord (sequential
> words) and a normal index on TheID.  Even with all of this, however, a
> self-join query like the one just listed takes about 15 seconds on my
> machine (dual CPU 1.2GHz, 4GB RAM, three 10K rpm SCSI HD w/ RAID-0).
>
> Any suggestions?  Have I messed up in terms of the SQL statement?
> Thanks in advance for any help that you can give.



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [SQL] [JDBC] maxconnection

2003-06-19 Thread Paul Thomas
On 17/06/2003 10:22 zhuj wrote:
hi,all:
I want to constrain the maximum number of concurrent connections to 25
in the postgres jdbc driver. There are no methods for this change.
How would I do it?
The simplest way is to use a connection pool.

HTH

--
Paul Thomas
+--+-+
| Thomas Micro Systems Limited | Software Solutions for the Smaller 
Business |
| Computer Consultants | 
http://www.thomas-micro-systems-ltd.co.uk   |
+--+-+

---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [SQL] Delete duplicates

2003-06-22 Thread Paul Thomas
On 22/06/2003 10:15 Rudi Starcevic wrote:


Hi,

I have a table with duplicates and trouble with my SQL.
I'd like to keep a single record and remove older duplicates.
For example, of the 6 records below I'd like to keep records
4 and 6.
TABLE: aap
 id | keyword
----+-----------------
  1 | LEAGUE PANTHERS
  2 | LEAGUE PANTHERS
  3 | LEAGUE PANTHERS
  4 | LEAGUE PANTHERS
  5 | LEAGUE BRONCOS
  6 | LEAGUE BRONCOS
Here is my SQL so far; it will select records 1 to 5 instead
of 1, 2, 3 and 5 only.
Any help greatly appreciated. I think I need a Group By somewhere in
there.
select a1.id
from aap a1
where id < ( SELECT max(id) FROM aap AS a2 )
AND EXISTS
(
SELECT *
FROM aap AS a2
WHERE a1.keyword = a2.keyword
)
I just tried this with 7.3.3:

select max(id), keyword from aap where keyword in (select distinct keyword 
from aap) group by keyword;

 max | keyword
-----+-----------------
   6 | LEAGUE BRONCOS
   4 | LEAGUE PANTHERS
(2 rows)
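
If the aim is then to actually remove the older duplicates (keeping rows 4
and 6), something along these lines should finish the job:

delete from aap where id not in (select max(id) from aap group by keyword);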
HTH

--
Paul Thomas
+--+-+
| Thomas Micro Systems Limited | Software Solutions for the Smaller 
Business |
| Computer Consultants | 
http://www.thomas-micro-systems-ltd.co.uk   |
+--+-+

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[SQL] rule causes nextval() to be invoked twice

2003-07-22 Thread paul cannon
'Sup list-

I'm having trouble understanding the behavior of rules with regards to
default values.

Here's my situation: I have a table with a column referencing another.
When inserts are made to the second, I would like a certain
corresponding insert made to the first. Here's the simplest case I can
think of:

-- Begin demo SQL

CREATE TABLE main (
id SERIAL PRIMARY KEY,
contents VARCHAR);

CREATE TABLE othertable (
main_id INTEGER REFERENCES main
);

CREATE RULE main_insert AS
ON INSERT TO main DO
  INSERT INTO othertable VALUES (new.id);

INSERT INTO main(contents) VALUES ('Fails here');

-- End demo SQL

The last INSERT fails with: "$1 referential integrity violation - key
referenced from othertable not found in main"

If I remove the REFERENCES constraint, then I can see why. The insert
made into main behaves as expected; it gets nextval('main_id_seq'),
which comes out to 1. However, the main_insert rule gets _another_
nextval('main_id_seq'), and the value 2 is inserted into othertable.

"select nextval('main_id_seq')" afterwards confirms that the sequence
was incremented twice by the INSERT.

Is PostgreSQL supposed to be behaving that way? If so, what is the
reasoning behind it? Is there any way I can get around that and still
use a SERIAL for my primary key?

Until then, I'll have to make a function to do nextval('main_id_seq')
with every insert, and have the primary key be INTEGER.

Thanks-

-- 
..
| paul cannon [EMAIL PROTECTED] |
| http://people.debian.org/~pik/ |

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [SQL] rule causes nextval() to be invoked twice

2003-07-22 Thread paul cannon
On Tue, Jul 22, 2003 at 07:47:00PM -0600, paul cannon wrote:
> Until then, I'll have to make a function to do nextval('main_id_seq')
> with every insert, and have the primary key be INTEGER.

Nevermind- that doesn't work either! Here's the new sample code:

-- Begin demo SQL

CREATE SEQUENCE main_id_seq;
CREATE TABLE main (
id INTEGER PRIMARY KEY,
contents VARCHAR
);

CREATE TABLE othertable (
main_id INTEGER REFERENCES main(id)
);  

CREATE RULE main_insert AS 
  ON INSERT TO main DO
INSERT INTO othertable VALUES (new.id);

INSERT INTO main(id, contents) VALUES (nextval('main_id_seq'), 'Fails here');

-- End demo SQL

The same thing happens. The rule tries to put 2 into othertable. Surely
this is a bug?
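
For the record, a trigger instead of a rule seems to sidestep the whole
problem, since a row trigger fires after the defaults have been computed and
so sees the real id; a rough sketch, assuming plpgsql is installed and the
rule has been dropped (DROP RULE main_insert ON main;):

CREATE FUNCTION main_insert_trig() RETURNS trigger AS '
BEGIN
    -- NEW.id holds the value nextval() actually assigned to this row
    INSERT INTO othertable VALUES (NEW.id);
    RETURN NEW;
END;' LANGUAGE plpgsql;

CREATE TRIGGER main_insert AFTER INSERT ON main
    FOR EACH ROW EXECUTE PROCEDURE main_insert_trig();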

-- 
.----.
| paul cannon [EMAIL PROTECTED] |
| http://people.debian.org/~pik/ |

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [SQL] How can I to solute this problem?

2003-07-29 Thread Paul Thomas
On 29/07/2003 07:18 LEON wrote:
I use tomcat+linux_postgresql+jsp to develop a system.

I start postgresql with 1024 processes.
FE:
postmaster -i -S -N 1024 -B 2048 -D /var/lib/pgsql/data
My jsp doesn't implement a connection pool. It connects directly to postgresql
by jdbc.
IME, that is not a good way to do it. Use a connection pool.

After I run the UI some times, the UI (jsp) would report "Sorry, too many
clients already". The exception is SQLException.
I must restart tomcat or postgresql before I can continue running my UI.
My guess would be that you have a bug in your application which is not 
always closing the connection so eventually you exceed max_connections. 
You should always close the connection in a finally{} block so that is 
guaranteed that it will be closed regardless of any earlier exceptions 
which are thrown.

HTH

--
Paul Thomas
+--+-+
| Thomas Micro Systems Limited | Software Solutions for the Smaller 
Business |
| Computer Consultants | 
http://www.thomas-micro-systems-ltd.co.uk   |
+--+-+

---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [SQL] How can I to solute this problem?

2003-07-29 Thread Paul Thomas
On 29/07/2003 17:09 Achilleus Mantzios wrote:
You may take a look at jboss connection pool
mechanism. (postgresql.xml)
He doesn't mention JBoss, just Tomcat so I don't believe the 
postgresql.xml will help him much. He could use Tomcat's built-in 
connection pooling 
(http://jakarta.apache.org/tomcat/tomcat-4.1-doc/jndi-datasource-examples-howto.html) 
but should note that there is an error in the web.xml fragment: 
jdbc/mydb should be 
jdbc/postgres. Leon, if you need more help 
please feel free to contact me off-list.

Also a good way of ensuring that your app will
not leave open stale postgresql connections
is to initially configure your pool to only have a small
number of connections (e.g. 2,3).
Good advice for any form of connection pooling.

If your J2EE components (e.g. jsps) are fast enough
you should start facing connection shortages
only after the load on your machine increases
significantly. (and then safely increase the maximum
number of connections in your configuration (both postgresql wise
and app server (conn pool) wise).
On Tue, 29 Jul 2003, Paul Thomas wrote:

>
> On 29/07/2003 07:18 LEON wrote:
> > I use tomcat+linux_postgresql+jsp to develop system.
> >
> > I start postgresql with 1024 processes.
> > FE:
> > postmaster -i -S -N 1024 -B 2048 -D /var/lib/pgsql/data
> >
> > My jsp doesn't implement connection pool.It directly connects
postgresql
> > by jdbc.
>
> IME, that is not a good way to do it. Use a connection pool.
>
> > After I run the Ui some times, the UI(jsp) would report "ieSorry,too
many
> > clientslg" .The exception is SQLException.
> >
> > I must restart tomcat or postgresql I can continue to running my UI.
>
> My guess would be that you have a bug in your application which is not
> always closing the connection so eventually you exceed max_connections.
> You should always close the connection in a finally{} block so that is
> guaranteed that it will be closed regardless of any earlier exceptions
> which are thrown.
>
> HTH
>
>
--
==
Achilleus Mantzios
S/W Engineer
IT dept
Dynacom Tankers Mngmt
Nikis 4, Glyfada
Athens 16610
Greece
tel:+30-210-8981112
fax:+30-210-8981877
email:  achill at matrix dot gatewaynet dot com
mantzios at softlab dot ece dot ntua dot gr
---(end of broadcast)---
TIP 6: Have you searched our list archives?
   http://archives.postgresql.org

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [SQL] How can I to solute this problem?

2003-07-30 Thread Paul Thomas
On 30/07/2003 10:43 LEON wrote:
Hi, thank you for your help yesterday.

Now I met a new question.
When I continue clicking the UI (jsp) for some time, I get an error in the
page. It is "HTTP 404" and the exception is "NullPointerException".
I checked the log of tomcat. It recorded the message "connect to database
failed. The user property is missing. It is mandatory".
netstat | grep postgres |wc -l
The connection number is 278
lsof -i :5432 | wc -l
60
How can I solve this problem?
Is there a bug in postgresql's jdbc?
Could you give me help?
You must supply the user name in your DriverManager.getConnection(url, 
user, password).

HTH

--
Paul Thomas
+--+-+
| Thomas Micro Systems Limited | Software Solutions for the Smaller 
Business |
| Computer Consultants | 
http://www.thomas-micro-systems-ltd.co.uk   |
+--+-+

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [SQL] virus warning

2003-09-19 Thread Paul Thomas
On 19/09/2003 07:17 Tomasz Myrta wrote:
Hi
Recently I have been receiving a massive mail attack. This attack comes from some 
postgresql mailing list users. All send-to addresses are taken from users' 
mailboxes which contain postgresql posts. Currently I have found two kinds of 
viruses:
1. Empty post with "Undelivered message to..." body
2. Microsoft "Dear Customer... " based on www.microsoft.com design.
Both mails contain some .exe attachment.

Regards,
Tomasz Myrta
So far I've had nearly 150 of these in the last 12 hours or so. Somebody 
on these lists has a lot of explaining to do! Fortunately my spam filters 
are up to scratch and I run Linux :) Others may not be so lucky.

--
Paul Thomas
+--+-+
| Thomas Micro Systems Limited | Software Solutions for the Smaller 
Business |
| Computer Consultants | 
http://www.thomas-micro-systems-ltd.co.uk   |
+--+-+

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [SQL] virus warning

2003-09-19 Thread Paul Thomas
On 19/09/2003 16:37 scott.marlowe wrote:
Keep in mind, if you check the headers on the emails you'll see that they

are forged.  I've been getting about 20 emails a day telling me a message

I know I didn't send was infected with a virus.

I got 432 last night inbound, some with names forged from this list,
others from names unknown.
But I don't think it's the folks on this list; I think it's a
Windows worm that looks in people's email, harvests names at random, and
forges email based on them.
Does seem to be. Just downloaded another 200+. Mildly annoying for me with a 
DSL line. I really feel for those on dial-up :(

--
Paul Thomas
+--+-+
| Thomas Micro Systems Limited | Software Solutions for the Smaller 
Business |
| Computer Consultants | 
http://www.thomas-micro-systems-ltd.co.uk   |
+--+-+

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [SQL] Help on update that subselects other records in table, uses joins

2003-10-29 Thread Paul Ganainm

[EMAIL PROTECTED] says...


> I suggest Joe Conway's "SQL for Smarties" 


Surely that's Joe Celko?


>  or "SQL Queries for Mere Mortals" 
> from another author.


Michael J. Hernandez, John L. Viescas 



Paul...
 

-- 

plinehan__AT__yahoo__DOT__com

C++ Builder 5 SP1, Interbase 6.0.1.6 IBX 5.04 W2K Pro

Please do not top-post.


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [SQL] Using UNION inside a cursor

2003-11-04 Thread Paul Ganainm

[EMAIL PROTECTED] says...


>  (SELECT  * FROM history  WHERE obs_type = \'AA\' )
>  UNION 
>  (SELECT  * FROM  history  WHERE   obs_type = \'TA\');


Maybe I'm just confused here, but what's to stop you using 

SELECT * FROM History WHERE (Obs_Type = \'AA\' OR Obs_Type = \'TA\')


? Or have I missed something really obvious?


Paul...


-- 

plinehan__AT__yahoo__DOT__com

C++ Builder 5 SP1, Interbase 6.0.1.6 IBX 5.04 W2K Pro

Please do not top-post.


---(end of broadcast)---
TIP 8: explain analyze is your friend


[SQL] Arrays - a good idea?

2003-11-18 Thread Paul Ganainm


Hi all,


Even though PostgreSQL supports arrays, is it a good idea to use them? I 
mean, they

a) don't conform to the relational model

and 

b) are not transportable


so if one is designing an app, should one use them? 

When should they not/never be used? What are the criteria for justifying 
their use?


Paul...


-- 

plinehan__AT__yahoo__DOT__com

C++ Builder 5 SP1, Interbase 6.0.1.6 IBX 5.04 W2K Pro

Please do not top-post.


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [SQL] Fetch a single record

2003-12-09 Thread Paul Thomas
On 05/12/2003 21:51 David Shadovitz wrote:
I'm looking for the fastest way to fetch a single record from a table.
 I don't care what record it is.
[snip]
Have you also tried
  SELECT * from mytable limit 1
If you genuinely don't care what the record is (I assume you're just 
testing that the table is not empty?) then this might be the way to go. My 
rather limited knowledge of PG internals leads me to believe that this 
will generally cause just one page to be read from disk (I'm assuming the 
99% case of no mega-sized text/bytea/whatever columns here). I'd be 
interested to know just how far off the mark my understanding is...

--
Paul Thomas
+--+-+
| Thomas Micro Systems Limited | Software Solutions for the Smaller 
Business |
| Computer Consultants | 
http://www.thomas-micro-systems-ltd.co.uk   |
+--+-+

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [SQL] How to specify the beginning of the month in Postgres SQL syntax?

2003-12-09 Thread Paul Thomas
On 07/12/2003 16:16 [EMAIL PROTECTED] wrote:
Hello,

I need to create a view in Postgres that has a where clause of the
date < beginning of month.
i.e.:
SELECT supplier_number, Sum(amount) AS due
FROM purchase_orders
WHERE date < '2003-12-1' AND paid = 0
GROUP BY supplier_number
ORDER BY supplier_number ASC
As you can see, I've specified the 1st of December this year as the
where clause. What I want is an SQL statement that automatically
generates the first of the month. How do I do this?
I do shed-loads of these date-related queries and although it's feasible 
to write some SQL/UDF function to do what you're asking, in my experience 
it is better to process the date in your app and pass it across as a 
parameter. That way you could use the same piece of SQL to get, for 
example, data which is more than a month owing just by passing 2003-11-01 
as the date. Probably what you need is to write a function which takes an 
arbitrary date and returns the first date in that month/year. You _could_ 
write this as a PostgreSQL user-defined function, but writing it as part of 
your app will give you a) greater flexibility, as the function will be 
easily available to other parts of your application, b) if your app 
language/dev environment has a source-level debugger, you will be able to 
benefit from it when debugging your function, and c) someone trying to 
maintain your app in 4 years' time will only need to know your application 
language, SQL and a possible (very!) few PostgreSQL-specific variations 
from the SQL language definition.
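
That said, if you do want to keep it entirely in SQL, date_trunc() can
produce the first of the current month directly; a sketch using the same
columns as the query quoted above:

SELECT supplier_number, Sum(amount) AS due
FROM purchase_orders
WHERE date < date_trunc('month', CURRENT_DATE)
  AND paid = 0
GROUP BY supplier_number
ORDER BY supplier_number ASC;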

HTH

--
Paul Thomas
+--+-+
| Thomas Micro Systems Limited | Software Solutions for the Smaller 
Business |
| Computer Consultants | 
http://www.thomas-micro-systems-ltd.co.uk   |
+--+-+

---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [SQL] Pgaccess problem

2003-12-16 Thread Paul Thomas
On 16/12/2003 13:29 vijaykumar M wrote:
Hi All,
I am trying to install pgaccess on a redhat 7.2 linux server with 
postgres7.4.  Everything seems to be alright but when I go to run 
pgaccess I get an error message saying

Application initialization failed: no display name and no $DISPLAY 
environment v
ariable
Error in startup script: invalid command name "image"
   while executing
"image create bitmap dnarw -data  {
#define down_arrow_width 15
#define down_arrow_height 15
static char down_arrow_bits[] = {
   0x00,0x80,0x00,0x80,0x0..."
   (file "/usr/local/pgaccess/main.tcl" line 5)

Can anyone Offer any advice on this problem?
pgaccess is a TCL application. You need to be running under X. 
--
Paul Thomas
+--+-+
| Thomas Micro Systems Limited | Software Solutions for the Smaller 
Business |
| Computer Consultants | 
http://www.thomas-micro-systems-ltd.co.uk   |
+--+-+

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [SQL] [JDBC] Insert Row to ResultSet problem....java.sql.SQLException: No Primary Keys

2003-12-22 Thread Paul Thomas
On 16/12/2003 15:24 Jason L. van Brackel wrote:
[snip]
. this is where I get this exception
java.sql.SQLException: No Primary Keys
 ~~~
	at 
org.postgresql.jdbc2.AbstractJdbc2ResultSet.isUpdateable(AbstractJdbc2ResultSet.java:1363)
	at 
org.postgresql.jdbc2.AbstractJdbc2ResultSet.moveToInsertRow(AbstractJdbc2ResultSet.java:697)
	at 
com.cimconsultants.EFRMigration.MigrationUtil.main(MigrationUtil.java:62)
[snip]
I'm experienced with Java, but very new to JDBC and PostgreSQL.  I'm
using the JDK 1.4, PostgreSQL 7.4, and the binary pg74jdbc3.jar
driver.
A total shot in the dark but do you have a primary key on the table?
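If not, one can be added after the fact; a sketch with placeholder names:

ALTER TABLE mytable ADD PRIMARY KEY (id);

My understanding is that the updatable-ResultSet support needs a key to
identify which row to write back, hence the exception.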

--
Paul Thomas
+--+-+
| Thomas Micro Systems Limited | Software Solutions for the Smaller 
Business |
| Computer Consultants | 
http://www.thomas-micro-systems-ltd.co.uk   |
+--+-+

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


[SQL] ecpg - problem compiling prepare and declare statements

2004-01-05 Thread Paul Tilles
Using psql V7.4.1 and ecpg V3.1.0 on RH Linux 7.2, gcc compiler:

The following statement precompiles, compiles and links fine:

EXEC SQL declare state_cursor cursor for "select unique state from
location where post = 1";

I then change the above to

EXEC SQL BEGIN DECLARE SECTION;
const char *stmt="select unique state from location where post = 1";
EXEC SQL END DECLARE SECTION;

EXEC SQL declare state_cursor cursor for :stmt;

These statements generate the following warning from the gcc compiler:

warning: passing arg 3 of  ECPGprepare discards qualifiers from pointer
target type

and the link fails.

What am I doing wrong?

TIA,
Paul Tilles


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[SQL] Execute permissions for stored functions

2004-01-18 Thread Paul Hart
Hi all,

From what I've seen in the archives, questions like this have kind of 
been answered in the past, but I was wondering if there have been any 
changes in this area, or if anyone has good ideas on how to do what I'm 
about to ask :)

In RDBMSs such as Oracle, stored PL/SQL functions run with the 
permissions of the user that creates the function. Users who are given 
EXECUTE privileges then call the function with the permissions of the 
creator of the function.

Is this how things work with PL/pgSQL in PostgreSQL? From my 
understanding, the answer is 'no.' If the answer really is 'no,' then 
how do I achieve the same thing?

The main benefit for this is in security - I have a dynamic web 
application that requires (a lot of) access to a PostgreSQL database. I 
want to make sure that the user doesn't have direct access to change 
the content of tables, but rather to alter their contents, in 
predetermined ways, through a set of functions. It's another layer that 
protects against hacking, and because my project involves a lot of 
monetary transactions (and database transactions), I want to reduce my 
potential for malicious abuse.
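
For concreteness, here is roughly the shape of thing I'm after (names are
invented, and plpgsql is assumed to be installed; I gather newer CREATE
FUNCTION syntax has a SECURITY DEFINER clause that may be exactly this, but
I'd appreciate confirmation):

-- function owned by a privileged role, runs with the owner's rights
CREATE FUNCTION debit_account(integer, numeric) RETURNS void AS '
BEGIN
    UPDATE accounts SET balance = balance - $2 WHERE account_id = $1;
    RETURN;
END;
' LANGUAGE plpgsql SECURITY DEFINER;

-- the web user can only call the function, not touch the table directly
REVOKE ALL ON accounts FROM webuser;
GRANT EXECUTE ON FUNCTION debit_account(integer, numeric) TO webuser;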

Many thanks in advance for you help,

Paul

smime.p7s
Description: S/MIME cryptographic signature


Re: [SQL] Help! Error with postgresql!

2004-02-01 Thread Paul Hart
Would you mind telling us what's in your SQL file? It seems that 
there's a problem in it.

On 28 Jan 2004, at 05:12, [EMAIL PROTECTED] wrote:

Hello,
I have a problem when I execute the .sql file at my psql command prompt.
The error code is below:
pylori=# \i log-clinique.sql
CREATE
psql:log-clinique.sql:100: ERROR:  parser: parse error at or near ".3"
psql:log-clinique.sql:107: ERROR:  parser: parse error at or near ".3"
psql:log-clinique.sql:113: ERROR:  parser: parse error at or near ".3"
 
pylori=# \i trigger-clinique.sql
" does not exist in the system catalog.
createlang: external error
DROP
CREATE
DROP
CREATE
pylori=#
I hope some of you can help me with this.
Thank a lot,
raingsey 
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


[SQL] SQL query

2004-02-03 Thread Paul Czubilinski
Hello,

I would like to retrieve all the records from table A which have a given
lang_id and whose modification date is later than the modification date of
the record with the same id and lang_id='pl'.

Example:

table A - data example
==
id  | modification_date   | lang_id
+-+--
abc | 2002-10-11 10:12:11 | en
abc | 2002-11-12 11:12:11 | pl
abc | 2002-11-11 18:12:00 | de
sample  | 2003-04-15 22:43:14 | pl
sample  | 2003-05-16 11:10:15 | en
sample  | 2003-11-11 18:11:10 | de

If the given lang_id were 'en', the following record should be chosen:

sample  | 2003-05-16 11:10:15 | en

(its modification date is later than that of the record with id = 'sample'
and lang_id = 'pl')

If you have any idea how to write an SQL query for this, please help me.
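
A self-join against the 'pl' row seems like one way to express it (an
untested sketch, using table A from above, lower-cased):

SELECT cur.id, cur.modification_date, cur.lang_id
FROM a AS cur, a AS base
WHERE cur.lang_id = 'en'
  AND base.id = cur.id
  AND base.lang_id = 'pl'
  AND cur.modification_date > base.modification_date;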

Thanx,
Paul Czubilinski

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [SQL] Index not used - now me

2004-02-09 Thread Paul Thomas
On 09/02/2004 12:50 Christoph Haller wrote:
I know there have been dozens of threads on this subject and
I have searched the archives well (I hope at least), but still ...
I have
select version();
   version
--
 PostgreSQL 7.3.4 on hppa-hp-hpux10.20, compiled by GCC 2.8.1
show enable_seqscan ;
 enable_seqscan

 off
\d ParDef_DimRange
Table "public.pardef_dimrange"
Column |   Type   | Modifiers
---+--+---
 primary_key   | integer  | not null
 dim_pointer   | smallint | not null
 dimensions_nr | smallint | not null
 first | smallint | not null
 last  | smallint | not null
 max_range | smallint | not null
Indexes: pd_dptr_index btree (dim_pointer),
 pd_pkey_index btree (primary_key)
explain analyze SELECT Dim_Pointer,Dimensions_Nr,First,Last   FROM
ParDef_DimRange
 WHERE Dim_Pointer = 162::smallint ORDER BY Dim_Pointer,Dimensions_Nr;
QUERY PLAN
---
 Sort  (cost=7.02..7.03 rows=2 width=8) (actual time=0.72..0.75 rows=2
loops=1)
   Sort Key: dim_pointer, dimensions_nr
   ->  Index Scan using pd_dptr_index on pardef_dimrange
(cost=0.00..7.01 rows=2 width=8) (actual time=0.20..0.28 rows=2 loops=1)
 Index Cond: (dim_pointer = 162::smallint)
 Total runtime: 1.24 msec
excellent, but

explain analyze SELECT Dim_Pointer,Dimensions_Nr,First,Last   FROM
ParDef_DimRange
 WHERE Dim_Pointer = 162 ORDER BY Dim_Pointer,Dimensions_Nr;
  QUERY PLAN
---
 Sort  (cost=10062.22..10062.23 rows=2 width=8) (actual
time=32.44..32.46 rows=2 loops=1)
   Sort Key: dim_pointer, dimensions_nr
   ->  Seq Scan on pardef_dimrange  (cost=1.00..10062.21
rows=2 width=8) (actual time=11.06..31.93 rows=2 loops=1)
 Filter: (dim_pointer = 162)
 Total runtime: 32.79 msec
That's not nice. Will this go away on 7.4?
No. AFAIK, 7.4 is still very strict about column types, so you will still need 
to cast explicitly to smallint.

\d Transfer_ModRange
   Table "public.transfer_modrange"
 Column |   Type   | Modifiers
+--+---
 module_pointer | smallint | not null
 from_module| smallint | not null
 to_module  | smallint | not null
 primary_key| integer  | not null
Indexes: tmr_primkey_index btree (primary_key)
explain analyze SELECT Module_Pointer FROM Transfer_ModRange
 WHERE Primary_Key  = 13 AND From_Module <= 2 AND To_Module   >= 2 ;
  QUERY PLAN
--
 Index Scan using tmr_primkey_index on transfer_modrange
(cost=0.00..115.09 rows=14 width=2) (actual time=2.11..2.11 rows=0
loops=1)
   Index Cond: (primary_key = 13)
   Filter: ((from_module <= 2) AND (to_module >= 2))
 Total runtime: 2.46 msec
Now
set enable_seqscan to on ;
explain analyze SELECT Module_Pointer FROM Transfer_ModRange
 WHERE Primary_Key  = 13 AND From_Module <= 2 AND To_Module   >= 2 ;
  QUERY PLAN
--
 Seq Scan on transfer_modrange  (cost=0.00..104.93 rows=14 width=2)
(actual time=45.91..45.91 rows=0 loops=1)
   Filter: ((primary_key = 13) AND (from_module <= 2) AND (to_module >=
2))
 Total runtime: 46.19 msec
That's odd. May I please have an explanation for this?
Probably I should mention both tables have far less than 10,000 tuples.
VACUUM and ANALYZE were done just before.
That's because it's actually more efficient to do a seqscan on your small 
table. When you have only a small table (like many of us do when testing), 
the whole table will probably fit on one 8K page so the lowest cost 
operation (= quickest) is to get that page.  It was disabling seqscan that 
was forcing an index scan to appear to be the least costly operation. BTW, 
you can't actually prevent PG from doing a seqscan if there's no alternative 
plan. All set enable_seqscan = false does is make a seqscan appear very 
expensive so that the planner is less likely to pick it.
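
If you want to experiment without changing the whole session, one approach
(assuming your version supports SET LOCAL) is to scope the override to a
single transaction, roughly:

BEGIN;
SET LOCAL enable_seqscan = off;
EXPLAIN ANALYZE
  SELECT Module_Pointer FROM Transfer_ModRange
   WHERE Primary_Key = 13 AND From_Module <= 2 AND To_Module >= 2;
ROLLBACK;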

HTH

--
Paul Thomas
+--+-+
| Thomas Micro Systems Limited | Software Solutions for the Smaller 
Business |
| Computer Consultants | 
http://www.thomas-micro-systems-ltd.co.uk   |
+--+

Re: [SQL] Index not used - now me

2004-02-09 Thread Paul Thomas
On 09/02/2004 15:02 Christoph Haller wrote:
[snip]
Thanks for your reply so far, but there is one thing I still don't
understand.
You wrote
It was disabling seqscan that
was forcing an index scan to appear to be the least costly operation.
Why appear? If the Index Scan has a Total runtime: 2.46 msec and the Seq
Scan
a Total runtime: 46.19 msec, then the Index Scan is much faster.
Or am I completely off the track reading the explain analyze output?
No, I think it's me who's not reading the output correctly :( I didn't 
look closely enough to spot the run time difference. How many rows are 
there in the table?

--
Paul Thomas
+--+-+
| Thomas Micro Systems Limited | Software Solutions for the Smaller 
Business |
| Computer Consultants | 
http://www.thomas-micro-systems-ltd.co.uk   |
+--+-+

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [SQL] where not unique

2004-03-12 Thread Paul Thomas
On 12/03/2004 13:02 Gary Stainburn wrote:
Hi folks,

I've got a table which contains amongst other things a stock number and a

registration number.  I need to print out a list of stock number and reg
number where reg number is not unique (cherished number plate transfer
not
completed).  I've tried variations of a theme based on
select stock_number, registration from stock where registration in
  (select registration, count(registration) as count
 from stock where count > 1 group by registration);
but I have two problems.  Firstly with the sub-select I get:

usedcars=# select registration, count(registration) as count from stock
where
count > 1 group by registration;
ERROR:  Attribute 'count' not found
usedcars=#
although if I miss out the where clause I get the expected results.

Secondly, when I run the full query I get:

usedcars=# select stock_number, registration from stock
usedcars-# where registration in
usedcars-# (select registration, count(registration) as count from stock
group
by registration);
ERROR:  Subselect has too many fields
usedcars=#
which is obviously because of the count field.

Can anyone tell me where I'm going wroing with these count fields?
(I've tried renaming the field to regcount in case it was a reserved word
problem)
If I understand you correctly, you've got something like

  mytable
stockno  regno
--
SN1  REG1
SN2  REG2
SN3  REG3
SN4  REG2
and you want to list REG2. Something like

select regno from mytable group by regno having count(stockno) > 1;

might do it.
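
If you also want the stock numbers listed against each duplicated
registration, feeding that back in as a subquery should work (untested):

select stock_number, registration
from stock
where registration in (select registration
                       from stock
                       group by registration
                       having count(*) > 1)
order by registration, stock_number;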

HTH

--
Paul Thomas
+--+-+
| Thomas Micro Systems Limited | Software Solutions for 
Business |
| Computer Consultants | 
http://www.thomas-micro-systems-ltd.co.uk   |
+--+-+

---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [SQL] Invalid Characters

2004-03-15 Thread Paul Thomas
On 15/03/2004 10:37 Raman wrote:
Hi All,

I have imported some data from csv files in my DB. Now that data has some
invalid characters stored in it.
so I get this exception:
Exception in ChannelDAO Invalid character data was found.  This is most
likely caused by stored data containing characters that are invalid for
the character set
 the database was created in.  The most common example of this is storing
8bit data in a SQL_ASCII database.
Now, I tried using to_ascii() fn in my select query but it didn't work.
Can you please suggest a solution so that invalid characters do not
end up on my web page when the data is displayed.
Regards
Raman Garg
Use UNICODE as the db encoding. You'll have to drop and re-create the db 
though.
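
For example (database name invented), roughly:

CREATE DATABASE mydb_unicode WITH ENCODING 'UNICODE';

and then reload the data into the new database with pg_dump/pg_restore or COPY.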

HTH

--
Paul Thomas
+--+-+
| Thomas Micro Systems Limited | Software Solutions for 
Business |
| Computer Consultants | 
http://www.thomas-micro-systems-ltd.co.uk   |
+--+-+

---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [SQL] how to turn off autocommit in psql

2004-03-26 Thread Paul Thomas
On 26/03/2004 01:25 Kemin Zhou wrote:
I search far and wide and found a lot of disscussions about the 
autocommit, but none about how to do it. After reading 50 pages, my 
brain is numb.  Could any one give me a simple help?
Thanks

Kemin
Use transactions.
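
For example (table name invented), nothing becomes permanent until you say so:

BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE account_id = 1;
-- check the result, then either
COMMIT;
-- or ROLLBACK; to throw the change away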

--
Paul Thomas
+--+-+
| Thomas Micro Systems Limited | Software Solutions for 
Business |
| Computer Consultants | 
http://www.thomas-micro-systems-ltd.co.uk   |
+--+-+

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [SQL] partial unique constraint

2004-04-06 Thread Paul Thomas
On 06/04/2004 15:29 Robert Treat wrote:
Trying to come up with the proper syntax to meet the following criteria:

create table foo (bar integer, baz boolean UNIQUE (bar, baz = true));

note the above syntax is not correct, but should demonstrate what i'm
trying to do; I want to add a unique constraint such that we only allow
one case of bar and baz = true...  i can have unlimited bar and baz =
false, and there can be multiple bar and baz = true if the bars are
different...  did some doc reading and mail list searching but a valid
syntax for this seems to be escaping me...
btw I'm pretty sure I could do this with an external trigger, but am
wondering about a constraint oriented approach
What about

create table foo (bar integer, baz boolean);

create unique index foo_bar_baz on foo(bar, baz) where baz = true;

--
Paul Thomas
+--+-+
| Thomas Micro Systems Limited | Software Solutions for 
Business |
| Computer Consultants | 
http://www.thomas-micro-systems-ltd.co.uk   |
+--+-+

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [SQL] problem porting MySQL SQL to Postgres

2004-04-15 Thread Paul Thomas
On 15/04/2004 11:25 Dan Field wrote:
I've stumbled across a query I don't quite understand the error message 
for.

This query is pulled from a working MySQL setup:

SELECT
	DEWEY_ID, DEWEY_HUNDREDS, DEWEY_TENS, DEWEY_ONES,
	DEWEY_POINT_ONES, DEWEY_POINT_TENS, DEWEY_POINT_HUNDREDS,
	DEWEY_POINT_THOUSANDS, DEWEY_TYPE, DEWEY_LANG, DEWEY_SUBJECT
FROM lu_dewey
WHERE
	(DEWEY_HUNDREDS = 9) AND
	(DEWEY_TENS >= 0) AND
	(DEWEY_TENS <= 9) AND
	(DEWEY_ONES = 0 || DEWEY_ONES = NULL) AND
	(DEWEY_POINT_ONES = 0 || DEWEY_POINT_ONES = NULL) AND
	(DEWEY_POINT_TENS = 0 || DEWEY_POINT_TENS = NULL) AND
	(DEWEY_POINT_HUNDREDS = 0 || DEWEY_POINT_HUNDREDS = NULL) AND
	(DEWEY_POINT_THOUSANDS = 0 || DEWEY_POINT_THOUSANDS = NULL) AND
	(DEWEY_TYPE = 't') AND
	(DEWEY_LANG = 'en')
ORDER BY DEWEY_TENS

However I'm getting the following error:

 ERROR:  Unable to identify an operator '=' for types 'character' and 
'boolean' You will have to retype this query using an explicit cast.

Any help would be much appreciated
You're trying to use the string concatenation operator (||) in a boolean 
test. That's an invalid comparison according to the SQL specs. You need 
to use the SQL OR operator, e.g.,

(DEWEY_POINT_TENS = 0 OR DEWEY_POINT_TENS = NULL) AND
  ~~
Your "= NULL" tests are also not valid SQL (should be IS NULL). MySQL does 
not follow the specs in a number or areas. PostgreSQL is very 
standards-complient. Write valid SQL and you should be OK.
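
For instance, a couple of the predicates rewritten in standard form would
look roughly like:

(DEWEY_ONES = 0 OR DEWEY_ONES IS NULL) AND
(DEWEY_POINT_ONES = 0 OR DEWEY_POINT_ONES IS NULL) AND
...

and so on for the remaining columns.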

HTH

--
Paul Thomas
+--+-+
| Thomas Micro Systems Limited | Software Solutions for 
Business |
| Computer Consultants | 
http://www.thomas-micro-systems-ltd.co.uk   |
+--+-+

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [SQL] problem porting MySQL SQL to Postgres

2004-04-18 Thread Paul Thomas
On 18/04/2004 19:37 Markus Bertheau wrote:
On Thu, 15.04.2004 at 13:15, Paul Thomas wrote:
> On 15/04/2004 11:25 Dan Field wrote:
> Your "= NULL" tests are also not valid SQL (should be IS NULL).

I think = NULL _is_ valid SQL, it just doesn't do what you think.
It's valid in an assignment

	update foo set bar = null;

PostgreSQL can be coerced into accepting where foo = null with the 
transform_null_equals run-time option but that doesn't make it legal SQL 
AFAIK.

--
Paul Thomas
+--+-+
| Thomas Micro Systems Limited | Software Solutions for 
Business |
| Computer Consultants | 
http://www.thomas-micro-systems-ltd.co.uk   |
+--+-+

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [SQL] \D TO FILE

2004-05-03 Thread Paul Thomas
On 03/05/2004 12:49 Eric Anderson Vianet SAO wrote:
How could I record the '\d table' command to a file?
\o file
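
For example (file and table names invented):

\o table_description.txt
\d mytable
\o

The second \o with no argument sends output back to the terminal.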
HTH
--
Paul Thomas
+--+-+
| Thomas Micro Systems Limited | Software Solutions for 
Business |
| Computer Consultants | 
http://www.thomas-micro-systems-ltd.co.uk   |
+--+-+

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [SQL] Performance issue

2004-04-27 Thread Paul Thomas
On 27/04/2004 10:12 Michael L. Hostbaek wrote:
[snip]
Is this normal ? If I run the same select on another table in the same
database with ~40.000 rows, it takes approx 820.00ms...
You would probably get better answers on the correct list but my guess is 
that your fsm setting might be too low for the number of dead tuples you 
are trying to reclaim so the vacuum is not clearing all the dead stuff 
out. There's plenty of stuff about this in the archives and the docs.
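
As a very rough illustration only (the right numbers depend entirely on your
own dead-tuple volumes), the relevant postgresql.conf entries look like:

max_fsm_pages = 200000
max_fsm_relations = 1000

and the output of a database-wide VACUUM VERBOSE ends with free space map
figures you can compare against those settings.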

--
Paul Thomas
+--+-+
| Thomas Micro Systems Limited | Software Solutions for 
Business |
| Computer Consultants | 
http://www.thomas-micro-systems-ltd.co.uk   |
+--+-+

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


[SQL] v7.2 triggers and foreign keys

2004-05-20 Thread Paul Gimpelj



Hi,

I have two tables a and b; b references a. I used pgaccess to create the
tables, and psql to create the triggers and foreign keys. Table a has a
sequential id, which is also its primary key.

As constraints they were implicit, and cascade was off.

On delete of a b row, postgres also deleted the a table row.

I then created a2 and b2 with action queries, and the delete acted as
expected.

Where am I going wrong with this?
Do I need to install 7.4?

Also, I created tables a3 and b3 with sequential ids using pgaccess, same
as a and b, and the sequence ids don't increment with these tables.

Thanks,
paul


Re: [SQL] Doubt

2004-09-28 Thread Paul Thomas
On 22/09/2004 12:11 S.Sreejith wrote:
 Sir,
  I am new to PostgreSQL and I have a question. I am doing a project
in .NET technology with postgres (linux) as the database. I am using pgAdmin
and pgManager as tools for database related activities. I am able to
create functions in the 'sql' language, but when I select 'pgsql' as the language
for creating a query, an error pops up. How do I create a query using the 'pgsql'
language, and do any new tools need to be installed? Please advise. Mail me back
at [EMAIL PROTECTED]
I bet you haven't enabled pl/pgsql. Use createlang. Read the docs.
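
For example, from the shell (database name invented):

createlang plpgsql mydb
createlang -l mydb

The second command lists the languages currently installed in that database.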
--
Paul Thomas
+--+---+
| Thomas Micro Systems Limited | Software Solutions for Business   |
| Computer Consultants | http://www.thomas-micro-systems-ltd.co.uk |
+--+---+
---(end of broadcast)---
TIP 8: explain analyze is your friend


[SQL] Can tsearch replace ilike queries ?

2005-02-08 Thread Antony Paul
Hi all,
I read in another thread about using tsearch for ilike queries. My
question is: can it provide the full ilike functionality with better
performance than ilike?

rgds
Antony Paul

---(end of broadcast)---
TIP 8: explain analyze is your friend


[SQL] Maximum length of a query

2005-02-08 Thread Antony Paul
Hi all,
    What is the maximum length allowed for a query in PG 7.3.3? I
need arbitrarily long queries to be executed through JDBC.

rgds
Antony Paul

---(end of broadcast)---
TIP 8: explain analyze is your friend


[SQL] How to know a temp table exists ?

2005-02-08 Thread Antony Paul
Hi all,
    If using a connection pool, how can I tell whether a temp table exists?

rgds
Antony Paul
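
One possible check against the system catalogs (an untested sketch; 'mytmp'
is an invented temp table name, and note this matches temp tables from any
session, not just the current one):

SELECT 1
FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname = 'mytmp'
  AND n.nspname LIKE 'pg_temp%';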

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [SQL] Querying for name/value pairs in reverse

2006-07-15 Thread Paul S
This is definitely doable.  
 
one "Set" way that I could think of doing this would be to first compile a temp table with all of the Value/Pairs that your looking to search for and then just JOIN the ID's (in this case it would be the Value and Pair) to the ErrorValues table.  This should give you all of the ErrorID's in the ErrorValues table which would then allow you to JOIN up against the Errors table to get information like CreateDate.  

 
Another way, would be to use XML instead of Value/Pair to turn it into a Node and Value type of thing.  You could then use XQuery to search inside of the XML attribute for what you were looking for.  SET theory would be a better alternative but this is just a different idea.  

 
One naming convention tip: I like to name my primary keys something more descriptive than just ID. You'll notice that your ErrorValues table had to include a foreign key called ErrorID that actually relates to the attribute ID in the Errors table. When you're looking at miles and miles of code or reviewing JOIN syntax, "a.ID = b.ErrorID" is sometimes easier to validate if it reads "a.ErrorID = b.ErrorID" instead. Just my 2 cents...
 
Hope that helps.
 
-Paul
 
On 7/15/06, Stefan Arentz <[EMAIL PROTECTED]> wrote:
I'm no SQL expert by any means so I'm wondering if something like this
is possible. I have two tables like this:

create table Errors (
    Id serial not null,
    CreateDate timestamp not null,
    primary key (Id)
);

create table ErrorValues (
    Id serial not null,
    ErrorId int not null,
    Name varchar(255) not null,
    Value text not null,
    primary key (Id),
    foreign key (ErrorId) references Errors (Id)
);

Now, selecting specific ErrorValues with a bunch of names that are
related to an Error is of course pretty simple. But I want to go the
other way. I want to query for:

'give me all Errors that have the Code=1234 AND Address=1.2.3.4 AND
Type=OSX Name/Value pairs'

What is a good recipe to deal with this? Is something like this
possible with standard sql? Is there a nice PG way to do this?
Are there good books that cover real world stuff like this?

So many questions from a SQL noob.

S.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[SQL] Removing duplicate rows

2007-02-06 Thread Paul Lambert
I have some tables which I need to load from flat data files that can 
contain upwards of five million rows.


It's possible (owing to the structure of the system the data is 
originating from) that on occasion the data extract that generates these 
flat files can include two records that have the same values in the 
fields that make up the primary key of the table in my Postgres 
database, thus throwing an error when I attempt to load.


I'm assuming the best way to get around this is to load the data into a 
temporary table with "copy from" and then do a "select distinct into" my 
real table.


The question I have is the best way to set up this temporary table in my 
 reload script. (Having 14 servers running Postgres, each with 37 
tables, I don't want to be creating temporary tables manually each time 
I need to reload the databases)


As the table definitions get updated frequently, I'd like to put in my 
script to create the table as a copy of the existing 'real' table. I'm 
assuming "create table as" would be what I need to use, but what would I 
use as the select sql - i.e. how do I use select to pull the definition 
of a table.


Apologies if this is a stupid question, I'm still fairly new to Postgres 
so I'm not sure what system tables are available for pulling out this 
kind of information.


Thanks in advance for any assistance you can offer.

Regards,
Paul.

--
Paul Lambert
Database Administrator
AutoLedgers


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [SQL] Removing duplicate rows

2007-02-06 Thread Paul Lambert

Andrew Sullivan wrote:

On Wed, Feb 07, 2007 at 07:15:02AM +0900, Paul Lambert wrote:
I'm assuming the best way to get around this is to load the data into a 
temporary table with "copy from" and then do a "select distinct into" my 
real table.


You might find that sort|uniq at the command prompt would be better. 
That said,


Sort on Weenblows is a bastard to work with, and I don't believe it has 
a unique option. I probably should have mentioned this was on Weenblows.	




script to create the table as a copy of the existing 'real' table. I'm 
assuming "create table as" would be what I need to use, but what would I 
use as the select sql - i.e. how do I use select to pull the definition 
of a table.


an easy way to do this is "CREATE TABLE name AS SELECT . . . WHERE
1=0".  You get a table with no rows.  (WHERE FALSE and similar
constructs all work equally well.)


Thanks... I figured it would be blatantly obvious.

This does the trick:
CREATE TABLE billing_code_temp AS SELECT * FROM billing_code WHERE 1=0



A



Appreciate the help.

P.

--
Paul Lambert
Database Administrator
AutoLedgers


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[SQL] COPY FROM query.

2007-02-11 Thread Paul Lambert
I'm attempting to copy from a table into a file using a select query 
inside the copy.


The following is my command:

COPY (SELECT DISTINCT ON (dealer_id,appraisal_id) * FROM 
appraisals_temp) TO 'c:\autodrs_appraisal_new.txt' WITH DELIMITER AS '^' 
CSV HEADER;


I get the following returned:

WARNING:  nonstandard use of escape in a string literal
LINE 1: ...ealer_id,appraisal_id) * FROM appraisals_temp) TO 'C:\autodr...
 ^
HINT:  Use the escape string syntax for escapes, e.g., E'\r\n'.

ERROR: relative path not allowed for COPY to file
SQL state: 42602


(The caret character is pointing to the M in FROM)

As far as I can see this looks to be structured the same as an example 
in the manual:


COPY (SELECT * FROM country WHERE country_name LIKE 'A%') TO 
'/usr1/proj/bray/sql/a_list_countries.copy';


(with the exception of me using a distinct on)

If I run the inner select statement by itself I get results returned, 
but as soon as I wrap it with a copy from I get the above error.


As a secondary question, is there any way to get select into to put the 
results in an existing table rather than having it create a new table?


To give you a better understanding of what I am doing, my current 
problem is as follows.


I have a data extract running on a VMS system which generates a caret 
delimited text file. This file may (due to database setup on the VMS 
side which can't be fixed) contain entries which violate the duplicate 
key constraints of my Postgres database (i.e. the same record appears in 
the extracted text file twice) which will give me errors when I try to 
copy the file into the table in Postgres. In order to get around this, I 
am making a temporary table with no primary key, copying the data from 
the file into this table, doing the above select distinct to get only 
unique records and putting the result into the original table (which has 
been truncated of course)
I can't find any way of selecting into an existing table, only selecting 
into a new table, which means setting up the primary/foreign keys and 
such again which I don't want to have to do each time I run the script.


The following is the script I'm using:

DROP TABLE appraisals_temp;
CREATE TABLE appraisals_temp AS SELECT * FROM appraisals WHERE 1=0;
TRUNCATE TABLE appraisals;
COPY appraisals_temp FROM 'c:\autodrs_appraisal.txt' WITH DELIMITER AS 
'^' CSV HEADER;
COPY (SELECT DISTINCT ON (dealer_id,appraisal_id) * FROM 
appraisals_temp) TO 'c:\autodrs_appraisal_new.txt' WITH DELIMITER AS '^' 
CSV HEADER;
COPY appraisals FROM 'c:\autodrs_appraisal_new.txt' WITH DELIMITER AS 
'^' CSV HEADER;


I'd be greatful if anyone could explain why my copy to does not work, 
also greatful if anyone can offer any suggestions on a better way to do 
what I am doing (if such a way exists - which I'm sure it would)


Thanks in advance.

--
Paul Lambert
Database Administrator
AutoLedgers

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [SQL] COPY FROM query.

2007-02-11 Thread Paul Lambert
An INSERT INTO will fix my problem with needing the extra copy from and 
copy to.
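
Something like this (untested) collapses the de-duplication and the reload
into one step, reusing the same DISTINCT ON:

INSERT INTO appraisals
SELECT DISTINCT ON (dealer_id, appraisal_id) *
FROM appraisals_temp
ORDER BY dealer_id, appraisal_id;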


I'm still curious as to why i was being told I couldn't specify a 
relative path though.


P.

--
Paul Lambert
Database Administrator
AutoLedgers


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [SQL] COPY FROM query.

2007-02-11 Thread Paul Lambert

Joe wrote:

Hi Paul,

On Mon, 2007-02-12 at 08:19 +0900, Paul Lambert wrote:
I'm attempting to copy from a table into a file using a select query 
inside the copy.


The following is my command:

COPY (SELECT DISTINCT ON (dealer_id,appraisal_id) * FROM 
appraisals_temp) TO 'c:\autodrs_appraisal_new.txt' WITH DELIMITER AS '^' 
CSV HEADER;


I get the following returned:

WARNING:  nonstandard use of escape in a string literal
LINE 1: ...ealer_id,appraisal_id) * FROM appraisals_temp) TO 'C:\autodr...
  ^
HINT:  Use the escape string syntax for escapes, e.g., E'\r\n'.

ERROR: relative path not allowed for COPY to file
SQL state: 42602


(The caret character is pointing to the M in FROM)


I believe that on Windows you need to use double backslashes, i.e.,
'c:\\autodrs_appraisal_new.txt', although the regular slash may also work,
i.e., 'c:/autodrs_appraisal_new.txt'.

Joe


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings




If this is the case, it is strange that the first copy statement works 
as that is also only using a single backslash.


Having said that, if I switch it to a forward slash it works without 
error... odd, given that weenblows standard is backslash.


Thanks for the help though.

--
Paul Lambert
Database Administrator
AutoLedgers


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [SQL] COPY FROM query.

2007-02-11 Thread Paul Lambert

Phillip Smith wrote:
"Having said that, if I switch it to a forward slash it works without 
error... odd, given that weenblows standard is backslash."


Yes, but PostgreSQL uses a back-slash as an escape character, which needs to
be used to escape itself at an application level before the O/S gets to deal
with it :)

-p




OK, that makes sense, cheers for the help.

--
Paul Lambert
Database Administrator
AutoLedgers


---(end of broadcast)---
TIP 6: explain analyze is your friend


[SQL] Issue with copying data from a text file.

2007-03-19 Thread Paul Lambert
I have a procedure in place that copies data from a caret delimited text 
file into a table storing some information.


One of the fields in the table contains an item description which may 
contain item dimensions such as - 17" alloy wheels


The problem I am getting when I do my load is I believe due to the 
presence of the double quotation marks giving the copy the impression 
that it is to include the information following as a single text string 
until it gets to the next set of double quotes. As a result, I get the 
following:


AutoDRS=#   COPY deal_lines_temp_load FROM 
'c:/temp/autodrs_deal_lines.txt'

WITH DELIMITER AS '^' CSV HEADER;
ERROR:  value too long for type character varying(30)
CONTEXT:  COPY deal_lines_temp_load, line 87, column order_desc: "17 5 
spoke alloy wheels.^1291.18^117.38^983.69^1291.18^^C^^


The column as you can see is defined as a 30 character field, the load 
contains in this column ^17" 5 spoke alloy wheels.^


I note an option in the COPY command to specify the quote character, 
defaulting to double quote. The problem being a single quote will also 
be used in the data, as will other characters. Is there any way to get a 
copy to have no quote character? I.e. read the file and put whatever is 
between the caret characters straight into the appropriate field exactly 
as is.


TIA,
Paul.

--
Paul Lambert
Database Administrator
AutoLedgers


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [SQL] Issue with copying data from a text file.

2007-03-19 Thread Paul Lambert

Phillip Smith wrote:

Can you not export the source file with escape characters? ie,

^17\" Alloy Wheels^

~p



The source file comes from extracts on our main application which sits 
inside an in-house pretending-to-be-a-dbms file system. The content of 
these extracts would be difficult to change - the extract program would 
need to parse the data looking for quotes and preceed them with the 
necessary escape character.


Not being a proper database dump it's not a simple matter of flicking a 
switch to get it to include the escape character. The way the extracts 
are written would require a few dozen lines of code to each extract, and 
theres about 40ish extracts.


Plus I don't maintain that side of our code, and those that do can be a 
bit lazy and I'd likely be waiting months to get it done - if they even 
decide to do it.


--
Paul Lambert
Database Administrator
AutoLedgers


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [SQL] Issue with copying data from a text file.

2007-03-19 Thread Paul Lambert

Andrej Ricnik-Bay wrote:

On 3/20/07, Paul Lambert <[EMAIL PROTECTED]> wrote:


The source file comes from extracts on our main application which sits
inside an in-house pretending-to-be-a-dbms file system. The content of
these extracts would be difficult to change - the extract program would
need to parse the data looking for quotes and preceed them with the
necessary escape character.

Not being a proper database dump it's not a simple matter of flicking a
switch to get it to include the escape character. The way the extracts
are written would require a few dozen lines of code to each extract, and
theres about 40ish extracts.

Plus I don't maintain that side of our code, and those that do can be a
bit lazy and I'd likely be waiting months to get it done - if they even
decide to do it.

Pipe it through sed and replace the Carets with TABS?
sed 's/\^/\t/g' c:/temp/autodrs_deal_lines.txt > c:/temp/autodrs_deal_lines.tab


Then use copy like so:
\copy table from 'c:/temp/autodrs_deal_lines.tab' delimiter E'\t' null ''


Cheers,
Andrej




The data contains tabs... don't ask why... I don't have a clue :P

I'll do something along the lines of sed... but not with sed, I'll use 
the command line interpreter on the OpenVMS systems where the extracts 
run. I just thought there might have been a quicker way to switch it off 
in the copy command, i.e. specifying "quote none" as one of the 
parameters to the command. I guess not...


Thanks for the pointers.

P.

--
Paul Lambert
Database Administrator
AutoLedgers

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [SQL] Issue with copying data from a text file.

2007-03-19 Thread Paul Lambert

Phillip Smith wrote:
Maybe use char 254 or 253 or something similar... Anything that isn't 
going to be found in the file.


Have you tried using a string as a delimiter? QUOTE 
'THIS.STRING.ISNT.IN.THE.TEXT.FILE'


~p



I changed it to QUOTE '\f' to set the quote character to form-feed which 
is not going to appear in the file and that appears to do the trick 
without any changes to the source file.


Thanks,
P.

--
Paul Lambert
Database Administrator
AutoLedgers

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

   http://www.postgresql.org/about/donate


[SQL] Moving a simple function to pl/pgsql (Novice question)

2007-04-03 Thread Paul Lambert

Forgive me in advance for this terribly novice question...

I have the following function which was written in MS SQL Servers 
trigger/rule language...


CREATE TRIGGER [Sync_Deals] ON [dbo].[Deals]
FOR INSERT, UPDATE
AS
begin
declare @Found int
declare @deal varchar(10)
select @deal = deal_address from inserted
Select @Found = count(*) from dealbook.dbo.deals where deal_address = 
@deal
if @Found > 0
delete from dealbook.dbo.deals where deal_address = @deal
insert into dealbook.dbo.deals select * from inserted
end

The purpose being when a row in a table in one database is updated, it 
will copy (or replicate I guess) the record into a different table into 
another database in the same server. (deleting said record first if it 
already exists)


What is the best way to do this within Postgres? I assume a trigger is 
the way to go here as well, but having not written a trigger in PG I'm a 
little stuck as to where to start. My client goes live with their new 
system on Tuesday and this function forms part of some custom additions 
they wrote on their previous SQL server database. I'd thus like to make 
sure I know what I'm doing to finalise the conversion before the weekend.


It seems to me I need to create a function to do the copy, and then 
create a trigger to call the function - but I'm not entirely sure - you 
can probably tell I haven't done anything with triggers, functions or 
rules yet. :-)


(BTW: No I am not trying to get someone to do my work for me :-P - I 
have about 30ish triggers of various nature to convert, I just need 
somewhere to start so I can figure out how it's done then I can do the 
rest myself)


TIA,
P.

--
Paul Lambert
Database Administrator
AutoLedgers
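
In rough outline, and assuming the target table lives in the same database
(a separate database additionally needs the dblink contrib module), the
function/trigger pair could look something like this untested sketch with
invented table names:

CREATE OR REPLACE FUNCTION fn_sync_deals() RETURNS trigger AS $$
BEGIN
    -- drop any existing copy of this row, then re-insert the new version
    DELETE FROM dealbook_deals WHERE deal_address = NEW.deal_address;
    INSERT INTO dealbook_deals SELECT NEW.*;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_sync_deals
    AFTER INSERT OR UPDATE ON deals
    FOR EACH ROW EXECUTE PROCEDURE fn_sync_deals();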


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [SQL] Moving a simple function to pl/pgsql (Novice question)

2007-04-04 Thread Paul Lambert

Richard Broersma Jr wrote:

--- Paul Lambert <[EMAIL PROTECTED]> wrote:
The purpose being when a row in a table in one database is updated, it 
will copy (or replicate I guess) the record into a different table into 
another database in the same server. (deleting said record first if it 
already exists)


What is the best way to do this within Postgres?


In postgresql, there are two parts to implementing a trigger; the trigger 
function and the actual
trigger on the table that calls this function.

Here is a good example:
http://www.postgresql.org/docs/8.2/interactive/plpgsql-trigger.html#PLPGSQL-TRIGGER-AUDIT-EXAMPLE

One other point to be aware of, Postgresql does not allow your to reference 
other databases in the
DB-cluster directly.  It is possible but you need to add a contrib module 
called dblink:
http://pgfoundry.org/projects/snapshot/

Hope this helps,
Regards,
Richard Broersma Jr.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq




Perfect, thanks Richard.

Regards,
Paul.

--
Paul Lambert
Database Administrator
AutoLedgers


---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

   http://www.postgresql.org/about/donate


Re: [SQL] Windows postgres

2007-04-22 Thread Paul Lambert

Senthil wrote:

Respected sir/madam,

Please can you send me link of windows postgres sql download page.

thanks
senthil


---(end of broadcast)---
TIP 6: explain analyze is your friend




http://www.postgresql.org/ftp/binary/v8.2.4/win32/

--
Paul Lambert
Technical Support Team Leader and Database Administrator
AutoLedgers
Level 3, 823 Wellington Street, West Perth, W.A. 6005
Postal: P.O. Box 106, West Perth, W.A. 6872
Ph: 08 9217 5086 Fax: 08 9217 5055
AutoLedgers Technical Support Desk: 1800 649 987 (Free call) 08 9217 
5050 (Perth local and mobile)

Email: [EMAIL PROTECTED] <http://www.reynolds.com.au>

For AutoLedgers technical support, please send an email to 
[EMAIL PROTECTED]



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[SQL] Query RE using COPY

2007-05-07 Thread Paul Lambert

I use the COPY command to load data from a file into tables in my database.

The following is an example done in psql:
	COPY deals_temp_load FROM 'c:/temp/autodrs_deals.txt' WITH DELIMITER AS 
'^' QUOTE '\f' CSV HEADER;


If a column is added to the table but has not been added to the extracts 
which create the autodrs_deals.txt file, this errors thusly:

ERROR:  missing data for column "location"
CONTEXT:  COPY deals_temp_load, line 2: "line containing data removed 
for confidentiality - suffice to say it does not contain the 'location' 
column"


Is there any way of telling it to ignore columns that have not been 
specified in the file used to load? I have tried giving the column a 
default value but I still get the error. The column in this case 
'location' is a nullable column and does not always have data (yes, bad 
design for a database, but we'll skip that point for now) thus I am not 
concerned if the load procedure doesn't supply it.


BTW, this is done on Weendoze.

Thanks,
Paul.

--
Paul Lambert
Database Administrator
AutoLedgers


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [SQL] Query RE using COPY

2007-05-07 Thread Paul Lambert

Jonah H. Harris wrote:

COPY mytbl (col1, col2, col3, col4, col6) FROM 'myfile.txt' WITH
DELIMITER AS '^' QUOTE '\f' CSV HEADER;




I would rather not do it this way as I use the same load script at all 
customer sites where the extracts and requirements may vary. I.e. one 
customer may not use the location field mentioned above, but another might.


What I would prefer to do, if possible, is set that column in the 
database to 'not required' so that if I get an extract that doesn't have 
that column in the file, the copy doesn't care.


As it stands now I generally have to drop all the unneeded columns from 
 numerous tables, perform my load and then re-add them back again.


When we're talking about 40 or more tables in each database, some tables 
with several hundred columns... your suggestion would be a bit 
cumbersome - particularly if it is only one or two columns in each table 
that the client doesn't need.


--
Paul Lambert
Database Administrator
AutoLedgers


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [SQL] Query RE using COPY

2007-05-07 Thread Paul Lambert

Phillip Smith wrote:

Can you modify the 'extract' and make the extra column "\n" which is the
null escape?

That would be the only other option.




Right now the software that does the extracts is developed by our 
applications developers, but I'll be taking that onto my side in the 
near future, just looking for a workaround until then.


--
Paul Lambert
Database Administrator
AutoLedgers

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

   http://www.postgresql.org/about/donate


[SQL] Passing input to a view?

2007-05-09 Thread Paul Lambert
Is it possible to define a view to use input parameters rather than a 
hard-wired value in a where clause?


I.e. I have the following view:

CREATE OR REPLACE VIEW invoiced_repairs AS
SELECT ro_header.ro_number, ro_header.received_date, vehicles.vehicle_id,
	vehicles.model, vehicles.engine, vehicles.transmission, 
ro_header.odometer_reading,

ro_header.service_description,
(
 CASE WHEN (vehicles.month_of_manufacture <> '' AND 
vehicles.year_of_manufacture <> '')

 THEN
(vehicles.month_of_manufacture || '/' || 
vehicles.year_of_manufacture)
 ELSE
''
 END
) AS date_of_manufacture,
vehicles.identification_number, vehicles.engine_number from ro_header
INNER JOIN vehicles USING (vehicle_address)
WHERE ro_header.received_date between '1-jan-2007' AND '10-May-2007';
ALTER TABLE invoiced_repairs OWNER TO postgres;

Instead of having it set to 1-Jan-2007 and 10-May-2007 for the dates, 
can I specify that those values should be passed to the view? Or would I 
need to change this to an SQL language function?


I can't find anything in the documentation that answers this, but maybe 
I'm not looking hard enough.


Thanks in advance.

--
Paul Lambert
Database Administrator
AutoLedgers
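
A cut-down, untested sketch of the function route, keeping only three of the
columns and guessing at their data types, could look roughly like:

CREATE TYPE invoiced_repair AS (
    ro_number      integer,
    received_date  date,
    vehicle_id     integer
);

CREATE OR REPLACE FUNCTION invoiced_repairs(date, date)
RETURNS SETOF invoiced_repair AS $$
    SELECT ro_header.ro_number, ro_header.received_date, vehicles.vehicle_id
    FROM ro_header
    INNER JOIN vehicles USING (vehicle_address)
    WHERE ro_header.received_date BETWEEN $1 AND $2;
$$ LANGUAGE sql STABLE;

-- called as: SELECT * FROM invoiced_repairs('2007-01-01', '2007-05-10');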


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [SQL] pg_dump?

2007-05-14 Thread Paul Lambert

Richard Dunne wrote:
PostgreSQL beginner. What's the best way of exporting (dumping) a database/table/view/query to a file? 
I am using PostgreSQL v 8.2.  I am starting postgres server with 
C:\WINDOWS\system32\net.exe  start pgsql-8.2, (start service)
C:\postgresql-8.2.3-1\bin\psql.exe  -h localhost -p 5432 postgres "Richard" 
(psql to 'postgres')
both of which are run from shortcuts in the start menu. 
my cli starts with "postgres #", then "databasename #" after connecting using \c.


Richard.




With exactly what you said in the subject... pg_dump

http://www.postgresql.org/docs/8.2/static/app-pgdump.html

Your other option would be with an SQL 'COPY'

http://www.postgresql.org/docs/8.2/static/sql-copy.html

The difference between the two is dump will give you output including 
the SQL used to put it back into the database, i.e. INSERT INTO 
statements whereas copy will give you a flat delimited file.


--
Paul Lambert
Database Administrator
AutoLedgers


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [SQL] ignoring primary key violations in COPY command

2007-05-19 Thread Paul Lambert

[EMAIL PROTECTED] wrote:

Hi all,

We are importing the data from a CSV file into the database using the COPY 
command, but when a 'primary key' violation occurs it stops then and there 
and the data is not loaded into the tables. Also, in the CSV file the number 
of columns is not fixed; it varies and can range anywhere between 1 and 22. 
I need to figure out a way to update only those columns which are present 
in the CSV file. This, of course, I would like to accomplish using the COPY 
command. Please let me know if at all this is possible in PostgreSQL 8.1.

Thanks and regards,

Rajendra Singh



In my opinion your best bet in terms of getting around the primary key 
violation is to create a temporary table without a primary key, copy 
your data into that table, then do a select into your main table from 
that table.


Eg.. I do the following:

CREATE TABLE creditors_temp_load AS SELECT * FROM creditors WHERE 1=0;
TRUNCATE TABLE creditors;
COPY creditors_temp_load FROM 'c:/temp/autodrs_creditors.txt' WITH 
DELIMITER AS '^' QUOTE '\f' CSV HEADER;
INSERT INTO creditors (SELECT DISTINCT ON (dealer_id,supplier_no) * FROM 
creditors_temp_load WHERE (dealer_id,supplier_no) is not null);


The first statement creates a copy of the 'creditors' table without any 
records (the WHERE 1=0 condition is never true, so no rows are copied).

The second copies the data from the file into the temp table.
Finally an insert into the 'creditors' table is done by a select 
distinct on the temp table where the two fields listed are the primary 
key for that table.


I don't believe there is any way of getting around not having all the 
fields present - copy expects to find a match between fields in the file 
and fields in the destination table. If your record length in the load 
file is going to vary you may need to consider writing a program to read 
the data from the file and load it in.


Regards,
Paul.

--
Paul Lambert
Database Administrator
AutoLedgers


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[SQL] Using escape strings in an insert statement.

2007-07-02 Thread Paul Lambert

I'm having a little glitch in using escape strings within strings.

The following will best explain my issue:

tester=# create table testing (test_text text);
CREATE TABLE

tester=# insert into testing (test_text) values ('abc\\123');
WARNING:  nonstandard use of \\ in a string literal
LINE 1: insert into testing (test_text) values ('abc\\123');
^
HINT:  Use the escape string syntax for backslashes, e.g., E'\\'.
INSERT 0 1

^^^ This works, but I still get an error/warning telling me to use E'\\' 
 - which I do:


tester=# insert into testing (test_text) values ('abcE'\\'123');
Invalid command \. Try \? for help.
tester(#

^^^ And this just doesn't work at all.


The help itself (ch 4.1.2.1) tells me to use double backslash "Thus, to 
include a backslash character, write two backslashes (\\). "


Which one is the correct syntax and how can I make it not return 
anything other than a successful insert?


Noob question, I know...

But TIA.

(Version is 8.2.3 on Weenblowz if that is of any relevance)

--
Paul Lambert
Database Administrator
AutoLedgers

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

   http://www.postgresql.org/about/donate


Re: [SQL] Using escape strings in an insert statement.

2007-07-02 Thread Paul Lambert

Michael Glaesemann wrote:


On Jul 2, 2007, at 17:45 , Paul Lambert wrote:


tester=# insert into testing (test_text) values ('abcE'\\'123');


This should be
INSERT INTO testing (test_text) values (E'abc\\123');

The help itself (ch 4.1.2.1) tells me to use double backslash "Thus, 
to include a backslash character, write two backslashes (\\). "


Note that the String Constants section (4.1.2.1) says put the E "before 
the opening single quote".


http://www.postgresql.org/docs/8.2/interactive/sql-syntax-lexical.html#SQL-SYNTAX-CONSTANTS 





Missed that part - my apologies. Time to read the manuals more 
thoroughly it would seem.


An escape string constant is specified by writing the letter E (upper 
or lower case) just before the opening single quote, e.g. E'foo'.


Also be sure to read the Caution section.

Using \ as an escape character is the old non-standard PostgreSQL escape 
syntax that the WARNING (above) is, uh, warning you about. With 
standard_conforming_strings on (i.e., follow the SQL spec), the 
backslash is just a backslash character.


Which one is the correct syntax and how can I make it not return 
anything other than a successful insert?


Depends on the setting of standard_conforming_strings.

Michael Glaesemann
grzm seespotcode net



Looks like it's a toss-up between turning standard_conforming_strings on 
or turning escape_string_warning off, both seem to have the same effect 
in not giving the error anymore.


I'll go with your suggestion though, I guess since it's a new install 
not a previous upgrade of an old it's technically more correct to 
conform to current standards rather than attempt to conform to old behavior.


Cheers for the help - much appreciated.

--
Paul Lambert
Database Administrator
AutoLedgers


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [SQL] Join question

2007-07-26 Thread Paul Lambert

Phillip Smith wrote:

Whoops, I forgot the JOIN conditions! Fixed below

-Original Message-
From: Phillip Smith [mailto:[EMAIL PROTECTED] 
Sent: Friday, 27 July 2007 11:47

To: '[email protected]'
Subject: RE: [SQL] Join question


This might give you a starting point if I understand you correctly...

SELECT h.invoice_number,
 h.customer,
 l.item,
 l.amount
FROMlines AS l
JOINheaders AS h ON l.invoice_number = h.invoice_number
UNION
SELECT h.invoice_number,
 h.customer,
 s.item,
 s.amount
FROMsundries AS s
JOINheaders AS h ON s.invoice_number = h.invoice_number
ORDER BY invoice_number, item



Forgot all about union - I had two individual views, one for 
invoice+parts, one for invoice+sundries... didn't think of union to 
combine the two views together.


Perfect solution, you've done it for me again Phillip, thanks.

--
Paul Lambert
Database Administrator
AutoLedgers

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

   http://www.postgresql.org/about/donate


[SQL] Join question

2007-07-26 Thread Paul Lambert
I have a database in a parts sales environment that I am having a little 
difficulty with a join query - trying to figure out which way to join 
things.


I have a parts_invoice_header table, containing the header record for 
each invoice.
I have a parts_invoice_lines table, containing the parts details for 
each invoice.
I have a parts_invoice_sundries table, containing additional sundry 
charges (freight, delivery etc) on each invoice.


For each record in the invoice_header table, there can be 0 or more 
records in either of the two other tables.


I want to search for an invoice in the header file and get the details 
of all matching records from the lines and sundries. Assuming the lines 
and sundries tables both have the same column names, I should be able to 
end up with something like:


table  | header          | header   | line/sundries | line/sundries
column | invoice number  | customer | item          | amount
line   | abc             |          | 457ABC        | 10.00
sundry | abc             |          | FREIGHT       |  5.00
line   | abc             |          | FGOIL         | 15.00

What would be the best method of joining to create a result-set such as 
this?


TIA for any assistance,
P.

--
Paul Lambert
Database Administrator
AutoLedgers


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [SQL] Assistance with a trigger

2007-07-25 Thread Paul Lambert

chester c young wrote:


http://www.postgresql.org/docs/8.2/interactive/plpgsql.html

I found the doc to be quite good.

I'd start there, and then, if you have problems, email again.



I'd read that before... but another re-read triggered a thought pattern 
and I've got it working now - it's also a lot cleaner than the SQL 
Server implementation:


CREATE OR REPLACE function fn_update_so_tran() RETURNS TRIGGER AS 
$trg_update_so_tran$

DECLARE
BEGIN
   IF (NEW.tran_status = 'U') OR (NEW.tran_status = 'D') THEN
  UPDATE parts_purchasing SET
 qty_received=qty_received + NEW.qty_received,
 qty_invoiced = qty_invoiced + NEW.qty_invoiced,
 amt_invoiced = amt_invoiced + NEW.amt_invoiced,
 amt_received = amt_received + NEW.amt_received
  WHERE dealer_id = NEW.dealer_id
 AND so_tran_address = NEW.so_tran_address
 AND this_tran_address = so_tran_address;
   END IF;
   RETURN NULL;
END;
$trg_update_so_tran$ LANGUAGE plpgsql;
ALTER FUNCTION fn_update_so_tran() OWNER TO "AutoDRS";

CREATE TRIGGER trg_update_so_tran AFTER INSERT OR UPDATE on 
parts_purchasing FOR EACH ROW EXECUTE PROCEDURE fn_update_so_tran();


I'm liking PostgreSQL more and more with each new thing I try :)

Thanks muchly.

--
Paul Lambert
Database Administrator
AutoLedgers


---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

   http://www.postgresql.org/about/donate


[SQL] Assistance with a trigger

2007-07-25 Thread Paul Lambert
I have some triggers in an MS SQL server database which I need to copy 
across to a PG database. I've not yet done triggers in PG so I was 
hoping to get a little bit of a pointer on the first one as a place to 
start and work my way through from there.


The trigger looks like thus:

CREATE TRIGGER update_so_tran ON parts_purchasing FOR INSERT, UPDATE
AS

begin
declare @found int
declare @update varchar(10)
declare @dealerid varchar(6)
declare @trx_address varchar(10)
declare @quan_received numeric(14,4)
declare @amn_received numeric(14,4)
declare @quan_invoiced numeric(14,4)
declare @amn_invoiced numeric(14,4)
declare @line_no int
select @update = tran_status from inserted
if @update = "U" or @update = "D"
select @dealerid = dealer_id from inserted
select @trx_address = so_tran_address from inserted
select @quan_received = qty_received from inserted
select @amn_received = amt_received from inserted
select @quan_invoiced = qty_invoiced from inserted
select @amn_invoiced = amt_invoiced from inserted
select @line_no = line_number from inserted
update parts_purchasing set qty_received=qty_received + 
@quan_received,
qty_invoiced = qty_invoiced + 
@quan_invoiced,
amt_invoiced = amt_invoiced + 
@amn_invoiced,
amt_received = amt_received + 
@amn_received
where dealer_id = @dealerid AND so_tran_address = 
@trx_address;
end if
end


Basically in words, I have a parts_purchasing table which contains 
master order records as well as sub-records for the actual receival and 
invoicing of the orders. whenever a receival and invoice transaction 
comes through I need to update the qty_received, qty_invoiced, 
amt_received and amt_invoiced on the master record. It's pretty straight 
forward in SQL server but I'm having a little bit of difficulty in 
wrapping my head around it in PG. I know I have to create a function and 
call that from the trigger I'm just having a small bit of trouble in 
working out how to write this function.


If anyone could give me a starting point that would be highly appreciated.

Cheers,
Paul.

--
Paul Lambert
Database Administrator
AutoLedgers


---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

   http://www.postgresql.org/about/donate


[SQL] Count of rows

2007-08-02 Thread Paul Lambert
What's the best way to count how many rows are in each table via SQL? Or 
is it even possible?


I'm trying something like:

select tablename, count(*) from (select tablename from pg_tables where 
schemaname = 'public') as test group by tablename;


But obviously this just gives a count of 1 for each table in the public 
schema.


Can it be done or would I have to write a function?

--
Paul Lambert
Database Administrator
AutoLedgers
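
One rough answer that avoids writing a function is to read the planner's
statistics out of pg_class; the counts are approximate and only as fresh as
the last VACUUM/ANALYZE:

SELECT c.relname AS tablename, c.reltuples::bigint AS approx_rows
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'public'
  AND c.relkind = 'r'
ORDER BY c.relname;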


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [SQL] Implementing an regex filter

2007-08-08 Thread Paul Lambert

Enrico Weigelt wrote:

Hi folks,


Any hints for futher optimization appreciated :)


thx



It doesn't look like you have any indexes - I'd add one to at least 
articles.title and blacklist.title to start with and probably also 
user_results.article_id and articles.inode_id.
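
For example (index names invented, columns as mentioned above):

CREATE INDEX articles_title_idx ON articles (title);
CREATE INDEX blacklist_title_idx ON blacklist (title);
CREATE INDEX user_results_article_id_idx ON user_results (article_id);
CREATE INDEX articles_inode_id_idx ON articles (inode_id);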


--
Paul Lambert
Database Administrator
AutoLedgers


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [SQL] Extracting hostname from URI column

2007-09-11 Thread Paul Lambert

[EMAIL PROTECTED] wrote:

Hi,

I'm trying to use substr() and position() functions to extract the full host 
name (and later a domain) from a column that holds URLs.
This is what I'm trying, but it clearly doesn't do the job.

=> select substr(href, position('://' in href)+3, position('://' in 
href)+3+position('/' in href)), href from url where id <10;
 substr |   href
+--
 texturizer.net | http://texturizer.net/firebird/extensions/
 texturizer.net | http://texturizer.net/firebird/themes/
 forums.mozilla | http://forums.mozillazine.org/index.php?c=4
 www.mozillazin | http://www.mozillazine.org/
 devedge.netsca | http://devedge.netscape.com/viewsource/2002/bookmarks/
 www.google.com | http://www.google.com/search?&q=%s
 groups.google. | http://groups.google.com/groups?scoring=d&q=%s
 www.google.com | http://www.google.com/search?q=%s&btnI=I'm+Feeling+Lucky
 dictionary.ref | http://dictionary.reference.com/search?q=%s

The 3rd param to the substr function is clearly wrong.  Is it even doable 
without writing a procedure?

Finally, is this the fastest way to get this data, or is there  regex-based 
function that might be faster?

Thanks,
Otis



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings




Try this:

test=# select substr(href, position('://' in href)+3, position('/' in substr(href, position('://' in href)+3))-1), href from url;
  substr  |   href

--+-
-
 devedge.netscape.com | 
http://devedge.netscape.com/viewsource/2002/bookmark

s/
 texturizer.net   | http://texturizer.net/firebird/extensions/
 texturizer.net   | http://texturizer.net/firebird/themes/
 forums.mozillazine.org   | http://forums.mozillazine.org/index.php?c=4
 www.mozillazine.org  | http://www.mozillazine.org/
 devedge.netscape.com | 
http://devedge.netscape.com/viewsource/2002/bookmark

s/
 www.google.com   | http://www.google.com/search?&q=%s
 groups.google.com| http://groups.google.com/groups?scoring=d&q=%s
 www.google.com   | 
http://www.google.com/search?q=%s&btnI=I'm+Feeling+L

ucky
 dictionary.reference.com | http://dictionary.reference.com/search?q=%s
(10 rows)

--
Paul Lambert
Database Administrator
AutoLedgers

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

   http://www.postgresql.org/about/donate


Re: [SQL] Extracting hostname from URI column

2007-09-11 Thread Paul Lambert

chester c young wrote:

I'm trying to use substr() and position() functions to extract the
full host name (and later a domain) from a column that holds URLs.


substring( href from '.*://\([^/]*)' );



Ok, your solution looks better than mine... but I have no idea how to 
interpret that, time to consult some manuals.


--
Paul Lambert
Database Administrator
AutoLedgers


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [SQL] Extracting hostname from URI column

2007-09-11 Thread Paul Lambert

Andrej Ricnik-Bay wrote:


Plain regex. The key is the parentheses () ...
basically it will omit ANYTHING plus the two slashes at the beginning
of the string.  Then it will match everything BUT a slash, and as
much of that as possible, since regexes are greedy by default
(hence the host name he was looking for) ... and everything
AFTER a slash will be omitted.



Cheers,
Andrej


Thanks - that makes a bit more sense. I'm in the middle of reading 
chapter 9.3.7 of the manual - POSIX Regular Expressions - which I assume 
deals with this, so it's looking clearer.


--
Paul Lambert
Database Administrator
AutoLedgers

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [SQL] Extracting hostname from URI column

2007-09-11 Thread Paul Lambert

[EMAIL PROTECTED] wrote:


And what I'd like is something that would give me the counts for the number of 
occurrences of each unique hostname.  Something much like `uniq -c'.  Can 
anyone tell me how that's done or where I should look for info? (I'm not sure 
what to look for, that's the problem).

Thanks,
Otis



Just use distinct...

test=# select distinct count(*),substring( href from '.*://([^/]*)' ) as 
domain from url group by domain order by domain;

 count |          domain
-------+--------------------------
     3 | devedge.netscape.com
     1 | dictionary.reference.com
     2 | forums.mozillazine.org
     1 | groups.google.com
     4 | texturizer.net
    11 | www.google.com
     2 | www.mozillazine.org
(7 rows)

--
Paul Lambert
Database Administrator
AutoLedgers

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [SQL] Extracting hostname from URI column

2007-09-11 Thread Paul Lambert

Paul Lambert wrote:


Just use distinct...

test=# select distinct count(*),substring( href from '.*://([^/]*)' ) as 
domain from url group by domain order by domain;


OK so distinct was redundant there... it gives the same results without it.

AutoDRS=# select count(*) as occurances,substring( href from 
'.*://([^/]*)' ) as domain from url group by domain order by occurances 
desc,domain;

 occurances |          domain
------------+--------------------------
         11 | www.google.com
          4 | dictionary.reference.com
          4 | texturizer.net
          3 | devedge.netscape.com
          3 | groups.google.com
          2 | forums.mozillazine.org
          2 | www.mozillazine.org
(7 rows)

--
Paul Lambert
Database Administrator
AutoLedgers


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [SQL] Extracting hostname from URI column

2007-09-16 Thread Paul Lambert

Paul Lambert wrote:

chester c young wrote:

I'm trying to use substr() and position() functions to extract the
full host name (and later a domain) from a column that holds URLs.


substring( href from '.*://([^/]*)' );



Ok, your solution looks better than mine... but I have no idea how to 
interpret that, time to consult some manuals.




OK - following on from this, I'm doing a similar thing to the OP to 
analyze my proxy's traffic (never occurred to me to do it in a db until 
that post)


I've done the above regex to pull out the domain part of the URL and am 
left with results such as:

"acvs.mediaonenetwork.net"
"profile.ak.facebook.com"
"www.bankwest.com.au"

What I want to do next is pull out the major domain part of the URL
I.e. for the above three records I should end up with
"mediaonenetwork.net"
"facebook.com"
"bankwest.com.au"

What would be the best way to do something like that? I assume it won't 
be a simple regex like the above, due to the country codes on the end of 
some domains. My thought is to look at the last portion of the domain: if 
it's two characters long, assume it's a country code and grab the last 
three sections; if not (e.g. a three-character international domain), 
grab the last two... but that sounds a bit dodgy.
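
A rough sketch of that heuristic (assuming PostgreSQL 8.2+ for the VALUES 
list; note it will get domains registered directly under a two-letter 
country code wrong):

SELECT d AS domain,
       CASE
         WHEN d ~ '[.][a-z][a-z]$'                          -- last label is 2 chars: assume country code
         THEN substring(d from '([^.]+[.][^.]+[.][^.]+)$')  -- keep the last three labels
         ELSE substring(d from '([^.]+[.][^.]+)$')          -- keep the last two labels
       END AS major_domain
  FROM (VALUES ('acvs.mediaonenetwork.net'),
               ('profile.ak.facebook.com'),
               ('www.bankwest.com.au')) AS t(d);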


--
Paul Lambert
Database Administrator
AutoLedgers


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[SQL] Quick question re foreign keys.

2007-10-23 Thread Paul Lambert
I have a table where one column references, by foreign key, a column from 
another table. However, the column in this first table does not always 
contain data, which results in a not-null constraint violation when I 
attempt an insert.


My question therefore is: is it possible to create a foreign key that is 
conditional, i.e. one that is only enforced where the value in that 
column is not null?


--
Paul Lambert
Database Administrator
AutoLedgers


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [SQL] Quick question re foreign keys.

2007-10-23 Thread Paul Lambert

Tom Lane wrote:


If I'm understanding you correctly, the problem is not the foreign key,
it's that you marked the column NOT NULL.  A foreign key constraint by
itself will allow a NULL in the referencing column to pass.  You choose
whether you want to allow that or not by separately applying a NOT NULL
constraint or not.

regards, tom lane


It's marked not null as a result of being part of the primary key for 
that table which I can't really get around.


I can get away with not having the foreign key though, so I'll have to 
go down that path.
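
For anyone reading this later, a minimal sketch of what Tom describes 
(table names made up): the foreign key by itself lets NULLs through; it is 
the separate NOT NULL constraint that blocks them.

CREATE TABLE parent (id integer PRIMARY KEY);

CREATE TABLE child (
    child_id  integer PRIMARY KEY,
    parent_id integer REFERENCES parent (id)   -- note: no NOT NULL here
);

INSERT INTO parent VALUES (1);
INSERT INTO child  VALUES (10, 1);     -- ordinary reference, checked against parent
INSERT INTO child  VALUES (11, NULL);  -- allowed: the FK is not enforced for NULLs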


Cheers,
P.

--
Paul Lambert
Database Administrator
AutoLedgers


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [SQL] request for help with COPY syntax

2007-10-24 Thread Paul Lambert

Chuck D. wrote:

Greetings everyone,

I'm having some trouble with COPY syntax.

I'm importing the cities data from MaxMind, but I run into errors when the 
data contains a double quote inside a field.


The data is CSV, comma delimited, no quotes around fields, ISO-8859-1.  I'm 
using COPY with the defaults and setting client encoding to LATIN1.


The temporary table for importing looks like this:


  Table "geo.orig_city_maxmind"
   Column| Type  | Modifiers
-+---+---
 cc1 | character(2)  |
 city| text  |
 accent_city | text  |
 region  | character(3)  |
 latitude| character varying(18) |
 longitude   | character varying(18) |

The COPY command is:

COPY geo.orig_city_maxmind
  FROM '/home/www/geo/DATA/MAXMIND.com/cities_no_header.txt'
CSV;


Here is one error I get:

ERROR:  value too long for type character(3)
CONTEXT:  COPY orig_city_maxmind, line 281430, column region: "52.1438889"

Looking at line 281430 we see:

by,kruhavyetskalini"na,KruhavyetsKalini"na,02,52.1438889,31.6925

There are a couple of " characters where I would expect to see ' instead.  I 
see other lines in the data that use both in a field.


I tried this with the earth-info.nga.mil data and I have a similar problem but 
they are using newlines within a field and I can't figure out how to allow 
them.


Anyone know how I can rewrite the COPY command to allow those " or ' within 
the data?  After a couple of days I wasn't able to find any examples to help.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org




I get around this problem with my data loads by specifying, as the quote 
character, some other arbitrary character that I know won't appear in the 
data.


E.g. QUOTE E'\f' will specify form feed as the quote character, so any 
data containing double or single quotes will be loaded with those quote 
characters kept in the string.


Something similar may help with your case.
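
For example, against the table from your post (a sketch, assuming form 
feed never occurs in the data file):

COPY geo.orig_city_maxmind
  FROM '/home/www/geo/DATA/MAXMIND.com/cities_no_header.txt'
  CSV QUOTE E'\f';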

--
Paul Lambert
Database Administrator
AutoLedgers


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [SQL] Quick question re foreign keys.

2007-10-24 Thread Paul Lambert

Paul Lambert wrote:


It's marked not null as a result of being part of the primary key for 
that table which I can't really get around.


I can get away with not having the foreign key though, so I'll have to 
go down that path.


Cheers,
P.



Ignore this whole thread actually.

I need to rethink some of my design.

--
Paul Lambert
Database Administrator
AutoLedgers

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[SQL] Select into with dynamic criteria in a plpgsql function

2007-10-28 Thread Paul Lambert
I've got a function defined in PL/PgSQL to update some fields in a 
record where the criteria for pulling out some other values from a table 
is dynamic.


I define a string called account_criteria to which I assign a normal SQL 
WHERE clause based on some work done earlier in the function. I then 
need to pull out some values from a table based on that where clause so 
I'm doing something like:


SELECT amt_ytd INTO yesterday_amt_ytd FROM finbalance WHERE 
account_criteria;


Which returns the following:

ERROR: argument of WHERE must be type boolean, not type text
SQL state: 42804
Context: SQL statement "SELECT amt_ytd FROM finbalance WHERE  $1 "
PL/pgSQL function "fn_update_daily_balance" line 27 at SQL statement

I assume I would need something like EXECUTE to do this... but SELECT 
INTO is not supported in EXECUTE so I have a bit of a conundrum. :(


Can anyone offer some suggestion(s) on how can I make my function 
behave? Non-violent suggestions would be preferable.


--
Paul Lambert
Database Administrator
AutoLedgers


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [SQL] Select into with dynamic criteria in a plpgsql function

2007-10-28 Thread Paul Lambert

Paul Lambert wrote:
I've got a function defined in PL/PgSQL to update some fields in a 
record where the criteria for pulling out some other values from a table 
is dynamic.


I define a string called account_criteria to which I assign a normal SQL 
WHERE clause based on some work done earlier in the function. I then 
need to pull out some values from a table based on that where clause so 
I'm doing something like:


SELECT amt_ytd INTO yesterday_amt_ytd FROM finbalance WHERE 
account_criteria;


Which returns the following:

ERROR: argument of WHERE must be type boolean, not type text
SQL state: 42804
Context: SQL statement "SELECT amt_ytd FROM finbalance WHERE  $1 "
PL/pgSQL function "fn_update_daily_balance" line 27 at SQL statement

I assume I would need something like EXECUTE to do this... but SELECT 
INTO is not supported in EXECUTE so I have a bit of a conundrum. :(


Can anyone offer some suggestion(s) on how can I make my function 
behave? Non-violent suggestions would be preferable.




Ignore this - I didn't notice EXECUTE ... INTO ... - I've just 
re-read the manual and worked it out.
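
The working form looks something like this (a sketch reusing the names 
from the original post; account_criteria is assumed to already be a 
safely built WHERE clause):

EXECUTE 'SELECT amt_ytd FROM finbalance WHERE ' || account_criteria
   INTO yesterday_amt_ytd;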


Thanks.

--
Paul Lambert
Database Administrator
AutoLedgers


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[SQL] Query design assistance - getting daily totals

2007-12-11 Thread Paul Lambert
I have a table of account balances as at the end of a working day and 
want to from that, calculate daily total figures.


Eg, let's say I have a table structure of:
year_id integer
month_id integer
working_day integer
account integer
account_balance numeric(19,4)

Example data might be something like
2007,12,1,1,100.00
2007,12,2,1,200.00
2007,12,3,1,250.00
2007,12,4,1,500.00
2007,12,5,1,575.00

I want to construct a query that will give me the daily totals (the change 
from the previous day's balance) from this information, so I would be 
presented with something like:

2007,12,1,1,100.00
2007,12,2,1,100.00
2007,12,3,1,50.00
2007,12,4,1,250.00
2007,12,5,1,75.00

I figure there's a couple of ways I could do it...
Firstly, build a nested select: the inner level gets the main data, the 
outer select joins it to itself where the working_day is equal to the 
working_day - 1 from the nested query, and then wrap that in another 
select that calculates the difference between the two account_balance 
columns.
The second option I think would be to create a function whereby I pass 
it the primary key fields (year_id,month_id,working_day,account) and 
have it do two selects and work out the difference.


I suspect the second option would be more efficient than the first, and 
probably easier to implement, since it would be easier to handle 
cross-month boundaries - i.e. day 1's daily total is the amount on that 
day minus the amount on the final day of the previous month. But does 
anyone have any alternate suggestions that would be better still?
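
As a rough sketch of the first (self-join) approach - assuming the balance 
table is called daily_balance (name made up) and ignoring the cross-month 
boundary for day 1:

SELECT t.year_id, t.month_id, t.working_day, t.account,
       t.account_balance - COALESCE(y.account_balance, 0) AS daily_total
  FROM daily_balance t
  LEFT JOIN daily_balance y
    ON  y.year_id     = t.year_id
    AND y.month_id    = t.month_id
    AND y.working_day = t.working_day - 1
    AND y.account     = t.account
 ORDER BY t.account, t.year_id, t.month_id, t.working_day;

Against the example data above this gives 100, 100, 50, 250, 75.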


Cheers,
Paul.

--
Paul Lambert
Database Administrator
AutoLedgers


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


[SQL] Function result using execute

2007-12-11 Thread Paul Lambert
I have a function which uses execute to populate the value of a variable 
based on a defined select construct.


The relevant part of the code looks like thus:
   EXECUTE curr_query INTO curr_amount;
   RAISE NOTICE '%',curr_amount;
   IF NOT FOUND THEN
  curr_amount=0;
   END IF;
   RAISE NOTICE '%',curr_amount;

I've added the IF NOT FOUND to trap the case where nothing is returned by 
the EXECUTE, so that the value gets set to a default of 0 rather than null.


When I call the function, the first raise notice gives me a value that 
is correct based on the select it would be performing, but the second 
raise notice gives me a 0, which suggests to me that although the 
execute has populated the curr_amount field with something, the IF NOT 
FOUND is always firing.


Am I misunderstanding what the FOUND variable can be used for - i.e. is 
it not compatible with/not set by the EXECUTE command, and should I 
therefore just be using a test of IF curr_amount IS NOT NULL?


Cheers,
Paul.

--
Paul Lambert
Database Administrator
AutoLedgers - A Reynolds & Reynolds Company

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [SQL] Function result using execute

2007-12-11 Thread Paul Lambert

Tom Lane wrote:

Do you really need an EXECUTE?  If so, maybe you could restructure this
using a FOR ... IN EXECUTE, or some such thing.


I'll always only ever have a single result, since the function gets 
passed all the fields making up the primary key of the table, so doing a 
FOR ... IN EXECUTE seems like it's doing more work than is needed.


I need an execute because I'm dynamically constructing an SQL statement 
based on the parameters passed into the function - unless there is some 
other way of doing it that I'm not aware of.





therefore I just be using a test of IF curr_amount IS NOT NULL?


Well, that might work.  Have you thought through the corner case
where the query does find a row but the field's value is null?



The field in question is marked not null in the table's schema, so unless 
PG lets things get past this constraint I don't believe that would be an 
issue.


Having the test as IS NOT NULL seems to be doing the job.
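
(For reference: per the PL/pgSQL documentation, plain EXECUTE does not set 
FOUND, so testing the result directly is the way to go. A sketch with the 
variable names from the original post:)

EXECUTE curr_query INTO curr_amount;
IF curr_amount IS NULL THEN      -- EXECUTE leaves FOUND alone, so test the result itself
    curr_amount := 0;
END IF;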

Thanks.

--
Paul Lambert
Database Administrator
AutoLedgers - A Reynolds & Reynolds Company

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [SQL] Query design assistance - getting daily totals

2007-12-11 Thread Paul Lambert

A. Kretschmer wrote:

am  Wed, dem 12.12.2007, um 10:34:35 +0900 mailte Paul Lambert folgendes:

year_id integer
month_id integer
working_day integer


Why these broken data types? We have date and timestamp[tz].




It's a financial application which needs to work with a concept of 
'financial periods' which may not necessarily correspond to calendar 
months, and it's much easier to manage it this way than to merge it all 
together using a date field. E.g. the 1st of January may actually be the 
15th 'working day' of the 9th 'financial period' - looking at just a date 
of Jan-1 there is no way of knowing this, and it's the periods that 
matter more than the actual date.


I've given the function method a try and it looks to work efficiently 
enough.


P.

--
Paul Lambert
Database Administrator
AutoLedgers - A Reynolds & Reynolds Company

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [SQL] Shorthand column labels (SELECT col1 foo, ...) vs (SELECT col1 AS foo, ...)

2008-01-09 Thread Paul Lambert

Ken Johanson wrote:
I notice PG doesn't allow shorthand column labels -- it requires the 
'AS' keyword.


SELECT col1 foo, ...; -> ERROR: syntax error at or near "foo"

For compatibility with other databases, what objections might there be to 
allowing this syntax in the future?


On the 'pros' side, I think it eases migration to PG, shortens code, is 
similar in syntax to shorthand table aliases, and some users might argue 
it has become de facto syntax among DBs.


Regards,
Ken


Briefly discussed a couple of weeks ago.

See http://archives.postgresql.org/pgsql-general/2008-01/msg00089.php
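
For illustration, a trivial example of the current behaviour:

SELECT now() ts;      -- ERROR:  syntax error at or near "ts"
SELECT now() AS ts;   -- accepted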

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings

