[SQL] How to use function PointN?

2007-05-07 Thread Nemo Terry

select PointN(envelope(polyline),1) from highway;

This returns NULL. Why?






[SQL] syntax error in "createdb"

2007-05-07 Thread kannan kk
Hello,

When I try to execute createdb, psql, pg_dump, pg_restore and similar PostgreSQL
commands, I get

  syntax error at or near 'createdb'

and similar errors for the other commands.

Please help me figure out what the problem could be.

   

Re: [SQL] syntax error in "createdb"

2007-05-07 Thread Phillip Smith
Where are you trying to execute these commands? They should be run at
the command line as they are 'standalone' programs, not SQL commands to
be run in an SQL session.
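
For example (a minimal sketch; the database name "mydb" is a placeholder):

$ createdb mydb            # run from the operating system shell, not inside psql
$ psql mydb                # this starts an interactive SQL session...
mydb=> SELECT current_database();   -- ...where you run SQL, not shell programs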

On Mon, 2007-05-07 at 10:01 +0100, kannan kk wrote:

> Hello,
> 
> When I try to execute createdb, psql, pg_dump, pg_restore and similar
> PostgreSQL commands, I get
>   syntax error at or near 'createdb'
> and similar errors for the other commands.
> 
> Please help me figure out what the problem could be.
> 




Re: [SQL] Sequence vs. Index Scan

2007-05-07 Thread Andrew Sullivan
On Sun, May 06, 2007 at 01:45:54PM -0500, Aaron Bono wrote:
 
> Then I inserted 150 more records in the slow schema and pow - it started
> working like the fast schema.
> 
> So my conclusion is that the function is being treated as volatile even
> though it is stable because the number of records is small. 

I don't think that's the issue.  If this is dependent on the
number of records, then for some reason the way the data is
structured means that the planner thinks a seqscan's a better bet. 
This is probably due to distribution of the values.  You could try
increasing the stats sample, and see if that helps.
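
Roughly like this, with placeholder table and column names (the default
statistics target in 8.2 is 10):

ALTER TABLE my_table ALTER COLUMN my_column SET STATISTICS 100;
ANALYZE my_table;   -- re-sample the column with the larger target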

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
This work was visionary and imaginative, and goes to show that visionary
and imaginative work need not end up well. 
--Dennis Ritchie



Re: [SQL] How to use function PointN?

2007-05-07 Thread Michael Fuhr
On Mon, May 07, 2007 at 04:07:00PM +0800, Nemo Terry wrote:
> select PointN(envelope(polyline),1) from highway;
> This returns NULL. Why?

PointN is a PostGIS function; you might get more help on the
postgis-users mailing list.

http://postgis.refractions.net/mailman/listinfo/postgis-users

See also the PostGIS documentation for Envelope and PointN:

http://postgis.refractions.net/docs/ch06.html

Envelope(geometry)

Returns a POLYGON representing the bounding box of the geometry.

PointN(geometry,integer)

Return the N'th point in the first linestring in the geometry.
Return NULL if there is no linestring in the geometry.

You've used Envelope to get a polygon but PointN expects a linestring.
Try using ExteriorRing on Envelope's polygon:

SELECT PointN(ExteriorRing(Envelope(polyline)), 1) FROM highway;
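
A quick way to see the mismatch for yourself, using the same table and column
as above:

SELECT GeometryType(Envelope(polyline)),                -- 'POLYGON': PointN returns NULL
       GeometryType(ExteriorRing(Envelope(polyline)))   -- 'LINESTRING': PointN works
FROM highway LIMIT 1;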

-- 
Michael Fuhr



Re: [SQL] Sequence vs. Index Scan

2007-05-07 Thread Tom Lane
Andrew Sullivan <[EMAIL PROTECTED]> writes:
> On Sun, May 06, 2007 at 01:45:54PM -0500, Aaron Bono wrote:
>> So my conclusion is that the function is being treated as volatile even
>> though it is stable because the number of records is small. 

> I don't think that's the issue.  If this is dependent on the
> number of records, then for some reason the way the data is
> structured means that the planner thinks a seqscan's a better bet. 
> This is probably due to distribution of the values.  You could try
> increasing the stats sample, and see if that helps.

It's got nothing to do with distribution, just with numbers of pages to
fetch.  You'll nearly always get a seqscan plan if there are only a
couple of pages in the table, simply because it would take more I/O to
read the index too.

The reason this is a problem in this example is that the function is so
expensive to execute.  The planner should be avoiding the seqscan on the
basis of CPU cost not I/O cost, but it doesn't know that the function is
expensive enough to drive the decision that way.

In CVS HEAD (8.3-to-be) we've added a "cost" property to functions,
which provides a clean way to fix this issue, but there's no good way to
deal with it in existing releases :-(
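
For concreteness, a sketch of what that looks like in 8.3 (the function body and
the cost figure here are invented):

CREATE OR REPLACE FUNCTION expensive_lookup(id integer) RETURNS integer AS $$
BEGIN
    -- stand-in for the real, expensive computation
    RETURN id * 2;
END;
$$ LANGUAGE plpgsql STABLE
COST 10000;   -- the planner now charges each call ~10000 * cpu_operator_cost

-- or, for an existing function:
ALTER FUNCTION expensive_lookup(integer) COST 10000;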

regards, tom lane



[SQL] Temporal Table Relations and Referential Integrity

2007-05-07 Thread Richard Broersma Jr
In non-temporal tables, changes to the primary key of a Parent table and record
deletions can be cascaded to a related Child table using
"REFERENCES Parent( Parent_code ) ON DELETE CASCADE ON UPDATE CASCADE,"
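
For concreteness, a minimal non-temporal sketch of that arrangement (the Child
table's other columns are invented):

CREATE TABLE Parent (
    Parent_code integer PRIMARY KEY
);

CREATE TABLE Child (
    Child_id    integer PRIMARY KEY,
    Parent_code integer NOT NULL
                REFERENCES Parent ( Parent_code )
                ON DELETE CASCADE ON UPDATE CASCADE
);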

However, since temporal UPDATEs and DELETEs do not behave in the same way as
they do in non-temporal tables, is anyone able to successfully use Referential
Integrity constraints between Parent and Child tables?  Or are custom triggers
the only solution to maintain temporal relations between Parents and Children?
My question deals specifically with the "Current" style of temporal relations,
where only the currently known attributes and attribute histories are stored.

As a second question, would anyone know if temporal referential integrity is
targeted for future inclusion in the SQL standard?

I would be happy to elaborate on what I mean by Temporal Referential Integrity
(RI) if my questions are unclear.

Regards,
Richard Broersma Jr.



Re: [SQL] Sequence vs. Index Scan

2007-05-07 Thread Aaron Bono

On 5/7/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> Andrew Sullivan <[EMAIL PROTECTED]> writes:
> > On Sun, May 06, 2007 at 01:45:54PM -0500, Aaron Bono wrote:
> >> So my conclusion is that the function is being treated as volatile even
> >> though it is stable because the number of records is small.
>
> > I don't think that's the issue.  If this is dependent on the
> > number of records, then for some reason the way the data is
> > structured means that the planner thinks a seqscan's a better bet.
> > This is probably due to distribution of the values.  You could try
> > increasing the stats sample, and see if that helps.
>
> It's got nothing to do with distribution, just with numbers of pages to
> fetch.  You'll nearly always get a seqscan plan if there are only a
> couple of pages in the table, simply because it would take more I/O to
> read the index too.
>
> The reason this is a problem in this example is that the function is so
> expensive to execute.  The planner should be avoiding the seqscan on the
> basis of CPU cost not I/O cost, but it doesn't know that the function is
> expensive enough to drive the decision that way.
>
> In CVS HEAD (8.3-to-be) we've added a "cost" property to functions,
> which provides a clean way to fix this issue, but there's no good way to
> deal with it in existing releases :-(
>
> regards, tom lane




Since we don't delete records (we just deactivate them), I added 100 dummy
records that are not active.  This fixed the problem.  As the number of real
records in that table grows, I will delete the dummy records.
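
For the archives, the quick-and-dirty version of that workaround looks something
like this (table and column names are made up):

INSERT INTO my_table (name, is_active)
SELECT 'dummy row ' || g, false
FROM generate_series(1, 100) AS g;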

Thanks for all the help!
Aaron

--
==
  Aaron Bono
  Aranya Software Technologies, Inc.
  http://www.aranya.com
  http://codeelixir.com
==


Re: [SQL] Dynamic prepare possible in plpgsql?

2007-05-07 Thread Jim Nasby

On May 1, 2007, at 12:16 PM, Jonah H. Harris wrote:

> On 5/1/07, Collin Peters <[EMAIL PROTECTED]> wrote:
>> Is it faster to use PREPARE for the various INSERT statements inside a
>> plpgsql function?  Perhaps I am wrong and it does its PREPARE work
>> when the function is parsed.
>
> IIRC, PLpgSQL automagically prepares each statement behind the scenes
> on the first use.


BTW, a good use for a version of EXECUTE that accepted parameters is a trigger
on a partitioned table that directs inserts to the appropriate partition.
Currently, you have to write coalesce(quote_literal(NEW.field), 'NULL') for
each value in the dynamic statement.
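
Something along these lines; the partition, trigger, and column names below are
all made up:

CREATE OR REPLACE FUNCTION measurements_insert_trigger() RETURNS trigger AS $$
BEGIN
    -- Build the INSERT by hand: every value has to be quoted or turned into
    -- an unquoted NULL explicitly.
    EXECUTE 'INSERT INTO measurements_2007_05 (id, val) VALUES ('
            || coalesce(quote_literal(NEW.id::text),  'NULL') || ', '
            || coalesce(quote_literal(NEW.val::text), 'NULL') || ')';
    RETURN NULL;   -- the row has already been redirected to the partition
END;
$$ LANGUAGE plpgsql;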

--
Jim Nasby                      [EMAIL PROTECTED]
EnterpriseDB    http://enterprisedb.com    512.569.9461 (cell)





[SQL] Error: Input string was not in a correct format

2007-05-07 Thread Rommel the iCeMAn
Hi guys,
 
I'm trying to call a function from my .NET project and I am getting this
error: "Input string was not in a correct format". I have checked to make
sure that I am not mismatching my parameters. Is there a way that I can see
the actual SQL query that is being sent to PostgreSQL? Once I can see this
then perhaps I can find the little gremlin that is annoying me :-)
 
Thanks,
Rommel Edwards
Software Developer,
Barbados, Caribbean.




Re: [SQL] Error: Input string was not in a correct format

2007-05-07 Thread Rommel the iCeMAn
Ok guys,

Sorry for the false alarm. I solved the problem. When I define my parameters
in .NET in the order they are listed in pgAdmin, the function call works. If
I use the order that they are listed in Visual Studio's Server Explorer, the
function call fails. It appears Visual Studio's Server Explorer is listing
my function parameters incorrectly.

Thanks anyways.
Rommel Edwards
Software Developer,
Barbados, Caribbean.




Re: [SQL] Error: Input string was not in a correct format

2007-05-07 Thread Michael Glaesemann


On May 7, 2007, at 18:17 , Rommel the iCeMAn wrote:

> Is there a way that I can see
> the actual SQL query that is being sent to PostgreSQL?

From within PostgreSQL you can put the SQL queries in the logs:

http://www.postgresql.org/docs/8.2/interactive/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT


Check out the log_statement parameter (and possibly others).
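
For example, to log every statement for your own session (log_statement can only
be changed by a superuser), or server-wide via postgresql.conf:

-- per session, as a superuser:
SET log_statement = 'all';

-- or for the whole server, in postgresql.conf (then reload):
-- log_statement = 'all'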

You might also have some logging options within .NET, but as I'm  
unfamiliar with the framework, I really can't help you there.


Michael Glaesemann
grzm seespotcode net





Re: [SQL] Error: Input string was not in a correct format

2007-05-07 Thread Tom Lane
"Rommel the iCeMAn" <[EMAIL PROTECTED]> writes:
> I'm trying to call a function from my .NET project and I am getting this
> error: "Input string was not in a correct format". I have checked to make
> sure that I am not mismatching my parameters. Is there a way that I can see
> the actual SQL query that is being sent to postgresql? Once I can see this
> then perhaps I can find the little gremlin that is annoying me :-)

FWIW, I can't find that error string (or even one using the phrase
"correct format") anywhere in the core PG code.  So it's coming from
somewhere on the client side, unless you've got some third-party library
loaded into the backend.

regards, tom lane



[SQL] Query RE using COPY

2007-05-07 Thread Paul Lambert

I use the COPY command to load data from a file into tables in my database.

The following is an example done in psql:
    COPY deals_temp_load FROM 'c:/temp/autodrs_deals.txt'
        WITH DELIMITER AS '^' QUOTE '\f' CSV HEADER;


If a column is added to the table but has not been added to the extracts 
which create the autodrs_deals.txt file, this errors thusly:

ERROR:  missing data for column "location"
CONTEXT:  COPY deals_temp_load, line 2: "line containing data removed 
for confidentiality - suffice to say it does not contain the 'location' 
column"


Is there any way of telling it to ignore columns that are not present in the
file being loaded? I have tried giving the column a default value but I still
get the error. The column in this case, 'location', is nullable and does not
always have data (yes, bad design for a database, but we'll skip that point for
now), so I am not concerned if the load procedure doesn't supply it.


BTW, this is done on Weendoze.

Thanks,
Paul.

--
Paul Lambert
Database Administrator
AutoLedgers




Re: [SQL] Query RE using COPY

2007-05-07 Thread Jonah H. Harris

COPY mytbl (col1, col2, col3, col4, col6) FROM 'myfile.txt' WITH
DELIMITER AS '^' QUOTE '\f' CSV HEADER;


On 5/7/07, Paul Lambert <[EMAIL PROTECTED]> wrote:
> I use the COPY command to load data from a file into tables in my database.
>
> The following is an example done in psql:
>     COPY deals_temp_load FROM 'c:/temp/autodrs_deals.txt'
>         WITH DELIMITER AS '^' QUOTE '\f' CSV HEADER;
>
> If a column is added to the table but has not been added to the extracts
> which create the autodrs_deals.txt file, this errors thusly:
> ERROR:  missing data for column "location"
> CONTEXT:  COPY deals_temp_load, line 2: "line containing data removed
> for confidentiality - suffice to say it does not contain the 'location'
> column"
>
> Is there any way of telling it to ignore columns that are not present in the
> file being loaded? I have tried giving the column a default value but I still
> get the error. The column in this case, 'location', is nullable and does not
> always have data (yes, bad design for a database, but we'll skip that point
> for now), so I am not concerned if the load procedure doesn't supply it.
>
> BTW, this is done on Weendoze.
>
> Thanks,
> Paul.
>
> --
> Paul Lambert
> Database Administrator
> AutoLedgers






--
Jonah H. Harris, Software Architect | phone: 732.331.1324
EnterpriseDB Corporation            | fax: 732.331.1301
33 Wood Ave S, 3rd Floor            | [EMAIL PROTECTED]
Iselin, New Jersey 08830            | http://www.enterprisedb.com/



Re: [SQL] Query RE using COPY

2007-05-07 Thread Paul Lambert

Jonah H. Harris wrote:
> COPY mytbl (col1, col2, col3, col4, col6) FROM 'myfile.txt' WITH
> DELIMITER AS '^' QUOTE '\f' CSV HEADER;




I would rather not do it this way as I use the same load script at all customer
sites, where the extracts and requirements may vary. I.e. one customer may not
use the location field mentioned above, but another might.

What I would prefer to do, if possible, is set that column in the database to
'not required' so that if I get an extract that doesn't have that column in the
file, the copy doesn't care.

As it stands now I generally have to drop all the unneeded columns from numerous
tables, perform my load and then re-add them back again.

When we're talking about 40 or more tables in each database, some tables with
several hundred columns... your suggestion would be a bit cumbersome,
particularly if it is only one or two columns in each table that the client
doesn't need.
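
If it is any use in the meantime, here is one possible middle ground, sketched
on the assumption that each site knows which optional columns its extract omits
('location' is just the example from above): build the COPY column list from the
system catalogue instead of maintaining it by hand, then paste it into the COPY
command.

SELECT array_to_string(ARRAY(
           SELECT quote_ident(attname)
           FROM   pg_attribute
           WHERE  attrelid = 'deals_temp_load'::regclass
           AND    attnum > 0
           AND    NOT attisdropped
           AND    attname NOT IN ('location')   -- columns this site's extract omits
           ORDER  BY attnum), ', ') AS copy_column_list;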


--
Paul Lambert
Database Administrator
AutoLedgers




Re: [SQL] Query RE using COPY

2007-05-07 Thread Phillip Smith
Can you modify the 'extract' and make the extra column empty? An unquoted
empty field is read as NULL in CSV mode ("\N" is the equivalent escape in
text format).

That would be the only other option.


-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Paul Lambert
Sent: Tuesday, 8 May 2007 12:44
To: pgsql-sql@postgresql.org
Subject: Re: [SQL] Query RE using COPY

Jonah H. Harris wrote:
> COPY mytbl (col1, col2, col3, col4, col6) FROM 'myfile.txt' WITH
> DELIMITER AS '^' QUOTE '\f' CSV HEADER;
> 
> 

I would rather not do it this way as I use the same load script at all 
customer sites where the extracts and requirements may vary. I.e. one 
customer may not use the location field mentioned above, but another might.

What I would prefer to do, if possible, is set that column in the 
database to 'not required' so that if I get an extract that doesn't have 
that column in the file, the copy doesn't care.

As it stands now I generally have to drop all the unneeded columns from 
  numerous tables, perform my load and then re-add them back again.

When we're talking about 40 or more tables in each database some tables 
with several hundred columns... your suggestion would be a bit 
cumbersome - particularly if it is only one or two columns in each table
that the client doesn't need.

-- 
Paul Lambert
Database Administrator
AutoLedgers







Re: [SQL] Query RE using COPY

2007-05-07 Thread Paul Lambert

Phillip Smith wrote:
> Can you modify the 'extract' and make the extra column empty? An unquoted
> empty field is read as NULL in CSV mode ("\N" is the equivalent escape in
> text format).
>
> That would be the only other option.




Right now the software that does the extracts is developed by our
applications developers, but I'll be taking that over myself in the
near future. I'm just looking for a workaround until then.


--
Paul Lambert
Database Administrator
AutoLedgers
