[SQL] copy into serial field with auto_increment
Hello, I've got a text file of data separated by '|'. One of the fields in my table is a serial (auto-increment) column, and when I import the data with the COPY command I get a duplicate key error on that column. It seems that COPY does not auto-increment the serial field. Coming from MySQL, I tried pushing a 0 into the serial field, but that doesn't work; an empty field fails too. How can I make this work? Thanks for any help.

--
Andy-Kim Möller
ypsilon.net AG
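For reference, the usual approaches to this look something like the sketch below; the table and column names are invented for illustration, and the COPY column list requires a reasonably recent PostgreSQL release:

-- if the data file does not carry the serial column, name the remaining
-- columns explicitly so the column default (nextval) fills it in
COPY mytable (name, price) FROM '/tmp/data.txt' WITH DELIMITER '|';

-- alternatively, load the real id values from the file and then
-- resynchronise the sequence afterwards
SELECT setval('mytable_id_seq', (SELECT max(id) FROM mytable));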
Re: [SQL] Create trigger for auto update function
CREATE FUNCTION update_pass() RETURNS integer AS $$
    UPDATE hoy SET pass_md5 = md5(pass) WHERE id = new.id;
    SELECT 1;
$$ LANGUAGE SQL;

CREATE TRIGGER triger_users_pass_md5
    AFTER INSERT OR UPDATE ON hoy
    FOR EACH ROW EXECUTE PROCEDURE update_pass;

I understand the idea, but I don't know how to apply it. I also get an error complaining that NEW can only be used in a rule. Still... not working...

- Original Message -
From: "daq" <[EMAIL PROTECTED]>
To: "Andrei Bintintan" <[EMAIL PROTECTED]>
Sent: Monday, July 18, 2005 4:32 PM
Subject: Re: [SQL] Create trigger for auto update function

Hello Andrei,

Monday, July 18, 2005, 2:24:41 PM, you wrote:

AB> Hi to all,
AB> I have a table:
AB> create table hoy(
AB>     id serial,
AB>     pass varchar(40),
AB>     pass_md5 varchar(40));
AB> Now, I want to write a trigger function that automatically updates the
AB> pass_md5 with the md5 function of the pass.
AB> I tried this:
AB> CREATE FUNCTION update_pass(integer) RETURNS integer AS $$
AB>     UPDATE hoy SET pass_md5=md5(pass) WHERE id=$1;
AB>     SELECT 1;
AB> $$ LANGUAGE SQL;
AB> and
AB> CREATE TRIGGER triger_users_pass_md5
AB>     AFTER INSERT OR UPDATE ON hoy
AB>     EXECUTE PROCEDURE update_pass(integer);

What will be the param of the trigger procedure? Try this way:

CREATE FUNCTION update_pass() RETURNS integer AS $$
    UPDATE hoy SET pass_md5=md5(pass) WHERE id=new.id;
    SELECT 1;
$$ LANGUAGE SQL;

CREATE TRIGGER triger_users_pass_md5
    AFTER INSERT OR UPDATE ON hoy
    FOR EACH ROW EXECUTE PROCEDURE update_pass;

DAQ
Re: [SQL] Create trigger for auto update function >> SOLVED!!!
CREATE OR REPLACE FUNCTION u9() RETURNS TRIGGER AS '
BEGIN
    NEW.pass_md5 = md5(NEW.pass);
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER t8
    BEFORE INSERT OR UPDATE ON hoy
    FOR EACH ROW EXECUTE PROCEDURE u9();

OK. This is the solution. It works well for inserts and updates. Took some time to figure it out. (Ignore the function names --- they are test functions.)

Best regards,
Andy.

- Original Message -
From: "daq" <[EMAIL PROTECTED]>
To: "Andrei Bintintan" <[EMAIL PROTECTED]>
Sent: Monday, July 18, 2005 4:32 PM
Subject: Re: [SQL] Create trigger for auto update function

[daq's reply, quoted in full in the previous message - snipped]
Re: [SQL] Create trigger for auto update function
Off topic :) I think we posted at the same time :))

- Original Message -
From: "Richard Huxton"
To: "Andrei Bintintan" <[EMAIL PROTECTED]>
Sent: Tuesday, July 19, 2005 12:11 PM
Subject: Re: [SQL] Create trigger for auto update function

Andrei Bintintan wrote:
> Now, I want to write a trigger function that automatically updates the
> pass_md5 with the md5 function of the pass.
> I tried this:
> CREATE FUNCTION update_pass(integer) RETURNS integer AS $$
>     UPDATE hoy SET pass_md5=md5(pass) WHERE id=$1;
>     SELECT 1;
> $$ LANGUAGE SQL;
> and
> CREATE TRIGGER triger_users_pass_md5
>     AFTER INSERT OR UPDATE ON hoy
>     EXECUTE PROCEDURE update_pass(integer);

The simplest way to do this is with a BEFORE trigger, and just modifying the NEW pseudo-record.

CREATE OR REPLACE FUNCTION maintain_pass_md5() RETURNS TRIGGER AS '
BEGIN
    NEW.pass_md5 = md5(NEW.pass);
    RETURN NEW;
END
' LANGUAGE plpgsql;

CREATE TRIGGER hoy_maintain_pass_md5
    BEFORE INSERT OR UPDATE ON hoy
    FOR EACH ROW EXECUTE PROCEDURE maintain_pass_md5();

Note that the function is defined to return type TRIGGER and that we return NEW. If we returned NULL, the row would be skipped by the current update statement. This means only one actual on-disk update takes place, and as far as everyone is concerned pass_md5 automagically updates itself.

If the md5() function was actually an operation that would take a long time, it might be worth checking whether pass has been changed:

IF NEW.pass IS DISTINCT FROM OLD.pass THEN
    ...
END IF;

However, if you do this then you have to test TG_OP to see whether you are inserting or updating - insert ops don't have OLD defined.

HTH
--
Richard Huxton
Archonet Ltd
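Putting those two caveats together, a combined version of the trigger function could look roughly like this (a sketch following Richard's advice, not code posted in the thread):

CREATE OR REPLACE FUNCTION maintain_pass_md5() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        -- no OLD row exists on INSERT, so always compute the hash
        NEW.pass_md5 := md5(NEW.pass);
    ELSIF NEW.pass IS DISTINCT FROM OLD.pass THEN
        -- on UPDATE, recompute only when pass has really changed
        NEW.pass_md5 := md5(NEW.pass);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;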
[SQL] Strange join...maybe some improvements???
I have 3 tables:

CREATE TABLE orders(
    id int4 SERIAL,
    id_ag int4,
    id_modell int4    -- >> this is linked to modell.id
);

CREATE TABLE modell(
    id int4 SERIAL,
    id_hersteller int4
);

CREATE TABLE contactpartner(
    id int4 SERIAL,
    id_ag int4,       -- >> this is linked to orders.id_ag or modell.id_hersteller
    id_user int4
);

I get a list of id_ag values from contactpartner which belong to a user (AG_LIST). Then I have to select/count all the rows from the orders table that have orders.id_ag in the AG_LIST, or whose modell.id_hersteller is in the AG_LIST.

I have this query:

SELECT count(o.id)
FROM orders o
INNER JOIN modell m ON m.id = o.id_modell
WHERE o.id_ag IN (SELECT id_ag FROM contactpartner cp WHERE id_user=15)
   OR m.id_hersteller IN (SELECT id_ag FROM contactpartner cp WHERE id_user=15)

Aggregate  (cost=7828.60..7828.60 rows=1 width=4) (actual time=1145.150..1145.151 rows=1 loops=1)
  ->  Hash Join  (cost=1689.64..7706.32 rows=48913 width=4) (actual time=153.059..1136.457 rows=9395 loops=1)
        Hash Cond: ("outer".id_modell = "inner".id)
        Join Filter: ((hashed subplan) OR (hashed subplan))
        ->  Seq Scan on orders o  (cost=0.00..3129.17 rows=65217 width=12) (actual time=0.031..94.444 rows=65217 loops=1)
        ->  Hash  (cost=1218.07..1218.07 rows=66607 width=8) (actual time=151.211..151.211 rows=0 loops=1)
              ->  Seq Scan on modell m  (cost=0.00..1218.07 rows=66607 width=8) (actual time=0.044..87.154 rows=66607 loops=1)
        SubPlan
          ->  Index Scan using contactpartner_id_user_idx on contactpartner cp  (cost=0.00..6.02 rows=2 width=4) (actual time=0.010..0.018 rows=4 loops=1)
                Index Cond: (id_user = 15)
          ->  Index Scan using contactpartner_id_user_idx on contactpartner cp  (cost=0.00..6.02 rows=2 width=4) (actual time=0.092..0.116 rows=4 loops=1)
                Index Cond: (id_user = 15)
Total runtime: 1145.689 ms

I also tried this one:

SELECT count(o.id)
FROM orders o
INNER JOIN modell m ON m.id = o.id_modell
INNER JOIN contactpartner cp ON cp.id_user=15 AND (o.id_ag=cp.id_ag OR cp.id_ag=m.id_hersteller)

Aggregate  (cost=11658.63..11658.63 rows=1 width=4) (actual time=1691.570..1691.570 rows=1 loops=1)
  ->  Nested Loop  (cost=7752.40..11657.27 rows=542 width=4) (actual time=213.945..1683.515 rows=9416 loops=1)
        Join Filter: (("inner".id_ag = "outer".id_ag) OR ("outer".id_ag = "inner".id_hersteller))
        ->  Index Scan using contactpartner_id_user_idx on contactpartner cp  (cost=0.00..6.02 rows=2 width=4) (actual time=0.108..0.188 rows=4 loops=1)
              Index Cond: (id_user = 15)
        ->  Materialize  (cost=7752.40..8723.57 rows=65217 width=12) (actual time=37.586..352.620 rows=65217 loops=4)
              ->  Hash Join  (cost=1677.59..7368.18 rows=65217 width=12) (actual time=150.220..1153.872 rows=65217 loops=1)
                    Hash Cond: ("outer".id_modell = "inner".id)
                    ->  Seq Scan on orders o  (cost=0.00..3129.17 rows=65217 width=12) (actual time=0.034..95.133 rows=65217 loops=1)
                    ->  Hash  (cost=1218.07..1218.07 rows=66607 width=8) (actual time=149.961..149.961 rows=0 loops=1)
                          ->  Seq Scan on modell m  (cost=0.00..1218.07 rows=66607 width=8) (actual time=0.032..86.378 rows=66607 loops=1)
Total runtime: 1696.253 ms

>>>>>>>> but this second query brings me some duplicate rows (the same o.id) when o.id_ag and m.id_hersteller are different but both are in the AG_LIST.

Is there any way to speed up this query???

Regards,
Andy.
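One rewrite that is sometimes worth testing in this situation (not something proposed in this thread) expresses the OR condition as a single EXISTS subquery; it is equivalent to the first query and avoids the duplicate o.id problem of the second one, because each order is counted at most once:

SELECT count(o.id)
FROM orders o
INNER JOIN modell m ON m.id = o.id_modell
WHERE EXISTS (
    SELECT 1
    FROM contactpartner cp
    WHERE cp.id_user = 15
      AND (cp.id_ag = o.id_ag OR cp.id_ag = m.id_hersteller)
);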
Re: [SQL] Strange join...maybe some improvements???
Indexes are on all join fields. In the shown example I have indexes on all fields. Yes, I vacuum the database regularly.

Andy.

- Original Message -
From: Thomas F. O'Connell
To: Andy
Cc: pgsql-sql@postgresql.org
Sent: Thursday, October 13, 2005 7:58 PM
Subject: Re: [SQL] Strange join...maybe some improvements???

What indexes do you have on these tables? And have you ANALYZEd all three recently?

--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC
Open Source Solutions. Optimized Web Development.
http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-469-5150
615-469-5151 (fax)

On Oct 12, 2005, at 8:52 AM, Andy wrote:

[original question, schema and EXPLAIN ANALYZE output quoted in full above - snipped]
[SQL] plpgsql record as parameter ???
Hi,

I have the following function:

CREATE OR REPLACE FUNCTION zahlavis_rech_list(int4, varchar(10)) RETURNS "varchar" AS
$BODY$
DECLARE
    avis_id ALIAS FOR $1;
    rech_type ALIAS FOR $2;
    rech_list text;
    sql text;
    rec RECORD;
BEGIN
    rech_list := '';
    sql := 'SELECT ' || rech_type || ' as xx FROM rechnung WHERE id IN (SELECT id_rechnung FROM rechnung_zahlavis WHERE id_zahlavis IN (' || avis_id || '))';
    FOR rec IN EXECUTE sql LOOP
        RAISE WARNING 'value = %', rec.xx;
        rech_list := rech_list || ',' || rec.xx;
    END LOOP;
    RETURN substr(rech_list, 2);
END
$BODY$
LANGUAGE 'plpgsql' VOLATILE;

I want to pass a column name of the table as the second parameter. It works ONLY for the first column I run it with in a session; calling it again with a different column fails. For example:

select zahlavis_rech_list(1, 'nummer');

WARNING: value = 103670
WARNING: value = 103603
WARNING: value = 103345
WARNING: value = 103318
WARNING: value = 103882
WARNING: value = 103241
WARNING: value = 109124

Total query runtime: 16 ms. Data retrieval runtime: 15 ms. 1 rows retrieved.
EXECUTION OK!

select zahlavis_rech_list(1, 'id');

WARNING: value = 504
ERROR: type of "rec.xx" does not match that when preparing the plan
CONTEXT: PL/pgSQL function "zahlavis_rech_list" line 14 at assignment
EXECUTION ERROR!

Both id and nummer are columns of the table. I tried different solutions but no result.

Help && regards,
Andy.
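A workaround that is often suggested for this kind of plan-type mismatch (not an answer given in this thread, so treat it as an untested sketch) is to force every value to the same type inside the dynamic query, so rec.xx is always text no matter which column name is passed in:

-- inside zahlavis_rech_list, build the query with an explicit cast
sql := 'SELECT (' || rech_type || ')::text AS xx FROM rechnung '
    || 'WHERE id IN (SELECT id_rechnung FROM rechnung_zahlavis '
    || 'WHERE id_zahlavis IN (' || avis_id || '))';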
Re: [SQL] Re: Still don't know how to build this string ? how to concat ??
Tuesday, March 27, 2001, 10:20:18 AM, you wrote:

jrpc> Result:
jrpc> 01 1440
jrpc> 02 1460
jrpc> 03 1398
jrpc> The values (1440, ...) are the last entries of z_u_umfang for each z_u_typ.
jrpc> This is correct since in the function the list := ... is overwritten until
jrpc> the last record is read.
jrpc> When I try to concat the list in the manner of list := list ||
jrpc> text(rec.z_u_umfang); the zustring is empty !
jrpc> Thanks for any help ... jr

jrpc> Query :
jrpc> select distinct z_u_typ, buildString(z_u_typ) as zustring from
jrpc> zylinder_umfang

jrpc> Function:
jrpc> CREATE FUNCTION buildString(bpchar) RETURNS text AS '
jrpc> DECLARE
jrpc>     list text;
jrpc>     rec record;
jrpc> BEGIN
jrpc>     FOR rec IN SELECT z_u_umfang FROM zylinder_umfang WHERE z_u_typ
jrpc>         = $1;
jrpc>     list := text(rec.z_u_umfang);
jrpc>     END LOOP;
jrpc>     RETURN list;
jrpc> END;
jrpc> ' LANGUAGE 'plpgsql';

You seem to be constantly re-assigning "list", rather than adding to it with each iteration of the "for loop".

Would:

    list := list || ',' || text(rec.z_u_umfang)

be what your solution is missing?

--
Best regards,
Andy                          mailto:[EMAIL PROTECTED]
Re: [SQL] Re: Still don't know how to build this string ? how to concat ??
Hello Andy,

Tuesday, March 27, 2001, 3:22:37 PM, you wrote:

AC> Tuesday, March 27, 2001, 10:20:18 AM, you wrote:

[jrpc's query and function, quoted with "jrpc>>" in the previous post, snipped]

AC> You seem to be constantly re-assigning "list", rather than adding to
AC> it with each iteration of the "for loop".
AC> Would:
AC>     list := list || ',' || text(rec.z_u_umfang)
AC> be what your solution is missing?

I read it again and noticed your comment about having tried || already - I must learn to read messages fully... But I did wonder if the semicolon ";" at the end of the for loop header is what is causing your problem? The syntax explanation I have does not show the ";", so it is possible that the loop is executing a null instruction ";", moving on to the list assignment, and then finding the unmatched "end loop", which might not throw an error. Can anyone comment on whether this is a plausible explanation?

--
Best regards,
Andy                          mailto:[EMAIL PROTECTED]
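Pulling both suggestions together, a corrected version of the function might look like the sketch below (an illustration, not something posted in the thread). Two details matter: the FOR header is followed directly by LOOP, with no semicolon, and list must be initialised to an empty string, because concatenating onto a NULL value yields NULL - which by itself would explain the empty zustring.

CREATE FUNCTION buildString(bpchar) RETURNS text AS '
DECLARE
    list text := '''';   -- start with an empty string, not NULL
    rec record;
BEGIN
    FOR rec IN SELECT z_u_umfang FROM zylinder_umfang WHERE z_u_typ = $1 LOOP
        list := list || '','' || text(rec.z_u_umfang);
    END LOOP;
    -- the result carries a leading comma; trim it with substr() if needed
    RETURN list;
END;
' LANGUAGE 'plpgsql';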
Re: [SQL] getting some tech skills?
Larry Holt wrote:
> It really depends upon what kind of work you like to do.
>
> For system admin, network engineering, etc. you need scripting like SED,
> AWK, PERL plus O/S commands. To be a programmer you need a language that
> can be compiled: C, Java. A programmer usually does not need to parse a
> file table and make changes, while a network engineer usually does not
> need to push values on a stack. To decide which tool (languages really
> are tools) you need, you need to decide what job you want to do.
>
> Generally those who can program business applications will earn the most
> money, but I prefer playing with hardware instead of typing at a
> workstation all day.

It looks like I'll be getting up to speed with the above-mentioned scripting languages. Thanks Larry.

Andy H
--
"Linux, the best thing since Unix itself."
Re: [SQL] [OT] Inventory systems (private)
While I'm no expert, what I would do is something like this:

Items table fields:
    item_id
    item_name

Attributes table fields:
    attribute_id
    attribute_name

Item_Attr table fields:
    item_id
    attribute_id
    attribute_value

Now an item can be associated with any number of attributes.

03/12/2002 13:17:15, "Ries van Twisk" <[EMAIL PROTECTED]> wrote:

>Hi All,
>
>is there any whitepaper, document or website that can point me to how to
>set up an inventory system?
>I'm particularly interested in how other people solve the problem of an
>unknown number of attributes for an inventory item.
>
>example:
>BAL              <-- Inventory Item
> - Color         <- Attribute
> - Diameter      <- Attribute
> - Weight        <- Attribute
>
>Car              <-- Inventory Item
> - Speed         <- Attribute
> - Size          <- Attribute
> - Weight        <- Attribute
> - Color         <- Attribute
>
>Computer         <-- Inventory Item
> - Brand         <- Attribute
> - Weight        <- Attribute
> - Windows/Linux <- Attribute
>
>I can of course add any number of columns to a table, but for a lot of items
>there will be a lot of NULL values, and currently I don't know how many
>attributes one item can have (possibly between 10 and 20). This can even
>change in future upon request.
>
>Ries
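In SQL terms this is the classic entity-attribute-value layout; a minimal sketch of the three tables described above could look like the following (column types and constraints are assumptions, not something posted in the thread):

CREATE TABLE items (
    item_id    serial PRIMARY KEY,
    item_name  text NOT NULL
);

CREATE TABLE attributes (
    attribute_id    serial PRIMARY KEY,
    attribute_name  text NOT NULL
);

CREATE TABLE item_attr (
    item_id          int REFERENCES items,
    attribute_id     int REFERENCES attributes,
    attribute_value  text,
    PRIMARY KEY (item_id, attribute_id)
);

-- all attributes of one item
SELECT a.attribute_name, ia.attribute_value
FROM item_attr ia
JOIN attributes a ON a.attribute_id = ia.attribute_id
WHERE ia.item_id = 1;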
[SQL] function replace doesnt exist
Hi,

I'm trying to execute an update command on a PostgreSQL DB table using pgAdmin II. I'm using the following statement:

UPDATE commandlist SET command = REPLACE(command, 'A', 'B')

commandlist is the table name, command is the column, and I want to change the value A to B, but it gives me the following error message:

an error has occured in pgAdmin II:frmSQLInput.cmdExecute_Click:
Number: -2147467259
Description: Error while executing the query;
ERROR: Function 'replace(varchar, unknown, unknown)' does not exist
Unable to identify a function that satisfies the given argument types
You may need to add explicit typecasts
[SQL] Point and function help
Hello all, merry XMAS!

I'm trying to create a function that will return a point, and I'm having little luck in getting results back. Basically I have a zip code DB complete with city, state and zip pre-populated. What I would like to do is create this function:

CREATE OR REPLACE FUNCTION public.map_point(pg_catalog.varchar, pg_catalog.varchar, pg_catalog.varchar)
RETURNS point AS
'SELECT map_loc from zip_code where zip = \'$3\' and lower(state) = lower(\'$2\') and lower(city) = lower(\'$1\')'
LANGUAGE 'sql' VOLATILE;

I have no problems creating this function; however, I can't get it to return anything from my zip_code table. Am I doing something wrong?

Here's a snippet of the zip_code table:

          Table "public.zip_code"
  Column   |          Type          | Modifiers
-----------+------------------------+-----------
 city      | character varying(100) |
 state     | character varying(2)   |
 zip       | character varying(10)  |
 area_code | character varying(3)   |
 map_loc   | point                  |

    city    | state |  zip  | area_code |      map_loc
------------+-------+-------+-----------+-------------------
 portsmouth | nh    | 00210 | 603       | (43.0718,70.7634)
 portsmouth | nh    | 00211 | 603       | (43.0718,70.7634)
 portsmouth | nh    | 00212 | 603       | (43.0718,70.7634)
 portsmouth | nh    | 00213 | 603       | (43.0718,70.7634)

And nothing returned but an empty row:

my_db=# select public.map_point('portsmouth','nh','00211');
 map_point
-----------

(1 row)

I'm running 7.3.x on Slackware. Any ideas why this is happening?

Thanks,
Andy
Re: [SQL] Point and function help
Thanks Tom, worked like a charm. Appreciate your time on Christmas day!

Best Regards and Merry Christmas to all.

Andy

-----Original Message-----
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 25, 2003 10:44 AM
To: Andy Lewis
Cc: [EMAIL PROTECTED]
Subject: Re: [SQL] Point and function help

"Andy Lewis" <[EMAIL PROTECTED]> writes:
> CREATE OR REPLACE FUNCTION public.map_point(pg_catalog.varchar,
> pg_catalog.varchar, pg_catalog.varchar)
> RETURNS point AS
> 'SELECT map_loc from zip_code where zip = \'$3\' and lower(state) =
> lower(\'$2\') and lower(city) = lower(\'$1\')'
> LANGUAGE 'sql' VOLATILE;

You don't want to quote the parameter references --- what you've got there is simple literal constants '$3' etc. Try

CREATE OR REPLACE FUNCTION public.map_point(pg_catalog.varchar, pg_catalog.varchar, pg_catalog.varchar)
RETURNS point AS
'SELECT map_loc from zip_code where zip = $3 and lower(state) = lower($2) and lower(city) = lower($1)'
LANGUAGE 'sql' VOLATILE;

Also, I can't see any reason why this function needs to be VOLATILE; STABLE should be enough, no?

        regards, tom lane
[SQL] Radius of a zip code
Hello all,

I'm trying to find all zip codes within a given radius of a zip code. I have map points and latitude and longitude in my zip table. I remember seeing a post or two referencing this but can't seem to find it.

I've tried the following with no luck:

-- 20 Miles
-- select 20 * 360.0 / (7900.0 * atan2(1,1) * 4.0);
select * from zip_code
where map_loc @ circle(map_point('dallas','tx','75201'), .290105212724467)
order by city

Anyone that has experience with this, can you validate it for correctness?

Thanks in advance,
Andy
Re: [SQL] Radius of a zip code
Thanks all for your suggestions, I have enough information to construct what I need.

-----Original Message-----
From: Michael Fuhr [mailto:[EMAIL PROTECTED]
Sent: Friday, December 26, 2003 8:43 PM
To: Andy Lewis
Cc: [EMAIL PROTECTED]
Subject: Re: [SQL] Radius of a zip code

On Fri, Dec 26, 2003 at 05:42:08PM -0600, Andy Lewis wrote:

> I was trying to find all zip codes within a given zip code or radius.
>
> I have map points and Latitude and Longitude in my zip table.
>
> I remember seeing a post or two referencing this but can't see to find
> it.

The code in contrib/earthdistance in the PostgreSQL source code might be what you're looking for. I haven't used it myself, as I had already written a function I needed for another DBMS and ported it to PostgreSQL.

> I've tried the following with no luck:
>
> -- 20 Miles
> --select 20 * 360.0 / (7900.0 * atan2(1,1) * 4.0);
> select * from zip_code where map_loc @
> circle(map_point('dallas','tx','75201'), .290105212724467 ) order by
> city

This isn't related to the problem, but is there a reason your map_point function requires city, state, and zip code? If you know the zip code then you shouldn't need the city and state.

> Anyone that has this experience, can you validate this for
> correctness?

I have several databases with lat/lon coordinates and frequently make "show me all records within a certain distance of this point" queries. I wrote a haversine() function that uses the Haversine Formula to calculate the great circle distance between two points on a sphere (assuming the earth is a perfect sphere is accurate enough for my uses). Here's a web site with related info:

http://www.census.gov/cgi-bin/geo/gisfaq?Q5.1

Here's an example of how I use the haversine() function. I'm not using PostgreSQL's geometric types -- latitude and longitude are stored in separate fields. The function takes two lat/lon coordinates in degrees and optionally a radius (the default is 3956.0, the approximate radius of the earth in miles); it returns the distance in whatever units the radius is in.

SELECT a.zipcode, a.city, a.state,
       haversine(a.latitude, a.longitude, b.latitude, b.longitude) AS dist
FROM zipcode AS a, zipcode AS b
WHERE b.zipcode = 75201
  AND haversine(a.latitude, a.longitude, b.latitude, b.longitude) <= 20
ORDER BY dist;

 zipcode |   city    | state |        dist
---------+-----------+-------+-------------------
 75201   | Dallas    | TX    |                 0
 75270   | Dallas    | TX    | 0.460576795779555
 75202   | Dallas    | TX    |  0.62326173788043
 .
 .
 .
 76012   | Arlington | TX    |   19.644132573068
 75126   | Forney    | TX    |  19.8963253723536
 75024   | Plano     | TX    |  19.9884653971924
(106 rows)

As for validating the function's correctness, I'm using a well-known formula and I've compared the function's output to distances measured on a map. I wouldn't use it for missile targeting, but it's sufficiently accurate for "show me all stores within 20 miles of my home."

Here's the meat of the function (written in C); the coordinates have by now been converted to radians:

dlat = lat2 - lat1;
dlon = lon2 - lon1;
a1 = sin(dlat / 2.0);
a2 = sin(dlon / 2.0);
a = (a1 * a1) + cos(lat1) * cos(lat2) * (a2 * a2);
c = 2.0 * atan2(sqrt(a), sqrt(1.0 - a));
dist = radius * c;

If anybody's interested I'll post the entire file.

--
Michael Fuhr
http://www.fuhr.org/~mfuhr/
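For anyone who would rather keep the calculation in the database, the C fragment above translates fairly directly into plpgsql. The sketch below is an illustration, not Michael's actual code; the function name is arbitrary and the Earth radius is hard-wired in miles:

CREATE OR REPLACE FUNCTION haversine(float8, float8, float8, float8)
RETURNS float8 AS '
DECLARE
    lat1 float8;
    lon1 float8;
    lat2 float8;
    lon2 float8;
    a1 float8;
    a2 float8;
    a  float8;
    c  float8;
BEGIN
    lat1 := radians($1);   -- arguments are lat1, lon1, lat2, lon2 in degrees
    lon1 := radians($2);
    lat2 := radians($3);
    lon2 := radians($4);
    a1 := sin((lat2 - lat1) / 2.0);
    a2 := sin((lon2 - lon1) / 2.0);
    a  := (a1 * a1) + cos(lat1) * cos(lat2) * (a2 * a2);
    c  := 2.0 * atan2(sqrt(a), sqrt(1.0 - a));
    RETURN 3956.0 * c;     -- 3956 = approximate Earth radius in miles
END;
' LANGUAGE 'plpgsql' IMMUTABLE;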
Re: [SQL] postgreSQL editors
I would have to agree with EMS HiTech's product also. A feature-rich product, with very responsive tech support and bug handling. PGManager beats the competition hands down.

Andy

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Rich Hall
Sent: Wednesday, December 31, 2003 1:09 PM
To: [EMAIL PROTECTED]
Subject: Re: [SQL] postgreSQL editors

>> can anyone recommend a good editor for postgreSQL which will enable me
>> to create functions/stored procedures?
>> I am currently running version 7.4.1 on OS X 10.3

I use PG Manager from EMS Hitech. It is not free but inexpensive. You have everything PostgreSQL-wise at your fingertips. Ver 2.0 was just released and they are very responsive to bug reports.

Rick
[SQL] sort by on two columns
Hi All,

Is it possible to sort by two columns, using the query below?

SELECT table1.name, table2.name
FROM table1, table2
WHERE table1.id = table2.id
ORDER BY ...

I want to be able to sort the names selected from two different tables and two different columns (same data type). Is this possible?

Thanks,
Andy
Re: [SQL] sort by on two columns
Hi Michael,

Yes, I understand this, but I would like to have the results of both "table1.name, table2.name" sorted as one column. Is this possible?

Thanks,
Andy

-----Original Message-----
From: Michael Glaesemann [mailto:[EMAIL PROTECTED]
Sent: Friday, January 02, 2004 8:40 PM
To: Andy Lewis
Cc: [EMAIL PROTECTED]
Subject: Re: [SQL] sort by on two columns

Hi Andy,

On Jan 2, 2004, at 7:15 PM, Andy Lewis wrote:

> Is it possible to sort by two columns, using the query below?
>
> SELECT table1.name, table2.name
> FROM table1, table2
> WHERE table1.id = table2.id
> ORDER BY ...
>
> I want to be able to sort the names selected from two different tables
> and two different columns (same data type).

If you want to order by table1.name and table2.name, just enter them in a comma-separated list after ORDER BY, e.g.,

SELECT table1.name, table2.name
FROM table1, table2
WHERE table1.id = table2.id
ORDER BY table1.name, table2.name

Check out the following link for the online docs:

<http://www.postgresql.org/docs/current/static/queries-order.html>

It doesn't explicitly give you an example of sorting on more than one column, but the syntax explanation at the top includes it.

Does this help?

Michael Glaesemann
grzm myrealbox com
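If the goal is a single combined, sorted column rather than two side-by-side columns, the usual tool is a UNION. The sketch below is an illustration (not something posted in this thread); it keeps the original join and stacks the two name columns into one result column before sorting:

SELECT table1.name AS name
FROM table1, table2
WHERE table1.id = table2.id
UNION ALL
SELECT table2.name
FROM table1, table2
WHERE table1.id = table2.id
ORDER BY name;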
[SQL] sqlstate 02000 while declaring cursor/freeing prepared statements
I'm new to the PostgreSQL community so please pardon what is probably a silly question. Also, this is my first attempt at posting, so you might have seen this already (sorry!) ...

I'm in the process of porting Informix ESQL to PostgreSQL. I occasionally get sqlcode = 100 and sqlstate = 02000 when declaring cursors or freeing prepared statements. Is this normal? For example:

$declare loop1 cursor with hold for select distinct ( tabname ) from meta;

results in sqlca.sqlcode = 100 and sqlca.sqlstate = '02000'

--
Andrew Rost
National Operational Hydrologic Remote Sensing Center (NOHRSC)
National Weather Service, NOAA
1735 Lake Dr. West, Chanhassen, MN 55317-8582
Voice: (952)361-6610 x 234  Fax: (952)361-6634
[EMAIL PROTECTED]
http://www.nohrsc.nws.gov
Re: [SQL] sqlstate 02000 while declaring cursor/freeing prepared
Sure. I'm using ECPG (ecpg -t -r no_indicator -C INFORMIX) on a TRU64 operating system with PostgreSQL version 8.0.2.

By occasionally, I mean that I don't observe this problem for every declare and free statement that I've encoded - only for a subset of those commands. But I do observe the problem consistently within that subset - the same command fails every time for a small number of declares and frees.

Unfortunately, I do not have a case that can be easily reproduced. The commands that fail are part of a large system. It seems like an odd error for these kinds of commands.

Thanks ...

Andy

Michael Fuhr wrote:
> On Mon, Aug 29, 2005 at 02:28:24PM -0500, andy rost wrote:
>> I'm in the process of porting Informix ESQL to PostgreSQL. I occasionally
>> get sqlcode = 100 and sqlstate = 02000 when declaring cursors or freeing
>> prepared statements. Is this normal? For example:
>>
>> $declare loop1 cursor with hold for select distinct ( tabname ) from meta;
>>
>> results in sqlca.sqlcode = 100 and sqlca.sqlstate = '02000'
>
> Could you provide a little more context? Are you using ECPG? What
> version of PostgreSQL are you using? When you say that you "occasionally"
> see this behavior, do you mean that you see it consistently with some
> commands and not with others, or do you mean that the same command
> sometimes does it and sometimes not? Do you have a reproducible test
> case? That is, everything that somebody could do to reproduce the
> behavior on their own system?
Re: [SQL] sqlstate 02000 while declaring cursor/freeing prepared
I worked on my problem a little further and have a little more information to share. The declare statement that fails consistently follows a select statement that returns zero rows (and sqlcode 100 and sqlstate '02000'). If I omit the select statement from the code, or set sqlcode to 0 before calling the declare statement, the declare statement works fine.

It appears as though the declare statement is not updating the sqlca structure. Is this by design for the ecpg options that I'm using? Did I pick up bad habits while using Informix?

Thanks ...

Andy

andy rost wrote:

[previous message and Michael Fuhr's questions, quoted in full above - snipped]
Re: [SQL] sqlstate 02000 while declaring cursor/freeing prepared
Just so that we can snip this thread: we've confirmed that free cursor and free statement do not affect the sqlca structure elements sqlcode and sqlstate.

Michael Fuhr wrote:
> On Mon, Aug 29, 2005 at 04:39:36PM -0500, andy rost wrote:
>> I worked on my problem a little further and have a little more
>> information to share. The declare statement that fails consistently
>> follows a select statement that returns zero rows (and sqlcode 100 and
>> sqlstate '02000'). If I omit the select statement from the code or set
>> sqlcode to 0 before calling the declare statement, the declare statement
>> works fine. It appears as though the declare statement is not updating
>> the sqlca structure. Is this by design for the ecpg options that I'm
>> using? Did I pick up bad habits while using Informix?
>
> Apparently since PostgreSQL doesn't actually have an OPEN statement,
> the ECPG code generator doesn't issue the DECLARE until the code
> OPENs the cursor. Observe:
>
> % cat foo.pgc
> int main(void)
> {
>     EXEC SQL CONNECT TO DEFAULT;
>
>     printf("before DECLARE\n");
>     EXEC SQL DECLARE curs CURSOR FOR SELECT 1;
>     printf("after DECLARE, before OPEN\n");
>     EXEC SQL OPEN curs;
>     printf("after OPEN\n");
>
>     EXEC SQL CLOSE curs;
>     EXEC SQL DISCONNECT;
>     return 0;
> }
>
> % ecpg foo.pgc
> % cat foo.c
> ...
>     printf("before DECLARE\n");
>     /* declare curs cursor for select 1 */
> #line 7 "foo.pgc"
>
>     printf("after DECLARE, before OPEN\n");
>     { ECPGdo(__LINE__, 0, 1, NULL, "declare curs cursor for select 1 ",
>       ECPGt_EOIT, ECPGt_EORT);}
> #line 9 "foo.pgc"
>
>     printf("after OPEN\n");
> ...
>
> Notice that "after DECLARE" actually comes *before* the DECLARE
> statement is issued, so references to sqlcode and sqlstate would
> see values from a previous command. I don't know if DECLARE is
> supposed to affect error codes or not; I'd have to consult the
> standard.
[SQL] Using descriptor areas to insert
I'm in the final stages of porting a large body of ESQL software from Informix to PostgreSQL (V8). The last few Informix-based functions use Informix descriptor areas to insert rows into a table. I've sorted out the logic for using PostgreSQL descriptor areas for select statements. How do you go about using PostgreSQL descriptor areas for insert statements? Is there another way to approach the problem of inserting records into an arbitrary table using dynamic SQL?

Thanks ...

--
Andrew Rost
National Operational Hydrologic Remote Sensing Center (NOHRSC)
National Weather Service, NOAA
1735 Lake Dr. West, Chanhassen, MN 55317-8582
Voice: (952)361-6610 x 234  Fax: (952)361-6634
[EMAIL PROTECTED]
http://www.nohrsc.nws.gov
[SQL] idea for a geographically distributed database: how best to implement?
Hello,

I've got a database for a website which is a variant of the 'show stuff near to me' sort of thing.

Rather than host this database on a single server, I have a scheme in mind to break the database up geographically so that each one can run comfortably on a small server, but I'm not sure about the best way of implementing it.

Here's the scheme:

Imagine that the country is split into an array of square cells. Each cell contains a database that stores information about people who live in the area covered by the cell.

There's one problem with this scheme. What happens if you live near the edge of a cell?

My solution is that any inserted data which lies near to the edge of cell A is *also* inserted in the database of the relevant neighbouring cell - let's say cell B.

Thus, if someone lives in cell B, but close to the border with cell A, they'll see the data that is geographically close to them, even if it lies in cell A.

Is this a common pattern?

I could, of course, simply find every insert, update and delete in the application and alter the code to explicitly update all the relevant databases, but is there a more elegant way of simply saying "do this transaction on both Database A and Database B" atomically?

I've had a look at some replication solutions, but they all seem to involve replicating an entire database. The advantage of my scheme is that if I can distribute my application over large numbers of small servers, I'll end up with more bang for the buck, and it'll be much easier to manage growth by managing the number of servers and the number of cells hosted on each server.

Thanks for any suggestions!
Andy Ballingall
Re: [SQL] idea for a geographically distributed database: how best to implement?
Hi again,

I don't think I've explained my idea well enough:

> You might want to consider using latitude and longitude or zip codes or
> taking more of a traditional "GIS" approach rather than duplicating data
> across redundant databases.

I do use coordinates to position data. The databases aren't redundant. Each database serves a particular region, the aim being to ensure that no one database gets too big for the machines in my farm to cope with.

> Another issue is that you end up having to query every database to find
> proximity... Suppose you have 500 "cells"? You now have to invoke some
> sort of RPC mechanism on 499 other RDBMS, manage 500 user names and
> passwords, retrieve potentially 500 recordsets, merge the data, etc...

1. It doesn't matter how many cells there are. A user's details are only found in the database covering the area he lives in, and possibly in the neighbouring databases (at most 3, if he lives in the corner of the cell).

2. The neighbouring cells of cell A never change, so you don't need to do anything clever to identify which other databases to write to in these edge conditions; when the user logs in, the relationship information is read just once (just like the user's name and picture).

3. No merging ever needs to take place during queries. Here's an example. Let's say you, a resident of cell A, say 'show me who lives nearby!'. The database of cell A has *all* the information. Why? Well, when a resident of nearby cell B registered, his details were also added to cell A at that time. Database A doesn't know, or care, how the data it holds got there.

4. Which database do I use to serve all your requests? Well, yes, once, I have to work out which one it is, based on where you are, but that's almost no work at all (see next point...).

5. I didn't mention before that my scheme *does* also require a national database, but the size of that is well defined, as the only jobs it needs to do are:
a) Know the names of all the places.
b) Know the definition of the existing cells and their positions - to make it easy to manage the cell structure, and to know where to send you when you log in.
c) Hold a set of global sequences used to generate unique ids for all the other databases. When your record goes into cell A and cell B, the id of this record is the *same*.

> Your problems will continue to get more and more complex... You are
> better off with a single RDBMS and a single schema...

I currently have a single database. It is clearly simpler. However, I also have a headache about what machinery to buy so that:
a) it starts off cheap
b) it is easy to expand, if and when the website takes off.

Yes, if I knew how many people would use this service, and how quickly it would expand, I'd be able to raise the capital to buy a huge nationwide server. But I don't. So the answer to 'how big does your DB server need to be' is simply 'I can only make a very poor guess'. This doesn't look good in a business plan. Far better if I can demonstrate an application which can start off on a single simple server and can elegantly expand with the user base, with the farm of servers growing over time.

Unlike many data sets, mine is almost totally partitioned geographically. There is only *one* little detail - that of visibility of data in neighbouring cells - and that is sorted out with my idea of duplicating information between neighbours.

Hope that fills in some gaps...

Thanks for your comments,
Andy

"Andy Ballingall" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]

[original message quoted in full above - snipped]
Re: [SQL] idea for a geographically distributed database: how best
> Andy, i agree with what codeWarrior says. But if you are interested in
> replication, dbmirror is a very elegant (altho not as trendy), simple, and
> highly customizable replication solution. I have heavily modified dbmirror
> to even support Asynchronous Conditional row grained Foreign key dependency
> Traversal oriented Lazy Replication!!! (which ofcourse nobody wants :)

I'll go and check it out, so even if I don't want to use it, I'll at least know what it means! (I considered suggesting using an acronym for it, but 'ACRGFKDTOLR' made my lips bleed when I tried to say it.)

Thanks,
Andy

> "Andy Ballingall" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]
>
> [original message quoted in full above - snipped]
>
> --
> -Achilleus
Re: [SQL] idea for a geographically distributed database: how best to implement?
Thanks Aidan,

That's very interesting! I've not used Slony yet, so I'll set up some tests and try exactly this mechanism.

What I like about it is that the changes that have to occur either to add a cell or to split an existing cell into two (to manage growing traffic) have a localised effect on the web of database interdependencies, making it more straightforward to do maintenance without the website being taken offline for most people. They'd only notice downtime if they happened to live in or next to the cells being messed around with. Not having to touch the actual web application is nice.

The other advantage over my manual hack is that the replicated data is just that - replicated - so when each database does a backup, it's only the master data that is getting backed up and restored. In my manual scheme, following restoration after a node failure, I'd have to do a postprocessing step to work out which restored data was actually data slaved from a neighbouring cell, and refresh it in case the neighbouring data had been updated while the node was down.

(I hadn't considered backup and restore properly up till now, but I'm going to add that to the list of good reasons for taking advantage of my 'distantly disjoint' data set. If my national data is shared between 500 cells, then each database is going to be titchy, and backup will take 'no time at all', as, in parallel, 500 streams of database data flow first to node-local storage and thence to safer, remote storage.)

Cheers,
Andy

-----Original Message-----
From: Aidan Van Dyk [mailto:[EMAIL PROTECTED]
Sent: 17 November 2005 19:54
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: idea for a geographically distributed database: how best to implement?

Andy,

This is a *very* interesting scenario. Definitely piqued my interest on this one. I haven't tried this scenario before, but I have used inherited tables to do "splitups". Basically, just partition your data, and replicate certain partitions between various neighbours.

http://www.postgresql.org/docs/8.1/interactive/ddl-partitioning.html

With Slony, you can get master/slave replication of particular table "sets" (not necessarily whole databases).

So I haven't developed this idea, or tried it, but you might be able to play with it and make it work. On *each* database, you have a head table, with no records:

CREATE TABLE complete ();

And a bunch of "cell" tables that inherit the "complete" table:

CREATE TABLE cell_a () INHERITS (complete);
CREATE TABLE cell_b () INHERITS (complete);

Basically, you *always* work on the "complete" table, with the following rules:

CREATE RULE cell_replicate_insert AS ON INSERT TO complete
    DO INSTEAD INSERT INTO cell_a (...);
CREATE RULE cell_replicate_update AS ON UPDATE TO complete
    DO INSTEAD UPDATE cell_a ...;

Use Slony-I to replicate cell_a to NODE B (master->slave), and on NODE B do something similar (except work on cell_b, and replicate cell_b -> NODE A).

This way, all your web/interface code everywhere is using the "complete" table for any insert/update/select. When inserting/updating, it will always go to the "local" table, which Slony replicates out to the neighbouring nodes. And when selecting on it (the "complete" table), you get data from your local table and any replicated neighbouring node's tables which are in the local schema, inheriting from "complete", and which Slony-I is updating.

Good luck, and if you get anywhere with this, let me know!
Andy Ballingall wrote:

[original message quoted in full above - snipped]
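As a concrete illustration of the layout Aidan describes, a minimal sketch might look like the following; every table, column and rule name here is an assumption invented for the example, and the replication of cell_b from the neighbouring node would be configured separately in Slony-I:

-- the empty "head" table the application always talks to
CREATE TABLE complete (
    id       int4,
    pos      point,
    payload  text
);

-- local partition (written here) and the neighbour's partition
-- (populated on this node by Slony-I replication)
CREATE TABLE cell_a () INHERITS (complete);
CREATE TABLE cell_b () INHERITS (complete);

-- writes against "complete" land in the local cell table
CREATE RULE complete_insert AS ON INSERT TO complete
    DO INSTEAD
    INSERT INTO cell_a (id, pos, payload)
    VALUES (NEW.id, NEW.pos, NEW.payload);

-- reads against "complete" automatically see local rows plus the
-- replicated neighbour rows, because both cell tables inherit from it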
[SQL] Is it possible to redirect an update/insert/delete to a different table?
Hello,

Is there a way that I can specify that a request to change a particular table is redirected instead to a different table?

I've looked through rules, and as far as I can make out, they are only useful for explicit actions. I'm looking for something that behaves as though it simply substitutes the table name for a different table name before executing the command, no matter what the command looks like.

Thanks,
Andy Ballingall
Re: [SQL] Is it possible to redirect an update/insert/delete to a different table?
Hello Peter,

I'm glad it's possible, but I can't see how from the documentation.

Say I have a table called 'apples' and a table called 'pears'.

What would the rule look like that would remap all updates on apples so that they were applied to pears instead?

Thanks,
Andy

-----

Andy Ballingall wrote:
> I've looked through rules, and as far as I can make out, they are
> only useful for explicit actions. I'm looking for something that
> behaves as though it simply substitutes the table name for a
> different table name before executing the command, no matter what the
> command looks like.

You can write a couple of rules for that.

--
Peter Eisentraut
http://developer.postgresql.org/~petere/
Re: [SQL] Is it possible to redirect an update/insert/delete to a different table?
Hello Jaime, I'm still not quite clear. Say I have a number of different updates on a table 'apples' in my code, including: UPDATE apples set pips=6, color='yellow' where id=3; UPDATE apples set size=10 where id=6; What would a rule look like which, when *any* update is attempted on the apples table, will instead apply the update to a different table - 'pears'? I get this far: CREATE rule pears_instead_of_apples AS ON UPDATE TO apples DO INSTEAD UPDATE INTO pears .; What do I put where the . is, so that the rule will transparently update the pears table with whatever values happened to be defined by the original update command? Is there a special keyword that I've missed? Regards, Andy Ballingall -Original Message- From: Jaime Casanova [mailto:[EMAIL PROTECTED] Sent: 20 November 2005 14:23 To: [EMAIL PROTECTED] Cc: pgsql-sql@postgresql.org Subject: Re: Is it possible to redirect an update/insert/delete to a different table? > I've looked through rules, and as far as I can make out, they are only > useful for explicit actions. I'm looking for something that behaves as > though it simply substitutes the table name for a different table name > before executing the command, no matter what the command looks like. > Make 3 rules (INSERT/UPDATE/DELETE), 4 if you want SELECT as well... -- regards, Jaime Casanova (DBA: DataBase Aniquilator ;) ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [SQL] Is it possible to redirect an update/insert/delete to a different table?
Hi Andreas, The rule you've given only works for an update which changes the name. If I do another update which changes the colour instead of the name, that rule wouldn't do the right thing. Instead, I'm looking for something which, with a single 'rule' (or whatever the mechanism ends up being), intercepts *any* update on apples, and applies the changes to the pears table instead, as if the only change that occurred was a change of table name. I can achieve this in the application which generates the SQL commands; someone else suggested it was possible with rules, but that may not be the case. Thanks! Andy -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Andreas Kretschmer Sent: 20 November 2005 16:17 To: pgsql-sql@postgresql.org Subject: Re: [SQL] Is it possible to redirect an update/insert/delete to a different table? Andy Ballingall <[EMAIL PROTECTED]> schrieb: > Hello Peter, > > I'm glad it's possible, but I can't see how from the documentation. > > Say if I have a table called 'apples' and a table called 'pears'. > > What would the rule look like that would remap all updates on apples so that > they were applied to pears instead? create rule apples_pears_update as on update to apples do instead update pears set name= NEW.name where id=NEW.id ; test=# select * from apples ; id | name +-- 1 | a (1 row) test=# select * from pears ; id | name +-- 1 | b (1 row) test=# update apples set name = 'c' where id = 1; UPDATE 1 test=# select * from pears ; id | name +-- 1 | c (1 row) http://www.postgresql.org/docs/8.1/interactive/rules-update.html HTH, Andreas -- Really, I'm not out to destroy Microsoft. That will just be a completely unintentional side effect. (Linus Torvalds) Kaufbach, Saxony, Germany, Europe. N 51.05082°, E 13.56889° ---(end of broadcast)--- TIP 6: explain analyze is your friend ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [SQL] Is it possible to redirect an update/insert/delete to a different table?
>Try it. [snipped example] Ah. Basically, you set up the rule to assign every column, and if the update doesn't redefine some columns, then it still works. I didn't understand that you could get the rule to work generically like this. I'll presume that the rule will need amending if the table column definition later changes. (E.g. if I add 'stalk_length' to my apples and pears tables)... Thanks very much for your help. Andy ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
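To make that concrete, a full-column version of the rule might look like the sketch below (the column list here, id, name, pips, color and size, is only assumed from the examples earlier in this thread; the rule must list whatever columns the real tables share, and be amended whenever a column such as stalk_length is added):

-- Sketch: forward every column, so any UPDATE on apples is applied to
-- pears unchanged, whichever columns the statement happens to set.
CREATE RULE pears_instead_of_apples AS
ON UPDATE TO apples DO INSTEAD
    UPDATE pears
       SET name  = NEW.name,
           pips  = NEW.pips,
           color = NEW.color,
           size  = NEW.size
     WHERE id = NEW.id;

Similar INSERT and DELETE rules would complete the redirection, as Jaime suggested earlier in the thread.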
Re: [SQL] idea for a geographically distributed database: how best to implement?
David Bath wrote: > There are a couple of philosophical perspectives I've come across in > previous > work with cadastral data that may be useful...[snipped] Thanks, David In this particular application, structures such as postcode sectors, administrative boundaries etc. are not really of much importance, as most of the work is simple coordinate-based searches. Even with the problem partitioned into disjoint regions, within each region, the search remains trivial, as all the data that the user is allowed to access will be stored within that region (this includes data replicated from neighbouring regions). In this context, the interesting task isn't so much the actual database searching, or the exact definition of the disjoint regions. The interesting task is to define a system which can dynamically remap the hosting of regions to specific servers, so that no one server gets too busy. As demand grows, I simply plug in 4 more blades and press the 'reconfigure' button (Sorry - I was dreaming for a moment...) The only limiters are the number of servers available and the activity within a single region (which must be servable by a single server), but given the highly localised nature of the application, the regions can be very small, and I don't expect to ever see a region with more than 1GB of data - the aim being for all the data to be resident in RAM. So far, I've already seen some issues. I've been looking at slony-1 to handle the replication between adjacent regions, and not only is it asynchronous (I was hoping otherwise...slony-2 seems a long way off), but changing the db schema has ramifications too. (I.e. changing the schema means redefining each replication). Still - no show stoppers yet. Thanks for your insights, Andy ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
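As a sketch of the "remap regions to servers" bookkeeping (purely illustrative; the table and column names below are invented and nothing here comes from the thread), a small routing catalogue could record which server currently hosts each cell, and the application would look the target up before opening a connection:

-- Sketch: routing catalogue mapping each cell to the server hosting it.
CREATE TABLE cell_host (
    cell_id    text PRIMARY KEY,              -- e.g. '12:7' in a grid scheme
    host       text NOT NULL,                 -- server name or address
    port       integer NOT NULL DEFAULT 5432,
    updated_at timestamp NOT NULL DEFAULT now()
);

-- Rebalancing is then an UPDATE on this table plus the actual data move:
UPDATE cell_host SET host = 'blade07', updated_at = now()
 WHERE cell_id = '12:7';

-- The application resolves the target before connecting:
SELECT host, port FROM cell_host WHERE cell_id = '12:7';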
Re: [SQL] drop PW
Mark, As Joshua said, you can modify pg_hba.conf. What's happening is PostgreSQL is configured to require a password for the IP address that user is connecting from, but you're not supplying one. Set that user and the IP address they connect from to "trust" security in pg_hba.conf and bounce PostgreSQL. An alternative approach is to add the required password to your user's .pgpass file; however, I've never used those so cannot comment. Andy Mark Fenbers wrote: I have created a new 8.3 version DB and populated it. A specific user of this database (george) has been set up with a password, so that every time I use psql or some other utility, I need to supply this password. So I want to drop the password authentication. I tried rerunning createuser (and just pressing Enter when prompted for the new password), but it complains that the user already exists. I can't drop the user because this user owns the DB and all the tables. My postgresql books are all for 7.x, and suggest altering the pg_shadow table (which seems risky to me). I tried: ALTER USER george PASSWORD ''; and that looked like it succeeded, but running psql again prompted me and when I just hit Enter, it complained that no password was supplied. So how do I turn off being prompted for a password for george? (I am aware of the security risks...) Mark -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
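For reference, the two options above look roughly like this (a sketch only; the database name, address and password are placeholders, and in pg_hba.conf the first matching line wins, so the trust line must appear before any broader md5/password line):

# pg_hba.conf: let george connect from one address without a password
# TYPE  DATABASE  USER    CIDR-ADDRESS      METHOD
host    mydb      george  192.168.0.8/32    trust

# ~/.pgpass (must be chmod 0600): supply the password automatically instead
# hostname:port:database:username:password
localhost:5432:mydb:george:secret

After editing pg_hba.conf, reload or restart the server so the change takes effect.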
Re: [SQL] PHP pg_escape_string
Hi, Zdravko Balorda wrote: there are both pg_(un)escape_bytea() functions but only one pg_escape_string()... I wonder if I may be missing something here? Yeah, I think you are. pg_escape_string (funnily enough) escapes string data which is then stored in the database. You would use this for escaping things like apostrophes in a text field so PostgreSQL wouldn't think the apostrophe in the field is the "end of data" marker. However, this string is *not* stored in the database in an escaped form, as it's only escaped for the SQL command; therefore it makes no sense to unescape it. bytea columns, on the other hand, are a way of sending and receiving binary data as a textual representation to/from the database server. The data is encoded in both directions, so you need to unescape it to read it back out. For example, a null byte (byte value 0) cannot be sent or received in a SQL command, because a null byte represents an end-of-string in C. Other byte values similarly cannot be sent in a string because they cannot be converted to a character (e.g. ASCII newline/linefeed.) Regards, Andy -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
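A small illustration of the difference (the table name is made up; the bytea output shown assumes the pre-9.0 escape output format in use at the time):

-- The doubled apostrophe is only SQL-literal escaping: the stored text
-- value is   it's here   and comes back with a single apostrophe.
CREATE TABLE escape_demo (t text, b bytea);
INSERT INTO escape_demo (t, b)
VALUES ('it''s here', E'abc\\000def'::bytea);

-- The text column needs no unescaping; the bytea column comes back in
-- escaped form (abc\000def), which the client must unescape itself.
SELECT t, b FROM escape_demo;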
[SQL] CHECK constraint removing brackets
Hi, I notice this had been raised as a bug (and subsequently over-ruled) so I'm asking how I can achieve the following business rule. I have an order table which has an invoice_id column that links to an invoice table (an order can only have 1 invoice, but one invoice can have multiple orders.) An order can have either an unconfirmed state, or any other state after it's been confirmed. If an order has the state unconfirmed, the invoice_id column must be null, as an invoice won't have been created yet. If an order has any other state except unconfirmed, the invoice_id must not be null. With the above in mind, I decided on the following check to enforce this: (state = 'Unconfirmed'::client.order_state AND invoice_id = NULL) OR (state != 'Unconfirmed'::client.order_state AND invoice_id != NULL) However PostgreSQL (8.4.2) converts this to the following: state = 'Unconfirmed'::client.order_state AND invoice_id = NULL::integer OR state <> 'Unconfirmed'::client.order_state AND invoice_id <> NULL::integer This allows both an order state of "unconfirmed" and a non-null invoice_id, and an order state of "confirmed" and a NULL invoice_id. How can I achieve the above? Thanks, Andy -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] CHECK constraint removing brackets
Hi Tom and Scott, > > I think your real problem is that you're trying to use "= NULL" and > "!= NULL" where you should say IS NULL or IS NOT NULL. Argh, such a school-boy error! This is the first bit of database programming I've done for about 2 months, and I hadn't switched my C++ brain off. I know about <> and !=; for some reason != has always made better sense to me to read, so I tend to write it that way. Cheers, Andy -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
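For completeness, with IS NULL / IS NOT NULL the constraint from the original post would read something like the sketch below (the table name client.orders and the constraint name are assumptions; the enum cast follows the original message):

ALTER TABLE client.orders
  ADD CONSTRAINT orders_state_invoice_check CHECK (
        (state =  'Unconfirmed'::client.order_state AND invoice_id IS NULL)
     OR (state <> 'Unconfirmed'::client.order_state AND invoice_id IS NOT NULL)
  );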
[SQL] Storing null bytes in bytea
Hi all, I was going to post this on the pgsql-php list but I think the issue is more on the PostgreSQL side of things. I'm using PHP 5.2.9 connected to a PostgreSQL 8.3.7 server running on Solaris 10 to try to store the session data for an application using a custom session handler class. The session data (objects/class instances etc) is serialized into raw bytes in PHP and contains a few nul/zero bytes (which are meaningful to PHP when it comes to deserializing the data.) Because of the nul bytes, I've set the session_data column to be a bytea column in my database table. However, I cannot get PostgreSQL to read past the first nul byte on an insert, so the unserialize call fails when it reads it back out of the database and the remaining data is omitted. An example of such a query is this: INSERT INTO system.session (user_id, session_key, session_name, client_browser, date_created, date_expires, ip_address, session_data) VALUES (NULL, '4pc4sjciahoc4fuk1bt4kohe91'::character varying(32), 'AppName'::character varying(50), 'Mozilla/5.0 (Windows; U; Windows NT 6.0; en-GB; rv:1.9.0.9) Gecko/2009040821 Firefox/3.0.9 (.NET CLR 3.5.30729)'::character varying(200), public.get_pg_timestamp(1240853862::integer), public.get_pg_timestamp(1240854162::integer), '192.168.0.8'::inet, E'IsLoggedIn|b:1;CurrentUser|O:17:"Class_SystemUser":4:{s:26:"\\000Class_SystemUser}'::bytea); All other columns are fine, but when it comes to the session_data column, all I end up with is 'IsLoggedIn|b:1;CurrentUser|O:17:"Class_SystemUser":4:{s:26:"'. Everything past the first "\\" byte sequence is ignored. I've tried this with and without the 'E' at the beginning of the value string. Any pointers as to what I'm doing wrong? Thanks, Andy -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] Storing null bytes in bytea
Hi Tom, Your example works fine in psql: regression=# create table t1 (f1 bytea); CREATE TABLE regression=# insert into t1 values (E'IsLoggedIn|b:1;CurrentUser|O:17:"Class_SystemUser":4:{s:26:"\\000Class_SystemUser}'::bytea); INSERT 0 1 regression=# select * from t1; f1 --- IsLoggedIn|b:1;CurrentUser|O:17:"Class_SystemUser":4:{s:26:"\000Class_SystemUser} (1 row) I suspect what is happening is that some layer on the client side is doubling (or perhaps undoubling?) the backslashes for you. Exactly what are you doing with that literal as you build the query? It might help to turn on log_statement so that you can see just what the server is getting. Many thanks for your reply. I was a bit naive about the quality of the client I was using (Navicat 8.1.) It turns out that in the "Memo" view it doesn't show anything past the first zero byte; switch it to hex view, however, and you see the full bytes. I think the reason it wasn't working on the client side (which caused me to examine it in more detail in the database) was that the application code wasn't unescaping the data correctly when it read it back out of the database. Next time I won't be so lazy and will try it out in psql first... Regards, Andy -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
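When in doubt about what actually got stored (as opposed to what a GUI client displays), a couple of server-side checks make it unambiguous. This sketch reuses the column and table names from the message above; both functions are standard in 8.3:

-- octet_length counts the real stored bytes, including embedded nulls.
SELECT octet_length(session_data)
  FROM system.session
 WHERE session_key = '4pc4sjciahoc4fuk1bt4kohe91';

-- encode(..., 'hex') shows every byte, so a null shows up as 00.
SELECT encode(session_data, 'hex')
  FROM system.session
 WHERE session_key = '4pc4sjciahoc4fuk1bt4kohe91';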
Re: [SQL] Obtaining a limited number of records from a long query
Hi Oliveiros Certainly! What you're looking for is the LIMIT...OFFSET syntax. Some examples: SELECT ... LIMIT 10 - return the first 10 records only. SELECT ... LIMIT 10 OFFSET 10 - return 10 records, starting with record 11. Manual page: http://www.postgresql.org/docs/8.3/interactive/queries-limit.html Regards, Andy Oliveiros Cristina wrote: Dear List, Is there any way to force a query to return just a few records? For ex, returning just the first ten rows from a long ORDER BY query ? And then, eventually, obtaining the remaining records at a later time, or in background? Thanks in advance for your help, Best, Oliveiros -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
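A slightly fuller sketch, covering the "fetch the rest later" part of the question as well (the table and column names are invented):

-- First page of an ordered query.
SELECT id, title FROM items ORDER BY created_at DESC LIMIT 10;

-- Second page. Note that OFFSET still makes the server produce and discard
-- the skipped rows, so very deep paging gets progressively slower.
SELECT id, title FROM items ORDER BY created_at DESC LIMIT 10 OFFSET 10;

-- Alternatively, fetch the rows in batches from a cursor, which must live
-- inside a transaction:
BEGIN;
DECLARE items_cur CURSOR FOR
    SELECT id, title FROM items ORDER BY created_at DESC;
FETCH 10 FROM items_cur;   -- first batch
FETCH 10 FROM items_cur;   -- next batch, later in the same transaction
CLOSE items_cur;
COMMIT;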