Re: [GENERAL] Would an index benefit select ... order by?
On Mon, Nov 05, 2007 at 10:09:12AM +0400, rihad wrote:

> What if it's really a limited select: select * from foo order by created_at desc limit SCREEN_AT_A_TIME; because this is what I meant initially (sorry). Would Postgres always use the index to get at sorted created_at values, so I don't *have* to create the index?

Postgres would probably use the index in this case. In general, Postgres plans the execution of a query whichever way the statistics about the data indicate is likely to make it proceed the fastest.

> I think maintaining the index has its own penalty, so in my upcoming project I'm evaluating the option of skipping defining one entirely unless absolutely necessary.

It's always a balancing act. If your code spends the majority of its time running the above select (and Postgres thinks that an index scan is best), then you're better off with the index. If your code spends the majority of its time inserting data (and hence updating indexes), then you're probably better off without the index. You need to know your access patterns and determine which is best for you.

I'd generally leave indexes out until I know that I need them. Indexes are, after all, just a performance hack and therefore the root of all evil! :)

Sam

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
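A minimal sketch of the situation being discussed (table and column names are illustrative, not from the thread): with an index on created_at, the planner can satisfy the ORDER BY ... DESC LIMIT by walking the index backwards instead of sorting the whole table.

```sql
-- Hypothetical table for illustration
CREATE TABLE foo (
    id         serial PRIMARY KEY,
    payload    text,
    created_at timestamptz NOT NULL DEFAULT now()
);

CREATE INDEX foo_created_at_idx ON foo (created_at);

-- With the index in place, EXPLAIN typically shows a backward index
-- scan plus a Limit node rather than a Seq Scan followed by a Sort:
EXPLAIN SELECT * FROM foo ORDER BY created_at DESC LIMIT 20;
```

Without the index, the same query has to scan and sort the whole table before it can return the first 20 rows.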
Re: [GENERAL] Populating large DB from Perl script
On 11/3/07, Mikko Partio [EMAIL PROTECTED] wrote:
> On Nov 2, 2007 8:45 PM, Kynn Jones [EMAIL PROTECTED] wrote:
>> It would be great if there was a stored proc archive somewhere on the web where people could post their procedures. I know there are some code examples in the official documentation but they are few in number.

In a somewhat related note, what happened to the old PostgreSQL cookbook site? Does anyone care to revive it?

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Copy the database..
Tom Lane wrote:
> Rainer Bauer [EMAIL PROTECTED] writes:
>> Wouldn't it be possible to copy the database folder and somehow instruct the postmaster to include the copied data after a restart?
> See CREATE DATABASE's TEMPLATE option. It's a bit crude but I think it'll help.

Thanks, Tom. Works like a charm.

Rainer

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
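The TEMPLATE approach Tom points to can be sketched as follows (database names are illustrative). Note that the source database must have no active connections while it is being copied:

```sql
-- Clone an existing database inside the same cluster.
-- No one may be connected to "olddb" while this runs.
CREATE DATABASE newdb TEMPLATE olddb;
```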
Re: [GENERAL] Problems with PostgreSQL DBI-Link / DBD-ODBC
Hi David,

I'm sorry to bother you again, but I still couldn't get it to work. Like you suggested, I checked for successfully installed schemas. There was one, which I deleted again because I don't know which of my countless tries to create it was the successful one. I think the following code should be all right; however, when executing it I receive the following error message:

ERROR: error from Perl function: error from Perl function: DBD::ODBC::db column_info failed: [Microsoft][ODBC SQL Server Driver]Die Verbindung ist mit Ergebnissen von einem anderen hstmt belegt (SQL-HY000)(DBD: odbc_columns/SQLColumns err=-1) at line 66. at line 53. SQL state: XX000

"Die Verbindung ist mit Ergebnissen von einem anderen hstmt belegt" is German and means "The connection is occupied with results from another hstmt." I'm desperately trying to solve it, but have no idea where to start.

SELECT dbi_link.make_accessor_functions(
    'dbi:ODBC:test'::dbi_link.data_source,
    'sa'::text,
    ''::text,
    '---
AutoCommit: 1
RaiseError: 1
'::dbi_link.yaml,
    NULL::dbi_link.yaml,
    NULL::text,
    NULL::text,
    'test'::text
);

Hope you can help me one more time. Thanks.

Regards, Pit

On Tue, Oct 30, 2007 at 01:12:41PM +0100, Pit M. wrote:
> Hi David! Thanks for the fast reply. So you mean that I might already have created a connection but am still trying to create the same one again?

Yes. Check whether the existing one works :)

> So how can I test it?

Fire up psql and do:

SELECT data_source_id, local_schema, data_source
FROM dbi_link.dbi_connection;

This gives you the schemas you have created successfully. You can then check what's in schema foo by doing:

\dv foo.*

> What is the correct way of accessing this data source in a select statement? Concerning the documents, I wouldn't say that they are unclear about make_accessor_functions(). The problem is that in the dbi-link documents (quote: "Do the following, with the appropriate parameters. Appropriate parameters come from the perldoc of the appropriate DBD...") and in the dbi-odbc documents I can't find any information about the correct use of the parameters for the ODBC connection. That's why I tried to create the connection so many times.

Ah, I see. I don't have an example of dbd-odbc just yet, but you should be able to write a very short perl program which connects to your data source.

http://www.easysoft.com/developer/languages/perl/dbd_odbc_tutorial_part_1.html

Once you've verified that that's working, you can set up a new connection. The parameters you pass into $dbh coincide, not coincidentally, with the first inputs to make_accessor_functions :)

Cheers, David.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] COPY ... FROM and index usage
Hi,

Anyone have comparisons/benchmarks to give some idea of the potential performance gains? Say, compared to doing the stuff here: http://www.postgresql.org/docs/8.2/static/populate.html

Regards, Link.

At 09:35 AM 11/5/2007, Toru SHIMOGAKI wrote:
> Dimitri, thank you for your quoting. I'm a pg_bulkload author. pg_bulkload is optimized especially for appending data to a table with indexes. If you use it, you don't need to drop the index before loading data. But you have to consider the conditions carefully, as Dimitri said below. See also the pg_bulkload README: http://pgfoundry.org/docman/view.php/1000261/473/README.pg_bulkload-2.2.0.txt
>
> Dimitri Fontaine wrote:
>> Hi, On Sunday 04 November 2007 11:22:19, Reg Me Please wrote:
>>> That is, should I drop all indexes during a COPY ... FROM in order to gain the maximum speed to load data?
>> When looking for a way to speed up data loading, you may want to consider pg_bulkload, a project which optimizes index creation while loading data and bypasses constraints: it's useful when you know you trust the input. As I've never used it myself, I can only provide the following links: http://pgfoundry.org/projects/pgbulkload http://pgfoundry.org/docman/view.php/1000261/456/20060709_pg_bulkload.pdf Regards,

-- Toru SHIMOGAKI [EMAIL PROTECTED] NTT Open Source Software Center

---(end of broadcast)---
TIP 6: explain analyze is your friend
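For comparison, the drop-and-recreate pattern the thread is weighing against pg_bulkload looks roughly like this (table, index, and file names are illustrative):

```sql
-- Bulk-load pattern: drop the index, COPY, then rebuild.
-- Rebuilding one index over N rows is usually cheaper than
-- maintaining it through N individual insertions.
DROP INDEX IF EXISTS big_table_created_at_idx;

COPY big_table (id, created_at, payload) FROM '/tmp/data.tsv';

CREATE INDEX big_table_created_at_idx ON big_table (created_at);
ANALYZE big_table;
```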
[GENERAL] How to temporarily disable a table's FK constraints?
Hi, everyone. Is there a standard way to disable a table's foreign-key constraints temporarily? I thought that this would be a fairly common thing to want to do, but I only found this snippet online:

-- to disable
UPDATE pg_class SET reltriggers = 0 WHERE relname = 'your_table';

-- to re-enable
UPDATE pg_class SET reltriggers = count( * )
FROM pg_trigger
WHERE pg_class.oid = tgrelid AND relname = 'your_table';

and it appears that one needs to be root to execute these statements. Is there any other way for non-root users?

TIA! kj

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Restore a database
On Sun, Nov 04, 2007 at 03:52:51PM -0500, Jesus Arocho wrote:
> OK, I committed the worst mistake in db admin. I upgraded to 8.1 on my Debian server but forgot to back up one of the databases. The 7.1 directory is still there. I would like a list of options. I am not sure whether temporarily reinstalling 7.1 will allow access immediately or whether I have to rebuild something.

The Debian packages tend to leave behind old versions of pg_dump and such to help with situations like this, though 7.1 is from quite a while ago...

Have a nice day,
-- Martijn van Oosterhout [EMAIL PROTECTED] http://svana.org/kleptog/
Those who make peaceful revolution impossible will make violent revolution inevitable. -- John F Kennedy
Re: [GENERAL] How to temporarily disable a table's FK constraints?
On Nov 5, 2007, at 10:50 AM, Kynn Jones wrote:
> Hi, everyone. Is there a standard way to disable a table's foreign-key constraints temporarily? I thought that this would be a fairly common thing to want to do, but I only found this snippet online:
>
> -- to disable
> UPDATE pg_class SET reltriggers=0 WHERE relname = 'your_table';
> -- to re-enable
> UPDATE pg_class SET reltriggers = count( * ) FROM pg_trigger WHERE pg_class.oid=tgrelid AND relname = 'your_table';
>
> and it appears that one needs to be root to execute these statements. Is there any other way for non-root users?

Can you explain what it is you're actually trying to do? As in, what's your use case for needing to do this? While there isn't any specific "disable foreign key" functionality, there are different solutions for the use cases in which people think they need this.

The one you listed is actually a total hack and should really be avoided unless you really know what you're doing, as it forces an inconsistency in the catalogs, and if you forget to restore them with that second query, well, good luck to the next guy trying to figure out what you did. You'd be better off just dropping the foreign key than going that route.

I think a good addition to the pieces of advice that get tacked on to the end of the list messages would be something along the lines of: "Don't edit the catalogs unless you absolutely, positively know what you're doing, and even then, think again."

Erik Jones
Software Developer | Emma® [EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)
Emma helps organizations everywhere communicate market in style. Visit us online at http://www.myemma.com

---(end of broadcast)---
TIP 6: explain analyze is your friend
Re: [GENERAL] How to temporarily disable a table's FK constraints?
On 11/05/07 10:50, Kynn Jones wrote:
> Hi, everyone. Is there a standard way to disable a table's foreign-key constraints temporarily? I thought that this would be a fairly common thing to want to do, but I only found this snippet online:
>
> -- to disable
> UPDATE pg_class SET reltriggers=0 WHERE relname = 'your_table';
> -- to re-enable
> UPDATE pg_class SET reltriggers = count( * ) FROM pg_trigger WHERE pg_class.oid=tgrelid AND relname = 'your_table';
>
> and it appears that one needs to be root to execute these statements. Is there any other way for non-root users?

The whole idea of enforcing relational integrity in the database engine is to *not* allow regular users to bypass data integrity checks.

-- Ron Johnson, Jr. Jefferson LA USA
Give a man a fish, and he eats for a day. Hit him with a fish, and he goes away for good!

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[GENERAL] Bitemporal sequenced unique constraint (function/trigger)
Hi there,

This is my first time posting here and I'm hoping somebody can point out where I am going wrong. I am currently trying to use bitemporal tables; by this I mean a table with both valid times and transaction times. These topics are covered by Joe Celko and Richard Snodgrass in their respective books. I have developed a simple schema to test the relevant constraints which are required to keep all the valid times and transaction times in order and to make sure they don't overlap. This is shown below, using a similar schema of Customers, Properties and Prop_Owner tables as Richard Snodgrass does in his book. Of course these constraints are not possible in Postgres, so I have implemented them as functions and then created triggers for them.

Everything seems to be working except for my function/trigger that maintains the referential integrity between the Prop_Owner and Customers tables when there is a gap in the Customers valid time or transaction time. I am using Postgres 8.1 on SUSE 10.2.

vt = valid time
tt = transaction time

Here is the schema:

CREATE TABLE Customers (
    customer_no   INTEGER NOT NULL,
    customer_name CHAR(30) NOT NULL,
    vt_begin DATE DEFAULT CURRENT_DATE,
    vt_end   DATE DEFAULT DATE '9999-12-31',
    tt_start DATE DEFAULT CURRENT_DATE,
    tt_stop  DATE DEFAULT DATE '9999-12-31',
    CONSTRAINT Cust_VTdates_correct CHECK (vt_begin <= vt_end),
    CONSTRAINT Cust_ttdates_correct CHECK (tt_start <= tt_stop),
    PRIMARY KEY (customer_no, vt_begin, vt_end, tt_start, tt_stop)
);

CREATE TABLE Properties (
    prop_no   INTEGER NOT NULL PRIMARY KEY,
    prop_name CHAR(20) NOT NULL
);

CREATE TABLE Prop_Owner (
    prop_no     INTEGER NOT NULL REFERENCES Properties (prop_no),
    customer_no INTEGER NOT NULL,
    vt_begin DATE DEFAULT CURRENT_DATE,
    vt_end   DATE DEFAULT DATE '9999-12-31',
    tt_start DATE DEFAULT CURRENT_DATE,
    tt_stop  DATE DEFAULT DATE '9999-12-31',
    CONSTRAINT PropOwner_VTdates_correct CHECK (vt_begin <= vt_end),
    CONSTRAINT PropOwner_ttdates_correct CHECK (tt_start <= tt_stop),
    PRIMARY KEY (prop_no, customer_no, vt_begin, vt_end, tt_start, tt_stop)
);

Here is the function/trigger I seem to be having trouble with (although there are others which maintain the integrity of the data, i.e. records cannot overlap):

CREATE OR REPLACE FUNCTION P_O_integrity() RETURNS TRIGGER AS $$
DECLARE
    vald INTEGER;
BEGIN
    SELECT 1 INTO vald WHERE NOT EXISTS
        (SELECT * FROM Prop_Owner AS A
         -- there was a row valid in ReferencedTable when A started
         WHERE NOT EXISTS
             (SELECT * FROM Customers AS B
              WHERE A.customer_no = B.customer_no
                AND B.vt_begin <= A.vt_begin AND A.vt_begin < B.vt_end
                AND B.tt_start <= A.tt_start AND A.tt_start < B.tt_stop)
         -- there was a row valid in ReferencedTable when A ended
         OR NOT EXISTS
             (SELECT * FROM Customers AS B
              WHERE A.customer_no = B.customer_no
                AND B.vt_begin < A.vt_end AND A.vt_end <= B.vt_end
                AND B.tt_start < A.tt_stop AND A.tt_stop <= B.tt_stop)
         -- there are no gaps in ReferencedTable during A's period of validity
         OR EXISTS
             (SELECT * FROM Customers AS B
              WHERE A.customer_no = B.customer_no
                AND ((A.vt_begin < B.vt_end AND B.vt_end < A.vt_end)
                  OR (A.tt_start < B.tt_stop AND B.tt_stop < A.tt_stop))
                AND NOT EXISTS
                    (SELECT * FROM Customers AS B2
                     WHERE B2.customer_no = B.customer_no
                       AND ((B2.vt_begin <= B.vt_end AND B.vt_end < B2.vt_end)
                         OR (B2.tt_start <= B.tt_stop AND B.tt_stop < B2.tt_stop)))));
    IF NOT FOUND THEN
        RAISE EXCEPTION 'Referential integrity breached. No covering Foreign Key';
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER P_O_integrity
AFTER INSERT OR UPDATE OR DELETE ON Prop_Owner
FOR EACH ROW EXECUTE PROCEDURE P_O_integrity();

It is this trigger/function (P_O_integrity) that does not work properly. Specifically, it is the following part:

-- there are no gaps in ReferencedTable during A's period of validity
OR EXISTS
    (SELECT * FROM Customers AS B
     WHERE A.customer_no = B.customer_no
       AND ((A.vt_begin < B.vt_end AND B.vt_end < A.vt_end)
         OR (A.tt_start < B.tt_stop AND B.tt_stop < A.tt_stop))
       AND NOT EXISTS
           (SELECT * FROM Customers AS B2
            WHERE B2.customer_no = B.customer_no
              AND ((B2.vt_begin <= B.vt_end AND B.vt_end < B2.vt_end)
                OR (B2.tt_start <= B.tt_stop AND B.tt_stop < B2.tt_stop))))

This can be rewritten as follows:

SELECT customer_no FROM Prop_Owner AS A
WHERE EXISTS
    (SELECT customer_no FROM Customers AS B
     WHERE A.customer_no = B.customer_no
       AND ((A.vt_begin < B.vt_end AND B.vt_end < A.vt_end)
         OR (A.tt_start < B.tt_stop AND B.tt_stop < A.tt_stop))
       AND NOT
Re: [GENERAL] How to temporarily disable a table's FK constraints?
On 11/5/07, Erik Jones [EMAIL PROTECTED] wrote:
> On Nov 5, 2007, at 10:50 AM, Kynn Jones wrote:
>> Is there a standard way to disable a table foreign-key constraint temporarily? I thought that this would be a fairly common thing to want to do...
> Can you explain what it is you're actually trying to do? As in, what's your use case for needing to do this?

A Perl script that needs to update a referring table with many new entries before knowing the foreign keys for each new record. (I described a similar situation in a recent post, Subject: "Populating large DB from Perl script".)

Also, Ron, the *owner* of a table is not a regular user as far as that table is concerned. That user has special privileges, including that of dropping constraints. What I seek to do is no greater a violation of the idea of enforcing relational integrity than is the ability to drop constraints altogether.

BTW, I realize that I can just drop and reinstate constraints, but from the point of view of writing a Perl script to do all this, it would be much easier if I could just temporarily disable all the FK constraints on a table.

kj

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Problems with PostgreSQL DBI-Link / DBD-ODBC
On Mon, Nov 05, 2007 at 05:02:03PM +0100, Pit M. wrote:
> Hi David, I'm sorry to bother you again, but I still couldn't get it to work. Like you suggested, I checked for successfully installed schemas. There was one, which I deleted again because I don't know which of my countless tries to create it was the successful one.

Any one that got created successfully in a transaction is the successful one. Please start over by doing the following from within psql:

BEGIN;
DROP SCHEMA test CASCADE;
DROP SCHEMA dbi_link CASCADE;
\i dbi_link.sql
COMMIT;

If you see COMMIT after that, you have successfully loaded the DBI-Link software into that database. If you don't, let me know what you did get. After that, do the following, editing the file and repeating until you get a COMMIT at the end:

BEGIN;
\i test.sql
COMMIT;

Cheers, David.
-- David Fetter [EMAIL PROTECTED] http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter Skype: davidfetter XMPP: [EMAIL PROTECTED]
Remember to vote! Consider donating to Postgres: http://www.postgresql.org/about/donate

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] How to temporarily disable a table's FK constraints?
On 11/5/07, Kynn Jones [EMAIL PROTECTED] wrote:
> BTW, I realize that I can just drop and reinstate constraints, but from the point of view of writing a Perl script to do all this, it would be much easier if I could just temporarily disable all the FK constraints on a table.

But those aren't the same things. If userA has permission to add/drop FKs, and drops them, inserts data, and then reapplies the foreign key, userA will get an error if they've managed to dork out the data (i.e. data with no foreign key). OTOH, if userA just switches off FK enforcement, adds data and turns it back on, the data can now be incoherent. Things get even more interesting if other users are involved.

If one and only one user ever uses the table, and that user is absolutely sure the data is coherent, then they are the same thing. Otherwise, they certainly are not.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
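The distinction drawn above can be made concrete (table and constraint names are illustrative): re-adding a foreign key scans the existing rows, so orphaned data is caught at re-apply time, whereas merely switching enforcement off and on would not re-check anything.

```sql
-- Hypothetical tables: child(parent_id) should reference parent(id).
ALTER TABLE child DROP CONSTRAINT child_parent_fk;

INSERT INTO child (id, parent_id) VALUES (1, 999);  -- no such parent

-- This fails if any existing row violates the constraint, so the
-- "dorked out" data above is caught when the FK is reinstated:
ALTER TABLE child
    ADD CONSTRAINT child_parent_fk
    FOREIGN KEY (parent_id) REFERENCES parent (id);
```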
Re: [GENERAL] How to temporarily disable a table's FK constraints?
On Nov 5, 2007, at 11:52 AM, Kynn Jones wrote:
> A Perl script that needs to update a referring table with many new entries before knowing the foreign keys for each new record. (I described a similar situation in a recent post, Subject: "Populating large DB from Perl script".)

So, let me rephrase to see if I get you correctly: you want to insert a lot of data and then go back and update the inserted rows with the foreign key values? If that's the case, and you can do all of that in one transaction, then you should talk to your dba or whomever the table owner is about redefining the foreign key as deferrable. That will make it so that the foreign key constraints aren't checked until transaction commit time rather than at statement execution time.

Erik Jones
Software Developer | Emma® [EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)
Emma helps organizations everywhere communicate market in style. Visit us online at http://www.myemma.com

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
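A sketch of the deferrable-constraint approach described here (table and constraint names are illustrative):

```sql
-- One-time DDL, done by the table owner: redefine the FK as deferrable.
ALTER TABLE child DROP CONSTRAINT child_parent_fk;
ALTER TABLE child
    ADD CONSTRAINT child_parent_fk
    FOREIGN KEY (parent_id) REFERENCES parent (id)
    DEFERRABLE INITIALLY IMMEDIATE;

-- In the loading transaction, defer the check until COMMIT:
BEGIN;
SET CONSTRAINTS child_parent_fk DEFERRED;
-- ... insert rows whose parent_id is not yet known/valid,
-- ... then UPDATE them with the real keys ...
COMMIT;  -- the FK is verified here, once, for the whole transaction
```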
[GENERAL] running postgresql
I am running Gutsy (Ubuntu 7.10) and am new to PostgreSQL. I followed the directions at https://help.ubuntu.com/community/PostgreSQL. I executed the following commands:

sudo -u postgres createuser -D -A -P myuser
sudo -u postgres createdb -O myuser mydb
[EMAIL PROTECTED]:~$ psql mydb

and received the following error message:

psql: FATAL: role "dagon" does not exist

Apparently PostgreSQL gets confused by my login name. Is this a PostgreSQL problem or a Gutsy problem? How do I establish a role that will let me set up different databases?

---(end of broadcast)---
TIP 6: explain analyze is your friend
Re: [GENERAL] How to temporarily disable a table's FK constraints?
On Mon, Nov 05, 2007 at 01:52:54PM -0400, Kynn Jones wrote:
> BTW, I realize that I can just drop and reinstate constraints, but from the point of view of writing a Perl script to do all this, it would be much easier if I could just temporarily disable all the FK constraints on a table.

Do you really want to disable the foreign key constraints, or just defer their checking until you commit the transaction? If you just want to defer checking, then [1] may help.

Sam

[1] http://www.postgresql.org/docs/current/static/sql-set-constraints.html

---(end of broadcast)---
TIP 6: explain analyze is your friend
Re: [GENERAL] How to temporarily disable a table's FK constraints?
On 11/5/07, Erik Jones [EMAIL PROTECTED] wrote: ...see about redefining the foreign key as being deferrable... Yep, that'll do it. Thanks! kj ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
[GENERAL] Is there a way to tell how far along a COPY is in the process?
I'm looking for a way to see how many rows have been processed while a COPY is actually running. I can't seem to find a pg_stat table/view that will give me this level of visibility into the process. Is there any way to do this, i.e. to tell the number of rows processed during a COPY into a table while the COPY is still running?

Thanks,

Keaton
[GENERAL] Filter sequence
Hi all,

I'm experiencing unexpected behaviour in the planner. I want the planner to apply a function to the results of a subquery, but it's applying the filter to the whole table. The results of the mixed filters are the same in the end, but it's taking ages. What I want is for the planner to apply the filters in the query's sequence, e.g. filter the subquery, then filter its results again using a function.

The table:

[ users ]
year | cod | name  | age | sex
1997 | 123 | john  |  23 | M
1997 | 456 | smith |  68 | M
1998 | 123 | john  |  23 | M
1998 | 456 | smith |  68 | M
1999 | 789 | mary  |  12 | F
...

The query:

select u2.cod, u2.name
from (
    select u.cod, u.name
    from users u
    where age between 0 and 44 and sex = 'F'
    group by u.cod, u.name
) u2
group by u2.cod, u2.name
having getSalaryPeriod(1997, 1999, u2.cod) >= 1000

The function getSalaryPeriod is in pl/pgsql and basically returns the accumulated salary over the given period (1997 to 1999). I want the planner to filter on age and sex (which restricts the results and groups the user cod), and after that run getSalaryPeriod ONLY on the results of the subquery u2. Instead, it's filtering on age, sex and the salary at the same point:

Filter: ((age >= 0) AND (age <= 44) AND (sex = 'F'::bpchar) AND (getSalaryPeriod(1997, 1999, (cod)::text, 0) >= 1000))

Right now I am rewriting the query to filter age/sex into a temp table and, after that, running getSalary. This way is EXTREMELY faster, but I think there must be a better way =)

Any hints?

-- ACV

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] running postgresql
Well, it defaults to mapping to the current user, so you would have wanted:

psql -U myuser mydb

Or just create a postgres user named dagon and create the db as owned by that user. Or su to myuser before running psql...

-- Scott Ribe [EMAIL PROTECTED] http://www.killerbytes.com/ (303) 722-0567 voice

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] running postgresql
On Nov 5, 2007, at 12:36 PM, Charles wrote:
> I am running Gutsy (Ubuntu 7.10) and new to Postgresql. [...] I executed the following commands:
>
> sudo -u postgres createuser -D -A -P myuser
> sudo -u postgres createdb -O myuser mydb
> [EMAIL PROTECTED]:~$ psql mydb
>
> and received the following error message: psql: FATAL: role dagon does not exist.

Is the name of the db user you created really "myuser"? If so, you need to specify the user to connect as to psql; otherwise it defaults to your current system username:

psql -U myuser mydb

Erik Jones
Software Developer | Emma® [EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)
Emma helps organizations everywhere communicate market in style. Visit us online at http://www.myemma.com

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] running postgresql
Charles [EMAIL PROTECTED] writes:
> I am running Gutsy (Ubuntu 7.10) and new to Postgresql. I followed the direction at https://help.ubuntu.com/community/PostgreSQL. I executed the following commands:
>
> sudo -u postgres createuser -D -A -P myuser
> sudo -u postgres createdb -O myuser mydb
> [EMAIL PROTECTED]:~$ psql mydb
>
> and received the following error message: psql: FATAL: role dagon does not exist. Apparently Postgresql gets confused with my login name.

There's no confusion about it: the default assumption for psql is that your Postgres username is the same as your login name. What else would you have expected it to use? If you want to use a different Postgres username, you need to say so, e.g.:

psql -U myuser mydb

You might still have problems with that, because on many systems the default user authentication method disallows logging in under a username different from your login name. I'm not sure if Ubuntu sets it up that way, but if it does, you'd get something like "ident authorization failed". You'll need to read the manual chapter about client authentication if you want to choose a different behavior, such as password-based authentication.

In any case, using a Postgres username equal to your login name is going to save you lots of typing, so I'd suggest doing it that way unless you have a really good reason not to...

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Is there a way to tell how far along a COPY is in the process?
On Nov 5, 2007, at 1:32 PM, Keaton Adams wrote:
> I'm looking for a way to see how many rows have been processed while a COPY is actually running. I can't seem to find a pg_stat table/view that will give me this level of visibility into the process. Is there any way to do this, to tell the number of rows processed during a COPY into a table while the COPY is still running?

No, because changes made by any given transaction are not visible to other transactions until that transaction commits, which would be when the COPY completes if it is the only statement in the transaction. What you could do is split the file whose data you're COPYing in into smaller pieces and run separate, sequential COPYs if you really need something along those lines.

Erik Jones
Software Developer | Emma® [EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)
Emma helps organizations everywhere communicate market in style. Visit us online at http://www.myemma.com

---(end of broadcast)---
TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] Filter sequence
André Volpato [EMAIL PROTECTED] writes:
> select u2.cod, u2.name
> from (
>     select u.cod, u.name
>     from users u
>     where age between 0 and 44 and sex = 'F'
>     group by u.cod, u.name
> ) u2
> group by u2.cod, u2.name
> having getSalaryPeriod(1997, 1999, u2.cod) >= 1000
>
> I want the planner to filter the user age and sex (which restricts the results and groups the user cod), and after that, run getSalaryPeriod ONLY on the results of the subquery u2. Instead, it's filtering the age, sex and the salary at the same point.

The standard hack is to add OFFSET 0 to the subquery --- this will serve as an optimization fence without actually changing its results.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
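Applied to the query being discussed, the hack looks like this (a sketch, assuming the poster's getSalaryPeriod signature):

```sql
-- "OFFSET 0" makes the planner treat the subquery as an optimization
-- fence: it is planned separately, so the expensive HAVING function
-- runs only over the subquery's already-filtered, grouped output.
select u2.cod, u2.name
from (
    select u.cod, u.name
    from users u
    where u.age between 0 and 44 and u.sex = 'F'
    group by u.cod, u.name
    offset 0          -- optimization fence; returns all rows unchanged
) u2
group by u2.cod, u2.name
having getSalaryPeriod(1997, 1999, u2.cod) >= 1000;
```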
Re: [GENERAL] Is there a way to tell how far along a COPY is in the process?
Keaton Adams wrote:
> I'm looking for a way to see how many rows have been processed while a COPY is actually running. I can't seem to find a pg_stat table/view that will give me this level of visibility into the process. Is there any way to do this, to tell the number of rows processed during a COPY into a table while the COPY is still running?
>
> Thanks, Keaton

I use this little perl function:

sub runscript($) {
    my $fname = pop;
    open(F, $fname) or die;
    print "executing $fname\n";
    my $sql = <F>;
    $db->do($sql) or die 'cant start copy';
    my $c = 0;
    while (<F>) {
        $db->pg_putline($_);
        if ($c % 10_000 == 0) {
            print "$c\r";
            if ($stop) { die; }
        }
        $c++;
    }
    print "$c total\n";
    $db->pg_endcopy;
    unlink($fname);
}

The first line in the file needs to be the sql copy command, like:

print F "copy junk(id, name, address) from stdin;\n";

The following lines are the data, like:

print F "$id\t$name\t$add\n";

-Andy

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[GENERAL] Locale and indexes: howto?
Hi all. While reading chapter 11 of the v8.2 docs I've encountered this sentence: "However, if your server does not use the C locale you will need to create the index with a special operator class to support indexing of pattern-matching queries." Well, I'd like to use the C locale at least for PGSQL. According to my system (Linux, of course) these are my locale settings:

    LANG=it_IT.UTF-8
    LC_CTYPE=it_IT.UTF-8
    LC_NUMERIC=it_IT.UTF-8
    LC_TIME=it_IT.UTF-8
    LC_COLLATE=C
    LC_MONETARY=it_IT.UTF-8
    LC_MESSAGES=it_IT.UTF-8
    LC_PAPER=it_IT.UTF-8
    LC_NAME=it_IT.UTF-8
    LC_ADDRESS=it_IT.UTF-8
    LC_TELEPHONE=it_IT.UTF-8
    LC_MEASUREMENT=it_IT.UTF-8
    LC_IDENTIFICATION=it_IT.UTF-8
    LC_ALL=

(I'm Italian, I think.) So the locale setup seems to be a little more complex than I thought. I already use the C collation, very useful in directory listings. Should I install PGSQL with LC_CTYPE=C as well? Or what? Many thanks in advance. -- Reg me Please
Re: [GENERAL] How to transfer from place to plase without backup/restore
Hristo Filipov wrote: Is there a way to tell PostgreSQL not to tie its installation to the OS and/or CPU (at the price of losing performance), so that one can move the data files freely from one computer to another? No, there isn't. -- Alvaro Herrera http://www.PlanetPostgreSQL.org/ "One fights when it is necessary... not when one is in the mood! Moods are for cattle, or for making love, or for playing the baliset. Not for fighting." (Gurney Halleck)
Re: [GENERAL] How to temporarily disable a table's FK constraints?
Kynn Jones wrote: Hi, everyone. Is there a standard way to disable a table's foreign-key constraints temporarily? I thought that this would be a fairly common thing to want to do, but I only found this snippet online:

    -- to disable
    UPDATE pg_class SET reltriggers = 0 WHERE relname = 'your_table';

    -- to re-enable
    UPDATE pg_class
       SET reltriggers = (SELECT count(*) FROM pg_trigger
                           WHERE pg_class.oid = tgrelid)
     WHERE relname = 'your_table';

and it appears that one needs to be a superuser to execute these statements. Is there any other way for non-superusers? TIA! kj

Hey, I was just thinking about this... instead of disabling the FKs, what about adding a temp table you could COPY into, then fire off a bunch of updates to set up the id fields, etc., then do an INSERT INTO realtable SELECT * FROM temptable? -Andy
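Andy's staging-table idea, sketched out (realtable, staging, and the fix-up update are made-up names for illustration):

```sql
-- Load into a constraint-free staging table first; COPY never
-- touches the real table's FK constraints this way.
CREATE TEMP TABLE staging (LIKE realtable);

COPY staging FROM stdin;

-- Fix up the id fields here while the data is still unconstrained,
-- e.g. (hypothetical):
-- UPDATE staging s SET parent_id = p.id
--   FROM parents p WHERE p.name = s.parent_name;

-- Move the rows in one pass; the FK constraints are checked here,
-- after the data has been made consistent.
INSERT INTO realtable SELECT * FROM staging;
```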
Re: [GENERAL] php and postgres - too many queries too fast?
On Mon, 2007-11-05 at 17:18 -0500, Tom Hart wrote: UPDATE table SET is_ok = 'TRUE' WHERE var1 = value1 AND var2 = value2 AND var3 = value3. As others have said, you need to narrow the problem down a bit more before we can provide useful help. However, a wild guess might be that some of your fields contain NULLs. In SQL, NULL = NULL is _not_ true (more specifically, it is NULL). To see what I mean, do SELECT 1 WHERE NULL = NULL; it will return 0 rows. Regards, Jeff Davis
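Jeff's point can be checked directly in psql; IS NOT DISTINCT FROM is the usual fix when NULLs must compare as equal:

```sql
SELECT 1 WHERE NULL = NULL;                     -- 0 rows: NULL = NULL is NULL
SELECT 1 WHERE NULL IS NOT DISTINCT FROM NULL;  -- 1 row: NULLs compare as equal

-- So a generated WHERE clause that has to match NULL columns should read:
-- WHERE var1 IS NOT DISTINCT FROM value1
--   AND var2 IS NOT DISTINCT FROM value2 ...
```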
Re: [GENERAL] php and postgres - too many queries too fast?
andy wrote: Tom Hart wrote: [snip] OK, enough of the background, here's my issue. For some lovely reason, even though my script reports running an UPDATE query 1563 times (out of 1566 rows), only 316 rows have is_ok set to TRUE. I've tried a few times, changing this and that, and it always updates those 316 rows (no reason for this, the data is actually really good when it comes in, I'm just trying to build an extra layer of assuredness). Of particular note, I'm trying to keep the script fairly uniform and work for all our tables, so the UPDATE statement looks something like UPDATE table SET is_ok = 'TRUE' WHERE var1 = value1 AND var2 = value2 AND var3 = value3. Thomas R. Hart II [EMAIL PROTECTED]

Have you run one of these queries via psql or something other than php? I doubt it's a too-many-too-fast thing; I'd guess a logic error someplace. Why 1563 queries? Can you get the rows modified per query? If your table looks like:

    var1 | var2 | var3
    a    | b    | a
    a    | b    | c

would you fire off two queries like:

    UPDATE table SET is_ok = 'TRUE' WHERE var1 = 'a' AND var2 = 'b' AND var3 = 'a';
    UPDATE table SET is_ok = 'TRUE' WHERE var1 = 'a' AND var2 = 'b' AND var3 = 'c';

If so, do you generate the updates on the fly? -Andy

I thought I had run the queries through psql (it's been a long one :-) and when I tried to verify, I was able to find my problem (yes, I am an idiot). It turns out that for some reason it didn't like to UPDATE when I was using a text type field (specifically an empty text field) in the WHERE clause. To remedy this, I instructed PHP to not use a field in the WHERE clause if the destination type was 'text', and now we're working beautifully (2.405 seconds to run the script through 1566 rows, running updates on 1563 of them). Now I just need to figure out what's going on with those 3 rogue rows. Sorry I hadn't checked all the bases thoroughly, but now they definitely are belong to us. Thanks for the help and have a good night.
[GENERAL] php and postgres - too many queries too fast?
Hey everybody. I'm running postgresql 8.2 on a windows 2k3 server machine. I have a table (two tables actually, in table and table_import format). The _import table has all text type fields, while the main table has datatypes defined. I wrote a PHP script that checks the various type fields (e.g. integer is a number, date is a date, etc.) and sets a bool flag is_ok to true for the row if the data all checks out. The script gets a dump of the data from a SELECT * statement, then takes each row, verifies the various data fields (successfully, I've tested) and sets a variable baddata. At the end of the checking, if baddata still equals 0, then it crafts an UPDATE statement to change the value of is_ok. There is a relatively small number of rows (~1500, small time to you guys I'm sure), and the script runs fairly fast. OK, enough of the background, here's my issue. For some lovely reason, even though my script reports running an UPDATE query 1563 times (out of 1566 rows), only 316 rows have is_ok set to TRUE. I've tried a few times, changing this and that, and it always updates those 316 rows (no reason for this, the data is actually really good when it comes in, I'm just trying to build an extra layer of assuredness). Of particular note, I'm trying to keep the script fairly uniform and work for all our tables, so the UPDATE statement looks something like UPDATE table SET is_ok = 'TRUE' WHERE var1 = value1 AND var2 = value2 AND var3 = value3. for every field in the record (I would have it base it on the primary key, but the field names and locations are different for each table). Is it possible that I'm trying to run too many queries at once (or rather in rapid succession)? I've tried encapsulating the queries in a BEGIN .. COMMIT transaction, which improved my speed quite a bit, but it's still updating only those rows.
I know that it's entirely possible that the problem lies in the PHP, or the network, or the web server configuration, or the moon phase, but is there anything here that jumps out at anybody as a possible cause? TIA Thomas R. Hart II [EMAIL PROTECTED]
Re: [GENERAL] php and postgres - too many queries too fast?
On 11/5/07, Tom Hart [EMAIL PROTECTED] wrote: [snip] Of particular note, I'm trying to keep the script fairly uniform and work for all our tables, so the UPDATE statement looks something like UPDATE table SET is_ok = 'TRUE' WHERE var1 = value1 AND var2 = value2 AND var3 = value3. Can you create a repeatable test version of this? I.e. have one row that does and one row that doesn't update? And give it to the list as a .sql file to be loaded along with the queries you're using? I understand the general idea of what you're saying, but I'm afraid there's not enough detail in your post to really help.
Re: [GENERAL] php and postgres - too many queries too fast?
Tom Hart wrote: [snip] OK, enough of the background, here's my issue. For some lovely reason, even though my script reports running an UPDATE query 1563 times (out of 1566 rows), only 316 rows have is_ok set to TRUE. I've tried a few times, changing this and that, and it always updates those 316 rows (no reason for this, the data is actually really good when it comes in, I'm just trying to build an extra layer of assuredness). Of particular note, I'm trying to keep the script fairly uniform and work for all our tables, so the UPDATE statement looks something like UPDATE table SET is_ok = 'TRUE' WHERE var1 = value1 AND var2 = value2 AND var3 = value3. Thomas R. Hart II [EMAIL PROTECTED]

Have you run one of these queries via psql or something other than php? I doubt it's a too-many-too-fast thing; I'd guess a logic error someplace. Why 1563 queries? Can you get the rows modified per query? If your table looks like:

    var1 | var2 | var3
    a    | b    | a
    a    | b    | c

would you fire off two queries like:

    UPDATE table SET is_ok = 'TRUE' WHERE var1 = 'a' AND var2 = 'b' AND var3 = 'a';
    UPDATE table SET is_ok = 'TRUE' WHERE var1 = 'a' AND var2 = 'b' AND var3 = 'c';

If so, do you generate the updates on the fly? -Andy
[GENERAL] safely increase a single column integer value
Hi, What is the best way, from a concurrency point of view, to increase an integer value in a table? Suppose you count every postback from all the users that are currently browsing your web site. Cheers, Anton
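A single UPDATE is the usual answer: it reads and writes the value under a row lock, so concurrent increments queue up rather than lose updates (table and column names here are invented for illustration):

```sql
-- Safe: two concurrent sessions can never both read the same old
-- value; the second waits for the first's row lock.
UPDATE postback_counts
   SET hits = hits + 1
 WHERE user_id = 42;

-- Unsafe read-then-write pattern to avoid: between the SELECT and
-- the UPDATE another session may increment, and one count is lost.
-- SELECT hits FROM postback_counts WHERE user_id = 42;
-- UPDATE postback_counts SET hits = <value computed in the app> ...;
```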
Re: [GENERAL] for libpq which include defines the type oids
On 06/11/2007, Samantha Atkins [EMAIL PROTECTED] wrote: I am probably overlooking something but where exactly are these found for inclusion in libpq-based programs? Poking around my installation doesn't make it obvious. - samantha

Get the oids dynamically. Using static oids on the client side isn't a good idea: they can change, and then you have to recompile your application.

    postgres=# select 'integer'::regtype::int;
     int4
    ------
       23
    (1 row)

Regards Pavel Stehule
[GENERAL] any way for ORDER BY x to imply NULLS FIRST in 8.3?
Is there any way to hardcode the NULL handling in an index (as per http://www.postgresql.org/docs/8.3/static/indexes-ordering.html) so that SELECT * FROM t ORDER BY foo automatically implies NULLS FIRST (and, similarly, so that SELECT * FROM t ORDER BY foo DESC automatically implies NULLS LAST)? Thing is, I'm using PHP Symfony/Propel to generate the SQL and have no easy way to influence how they do so. Thanks.
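As far as I know there is no setting that changes what a plain ORDER BY implies; what the 8.3 page above does let you do is build the index to match an explicit NULLS FIRST ordering, so a query that spells it out can use the index (a sketch; table and column names are invented):

```sql
-- Matches: SELECT * FROM t ORDER BY foo NULLS FIRST
CREATE INDEX t_foo_nulls_first ON t (foo NULLS FIRST);

-- A plain "ORDER BY foo" still defaults to NULLS LAST, so the
-- generated SQL itself would have to say NULLS FIRST to benefit.
SELECT * FROM t ORDER BY foo NULLS FIRST;
```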
[GENERAL] for libpq which include defines the type oids
I am probably overlooking something but where exactly are these found for inclusion in libpq-based programs? Poking around my installation doesn't make it obvious. - samantha