Re: [GENERAL] restart server on Lion
Hi all,

Problem resolved. The postgres server now starts automatically when my OSX Lion machine reboots. Hopefully the following info will be useful for anyone contemplating a MacPorts installation of PostgreSQL.

The biggest impediment to getting the relaunch behavior I expected came from installing the postgresql84 package rather than postgresql84-server. The latter package contains the LaunchDaemon wrapper script that ultimately relaunches postgres on reboot; it also apparently corrects the postgres-username clobbering that Apple provides with PG on Lion. Furthermore, it prints post-installation and db-initialization instructions to the shell, which obviates having to glean that info from web searches ;)

Note that in my installation, I cleverly broke the LaunchDaemon wrapper's ability to relaunch postgres by making a minor change to MacPorts' default database directory structure. It turns out their postgres initdb instructions reflect paths that are hard-wired in the wrapper. Who knew?

I run PostgreSQL in a closed environment, for development and testing purposes, on my laptop, so installing a *-server package seemed like overkill. Quite the contrary: that's exactly what I should've done from the beginning.

Hope this helps,
Scott

On Jan 30, 2012, at 2:03 PM, Scott Frankel wrote:
> On Jan 30, 2012, at 11:46 AM, A.M. wrote:
>> MacPorts includes a launchd plist to handle this. (Perhaps launchd is the keyword you need to search.)
>>
>> /Library/LaunchDaemons/org.macports.postgresql90-server.plist (for PostgreSQL 9.0, of course)
>> http://od-eon.com/blogs/calvin/os-x-lion-postgresql/
>>
>> You can adjust the script to your liking.
>>
>> Cheers,
>> M
>
> Aha! Nice to know which tree to bark up ;)
>
> I installed macports: postgresql84 @8.4.10_0 and there's no trace of a postgres launch daemon plist file having been installed on my machine. My best guess at this point is that the plist file may only come with the postgresql84-server @8.4.10 port. I'll test that theory tomorrow and keep this list posted.
>
> Thanks!
> Scott

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
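[Editor's note] The MacPorts *-server ports register their wrapper script with launchd; a minimal sketch of loading such a daemon by hand, assuming the 8.4 plist follows the naming of the 9.0 example above (the exact label and path may differ on your system):

```shell
# Load the LaunchDaemon installed by the postgresql84-server port
# (plist name is an assumption based on the 9.0 example in this thread):
sudo launchctl load -w /Library/LaunchDaemons/org.macports.postgresql84-server.plist

# Verify that launchd now knows about the job:
sudo launchctl list | grep postgresql84
```

The `-w` flag marks the job as enabled so it survives reboots, which is the relaunch behavior discussed above.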
[GENERAL] restart server on Lion
Hi all,

What's the best/correct way to cause the PostgreSQL server to start up automatically when rebooting on OSX 10.7 Lion? I'm using a MacPorts install of postgres 8.4 and went through a couple of grueling days, sudo'd up to my eyeballs, to restore the postgres user and get a working installation.

To start the service, I'm currently invoking this on the cmd-line:

    sudo su postgres -c '/opt/local/lib/postgresql84/bin/pg_ctl -D /opt/local/var/postgresql84/defaultdb -l /opt/local/var/postgresql84/defaultdb/data/logfile.txt start'

That's pretty cumbersome for each reboot. I've also seen references to manually invoking this on the cmd-line:

    sudo serveradmin start postgres

But that yields:

    postgres:error = CANNOT_LOAD_BUNDLE_ERR

Is there an /etc or OSX-specific solution people are using for restarts? My PG 8.3 server restarted automagically on OSX 10.5. While I don't recall setting up anything specifically to make that happen, memory fades...

Thanks in advance!
Scott
Re: [GENERAL] restart server on Lion
Hi M,

On Jan 30, 2012, at 11:46 AM, A.M. wrote:
> MacPorts includes a launchd plist to handle this. (Perhaps launchd is the keyword you need to search.)

Aha! Nice to know which tree to bark up ;)

> /Library/LaunchDaemons/org.macports.postgresql90-server.plist (for PostgreSQL 9.0, of course)
> http://od-eon.com/blogs/calvin/os-x-lion-postgresql/
>
> You can adjust the script to your liking.
>
> Cheers,
> M

I installed macports: postgresql84 @8.4.10_0 and there's no trace of a postgres launch daemon plist file having been installed on my machine. My best guess at this point is that the plist file may only come with the postgresql84-server @8.4.10 port. I'll test that theory tomorrow and keep this list posted.

Thanks!
Scott
[GENERAL] pipe text to copy statement stdin input
Hi all,

Is there a way to pipe text into a COPY statement's stdin input using cmd-line psql?

I'm using the following syntax to enter large strings of text into a table. The text itself has a json-like syntax that has the potential for carrying numerous special characters.

    COPY mytable(name, description, text) FROM stdin;
    the text
    \.

Problem is that my terminal's copy-paste buffer is much smaller than the text I need to insert.

Notes:
- I do not have superuser perms for the db, so passing a file instead of stdin is not an option.
- Ditto for using \i to import a file.
- The db is password protected, so invoking `psql` as a non-interactive command may not be possible. Right?
- If I'm wrong, anyone have example syntax of how to create a valid COPY statement?

I've found an interesting OSX cmd-line util that copies/pastes between Terminal and the pasteboard. Though I think this just gets bitten by the file restriction anyway, eg:

    % cat bigfile.txt | pbcopy
    % psql DBNAME USERNAME (PASSWORD???) <<EOF
    COPY mytable(name, description, text) FROM stdin;
    pbpaste -> stdin(???)
    \.
    EOF

psql 8.3, OSX 10.5.8, Terminal

Suggestions greatly appreciated! Thanks,
Scott
Re: [GENERAL] pipe text to copy statement stdin input
John, Michael,

Thanks for the thorough tips. Worked perfectly!

The .pgpass file is quite useful. Could've saved myself a lot of typing the past few years!

Note that since I had already prepared a CSV-formatted file for the COPY statement, once I created the .pgpass file I opted for Michael's suggestion, eg:

    cat myfile | psql -c "COPY mytable (name, description, text) FROM stdin"

Thanks!
Scott

On Jun 21, 2011, at 1:10 PM, John R Pierce wrote:
> You can get around the password issue via .pgpass: put this file in your home directory with permissions 600, and lines like...
>
>     hostname:port:database:username:password
>
> You may replace any fields with *, so like...
>
>     localhost:*:*:youruser:yourpassword
>
> To copy data from a file, use the \copy command in psql; create a .SQL file like...
>
>     \copy yourtable(name,description,text) from stdin
>     val,val,val
>     val,val,val
>     ...
>     \.
>
> then execute this file like
>
>     $ psql -f yourfile.sql -d dbname
>
> There is no file size restriction here, as it reads that file as it's going and streams it to the sql COPY command...
>
> (note indents are purely to show verbatim stuff from my mail text, there are no indents in these files)
>
> --
> john r pierce                            N 37, W 122
> santa cruz ca                         mid-left coast
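[Editor's note] Since the thread's text fields can carry quotes, commas, and newlines, the safe route is to let a CSV writer do the quoting before streaming the file to `\copy ... with csv`. A minimal sketch in Python (table and file names are placeholders):

```python
import csv
import io

# Rows whose fields contain quotes, semicolons, and embedded newlines;
# csv.writer applies the quoting that COPY ... CSV expects.
rows = [
    ("foo1", "first row", '{"key": "semi;colons \'and\' quotes"}'),
    ("foo2", "second row", "multi\nline text"),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerows(rows)
payload = buf.getvalue()

# The payload could then be written to a file and streamed to psql, e.g.:
#   psql -d DBNAME -c "\copy mytable(name, description, text) from stdin with csv" < data.csv
```

Fields with embedded newlines or double quotes come out quoted (with internal quotes doubled), so the whole file can safely round-trip through COPY's CSV mode.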
[GENERAL] pg_dump and boolean format
Hi all,

Is it possible to control the representation of boolean data in a pg_dump? The pg docs say that booleans can be stored as 't', 'true', 'y', 'yes', or '1'. My db is storing them as 't' and pg_dump is outputting them as 'true'. Can I coerce pg_dump to output 't' as 1?

I'm using pg_dump to create an SQL script for importing into another database. This will happen automatically and repeatedly. Unfortunately, the other database (sqlite) is limited to only integer representations of booleans: 0 or 1.

I'm currently using the following command:

    pg_dump -aDFp -v mydatabase > mydatabase_BAK.sql

Thanks!
Scott
Re: [GENERAL] pg_dump and boolean format
On Aug 11, 2010, at 10:12 AM, Michael Glaesemann wrote:
> On Aug 11, 2010, at 13:00 , Scott Frankel wrote:
>> The pg docs say that booleans can be stored as 't', 'true', 'y', 'yes', or '1'.
>
> Booleans are not stored as those literals: those are only acceptable literals (i.e., string representations) for boolean values.

Right.

>> I'm using pg_dump to create an SQL script for importing into another database. This will happen automatically and repeatedly. Unfortunately, the other database (sqlite) is limited to only integer representations of booleans: 0 or 1.
>
> One option is to use COPY to export the data in a format you like. For example:
>
>     COPY (SELECT CAST(boolean_column AS INT) FROM my_table) TO STDOUT;
>
> Then write a script which reads the exported data files and loads them into your sqlite database. COPY WITH CSV would likely be helpful as well.

With the number of tables and insert statements I'll be wrangling, I was hopeful I could coerce pg_dump to perform the AS INT cast.

Thanks for the tip!
Scott

> Michael Glaesemann
> grzm seespotcode net
Re: [GENERAL] pg_dump and boolean format
On Aug 11, 2010, at 3:57 PM, Michael Glaesemann wrote:
> On Aug 11, 2010, at 18:21 , Scott Frankel wrote:
>> With the number of tables and insert statements I'll be wrangling, I was hopeful I could coerce pg_dump to perform the AS INT cast.
>
> Query the database Information Schema or system tables to provide a list of the table names. By using COPY WITH CSV HEADER and the names of the tables, you should be able to generate INSERT statements programmatically.

Good point. Thanks for the suggestions!
Scott

> Michael Glaesemann
> grzm seespotcode net
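[Editor's note] The post-processing script Michael suggests can be quite small: read the CSV-with-header export and rewrite the boolean columns as 0/1 before building the SQLite inserts. A sketch (table layout and column names are hypothetical):

```python
import csv
import io

# Boolean literals PostgreSQL may emit, mapped to SQLite-friendly ints.
PG_BOOL = {"t": 1, "true": 1, "f": 0, "false": 0}

def convert_bools(csv_text, bool_columns):
    """Rewrite the named columns of a COPY ... WITH CSV HEADER export
    so boolean literals become 0/1."""
    reader = csv.DictReader(io.StringIO(csv_text))
    converted = []
    for row in reader:
        for col in bool_columns:
            row[col] = PG_BOOL[row[col].lower()]
        converted.append(row)
    return converted

# Example export for a hypothetical table with an 'active' boolean column:
export = "id,name,active\n1,foo,t\n2,bar,false\n"
rows = convert_bools(export, ["active"])
```

The resulting dicts can then be fed to parameterized INSERTs against the sqlite3 module, column names coming from the CSV header row.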
Re: [GENERAL] MySQL versus Postgres
On Aug 8, 2010, at 2:45 AM, Torsten Zühlsdorff wrote:
> Scott Frankel schrieb:
>> On Aug 6, 2010, at 6:13 AM, Torsten Zühlsdorff wrote:
>>> John Gage schrieb:
>>>> On reflection, I think what is needed is a handbook that features cut and paste code to do the things with Postgres that people do today with MySQL.
>>>
>>> Every one of my trainees wants such a thing - for databases, for other programming languages, etc. It's the worst thing you can give them. They will copy, they will paste, and they will understand nothing. Learning is the way to understanding, not copying.
>>
>> I couldn't disagree more. Presenting working code (at least snippets) should continue to be a fundamental part of any documentation project.
>
> You misunderstand me. Working code is a fundamental part of any documentation. But we are talking about a handbook with code that works in PostgreSQL and does the same things as in MySQL. This way the trainees won't learn how PostgreSQL works; they just learn the different examples. Giving them training problems and the PostgreSQL handbook is, in my experience, the best way. It takes them longer to solve the problems, but this way they become able to solve problems which are not related to the presented examples.

I understand and appreciate your position. Thanks for the clarification.

While I believe that this thread has, for all intents and purposes, run its course (and I look forward to reading the documentation it informs), I'm going to go out on a limb and present an additional use-case that may be unpopular, or at least controversial.

There are times when a documentation's audience is not interested in taking the subject matter to expert level (eg: informed supervisory or vendor-client relationships, proof-of-concept development, hobbies, &c.). For those cases, a working understanding is all that's strictly necessary. An annotated, cookbook-style code reference is especially well suited to that mode of learning.

Regards,
Scott

> Greetings from Germany,
> Torsten
Re: [GENERAL] MySQL versus Postgres
On Aug 6, 2010, at 6:13 AM, Torsten Zühlsdorff wrote:
> John Gage schrieb:
>> On reflection, I think what is needed is a handbook that features cut and paste code to do the things with Postgres that people do today with MySQL.
>
> Every one of my trainees wants such a thing - for databases, for other programming languages, etc. It's the worst thing you can give them. They will copy, they will paste, and they will understand nothing. Learning is the way to understanding, not copying.

I couldn't disagree more. Presenting working code (at least snippets) should continue to be a fundamental part of any documentation project.

As a first-time db programmer and 'casual' user of PostgreSQL, I read Bruce Momjian's book to get started. I rely on the example code presented in the current documentation to learn best practices, compare against it to troubleshoot my code when it breaks, and find inspiration for elegant solutions to challenges I encounter.

I would further suggest that a QuickStart guide would be an ideal addition to the current documentation efforts. Scanning a basic soup-to-nuts solution can often answer fundamental questions more efficiently than full-blown documentation can, especially when the user is not already familiar with the specific terminology to search for in the index.

Regards,
Scott

> Greetings,
> Torsten
>
> --
> http://www.dddbl.de - a database layer that abstracts working with 8 different database systems, separates queries from applications, and can automatically evaluate query results.
[GENERAL] PQescapeStringConn
Hi all,

What's the best way to insert long strings that contain numerous special characters into a PG database? I'm assuming that importing an SQL script with prepared statements is the way to go. If so, how to escape all the special characters? I've found documentation on PQescapeStringConn but haven't found any examples of it in use.

I have a number of very long strings that each contain many instances of semi-colons, single quotes, forward and back slashes, etc. I'm looking for an efficient and safe way to write them to my db using a prepared statement. An example follows.

Thanks in advance!
Scott

    CREATE TABLE foo (
        foo_id      SERIAL PRIMARY KEY,
        name        VARCHAR(32) UNIQUE NOT NULL,
        description TEXT,
        body        TEXT DEFAULT NULL,
        created     timestamp DEFAULT CURRENT_TIMESTAMP,
        UNIQUE (name));

    PREPARE fooprep (VARCHAR(32), text, text) AS
        INSERT INTO foo (name, description, body) VALUES ($1, $2, $3);

    EXECUTE fooprep('foo1', 'this is foo1',
        '#!()[]{}; qwe'poi'asdlkjzxcmnb; /\1\2\3\4\5\6\7\8\9/');
Re: [GENERAL] PQescapeStringConn
On Jul 30, 2010, at 1:13 AM, Richard Huxton wrote:
> On 30/07/10 07:52, Scott Frankel wrote:
>> I have a number of very long strings that each contain many instances of semi-colons, single quotes, forward and back slashes, etc. I'm looking for an efficient and safe way to write them to my db using a prepared statement.
>
> What language? From C?

Importing an SQL script, eg:

    \i my_script_of_prepared_statements.sql

The full statement (below) illustrates the problem I'm encountering. The text I'm trying to insert has single quotes and semi-colons in it. These get interpreted, causing errors. I'm looking for a way to insert strings with special characters into my db, hopefully avoiding having to escape each one by hand. (They are numerous and the strings quite long.) eg:

    INSERT INTO foo (name, body)
    VALUES ('foo', 'this will fail 'fer sher;' on the characters inside the string');

Thanks again!
Scott

    PREPARE fooprep (VARCHAR(32), text, text) AS
        INSERT INTO foo (name, description, body) VALUES ($1, $2, $3);

    EXECUTE fooprep('foo1', 'this is foo1',
        '#!()[]{}; qwe'poi'asdlkjzxcmnb; /\1\2\3\4\5\6\7\8\9/');

> This is basically PQprepare+PQexecPrepared, or PQexecParams if you want to do both in one step. There is no need to escape strings if they are passed as parameters - the library knows it's a string and handles that for you.
>
> Where you *do* have to worry about escaping strings is if you are building up a query and have e.g. a varying table-name. It's legal for table names to contain spaces etc., but they need to be quoted correctly.
>
> Every application language will have its own library, but they all have a similar prepare+exec option (and I think most use the C libpq interface underneath).
>
> --
> Richard Huxton
> Archonet Ltd
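[Editor's note] Richard's point - bound parameters need no escaping - holds in any client library. A minimal sketch using Python's stdlib sqlite3 module to stand in for libpq's PQexecParams, since the binding principle is the same (table is an in-memory stand-in for the foo table above):

```python
import sqlite3

# In-memory database standing in for the PostgreSQL table in the thread.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (name TEXT, body TEXT)")

# The troublesome string from the thread: quotes, semicolons, slashes.
body = "#!()[]{}; qwe'poi'asdlkjzxcmnb; /\\1\\2\\3\\4\\5\\6\\7\\8\\9/"

# Passed as a bound parameter, no manual escaping is required:
conn.execute("INSERT INTO foo (name, body) VALUES (?, ?)", ("foo1", body))

stored = conn.execute("SELECT body FROM foo WHERE name = 'foo1'").fetchone()[0]
```

The string round-trips byte-for-byte because the driver transmits it as data, never as SQL text.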
Re: [GENERAL] PQescapeStringConn
On Jul 30, 2010, at 9:11 AM, Richard Huxton wrote:
> You're going to have to pre-process the strings in some way, or there will always be the chance of problems. Probably the best way to handle a bulk insert is through the COPY command.

Excellent! Thanks for the lead. I see from the docs:

    COPY weather FROM '/home/user/weather.txt';

I am using Python to create the strings; and Python is the ultimate consumer of the strings after storage in the db. Thus I have a fair degree of control over the strings' formatting. COPY from a plain text file on my server looks very promising.

Thanks!
Scott

>     BEGIN;
>     COPY foo (name, body) FROM stdin;
>     n1	b1
>     n2	b2
>     foo	this will fail 'fer sher;' on the characters inside the string
>     \.
>     COMMIT;
>
> By default COPY expects one line per row, with columns separated by tab characters. You can also have '/path/to/file/name' instead of stdin, but the file will need to be accessible from the backend process. If that's not the case (and it probably isn't), then you want to use psql's \copy variant, which views the world from the client end of things.
>
> COPY is faster than separate inserts, and the only characters you need to worry about are tab, carriage-return and newline. These would be replaced by the sequences \t, \r, \n. I don't know what format your strings are in initially, but a bit of perl/python/ruby can easily tidy them up.
>
> Finally, more recent versions of PG have a COPY that supports CSV formatting too. See the manuals for more details on this.
>
> --
> Richard Huxton
> Archonet Ltd
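[Editor's note] The pre-processing Richard describes for COPY's text format is small enough to sketch: escape the backslash itself first, then the three whitespace characters COPY treats specially. Function names here are my own:

```python
def copy_escape(value: str) -> str:
    """Escape a string for PostgreSQL COPY text format: backslash first,
    then tab, carriage-return, and newline."""
    return (value.replace("\\", "\\\\")
                 .replace("\t", "\\t")
                 .replace("\r", "\\r")
                 .replace("\n", "\\n"))

def copy_row(*fields: str) -> str:
    """Join escaped fields with tabs to form one COPY input line."""
    return "\t".join(copy_escape(f) for f in fields)

line = copy_row("foo", "multi\nline\twith \\backslash")
```

Quotes and semicolons pass through untouched, which is exactly why COPY suits the strings in this thread better than SQL literals do.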
[GENERAL] sql dump
Hi all,

Is it possible to perform an SQL dump without using pg_dump?

I have a special case situation wherein my application has access to a remotely-hosted PG (8.3) database, but does not have access to its admin tools. (There's a longer backstory here that I'm happy to explain if necessary.) I'm looking for an efficient way to dump all the data in the DB without having to SELECT * on each table.

Thanks in advance!
Scott
Re: [GENERAL] sql dump
Thanks for all the suggestions. COPY may work for my purposes. The SSH tunnel option for using pg_dump is very interesting.

Thanks!
Scott

On Jul 26, 2010, at 9:18 AM, David Fetter wrote:
> On Mon, Jul 26, 2010 at 08:58:59AM -0700, Scott Frankel wrote:
>> Hi all,
>> Is it possible to perform an SQL dump without using pg_dump?
>
> No, but there may be more options for using pg_dump than you have looked at. One example would be to use pg_dump on one machine with an SSH tunnel to the other one's local PostgreSQL port (5432 by default, but check which yours is). For example:
>
>     ssh -fNR 5432:localhost:<port> postg...@your.host.dom
>
> would let you connect to localhost:<port> with pg_dump and any other PostgreSQL tools.
>
> Cheers,
> David.
> --
> David Fetter da...@fetter.org http://fetter.org/
> Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
> Skype: davidfetter  XMPP: david.fet...@gmail.com
> iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics
>
> Remember to vote!
> Consider donating to Postgres: http://www.postgresql.org/about/donate
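[Editor's note] A local-forward variant of David's tunnel, which is often the simpler direction when the client machine initiates the dump; hostnames, user names, and the local port are placeholders:

```shell
# Forward local port 5433 to the remote server's PostgreSQL port
# (host, user, and ports are placeholders - adjust to your setup):
ssh -fNL 5433:localhost:5432 user@remote.host.example

# pg_dump can then reach the remote database through the tunnel:
pg_dump -h localhost -p 5433 -U dbuser mydatabase > mydatabase.sql
```

This works whenever the remote host accepts SSH, even if port 5432 itself is firewalled, since PostgreSQL sees the connection as coming from its own localhost.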
[GENERAL] prepared statements
Hi all,

I'm working with prepared statements directly in pg for the first time and have a couple of questions.

Does a prepared statement used to insert into a table need to insert into all columns of the table? I've found that, for a table with a serial sequence key as its first column, I have to specify the key in my prepared statement or I get type errors:

    ERROR:  column "foo_id" is of type integer but expression is of type character varying

What's the best way to specify the next value for the serial sequence key if subqueries are not allowed in a prepared statement's execute parameter?

    ERROR:  cannot use subquery in EXECUTE parameter

For example, given the following table definition:

    CREATE TABLE foo (
        foo_id      SERIAL PRIMARY KEY,
        name        VARCHAR(32) UNIQUE NOT NULL,
        description TEXT,
        body        TEXT DEFAULT NULL,
        created     timestamp DEFAULT CURRENT_TIMESTAMP,
        UNIQUE (name));

What's the best way to insert several records that have lots of special characters in the body column? eg:

    PREPARE fooprep (int, VARCHAR(32), text, text) AS
        INSERT INTO foo VALUES ($1, $2, $3, $4);

    EXECUTE fooprep((SELECT nextval('foo_id_seq')), 'foo1', 'this is foo1',
        '#!()[]{} qwepoiasdlkjzxcmnb /\1\2\3\4\5\6\7\8\9/');

Thanks in advance!
Scott
Re: [GENERAL] prepared statements
Works! The bug in my example was not passing the INSERT statement an explicit list of column names, as per any non-prepared insert.

Thanks!
Scott

On Jul 23, 2010, at 2:53 PM, Daniel Verite wrote:
> Scott Frankel wrote:
>> I've found that, for a table with a serial sequence key as its first column, I have to specify the key in my prepared statement or I get type errors: ERROR: column foo_id is of type integer but expression is of type character varying.
>
> Let's try:
>
>     test=# create table t(a serial, b int);
>     NOTICE:  CREATE TABLE will create implicit sequence "t_a_seq" for serial column "t.a"
>     CREATE TABLE
>     test=# prepare a as insert into t(b) values($1);
>     PREPARE
>     test=# execute a(2);
>     INSERT 0 1
>     test=# select * from t;
>      a | b
>     ---+---
>      1 | 2
>     (1 row)
>
> No error here...
>
> Best regards,
> --
> Daniel
> PostgreSQL-powered mail user agent and storage: http://www.manitou-mail.org
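[Editor's note] The fix generalizes: name only the columns being supplied and let the serial column take its default, rather than passing nextval() as an EXECUTE parameter. A sketch against the foo table from the original post:

```sql
-- Name the supplied columns; foo_id takes its
-- DEFAULT nextval('foo_id_seq') automatically.
PREPARE fooprep (VARCHAR(32), text, text) AS
    INSERT INTO foo (name, description, body) VALUES ($1, $2, $3);

EXECUTE fooprep('foo1', 'this is foo1', 'body text for foo1');
```

This also sidesteps the "cannot use subquery in EXECUTE parameter" error, since no sequence value needs to be computed client-side.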
Re: [GENERAL] add column specify position
Excellent! Thanks for providing the link. I think the 'add columns and move data' option would best fit my needs.

Thanks!
Scott

On Feb 2, 2010, at 11:44 PM, Shoaib Mir wrote:
> On Wed, Feb 3, 2010 at 4:14 PM, Scott Frankel fran...@circlesfx.com wrote:
>> Hi all,
>> Is it possible to specify a position when adding a column to a table?
>
> Not possible, but have a read of http://wiki.postgresql.org/wiki/Alter_column_position and look at the alternative options.
>
> --
> Shoaib Mir
> http://shoaibmir.wordpress.com/
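[Editor's note] The 'move data' family of approaches on that wiki page boils down to building a table with the desired column order and swapping it in. A sketch using the qwe/foo/bar names from the question; the other column names and the text type are assumptions, and constraints, indexes, and permissions would need recreating:

```sql
-- Recreate the table with "bar" in foo's old (5th) position:
BEGIN;
CREATE TABLE qwe_new AS
    SELECT col1, col2, col3, col4,
           NULL::text AS bar,      -- replacement column, in position 5
           col6, col7, col8
    FROM qwe;
DROP TABLE qwe;
ALTER TABLE qwe_new RENAME TO qwe;
COMMIT;
```

Running it inside a transaction means readers see either the old table or the finished new one, never the gap in between.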
[GENERAL] add column specify position
Hi all,

Is it possible to specify a position when adding a column to a table? I want to swap one column for another without losing the column's position. eg: given that 'foo' is the 5th column in an 8-column table, I want to replace it with a 'bar' column at column 5:

    ALTER TABLE qwe DROP COLUMN foo;
    ALTER TABLE qwe ADD COLUMN bar;

I'm writing a sql script to migrate from one version of my schema to another, dropping one column and adding another in a number of tables. Unfortunately, the windowing toolkit I'm using relies on integer values to determine column positions.

Any ideas come to mind?

Thanks in advance!
Scott
[GENERAL] storing windows path strings
Hi all,

What's the proper way to store directory path strings in a table, especially ones with backslashes like windows? I'm currently using a prepared statement with bind value. Do I need to pre-parse all user entries to identify any backslash characters before passing the string to my insert statement?

Searches through the documentation turned up references to escape_string_warning (boolean) and standard_conforming_strings (boolean). I'm not sure I'll have access to server-side config.

Thanks in advance!
Scott

eg:

    CREATE TABLE foo (
        foo_id  SERIAL PRIMARY KEY,
        name    VARCHAR(32) UNIQUE NOT NULL,
        dirpath text DEFAULT NULL);

    INSERT INTO foo (name, dirpath) VALUES ('bar', 'c:\windows\path\to\bar');
    -- WARNING: nonstandard use of \\ in a string literal
Re: [GENERAL] storing windows path strings
Excellent! Mild testing so far, but it seems to work.

Thanks!
Scott

On Jan 29, 2010, at 3:00 PM, Cédric Villemain wrote:
> 2010/1/29 Scott Frankel lekn...@pacbell.net:
>> What's the proper way to store directory path strings in a table, especially ones with backslashes like windows? I'm currently using a prepared statement with bind value. Do I need to pre-parse all user entries to identify any backslash characters before passing the string to my insert statement?
>
> Explicitly set standard_conforming_strings to ON in postgresql.conf, *but* take care that it doesn't break your application:
>
>     INSERT INTO foo (name, dirpath) VALUES ('bar', 'c:\windows\path\to\bar');
>
> --
> Cédric Villemain
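[Editor's note] When the server-side setting can't be changed, a client-side alternative is to emit E'' escape-string literals, which behave the same regardless of standard_conforming_strings. A sketch in Python (function name is my own; bind parameters, as the original post uses, remain the preferred route):

```python
def pg_quote_literal(value: str) -> str:
    """Return a PostgreSQL E'' string literal for value.

    In escape-string form, backslashes are escape characters, so they
    are doubled; single quotes are doubled as well.
    """
    escaped = value.replace("\\", "\\\\").replace("'", "''")
    return "E'" + escaped + "'"

literal = pg_quote_literal(r"c:\windows\path\to\bar")
sql = f"INSERT INTO foo (name, dirpath) VALUES ('bar', {literal});"
```

The resulting literal decodes back to the original single-backslash path on the server, with no "nonstandard use of \\" warning.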
[GENERAL] db cluster location
Hi all, Is there a query I can use to find the location of a db cluster? I've found a term that looks promising (\d+ pg_database;), but can't seem to tease a directory path from it. The docs list several common locations, but mine doesn't appear to be one of them. Searching my local file system for "data" yields too many hits for that approach to be useful. I installed pg via Macports. Thanks in advance! Scott
Re: [GENERAL] db cluster location
Exactly what I was looking for. Thanks! On Jan 21, 2010, at 10:50 AM, Thomas Kellerer wrote: Scott Frankel wrote on 21.01.2010 18:34: Hi all, Is there a query I can use to find the location of a db cluster? SELECT name, setting FROM pg_settings WHERE category = 'File Locations'; You need to be connected as the superuser (usually postgres)
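A sketch of the queries involved (the pg_settings query needs a superuser connection, as noted above; SHOW data_directory is a shorter equivalent when only the cluster directory is wanted):

```sql
-- All file-location settings for the cluster:
SELECT name, setting FROM pg_settings WHERE category = 'File Locations';

-- Just the data directory:
SHOW data_directory;
```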
[GENERAL] where clauses and selects
Hello, Is it possible to perform a select in the where clause of a statement? I have a situation where I've got one arm tied behind my back: I can only have a single table in the select and from clauses, but the where clause appears to be freed from that restriction. Given a statement as follows: SELECT foo.foo_id, foo.name FROM foo, bar WHERE foo.bar_id = bar.bar_id AND bar.name = 'martini'; I'm looking for a way to recast it so that the select and from clauses refer to a single table and the join referencing the second table occurs in the where clause. For example, something like this: SELECT foo.foo_id, foo.name FROM foo WHERE (SELECT * FROM foo, bar WHERE ...) foo.bar_id = bar.bar_id AND bar.name = 'martini'; I've explored the where exists clause, but it's not supported by the application toolkit I'm using. AFAIK, I've only got access to where ... Thanks in advance! Scott
[GENERAL] where clauses and multiple tables
Hello, Is it possible to join tables in the where clause of a statement? I ask because I have a situation where I only have access to the where clause of a select statement on a single table, yet I want to perform a join on multiple tables. eg: Given a statement as follows: SELECT foo.foo_id, foo.name FROM foo, bar WHERE foo.bar_id = bar.bar_id AND bar.name = 'martini'; I'm looking for a way to recast it so that the select and from clauses still refer to a single table and the join referencing the second table occurs in the where clause. For example, something like this: SELECT foo.foo_id, foo.name FROM foo WHERE (SELECT * FROM foo, bar WHERE ...) foo.bar_id = bar.bar_id AND bar.name = 'martini'; I've explored the where exists clause, but that's not supported by the application toolkit I'm using. AFAIK, I've only got access to where ... Thanks in advance! Scott
Re: [GENERAL] where clauses and multiple tables
On Sep 8, 2009, at 4:02 PM, David W Noon wrote: On Tue, 8 Sep 2009 14:25:20 -0700, Scott Frankel wrote about [GENERAL] where clauses and multiple tables: Is it possible to join tables in the where clause of a statement? [snip] Given a statement as follows: SELECT foo.foo_id, foo.name FROM foo, bar WHERE foo.bar_id = bar.bar_id AND bar.name = 'martini'; Just use an IN predicate: SELECT foo_id, name FROM foo WHERE bar_id IN (SELECT bar_id FROM bar WHERE name = 'martini'); This is frequently called a semi-join. This looks very promising. Thanks for the info! Scott -- Regards, Dave [RLU #314465] david.w.n...@ntlworld.com (David W Noon) Scott Frankel President Circle-S Studios www.circlesfx.com 510-339-7477 (o) 510-332-2990 (c)
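For comparison, the three equivalent formulations sketched side by side, using the toy foo/bar tables from the question; the IN form is the one that fits the poster's WHERE-only restriction:

```sql
-- Plain two-table join (what the toolkit doesn't allow):
SELECT foo.foo_id, foo.name
FROM foo JOIN bar ON foo.bar_id = bar.bar_id
WHERE bar.name = 'martini';

-- Semi-join via IN, confined entirely to the WHERE clause:
SELECT foo_id, name FROM foo
WHERE bar_id IN (SELECT bar_id FROM bar WHERE name = 'martini');

-- Semi-join via EXISTS (equivalent, but unavailable to the poster):
SELECT foo_id, name FROM foo
WHERE EXISTS (SELECT 1 FROM bar
              WHERE bar.bar_id = foo.bar_id AND bar.name = 'martini');
```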
[GENERAL] where clause question
Hello, Is it possible to perform selects in a where clause of a statement? Given a statement as follows: SELECT foo.foo_id, foo.name FROM foo, bar WHERE foo.bar_id = bar.bar_id AND bar.name = 'martini'; I'm looking for a way to recast it so that the select and from clauses refer to a single table and the join referencing the second table occurs in the where clause. For example, something like this: SELECT foo.foo_id, foo.name FROM foo WHERE (SELECT * FROM foo, bar WHERE ...) foo.bar_id = bar.bar_id AND bar.name = 'martini'; I've explored the where exists clause, but it's not supported by the application toolkit I'm using. AFAIK, I've only got access to where ... Thanks in advance! Scott
Re: [GENERAL] view table pkey values
Thanks for the thorough explanation and link to more docs. Very much appreciated! Scott On Aug 24, 2009, at 11:03 AM, Raymond O'Donnell wrote: On 24/08/2009 18:37, Scott Frankel wrote: If I understand how tables are managed internally, there are 2 sequences: my explicit foo_id and the internal sequence foo_foo_id_seq: public | foo_foo_id_seq | sequence | pguser | It's this internal sequence that must be involved in the collision, since I'm not specifying an insert value for my explicit foo_id column. Your column foo_id is just that - a column. It's not a sequence. It's an integer column which is specified to take its default value from a sequence, which Postgres creates for you and names foo_foo_id_seq. In fact, serial isn't a real type - it's syntactic sugar that (i) creates the sequence, named <table name>_<column name>_seq, (ii) creates the column as type integer, (iii) makes the sequence owned by the column, and (iv) sets the default value of the column to nextval('<sequence name>'). The serial pseudo-type just saves you doing all this by hand. When you don't enter an explicit value for the serial column, the specified default value gets entered instead, which is the return value of the function nextval('foo_foo_id_seq'). You can of course enter an explicit value into the column, and then the default is ignored; by the same token, the associated sequence doesn't get incremented, so this can lead to collisions if you're not careful.
For example:

postgres=# create table test(a serial primary key, b text);
NOTICE:  CREATE TABLE will create implicit sequence "test_a_seq" for serial column "test.a"
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "test_pkey" for table "test"
CREATE TABLE
postgres=# insert into test(b) values('This will work');
INSERT 0 1
postgres=# select * from test;
 a |       b
---+----------------
 1 | This will work
(1 row)
postgres=# select currval('test_a_seq');
 currval
---------
       1
(1 row)
postgres=# insert into test(a, b) values(2, 'This works too');
INSERT 0 1
postgres=# select * from test;
 a |       b
---+----------------
 1 | This will work
 2 | This works too
(2 rows)
postgres=# select currval('test_a_seq');
 currval
---------
       1
(1 row)
postgres=# insert into test(b) values('This will bomb');
ERROR:  duplicate key value violates unique constraint "test_pkey"
postgres=# select currval('test_a_seq');
 currval
---------
       2
(1 row)

You can read all about it here: http://www.postgresql.org/docs/8.3/static/datatype-numeric.html#DATATYPE-SERIAL I hope all this helps. :-) Ray. -- Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland r...@iol.ie Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
[GENERAL] view table pkey values
Hello, Is it possible to select or otherwise view a table's primary key values? I'm troubleshooting the following error: ERROR: duplicate key value violates unique constraint foo_pkey The insert that yields the error seems innocuous enough: INSERT INTO foo (color_id, ordinal, person_id) VALUES (1, 1019, 2); It seems as if there's a sequence (foo_pkey) that's got some weird values in it. The table itself looks like this: CREATE TABLE foo ( foo_id SERIAL PRIMARY KEY, color_id INTEGER NOT NULL REFERENCES color(color_id) ON DELETE NO ACTION, ordinal INTEGER DEFAULT NULL, person_id INTEGER NOT NULL REFERENCES person(person_id) ON DELETE SET NULL ON UPDATE CASCADE, created timestamp DEFAULT CURRENT_TIMESTAMP); Thanks in advance, Scott
Re: [GENERAL] view table pkey values
Hi Ray, On Aug 24, 2009, at 9:48 AM, Raymond O'Donnell wrote: On 24/08/2009 17:31, Scott Frankel wrote: The insert that yields the error seems innocuous enough: INSERT INTO foo (color_id, ordinal, person_id) VALUES (1, 1019, 2); It seems as if there's a sequence (foo_pkey) that's got some weird values in it. The table itself looks like this: CREATE TABLE foo ( foo_id SERIAL PRIMARY KEY, If the sequence's current value is lower than the highest foo_id in the table, then you'll get collisions. If I understand how tables are managed internally, there are 2 sequences: my explicit foo_id and the internal sequence foo_foo_id_seq: public | foo_foo_id_seq | sequence | pguser | It's this internal sequence that must be involved in the collision, since I'm not specifying an insert value for my explicit foo_id column. You can fix that by using setval() to set the sequence value to a number higher than any currently in foo_id. Aha! So the explicit foo_id value cannot exceed the internal sequence, foo_foo_id_seq, value? They should actually be the same, unless there've been insert errors, right? Is there a command that lists the values for the internal, foo_foo_id_seq, sequence? Thanks! Scott Ray. -- Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland r...@iol.ie Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
Re: [GENERAL] view table pkey values
Got it! Yes, this started happening after loading from a pg_dump. Thanks for the explanation! Scott On Aug 24, 2009, at 10:52 AM, Tom Lane wrote: Scott Frankel lekn...@pacbell.net writes: Is there a command that lists the values for the internal, foo_foo_id_seq, sequence? select * from foo_foo_id_seq; The usual way to get into this sort of trouble is to load a bunch of data into the table while explicitly specifying ID values. It will take the data (as long as it doesn't conflict with existing IDs) but nothing happens to the sequence. pg_dump knows it has to update the sequence too, but a lot of other tools don't; and even with pg_dump a selective restore can mess things up. regards, tom lane
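The usual repair after such a load, sketched with this thread's table and sequence names, is to resynchronize the sequence with setval():

```sql
-- Inspect the sequence and the table's highest key:
SELECT last_value FROM foo_foo_id_seq;
SELECT max(foo_id) FROM foo;

-- Resynchronize; subsequent default inserts resume past the max:
SELECT setval('foo_foo_id_seq', (SELECT max(foo_id) FROM foo));
```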
Re: [GENERAL] roll back to 8.1 for PyQt driver work-around
Thanks for the tips. I've created separate log files for the two versions of pg, but postmaster still won't start. When I try to start 8.3, the log file lists a fatal error in the postgresql.conf file. But there are no obvious errors in that file. Line 107 reads: shared_buffers = 1600kB. I've also tried grep'ing for port 5432, but it all looks clear. eg: tiento:~ root# lsof -Pni | grep :5432 -- yields nothing tiento[yfilm]% ps uxwa | grep postgres postgres 46547 0.0 0.1 604328 1204 s006 S+ 11:09PM 0:00.10 -tcsh root 46546 0.0 0.1 75536 1064 s006 S 11:09PM 0:00.02 su - postgres frankel 46720 0.0 0.0 599780 392 s003 R+ 8:58AM 0:00.00 grep postgres Here's the full text from the log file: FATAL: syntax error in file /Library/PostgreSQL8/data/postgresql.conf line 107, near token kB FATAL: syntax error in file /Library/PostgreSQL8/data/postgresql.conf line 107, near token kB LOG: could not bind IPv6 socket: Address already in use HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry. FATAL: incorrect checksum in control file LOG: could not bind IPv6 socket: Address already in use HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry. FATAL: incorrect checksum in control file LOG: could not bind IPv6 socket: Address already in use HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry. FATAL: incorrect checksum in control file LOG: could not bind IPv6 socket: Address already in use HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry. Thanks again! Scott On Jul 5, 2008, at 10:43 PM, Tom Lane wrote: Scott Frankel [EMAIL PROTECTED] writes: Any suggestions for how best to launch one of two different versions of pg installed on the same machine? I have both 8.3 and 8.1 installed on a MacBookPro (OS X 10.5.2). I stopped the 8.3 postmaster using pg_ctl in order to roll back to 8.1.
Problem is, now I can't seem to start the server using either version. It looks like you're trying to use the same data directory for both versions, which won't work. They're not compatible on-disk. When I launch 8.1 with pg_ctl, it yields a postmaster starting message; but then a status check shows that the server is not running. Issuing the same commands for 8.3, I get similar results. Looking into the postmaster log file (your -l specification) might yield some insight. pg_ctl itself doesn't really know why the postmaster failed to start. regards, tom lane Scott Frankel President/VFX Supervisor Circle-S Studios 510-339-7477 (o) 510-332-2990 (c)
Re: [GENERAL] roll back to 8.1 for PyQt driver work-around
Sorry to drag this on further. Though I'm now able to start pg8.3 again (thanks!), I still can't launch pg8.1. Rolling back to 8.1 is my goal in order to work around a driver issue in Qt. Is there an example postgresql.conf file for pg 8.1 I can review? Mine appears to be valid only for pg8.3. Adding quotes to the shared_buffers value allows pg8.3 to start successfully. Unfortunately, pg8.1 continues to have issues with it. eg: FATAL: syntax error in file /Library/PostgreSQL8/data/postgresql.conf line 107, near token kB FATAL: parameter shared_buffers requires an integer value FATAL: unrecognized configuration parameter default_text_search_config Thanks again! Scott On Jul 6, 2008, at 10:48 AM, Tom Lane wrote: Scott Frankel [EMAIL PROTECTED] writes: When I try to start 8.3, the log file lists a fatal error in the postgresql.conf file. But there are no obvious errors in that file. Line 107 reads: shared_buffers = 1600kB. You need quotes, like shared_buffers = '1600kB' FATAL: incorrect checksum in control file This looks like a version compatibility problem, though I'm surprised it wasn't complained of earlier. regards, tom lane Scott Frankel President/VFX Supervisor Circle-S Studios 510-339-7477 (o) 510-332-2990 (c)
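The underlying incompatibility here: memory units such as kB in postgresql.conf were only introduced in PostgreSQL 8.2 (and default_text_search_config in 8.3), so an 8.3-era config file cannot be read by 8.1. A minimal sketch of the difference, assuming the stock 8 kB block size:

```
# postgresql.conf for 8.2 and later: memory units are accepted
shared_buffers = 1600kB

# postgresql.conf for 8.1: bare integer, counted in 8 kB buffers
# (1600 kB / 8 kB = 200)
shared_buffers = 200
```

In practice each major version should get its own data directory, initialized by that version's initdb, with its own postgresql.conf.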
[GENERAL] roll back to 8.1 for PyQt driver work-around
Hello, I just signed on the list. Any suggestions for how best to launch one of two different versions of pg installed on the same machine? I have both 8.3 and 8.1 installed on a MacBookPro (OS X 10.5.2). I stopped the 8.3 postmaster using pg_ctl in order to roll back to 8.1. Problem is, now I can't seem to start the server using either version. When I launch 8.1 with pg_ctl, it yields a postmaster starting message; but then a status check shows that the server is not running. Issuing the same commands for 8.3, I get similar results. eg: [tiento:~] postgres% /opt/local/lib/postgresql81/bin/pg_ctl start -D /Library/PostgreSQL8/data -l /Users/Shared/pgLog/pgLog.txt postmaster starting [tiento:~] postgres% /opt/local/lib/postgresql81/bin/pg_ctl status -D /Library/PostgreSQL8/data pg_ctl: neither postmaster nor postgres running I'm trying to roll back to version 8.1 as I've run into a bug in Qt's QPSQL driver. I'm able to create tables and add rows of data to them; but my model.select() statements all fail. The "This version of PostgreSQL is not supported and may not work" message is ominous ;) Thanks in advance! Scott Scott Frankel President/VFX Supervisor Circle-S Studios 510-339-7477 (o) 510-332-2990 (c)
Re: [GENERAL] pg_restore error
Using custom output format instead of tar output (-Fc instead of -Ft) appears to work without error. My initial tests with the backup db seem to match the original db. The compressed output files are also a lot smaller ;) Note that the problematic tar files were never transferred between platforms. They are written to a local disk and are accessed directly from that location. Thanks for the info! Scott On Dec 1, 2005, at 8:02 PM, Tom Lane wrote: Scott Frankel [EMAIL PROTECTED] writes: Yes, the tar file contains a file called 1765.dat. A `cat` of that file shows nothing more than an empty line (i.e.: a carriage return). -rw--- 1 frankel prod1 Nov 29 11:20 1765.dat Extracting the archive, tar reported a lone zero block. I don't know what this refers to. Hmm, how big is the tarfile, and would you be willing to send it to me? I'm happy to either help debug Ft or switch to Fc in my scripts. I was under the impression, though, that Ft was required to backup db's with blobs. I am storing some thumbnail jpg images in my db. Either -Fc or -Ft can handle blobs ... and actually, in 8.1 the issue is gone entirely because plain text pg_dump can too. I'd also be interested to know if the pg_restore error is due to my upgrade to postgres 8.1 or macosx 10.4.3. Your guess is as good as mine at the moment. One thought that comes to mind --- did you move the tarfile across machines at any point, and if so could it have been munged by a Unix/DOS newline conversion? regards, tom lane ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
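For the record, a sketch of the custom-format dump/restore pair described above (database and file names are placeholders):

```shell
# Dump in custom format: compressed, random-access, and handles blobs:
pg_dump -Fc -b -v -f mydb.dump mydb

# Restore into an existing (e.g. freshly created) database:
pg_restore -d mydb -v mydb.dump
```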
[GENERAL] pg_restore error
What does this error mean? pg_restore: [tar archiver] could not find header for file 1765.dat in tar archive All of a sudden, I'm encountering this error as I debug a problem with the db interface I'm using. My backups are created using a script that hasn't changed recently, though I have just upgraded OS from MacOSX v10.4.2 to 10.4.3. My backup script executes: pg_dump -Ft -b -v source_name output_name Verbose output yields no error messages. MacOSX v10.4.3 PostgreSQL v8.1.0 Psycopg2 v2.0b5 Any suggestions? Thanks in advance! Scott
Re: [GENERAL] pg_restore error
Yes, the tar file contains a file called 1765.dat. A `cat` of that file shows nothing more than an empty line (i.e.: a carriage return). -rw--- 1 frankel prod1 Nov 29 11:20 1765.dat Extracting the archive, tar reported a lone zero block. I don't know what this refers to. tar: A lone zero block at 9242 I'm happy to either help debug Ft or switch to Fc in my scripts. I was under the impression, though, that Ft was required to backup db's with blobs. I am storing some thumbnail jpg images in my db. I'd also be interested to know if the pg_restore error is due to my upgrade to postgres 8.1 or macosx 10.4.3. Thanks Scott On Dec 1, 2005, at 4:47 PM, Tom Lane wrote: Scott Frankel [EMAIL PROTECTED] writes: What does this error mean? pg_restore: [tar archiver] could not find header for file 1765.dat in tar archive It means either the tar file is corrupt or pg_restore has a bug. Can tar read the file correctly? If so, does it find a file named 1765.dat therein? (Historically, the tar code in pg_dump/pg_restore has not been as well debugged as the custom format, so you might want to just use -Fc instead. If you're really intent on using -Ft, be prepared to help us debug it.) regards, tom lane
Re: [GENERAL] build errors on MacOSX
Voila! Xcode v2.1 worked. Thanks! Scott On Nov 9, 2005, at 2:22 PM, Tom Lane wrote: Scott Frankel [EMAIL PROTECTED] writes: My build of postgresql-8.1.0 is failing on MacOSX. /usr/bin/libtool: for architecture: cputype (16777234) cpusubtype (0) file: -lSystem is not an object file (not allowed in a library) We've seen this before. IIRC, it means you're using outdated Xcode tools; you need to update to Xcode 2.0 or later (2.1 is current I believe). Try searching the list archives for that error message if you want more details. regards, tom lane
[GENERAL] save history error, ignorable?
I just upgraded from postgres 7.x.x to 8.1.0 and am getting the following error from cmd-line psql: could not save history to file "my_home_dir/.psql_history": Invalid argument. Searching the archives, I found a thread about this error wherein the poster "solved" the problem by commenting out the reporting code in the source file. Tom's reply (*) suggests a discrepancy in libraries on MacOSX. Nonetheless, I *seem* to have the intended functionality (readline history on the cmd-line), in spite of the error msg. Questions: - Is the error msg ignorable? - Else, is there a post-compile fix for the problem? - Does the correct lib exist on MacOSX? If so, is there doco on how to specify its use on compiling? My platform: MacOSX v10.4.2, PostgreSQL v8.1.0. Thanks in advance! Scott (*) [ from Tom Lane, via archives search ] The Postgres code in that area hasn't changed at all. Maybe in this build you linked against Apple's builtin libedit instead of libreadline? libedit seems to have a bizarre definition of the result value from write_history()
Re: [GENERAL] save history error, ignorable?
OK. Thanks! Scott On Nov 10, 2005, at 10:20 AM, Tom Lane wrote: Scott Frankel [EMAIL PROTECTED] writes: [ Mac OS X ] could not save history to file my_home_dir/.psql_history: Invalid argument Searching the archives, I found a thread about this error wherein the poster solved the problem by commenting out the reporting code in the source file. If you look back further, there is more discussion of this several months back. It seems that libedit and libreadline are not compatible about how write_history() reports errors. We haven't figured out a nice solution for this yet. - Is the error msg ignorable? Pretty much. - Does the correct lib exist on MacOSX? You can grab libreadline and install it into /usr/local, or use the version from Fink. regards, tom lane
[GENERAL] pg_restore errors
After upgrading from postgres 7.x.x to 8.1.0, my database restore produces the following error msgs. The database is quite pedestrian: a dozen or so tables, a couple of triggers, some comments, etc. Note that even though I get the following errors, a '\dt+' returns the comment strings. Are the errors ignorable? Should I be concerned? Thanks again! Scott pg_restore: [archiver (db)] Error while PROCESSING TOC: pg_restore: [archiver (db)] Error from TOC entry 3; 0 2200 COMMENT SCHEMA public postgres pg_restore: [archiver (db)] could not execute query: ERROR: must be owner of schema public Command was: COMMENT ON SCHEMA public IS 'Standard public schema'; pg_restore: WARNING: no privileges could be revoked pg_restore: WARNING: no privileges were granted WARNING: errors ignored on restore: 1 MacOSX v10.4.2 PostgreSQL v8.1.0 The dump was created using the following cmd-line: pg_dump -Ft -b -v dbname dumpname.tar
Re: [GENERAL] pg_restore errors
Voila, 2. Executed as superuser and, yes, no errors. Thanks again! Scott On Nov 10, 2005, at 3:51 PM, Tom Lane wrote: Scott Frankel [EMAIL PROTECTED] writes: After upgrading from postgres 7.x.x to 8.1.0, my database restore produces the following error msgs. SCHEMA public postgres pg_restore: [archiver (db)] could not execute query: ERROR: must be owner of schema public Command was: COMMENT ON SCHEMA public IS 'Standard public schema'; pg_restore: WARNING: no privileges could be revoked pg_restore: WARNING: no privileges were granted Apparently you're not running the restore as a database superuser? Those particular messages can be ignored, since public presumably exists (and is commented) already. If you see any others then you might want to worry. regards, tom lane
Re: [GENERAL] Restoring from filesystem backup
I'm no expert, but I did just accomplish a restore from backup; so at least I can help you with your second question. pg_dump is the animal you'll want to use to create your backup. I use it wrapped in a script that automatically timestamps and names the output appropriately. The args look like this: pg_dump -Ft -b -v the_db_name the_output_file.tar pg_restore restores the database. i.e.: pg_restore -d the_db_name the_output_file.tar Hope this helps - Scott On Nov 10, 2005, at 2:41 PM, Dianne Yumul wrote: Hello to everybody. I just have a couple of newbie questions. I'm trying to restore our databases from a file system backup. The backups are done with rsync, the postgresql service is stopped before backups are done on the server. The system is Fedora Core 4 with Postgresql 8.0.3. Now back to restoring. I tried the following on a development box :) 1. stop postgresql service 2. mv /home/postgres/data /home/postgres/data.old 3. rsync over desired data/ folder from an external drive 4. start postgresql service The last step failed with this error from the logs: postmaster: could not find the database system Expected to find it in the directory /home/postgres/data, but could not open file /home/postgres/data/global/pg_control: No such file or directory Sure enough pg_control data was missing, actually /home/postgres/data/global was empty. So I removed the 'bad' data/ directory and replaced it with a previous day's backup, and everything restored nicely. My questions are (1) what am I doing wrong that the filesystem backups would sometimes have an empty /home/postgres/data/global directory, thereby not permitting me to restore from it, and (2) what is the preferred way to restore from a file system backup. Any help, even a shove to the right direction, would be greatly appreciated. Thank you and have a pleasant day. 
Dianne
[GENERAL] build errors on MacOSX
My build of postgresql-8.1.0 is failing on MacOSX. Following the INSTALL file, I passed configure flags for bonjour (what's that?) and python support. i.e.: ./configure --with-bonjour --with-python My invocation of make, launched from the postgres account, returns the following errors: gcc -no-cpp-precomp -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Winline -Wendif-labels -fno-strict-aliasing -dynamiclib -install_name /usr/local/pgsql/lib/libpq.4.dylib -compatibility_version 4 -current_version 4.1 -multiply_defined suppress fe-auth.o fe-connect.o fe-exec.o fe-misc.o fe-print.o fe-lobj.o fe-protocol2.o fe-protocol3.o pqexpbuffer.o pqsignal.o fe-secure.o md5.o ip.o wchar.o encnames.o noblock.o pgstrcasecmp.o thread.o -L../../../src/port -lresolv -o libpq.4.1.dylib /usr/bin/libtool: for architecture: cputype (16777234) cpusubtype (0) file: -lSystem is not an object file (not allowed in a library) make[3]: *** [libpq.4.1.dylib] Error 1 make[2]: *** [all] Error 2 make[1]: *** [all] Error 2 make: *** [all] Error 2 OSX v10.4.2 PostgreSQL v8.1.0 Any suggestions? Thanks Scott
[GENERAL] urgent: another postmaster
After a server crash forced a reboot, `pg_ctl start` fails with a FATAL error. Log output says that the lock file, postmaster.pid, already exists. Can I just su to root and delete the .pid file to relaunch? Or will this have nasty side-effects? Thanks in advance! Scott
Re: [GENERAL] urgent: another postmaster
Bingo - Thanks! On May 23, 2005, at 1:14 PM, Scott Marlowe wrote: On Mon, 2005-05-23 at 14:58, Scott Frankel wrote: After a server crash forced a reboot, `pg_ctl start` fails with a FATAL error. Log output says that the lock file, postmaster.pid, already exists. Can I just su to root and delete the .pid file to relaunch? Or will this have nasty Just check first to make sure there really are no postmasters running and you should be safe ps aex|grep post and if there's no output, delete the pid file. If there really are other postmasters or postgreses running, then kill them first.
Re: [GENERAL] Please Recommend a Front End App
Though I haven't embarked on any expeditions into the world of printing yet, I've found Python to be a terrific language for db access. My solution uses Apple hardware, OSX, postgres, the psycopg programming interface to Python, Python, and my slowboat hacking with wxPython to build a GUI front end. So far so good! (Thanks in large part to useful info from this mail list!) Before you compost your older iMacs, you might want to check out YellowDog Linux. I've run various releases of their OS over the years and it's a great way to keep your hardware viable. Best luck - Scott On May 12, 2005, at 4:06 PM, Kurt Gibson wrote: Newbie - please help me choose a direction. I want to know what would be the best front-end app/language to use for postgresql for exact form replication and ease of use. PHP, Python, Java, Rekall (the Kompany), other? All suggestions welcome and appreciated. Background--- I am coming from Filemaker Pro 5.0 and Mac OS 10.2.8. I am the newest of newbies to postgresql. I just downloaded the database today. I have not started the database and do not even know if I can run it on my computers. I have a few old WinTel machines at home but run mainly Macs at the office. One important solution I created in Filemaker Pro requires very detailed and exact replication of official court forms with database data entered on the forms. Imagine a form with small type, boxes and spacing that must be exactly as on the official form and data from the database on the form as printed to pdf. One nice thing about Filemaker is that its reporting ability is very flexible. I also have an extensive solution with approx 50 tables/databases to run my small business. I built a calendar, contacts, clients, timelog, and billing solution. I have been bumping my head against Filemaker Pro's 50 database limit for years - server would allow you 125 databases for about $1000. For those of you with no experience with Filemaker, a database is a table. 
Each file only has one table and all layouts/reports/forms and scripts are in the same file. This setup has pros and cons that are irrelevant now. I am trying to migrate to linux and postgresql. I love the MacOS but they just upgraded to 10.4 (tiger) and made 3 of my iMacs obsolete because they do not have on-board firewire so are not supported by 10.4. Funny how Linux can be compiled to run on a 386 intel chip from a floppy drive but MacOS cannot now run without firewire on a 4 year old computer. Filemaker.com created Filemaker Pro 7 which adds great features and removes the 50 table limit - with a $1000 price tag for 3 units and a 5-simultaneous-user limit. The upgrade to 7 may require extensive changes to my solutions. Now would be a natural time to change to a new system since I have to relearn/change everything anyway. Additionally, I can only assume that I will be in for more of the same treatment as I and people like me get our data more entrenched in their proprietary business models. Thanks for your help.
Re: [GENERAL] on insert rule primary key
Problem solved. Hacking away 'til the wee hours yielded a solution using an ON UPDATE rule, adding a row to a new table. Successful test sample follows, for anyone interested. Scott

CREATE TABLE colors (
    clrs_pkey  SERIAL PRIMARY KEY,
    first_name text UNIQUE DEFAULT NULL,
    fav_color  text DEFAULT NULL
);

CREATE TABLE mono (
    mono_pkey  SERIAL PRIMARY KEY,
    clrs_pkey  integer REFERENCES colors,
    monochrome text DEFAULT NULL
);

CREATE RULE mono_rule AS ON UPDATE TO colors
    WHERE NEW.fav_color = 'blanco' OR NEW.fav_color = 'negro'
    DO INSERT INTO mono (clrs_pkey, monochrome) VALUES (NEW.clrs_pkey, 'mono');

INSERT INTO colors (first_name, fav_color) VALUES ('carmen', 'verde');
INSERT INTO colors (first_name, fav_color) VALUES ('carlos', 'amarillo');
INSERT INTO colors (first_name, fav_color) VALUES ('rocio', 'rojo');
INSERT INTO colors (first_name, fav_color) VALUES ('miranda', 'rosa');

UPDATE ONLY colors SET fav_color = 'blanco' WHERE clrs_pkey = 1;
UPDATE ONLY colors SET fav_color = 'negro' WHERE clrs_pkey = 3;

test=# SELECT * FROM mono;
 mono_pkey | clrs_pkey | monochrome
-----------+-----------+------------
         1 |         1 | mono
         2 |         3 | mono
(2 rows)

On Apr 27, 2005, at 1:20 PM, Scott Frankel wrote: I am trying to construct a rule that performs an UPDATE if specific conditions are met in an INSERT statement. Limiting UPDATE's SET action to just the new row by testing for the new primary key is failing for some reason. Yet if I eliminate the test, all rows in the table are updated. The actual rule I'm building must handle several OR clauses in its conditional test, so I've included that in the following sample. The output I would've expected would have both the Carlos and Miranda inserts yielding their favorite color, azul.
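For readers without a Postgres instance handy, the same blanco/negro logging behavior can be sketched with a plain trigger. The following runs standalone in SQLite via Python's sqlite3; SERIAL becomes INTEGER PRIMARY KEY, and the AFTER UPDATE trigger is my stand-in for Scott's ON UPDATE rule, not the solution he actually ran.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE colors (
    clrs_pkey  INTEGER PRIMARY KEY,
    first_name TEXT UNIQUE DEFAULT NULL,
    fav_color  TEXT DEFAULT NULL
);
CREATE TABLE mono (
    mono_pkey  INTEGER PRIMARY KEY,
    clrs_pkey  INTEGER REFERENCES colors,
    monochrome TEXT DEFAULT NULL
);
-- fires only when the new fav_color is one of the monochrome values
CREATE TRIGGER mono_trigger AFTER UPDATE ON colors
WHEN NEW.fav_color IN ('blanco', 'negro')
BEGIN
    INSERT INTO mono (clrs_pkey, monochrome) VALUES (NEW.clrs_pkey, 'mono');
END;
""")
conn.executemany("INSERT INTO colors (first_name, fav_color) VALUES (?, ?)",
                 [("carmen", "verde"), ("carlos", "amarillo"),
                  ("rocio", "rojo"), ("miranda", "rosa")])
conn.execute("UPDATE colors SET fav_color = 'blanco' WHERE clrs_pkey = 1")
conn.execute("UPDATE colors SET fav_color = 'negro' WHERE clrs_pkey = 3")
rows = conn.execute(
    "SELECT clrs_pkey, monochrome FROM mono ORDER BY mono_pkey").fetchall()
```

A trigger is generally the sturdier choice for this kind of side-effect logging; rules rewrite the query itself, which is exactly what bites Scott in the inheritance threads further down.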
[GENERAL] on insert rule primary key
My original post got eaten. Apologies in advance if you receive this message twice. I am trying to construct a rule that performs an UPDATE if specific conditions are met in an INSERT statement. Limiting UPDATE's SET action to just the new row by testing for the new primary key is failing for some reason. Yet if I eliminate the test, all rows in the table are updated. The actual rule I'm building must handle several OR clauses in its conditional test, so I've included that in the following sample. The output I would've expected would have both the Carlos and Miranda inserts yielding their favorite color, azul. Any suggestions on how I can construct the rule to automatically and correctly fill the fav_color field? Thanks in advance! Scott

CREATE TABLE colors (
    clrs_pkey  SERIAL PRIMARY KEY,
    first_name text UNIQUE DEFAULT NULL,
    fav_color  text DEFAULT NULL
);

CREATE RULE color_rule AS ON INSERT TO ONLY colors
    WHERE first_name = 'carlos' OR first_name = 'miranda'
    DO UPDATE ONLY colors SET fav_color = 'azul' WHERE clrs_pkey = NEW.clrs_pkey;

INSERT INTO colors (first_name, fav_color) VALUES ('carmen', 'verde');
INSERT INTO colors (first_name) VALUES ('carlos');
INSERT INTO colors (first_name, fav_color) VALUES ('rocio', 'rojo');
INSERT INTO colors (first_name, fav_color) VALUES ('miranda', 'negro');

test=# SELECT * FROM ONLY colors;
 clrs_pkey | first_name | fav_color
-----------+------------+-----------
         1 | carmen     | verde
         2 | carlos     |
         5 | rocio      | rojo
         6 | miranda    | negro
(4 rows)
Re: [GENERAL] inherited table and rules
Close. Thanks for the very helpful suggestions! As I read the doco on rules and dissect the rule I've constructed, one issue remains: the UPDATE in my rule causes additional rows to be added to the parent table. How is that possible? How can it be suppressed? i.e.: My rule specifies that when the parent table is updated, the inherited table receives an INSERT. There is nothing that I see that explicitly calls for a new row to be added to the parent table. I've tried fiddling with INSTEAD; but my attempts haven't yielded the results I'm looking for. (Though the rule docs are quite opaque on the subject ...) Thanks again! Scott

Here's what my sample code (below) yields:

cs_test=# SELECT * FROM people;
 usr_pkey | usr_name | color   | timestamp
----------+----------+---------+----------------------------
        2 | carol    | green   | 2005-03-23 11:12:49.627183
        3 | ted      | blue    | 2005-03-23 11:12:49.637483
        1 | bob      | black   | 2005-03-23 11:12:49.616602
        1 | bob      | red     | 2005-03-23 11:12:49.616602
        1 | bob      | cyan    | 2005-03-23 11:12:49.616602
        1 | bob      | magenta | 2005-03-23 11:12:49.616602
        1 | bob      | yellow  | 2005-03-23 11:12:49.616602
(7 rows)

cs_test=# SELECT * FROM people_history;
 usr_pkey | usr_name | color   | timestamp                  | hist_pkey | hist_tstamp
----------+----------+---------+----------------------------+-----------+----------------------------
        1 | bob      | red     | 2005-03-23 11:12:49.616602 |         1 | 2005-03-23 11:13:17.04928
        1 | bob      | cyan    | 2005-03-23 11:12:49.616602 |         2 | 2005-03-23 11:22:21.374629
        1 | bob      | magenta | 2005-03-23 11:12:49.616602 |         3 | 2005-03-23 11:23:49.253014
        1 | bob      | yellow  | 2005-03-23 11:12:49.616602 |         4 | 2005-03-23 11:23:53.924315
(4 rows)

Here's what I'm looking for:

cs_test=# SELECT * FROM people;
 usr_pkey | usr_name | color | timestamp
----------+----------+-------+----------------------------
        2 | carol    | green | 2005-03-23 11:12:49.627183
        3 | ted      | blue  | 2005-03-23 11:12:49.637483
        1 | bob      | black | 2005-03-23 11:12:49.616602
(3 rows)

cs_test=# SELECT * FROM people_history;
 usr_pkey | usr_name | color   | timestamp                  | hist_pkey | hist_tstamp
----------+----------+---------+----------------------------+-----------+----------------------------
        1 | bob      | red     | 2005-03-23 11:12:49.616602 |         1 | 2005-03-23 11:13:17.04928
        1 | bob      | cyan    | 2005-03-23 11:12:49.616602 |         2 | 2005-03-23 11:22:21.374629
        1 | bob      | magenta | 2005-03-23 11:12:49.616602 |         3 | 2005-03-23 11:23:49.253014
        1 | bob      | yellow  | 2005-03-23 11:12:49.616602 |         4 | 2005-03-23 11:23:53.924315
(4 rows)

sample code:

CREATE TABLE people (
    usr_pkey  SERIAL PRIMARY KEY,
    usr_name  text UNIQUE DEFAULT NULL,
    color     text DEFAULT NULL,
    timestamp timestamp DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE people_history (
    hist_pkey   SERIAL NOT NULL PRIMARY KEY,
    hist_tstamp timestamp DEFAULT CURRENT_TIMESTAMP
) INHERITS (people);

CREATE RULE people_upd_history AS ON UPDATE TO people
    DO INSERT INTO people_history
        SELECT * FROM ONLY people WHERE usr_pkey = old.usr_pkey;

-- populate table
INSERT INTO people (usr_name, color) VALUES ('bob', 'red');
INSERT INTO people (usr_name, color) VALUES ('carol', 'green');
INSERT INTO people (usr_name, color) VALUES ('ted', 'blue');

-- update table (1)
UPDATE ONLY people SET color = 'cyan' WHERE usr_pkey = 1;
-- update table (2)
UPDATE ONLY people SET color = 'magenta' WHERE usr_pkey = 1;
-- update table (3)
UPDATE ONLY people SET color = 'yellow' WHERE usr_pkey = 1;
-- update table (4)
UPDATE ONLY people SET color = 'black' WHERE usr_pkey = 1;
Re: [GENERAL] inherited table and rules
On Mar 23, 2005, at 2:42 PM, Klint Gore wrote: Rows inserted into inherited tables are visible to the parent. It's effectively the same as having a union all on the 2 tables. Using the only qualifier is how you stop the union happening. This explains it. Thanks! Scott
[GENERAL] inherited table and rules
This is weird. I have two tables: one inherits from the other. And I have a rule that populates the inherited table with changes from the first. When I update a row in the first table, I get an ever-larger number of rows added to both it and the inherited table. i.e.:

update 1 yields 2 new rows
update 2 yields 6 new rows
update 3 yields 42 new rows
update 4 yields 1806 new rows

I'm clearly doing something wrong ;) My hope was that on update, a field in the first table would be changed (leaving the same number of total rows as prior to update). And the inherited table would have one row added to it per update, reflecting a change log of the updates. Thanks in advance! Example code follows. Scott

CREATE TABLE people (
    usr_pkey  SERIAL PRIMARY KEY,
    usr_name  text UNIQUE DEFAULT NULL,
    color     text DEFAULT NULL,
    timestamp timestamp DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE people_history (
    hist_pkey   SERIAL NOT NULL PRIMARY KEY,
    hist_tstamp timestamp DEFAULT CURRENT_TIMESTAMP
) INHERITS (people);

CREATE RULE people_upd_history AS ON UPDATE TO people
    DO INSERT INTO people_history
        SELECT * FROM people WHERE usr_pkey = old.usr_pkey;

-- populate table
INSERT INTO people (usr_name, color) VALUES ('bob', 'red');
INSERT INTO people (usr_name, color) VALUES ('carol', 'green');
INSERT INTO people (usr_name, color) VALUES ('ted', 'blue');

-- update table (1) -- 2
UPDATE people SET color = 'cyan' WHERE usr_pkey = 1;
-- update table (2) -- 6
UPDATE people SET color = 'magenta' WHERE usr_pkey = 1;
-- update table (3) -- 42
UPDATE people SET color = 'yellow' WHERE usr_pkey = 1;
-- update table (4) -- 1806
UPDATE people SET color = 'black' WHERE usr_pkey = 1;
Re: [GENERAL] inherited table and rules
Syntax troubles. What is the proper syntax for using FROM ONLY table_name in an UPDATE statement? According to the docs, in a FROM clause I should be able to use the ONLY keyword preceding the table name. This throws an error:

UPDATE FROM ONLY people SET color = 'cyan' WHERE usr_pkey = 1;

What is the proper syntax for specifying FROM ONLY in the inheritance statement? This also throws an error:

CREATE TABLE people_history (
    hist_pkey   SERIAL NOT NULL PRIMARY KEY,
    hist_tstamp timestamp DEFAULT CURRENT_TIMESTAMP
) INHERITS ONLY (people);

What does GUC stand for? ;) Thanks! Scott

On Mar 22, 2005, at 6:55 PM, Stephan Szabo wrote: On Tue, 22 Mar 2005, Scott Frankel wrote: This is weird. I have two tables: one inherits from the other. And I have a rule that populates the inherited table with changes from the first. When I update a row in the first table, I get an ever-larger number of rows added to both it and the inherited table. i.e.: update 1 yields 2 new rows update 2 yields 6 new rows update 3 yields 42 new rows update 4 yields 1806 new rows I'm clearly doing something wrong ;) I think you need to be using ONLY (or changing the sql_inheritance GUC variable) in all the queries on people in order to not also be getting rows from people_history in the SELECT and UPDATE (and in fact changing the select and update statements to FROM ONLY people seems to work for me).
Re: [GENERAL] preserving data after updates
Door number 3. Thanks for the responses and terrific suggestions! Scott On Mar 4, 2005, at 8:28 AM, Greg Patnude wrote: I use a modified form of option 3 with an ON UPDATE RULE the update rule copies the row to an inherited table... CREATE TABLE dm_user ( id SERIAL NOT NULL PRIMARY KEY, lu_user_type INTEGER NOT NULL REFERENCES lu_user_type(id), dm_user_address INTEGER NOT NULL DEFAULT 0, dm_user_email INTEGER NOT NULL DEFAULT 0, f_name VARCHAR(50) NOT NULL, m_name VARCHAR(50) NOT NULL, l_name VARCHAR(50) NOT NULL, uname VARCHAR(20) NOT NULL, upwd VARCHAR(20) NOT NULL, pwd_change_reqd BOOLEAN DEFAULT FALSE, login_allowed BOOLEAN DEFAULT TRUE, lost_passwd BOOLEAN DEFAULT FALSE, create_dt TIMESTAMP NOT NULL DEFAULT NOW(), change_dt TIMESTAMP NOT NULL DEFAULT NOW(), change_id INTEGER NOT NULL DEFAULT 0, active_flag BOOLEAN NOT NULL DEFAULT TRUE ) WITH OIDS; CREATE TABLE dm_user_history ( history_id SERIAL NOT NULL PRIMARY KEY, hist_create_dt TIMESTAMP NOT NULL DEFAULT NOW() ) INHERITS (dm_user); CREATE RULE dm_user_upd_history AS ON UPDATE TO dm_user DO INSERT INTO dm_user_history SELECT * FROM dm_user WHERE id = old.id; CREATE RULE dm_user_nodelete AS ON DELETE TO dm_user DO INSTEAD UPDATE dm_user SET active_flag = FALSE WHERE id = old.id; Scott Frankel [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED] Is there a canonical form that db schema designers use to save changes to the data in their databases? For example, given a table with rows of data, if I UPDATE a field in a row, the previous value is lost. If I wanted to track the changes to my data over time, it occurs to me that I could, 1) copy the whole row of data using the new value, thus leaving the old row intact in the db for fishing expeditions, posterity, c. 
-- awfully wasteful, especially with binary data 2) enter a new row that contains only new data fields, requiring building a full set of data through heavy lifting and multiple queries through 'n' number of old rows -- overly complex query design probably leading to errors 3) create a new table that tracks changes -- the table is either wide enough to mirror all columns in the working table, or uses generic columns and API tricks to parse token pair strings, ... 4) other? Thanks Scott
[GENERAL] preserving data after updates
Is there a canonical form that db schema designers use to save changes to the data in their databases? For example, given a table with rows of data, if I UPDATE a field in a row, the previous value is lost. If I wanted to track the changes to my data over time, it occurs to me that I could, 1) copy the whole row of data using the new value, thus leaving the old row intact in the db for fishing expeditions, posterity, c. -- awfully wasteful, especially with binary data 2) enter a new row that contains only new data fields, requiring building a full set of data through heavy lifting and multiple queries through 'n' number of old rows -- overly complex query design probably leading to errors 3) create a new table that tracks changes -- the table is either wide enough to mirror all columns in the working table, or uses generic columns and API tricks to parse token pair strings, ... 4) other? Thanks Scott
Re: [GENERAL] sql join question
Got it. Thanks! Scott On Mar 1, 2005, at 10:52 PM, Ragnar Hafstað wrote: On Tue, 2005-03-01 at 16:51 -0800, Scott Frankel wrote: Sweet! And not so sweet. The natural join worked beautifully with my test schema; but it failed to yield any rows with my real-world schema. I think I've tracked down why: duplicate column names. i.e.: ... CREATE TABLE palettes (palette_pkey SERIAL PRIMARY KEY, palette_name text UNIQUE DEFAULT NULL, qwe text); CREATE TABLE tones (tone_pkey SERIAL PRIMARY KEY, tone_name text UNIQUE DEFAULT NULL, palette_pkey integer REFERENCES palettes, qwe text); Are the 'qwe' columns in both tables clobbering each other and preventing the join from succeeding? the docs really explain this better than I can, but a table1 NATURAL JOIN table2 is shorthand for a table1 JOIN table2 USING (list_of_common_keys) so:

select color_name from palettes join tones USING (palette_pkey) join colors USING (tone_pkey) where palette_name='plt1';

see: http://www.postgresql.org/docs/8.0/interactive/sql-select.html gnari
[GENERAL] sql join question
I want to return all records that match criteria across three separate tables and, in spite of reading up on joins, have so far been unable to design a solution that doesn't require caching a hash table of intermediate results. Here's the situation: Let's say color names belong to a set of tones, each of which belongs to a palette. A palette can be comprised of multiple tones. Each tone can contain multiple color names. i.e.:

palette palette1; tones red, green; colors rose madder, crimson, red ochre, phthalocyanine, leaf green
palette palette2; tones blue; colors cerulean
palette palette3; tones yellow; colors chrome

Task: find all color names in each of palette1's tones. Can this be done in a single SQL statement? Or does it require storing the results of a select to find each of the tones that belong to palette1, then separate selects on each resultant tone to yield the 5 color names? Thanks in advance! Scott

p.s. Here's my test case sql:

CREATE TABLE palettes (palette_pkey SERIAL PRIMARY KEY, palette_name text UNIQUE DEFAULT NULL);
INSERT INTO palettes (palette_name) VALUES ('plt1');
INSERT INTO palettes (palette_name) VALUES ('plt2');
INSERT INTO palettes (palette_name) VALUES ('plt3');

CREATE TABLE tones (tone_pkey SERIAL PRIMARY KEY, tone_name text UNIQUE DEFAULT NULL, palette_pkey integer REFERENCES palettes);
INSERT INTO tones (tone_name, palette_pkey) VALUES ('red', 1);
INSERT INTO tones (tone_name, palette_pkey) VALUES ('green', 1);
INSERT INTO tones (tone_name, palette_pkey) VALUES ('blue', 2);
INSERT INTO tones (tone_name, palette_pkey) VALUES ('yellow', 3);

CREATE TABLE colors (color_pkey SERIAL PRIMARY KEY, color_name text UNIQUE DEFAULT NULL, tone_pkey integer REFERENCES tones);
INSERT INTO colors (color_name, tone_pkey) VALUES ('rose madder', 1);
INSERT INTO colors (color_name, tone_pkey) VALUES ('crimson', 1);
INSERT INTO colors (color_name, tone_pkey) VALUES ('red ochre', 1);
INSERT INTO colors (color_name, tone_pkey) VALUES ('phthalocyanine', 2);
INSERT INTO colors (color_name, tone_pkey) VALUES ('leaf green', 2);
INSERT INTO colors (color_name, tone_pkey) VALUES ('cerulean', 3);
INSERT INTO colors (color_name, tone_pkey) VALUES ('chrome', 4);

# -1- [ cache results in a hash table for further processing ]
SELECT * FROM tones WHERE palette_pkey = 1;
# yields
#  tone_pkey | tone_name | palette_pkey
# -----------+-----------+--------------
#          1 | red       |            1
#          2 | green     |            1

# -2- [ for each tone returned from step 1 ]
SELECT * FROM colors WHERE tone_pkey = 1;
# yields
#  color_pkey | color_name  | tone_pkey
# ------------+-------------+-----------
#           1 | rose madder |         1
#           2 | crimson     |         1
#           3 | red ochre   |         1

SELECT * FROM colors WHERE tone_pkey = 2;
# yields
#  color_pkey | color_name     | tone_pkey
# ------------+----------------+-----------
#           4 | phthalocyanine |         2
#           5 | leaf green     |         2
Re: [GENERAL] sql join question
Sweet! And not so sweet. The natural join worked beautifully with my test schema; but it failed to yield any rows with my real-world schema. I think I've tracked down why: duplicate column names. i.e.:

-1- these tables yield rows from a NATURAL JOIN query

CREATE TABLE palettes (palette_pkey SERIAL PRIMARY KEY, palette_name text UNIQUE DEFAULT NULL);
CREATE TABLE tones (tone_pkey SERIAL PRIMARY KEY, tone_name text UNIQUE DEFAULT NULL, palette_pkey integer REFERENCES palettes);

-2- these tables yield NO rows from a NATURAL JOIN query

CREATE TABLE palettes (palette_pkey SERIAL PRIMARY KEY, palette_name text UNIQUE DEFAULT NULL, qwe text);
CREATE TABLE tones (tone_pkey SERIAL PRIMARY KEY, tone_name text UNIQUE DEFAULT NULL, palette_pkey integer REFERENCES palettes, qwe text);

Are the 'qwe' columns in both tables clobbering each other and preventing the join from succeeding? The offending columns are inconsequential for what I'm trying to do with this operation. Can they be suppressed from the query for this SQL statement to function properly? Or am I SOL? Thanks again! Scott

On Mar 1, 2005, at 2:28 PM, Ragnar Hafstað wrote: On Tue, 2005-03-01 at 13:42 -0800, Scott Frankel wrote: [snip problem] Task: find all color names in each of palette1's tones. Can this be done in a single SQL statement? [snip table examples] looks like a job for NATURAL JOIN

test=# select color_name from palettes natural join tones natural join colors where palette_name='plt1';
 color_name
----------------
 rose madder
 crimson
 red ochre
 phthalocyanine
 leaf green
(5 rows)

gnari
[GENERAL] query or schema question
My first schema design has passed all the tests I've thrown at it so far, 'cept one -- and a simple one at that. I wonder if the following boils down to a question of query construction or if I need to redesign my schema. Consider the (contrived) example of 3 universities, where each hosts a *unique* list of departments (very contrived). Now populate the universities with students.

# Query: list all the students attending university XXX.

# Schema:
CREATE TABLE universities (uni_pkey SERIAL PRIMARY KEY, uni_name text);
CREATE TABLE departments (dpt_pkey SERIAL PRIMARY KEY, dpt_name text, uni_pkey int REFERENCES universities);
CREATE TABLE students (stu_pkey SERIAL PRIMARY KEY, stu_name text, dpt_pkey int REFERENCES departments);

Note that since I created the connection from university--departments--students, I thought I could design a query that would return the info requested above without spiking-off a reference from the students table directly back to the universities table. Well, it seems *I* can't ;) So, which is better -- or possible? A quick fix to the schema, referencing uni_pkey in the students table? Or is there a reasonable way to traverse the dependencies from the students table back to the universities table? Thanks heartily in advance!
Scott [ here's my sql, pre-baked; note that each university hosts a *unique* set of departments in this most-contrived example ] CREATE TABLE universities (uni_pkey SERIAL PRIMARY KEY, uni_name text); CREATE TABLE departments (dpt_pkey SERIAL PRIMARY KEY, dpt_name text, uni_pkey int REFERENCES universities); CREATE TABLE students (stu_pkey SERIAL PRIMARY KEY, stu_name text, dpt_pkey int REFERENCES departments); INSERT INTO universities (uni_name) VALUES ('cal'); INSERT INTO universities (uni_name) VALUES ('stanford'); INSERT INTO universities (uni_name) VALUES ('ucla'); INSERT INTO departments (dpt_name, uni_pkey) VALUES ('art', 1); INSERT INTO departments (dpt_name, uni_pkey) VALUES ('physics', 1); INSERT INTO departments (dpt_name, uni_pkey) VALUES ('oceanography', 1); INSERT INTO departments (dpt_name, uni_pkey) VALUES ('math', 2); INSERT INTO departments (dpt_name, uni_pkey) VALUES ('chemistry', 2); INSERT INTO departments (dpt_name, uni_pkey) VALUES ('geography', 2); INSERT INTO departments (dpt_name, uni_pkey) VALUES ('design', 3); INSERT INTO departments (dpt_name, uni_pkey) VALUES ('geology', 3); INSERT INTO departments (dpt_name, uni_pkey) VALUES ('archeology', 3); INSERT INTO students (stu_name, dpt_pkey) VALUES ('maria', 1); INSERT INTO students (stu_name, dpt_pkey) VALUES ('ed', 1); INSERT INTO students (stu_name, dpt_pkey) VALUES ('brian', 2); INSERT INTO students (stu_name, dpt_pkey) VALUES ('claire', 2); INSERT INTO students (stu_name, dpt_pkey) VALUES ('samantha', 2); INSERT INTO students (stu_name, dpt_pkey) VALUES ('siobhan', 2); INSERT INTO students (stu_name, dpt_pkey) VALUES ('pilar', 3); INSERT INTO students (stu_name, dpt_pkey) VALUES ('george', 3); INSERT INTO students (stu_name, dpt_pkey) VALUES ('nick', 3); INSERT INTO students (stu_name, dpt_pkey) VALUES ('bruce', 4); INSERT INTO students (stu_name, dpt_pkey) VALUES ('estelle', 5); INSERT INTO students (stu_name, dpt_pkey) VALUES ('harry', 6); INSERT INTO students (stu_name, dpt_pkey) 
VALUES ('rocio', 6); INSERT INTO students (stu_name, dpt_pkey) VALUES ('jose', 7); INSERT INTO students (stu_name, dpt_pkey) VALUES ('steve', 8); INSERT INTO students (stu_name, dpt_pkey) VALUES ('henry', 8); INSERT INTO students (stu_name, dpt_pkey) VALUES ('chris', 9); INSERT INTO students (stu_name, dpt_pkey) VALUES ('john', 9);
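No schema change is needed for the query above: two joins traverse students -> departments -> universities. A runnable sketch using Python's sqlite3 standing in for Postgres (INTEGER PRIMARY KEY replacing SERIAL, and only a subset of the sample data, so it is an illustration rather than a full replay):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE universities (uni_pkey INTEGER PRIMARY KEY, uni_name TEXT);
CREATE TABLE departments  (dpt_pkey INTEGER PRIMARY KEY, dpt_name TEXT,
                           uni_pkey INTEGER REFERENCES universities);
CREATE TABLE students     (stu_pkey INTEGER PRIMARY KEY, stu_name TEXT,
                           dpt_pkey INTEGER REFERENCES departments);
INSERT INTO universities (uni_name) VALUES ('cal'), ('stanford'), ('ucla');
INSERT INTO departments (dpt_name, uni_pkey)
    VALUES ('art', 1), ('physics', 1), ('oceanography', 1),
           ('math', 2), ('chemistry', 2), ('geography', 2);
INSERT INTO students (stu_name, dpt_pkey)
    VALUES ('maria', 1), ('ed', 1), ('brian', 2), ('bruce', 4), ('estelle', 5);
""")
# students -> departments -> universities: no uni_pkey column on students needed
cal_students = [r[0] for r in conn.execute("""
    SELECT s.stu_name
    FROM students s
    JOIN departments  d ON d.dpt_pkey = s.dpt_pkey
    JOIN universities u ON u.uni_pkey = d.uni_pkey
    WHERE u.uni_name = 'cal'
    ORDER BY s.stu_pkey
""")]
```

Adding a redundant uni_pkey to students would denormalize the schema and risk the two paths disagreeing; the join keeps the department as the single source of truth for a student's university.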
[GENERAL] data integrity and inserts
I want to ensure data integrity when inserting into a table, preventing multiple entries of identical rows of data. Does this call for using a trigger? How would triggers perform a query to test if data already exists in the table? (The doco outlines how triggers perform tests on NEW data inserted into a table; but I haven't found anything on data already extant.) Thanks in advance! Scott sample table: CREATE TABLE names (the_id SERIAL PRIMARY KEY, the_name text);
Re: [GENERAL] data integrity and inserts
1. CREATE TABLE names (the_id SERIAL PRIMARY KEY, the_name text); CREATE UNIQUE INDEX uidx_thename ON names(the_name); vs. 2. CREATE TABLE names (the_id SERIAL PRIMARY KEY, the_name text UNIQUE); Is the UNIQUE constraint in the second solution merely short-hand for the explicit index declaration of the first solution? Or is there a functional difference between them that I should choose between? Thanks again! Scott On Dec 1, 2004, at 10:11 AM, Scott Frankel wrote: I want to ensure data integrity when inserting into a table, preventing multiple entries of identical rows of data. Does this call for using a trigger? How would triggers perform a query to test if data already exists in the table? (The doco outlines how triggers perform tests on NEW data inserted into a table; but I haven't found anything on data already extant.) Thanks in advance! Scott sample table: CREATE TABLE names (the_id SERIAL PRIMARY KEY, the_name text);
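Either declaration gives the same runtime behavior: a duplicate insert is rejected by the database, with no trigger required. A quick demonstration of the column-level UNIQUE form using Python's sqlite3 (SQLite here as a stand-in; in Postgres the equivalent failure raises a unique-violation error, and the column-level constraint is likewise backed by a unique index):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# UNIQUE on the column is shorthand for a unique index on the_name
conn.execute("CREATE TABLE names (the_id INTEGER PRIMARY KEY, the_name TEXT UNIQUE)")
conn.execute("INSERT INTO names (the_name) VALUES ('scott')")
try:
    # second insert of the identical name is refused by the constraint
    conn.execute("INSERT INTO names (the_name) VALUES ('scott')")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
remaining = conn.execute("SELECT COUNT(*) FROM names").fetchone()[0]
```

The application then catches the constraint violation instead of pre-querying for existing rows, which also avoids the race between "check" and "insert" that a trigger-side lookup would reintroduce.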
Re: [GENERAL] simple query question: return latest
On Nov 12, 2004, at 8:24 AM, Michael Fuhr wrote: [Top-posting fixed] On Fri, Nov 12, 2004 at 09:06:08AM -0500, Goutam Paruchuri wrote: Scott Frankel wrote: ORDER BY DESC LIMIT 1 is much simpler and more readable than a sub-query. Though the sub-query approach looks to be a good template for ensuring more accurate results by being more explicit. Scott, how would a subquery ensure more accurate results by being more explicit? Good question. I'm just now starting to construct sub-queries. Perhaps naively, I assumed that setting g.date explicitly equal to the results of a MAX function would return more reliable results than limiting a return list to just the first value listed. Though it's entirely possible that both approaches use the same logic under the hood. Nonetheless, I'm using the DESC LIMIT 1 approach for now as it yields the results I need and is much more readable. Thanks again! Scott
[GENERAL] sub-query question
How does one embed a sub-query lookup to one table in order to replace a foreign key id number with its name in a SELECT on a second table? i.e.: given the following two tables, I want to replace the color_id of 1 with the color_name 'red.' (The SQL to create the two tables follows below.)

test=# SELECT * from users;
 color_id | name | the_date
----------+------+------------
        1 | john | 2004-03-10
        3 | jane | 2004-04-12
        1 | joe  | 2004-05-14
        2 | jepe | 2004-06-16
(4 rows)

test=# SELECT * from colors;
 color_id | color_name
----------+------------
        1 | red
        2 | green
        3 | blue
(3 rows)

My attempts yield an 'f' which looks suspiciously like a boolean false. Is there an ordering issue with my sub-query, such that the sub-query doesn't have enough info to perform its lookup? Here's my query:

SELECT ( u.color_id = ( SELECT c.color_name FROM colors c WHERE color_id = 1)) AS color_name, u.name, u.the_date
FROM users u
WHERE u.color_id = 1
ORDER BY u.the_date DESC LIMIT 1;

It returns:

 color_name | name | the_date
------------+------+------------
 f          | joe  | 2004-05-14
(1 row)

Thanks! Scott

Here's the SQL to create my test tables:

CREATE TABLE colors (color_id SERIAL PRIMARY KEY, color_name text);
CREATE TABLE users (color_id integer REFERENCES colors, name text, the_date date);
INSERT INTO colors (color_name) VALUES ('red');
INSERT INTO colors (color_name) VALUES ('green');
INSERT INTO colors (color_name) VALUES ('blue');
INSERT INTO users (color_id, name, the_date) VALUES (1, 'john', '2004-03-10');
INSERT INTO users (color_id, name, the_date) VALUES (3, 'jane', '2004-04-12');
INSERT INTO users (color_id, name, the_date) VALUES (1, 'joe', '2004-05-14');
INSERT INTO users (color_id, name, the_date) VALUES (2, 'jepe', '2004-06-16');
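The 'f' comes from the `=` comparison: `u.color_id = (sub-query)` evaluates to a boolean, not the looked-up name. Putting the scalar sub-query itself in the SELECT list (or just joining on colors) returns the name. A runnable check with Python's sqlite3 in place of Postgres, using the test tables from the post:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE colors (color_id INTEGER PRIMARY KEY, color_name TEXT);
CREATE TABLE users  (color_id INTEGER REFERENCES colors, name TEXT, the_date TEXT);
INSERT INTO colors (color_name) VALUES ('red'), ('green'), ('blue');
INSERT INTO users VALUES (1, 'john', '2004-03-10'), (3, 'jane', '2004-04-12'),
                         (1, 'joe',  '2004-05-14'), (2, 'jepe', '2004-06-16');
""")
# the sub-query is now an output column, correlated on u.color_id,
# instead of one side of a boolean comparison
row = conn.execute("""
    SELECT (SELECT c.color_name FROM colors c WHERE c.color_id = u.color_id)
               AS color_name,
           u.name, u.the_date
    FROM users u
    WHERE u.color_id = 1
    ORDER BY u.the_date DESC
    LIMIT 1
""").fetchone()
```

An equivalent, and usually clearer, form is `SELECT c.color_name, u.name, u.the_date FROM users u JOIN colors c ON c.color_id = u.color_id WHERE ...`.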
[GENERAL] simple query question: return latest
Still too new to SQL to have run across this yet ... How does one return the latest row from a table, given multiple entries of varying data? i.e.: given a table that looks like this:

 color | date
-------+------------
 red   | 2004-01-19
 blue  | 2004-05-24
 red   | 2004-04-12
 blue  | 2004-05-24

How do I select the most recent entry for 'red'? Thanks in advance! Scott
Re: [GENERAL] simple query question: return latest
On Nov 11, 2004, at 5:09 PM, Michael Glaesemann wrote:

Scott, On Nov 12, 2004, at 10:00 AM, Scott Frankel wrote:

 color | date
-------+------------
 red   | 2004-01-19
 blue  | 2004-05-24
 red   | 2004-04-12
 blue  | 2004-05-24

How do I select the most recent entry for 'red'?

SELECT color, MAX(date)
FROM giventable
WHERE color = 'red' -- omit this line if you'd like to see the latest date for each color
GROUP BY color;

Unless I'm missing something, this returns every listing for color=red, in max order. So if I want the ONE most recent entry, is this something I have to offload to my app that parses the returned rows? Or is there a function in postgres that can return THE most recent entry?

OT hint: You might want to take a look at the list of PostgreSQL Keywords in the documentation and avoid using them (such as date) to help you avoid naming issues in the future.

Hmm. Good tip. Bad example terminology. Thanks! Scott

Hope this helps. Michael Glaesemann grzm myrealbox com
Re: [GENERAL] simple query question: return latest
ORDER BY DESC LIMIT 1 is much simpler and more readable than a sub-query. Though the sub-query approach looks to be a good template for ensuring more accurate results by being more explicit. Thanks to all who responded! Scott

SELECT * FROM colortable WHERE color = 'red' ORDER BY date DESC LIMIT 1;

SELECT g.color, g.date, g.entered_by
FROM giventable g
WHERE g.color = 'red'
  AND g.date = (SELECT MAX(g2.date) FROM giventable g2 WHERE g2.color = g.color);
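On the question of whether the two approaches differ in accuracy: for this data they select the same row, and the differences are about ties and readability rather than reliability (the MAX sub-query returns every row tied for the latest date, while LIMIT 1 keeps exactly one). A small sketch using Python's sqlite3 module in place of PostgreSQL, with the sample data from the thread:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
curs = conn.cursor()
curs.execute("CREATE TABLE giventable (color TEXT, date TEXT)")
curs.executemany("INSERT INTO giventable VALUES (?, ?)",
                 [("red", "2004-01-19"), ("blue", "2004-05-24"),
                  ("red", "2004-04-12"), ("blue", "2004-05-24")])

# Approach 1: sort descending and keep the first row.
curs.execute("SELECT color, date FROM giventable WHERE color = 'red' "
             "ORDER BY date DESC LIMIT 1")
latest_by_limit = curs.fetchone()

# Approach 2: correlated sub-query matching the MAX date.
curs.execute("SELECT g.color, g.date FROM giventable g "
             "WHERE g.color = 'red' AND g.date = "
             "(SELECT MAX(g2.date) FROM giventable g2 WHERE g2.color = g.color)")
latest_by_max = curs.fetchone()

print(latest_by_limit, latest_by_max)   # both ('red', '2004-04-12')
```

ISO-formatted dates stored as text compare correctly here because lexical order matches chronological order; in PostgreSQL a proper date column gives the same ordering.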
[GENERAL] basic debugging question
I'm attempting to debug a script that should perform a simple INSERT of values, but for some reason doesn't. The insert appears to occur without error, printing INSERT 18015 1 upon completion. Nonetheless, no data values appear to be added to the table when queried in psql. Questions: - What does the status msg, INSERT 18015 1, refer to? - What is this output called? (So I can search the documentation for it.) - Is there something clever I can access -- besides this list ;) -- so I can peek inside INSERT 18015 1 to see what pgres is thinking about? Note that when I perform the INSERT by hand in psql, the row of data is entered without incident. Thanks in advance! Scott
Re: [GENERAL] basic debugging question
I should have *myself* committed. Thanks for the suggestions (and OID tip)! It turned out that my script was not committing the transaction, so the insert was getting rolled-back. Thanks Scott On Oct 26, 2004, at 12:39 PM, Scott Frankel wrote: I'm attempting to debug a script that should perform a simple INSERT of values, but for some reason doesn't. The insert appears to occur without error, printing INSERT 18015 1 upon completion. Nonetheless, no data values appear to be added to the table when queried in psql. Questions: - What does the status msg, INSERT 18015 1, refer to? - What is this output called? (So I can search the documentation for it.) - Is there something clever I can access -- besides this list ;) -- so I can peek inside INSERT 18015 1 to see what pgres is thinking about? Note that when I perform the INSERT by hand in psql, the row of data is entered without incident. Thanks in advance! Scott
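The missing-commit failure mode is easy to reproduce in any Python DB-API module. A sketch using sqlite3 in place of psycopg (the transaction rule -- uncommitted work is discarded when the connection closes -- is the same DB-API behavior, though sqlite3's implicit-transaction details differ from psycopg's; the file path here is a throwaway):

```python
import os
import sqlite3
import tempfile

# Throwaway on-disk database so we can reconnect and see what persisted.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (v INTEGER)")  # DDL autocommits in sqlite3
conn.execute("INSERT INTO t VALUES (1)")    # reported as successful...
conn.close()                                # ...but closed without commit(): rolled back

conn = sqlite3.connect(path)
count_without_commit = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]

conn.execute("INSERT INTO t VALUES (1)")
conn.commit()                               # the step the script was missing
conn.close()

conn = sqlite3.connect(path)
count_with_commit = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count_without_commit, count_with_commit)   # 0 1
```

Inserting by hand in psql "works" because psql runs in autocommit mode by default, which is why the two behaviors looked inconsistent.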
[GENERAL] attn.: psycopg users -- simple question
Sorry to burden this list (again) with a psycopg question. I'm stuck at a table creation step and am not getting results from their list. Question: what's the appropriate syntax for specifying a primary key? My very simple table creation test (based on their first.py example) is failing ... Here's what I'm trying. Non-pythonated syntax works in psql.

No go in psycopg:

curs.execute(CREATE TABLE key_test ( key_col CHAR(9) PRIMARY KEY, nother_col CHAR(256)))

Pure joy in psql:

cs_test=# CREATE TABLE key_test (
cs_test(# key_col CHAR(9) PRIMARY KEY,
cs_test(# nother_col CHAR(256)
cs_test(# );

Thanks again! Scott
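The archive appears to have stripped the quote characters from the execute() call, but the usual shape of the fix is the same in any DB-API module: the whole statement is passed as a single Python string. A sketch using sqlite3 standing in for psycopg (both expose the same cursor.execute interface; the PRIMARY KEY clause itself needs no special psycopg syntax):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
curs = conn.cursor()

# The statement is one Python string; triple quotes preserve the
# multi-line layout that psql accepts interactively.
curs.execute("""CREATE TABLE key_test (
    key_col    CHAR(9) PRIMARY KEY,
    nother_col CHAR(256))""")

curs.execute("INSERT INTO key_test VALUES ('abc', 'hello')")
row = curs.execute("SELECT key_col, nother_col FROM key_test").fetchone()
print(row)   # ('abc', 'hello')
```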
Re: [GENERAL] attn.: psycopg users -- simple question
Ack! I sent this msg on October 14. Since then, I've gotten the primary key stuff working. Please disregard this msg. Thanks Scott On Oct 14, 2004, at 4:16 PM, Scott Frankel wrote: Sorry to burden this list (again) with a psycopg question. I'm stuck at a table creation step and am not getting results from their list. Question: what's the appropriate syntax for specifying a primary key? My very simple table creation test (based on their first.py example is failing ... Here's what I'm trying. Non-pythonated syntax works in pgsql: no go in psycopg: curs.execute(CREATE TABLE key_test ( key_col CHAR(9) PRIMARY KEY, nother_col CHAR(256))) pure joy in pgsql: cs_test=# CREATE TABLE key_test ( cs_test(# key_col CHAR(9) PRIMARY KEY, cs_test(# nother_col CHAR(256) cs_test(# ); Thanks again! Scott
Re: [GENERAL] OS X Install
I recently installed PostgreSQL-7.4.5 on my OSX 10.3.5 system. I did not, however, have the problem you're encountering. There was no postgres user already created on my system. 1. It's not like postgres just rolls off the tongue. It's hard to imagine another user of your system choosing that name for some reason ;) 2. If the postgres user was created by/during your install, then there should be some recourse for setting its password. I don't remember offhand, but I think NetInfo uses a star (*) to denote that the password is NOT set -- then allowing you to set it (hopefully ...) 3. If yours is a single user system, I'd say that MAYBE you could remove the postgres user and create it fresh to complete the install. 4. I'm still very new to PostgreSQL, but I'd hazard to guess that there's nothing so explicitly intrinsic to the username postgres that you couldn't create a new user, postgresfoo, and continue with the install instructions using that username. G'luck! Scott

On Oct 15, 2004, at 5:09 PM, Nathan Mealey wrote: I am trying to install PostgreSQL on OS X 10.3, using the package from Entropy.ch. The installation instructions there, as well as anywhere else I have seen them on the net, say to create a user (using the System Preferences pane) with a shortname postgres. The problem is, this user already exists in my netinfo database/domain, and so I cannot either create a new user with the same short name, or use this user, because I do not know the password (I assume it is a system default user). Thus, I am unable to complete the installation, because I cannot run the following commands as user postgres:

/usr/local/bin/initdb -D /usr/local/pgsql/data
/usr/local/bin/pg_ctl -D /usr/local/pgsql/data -l postgres.log start

Has anyone else encountered this before? I'm so confused by this...why does this user already exist, and why can't I make use of it for this installation? Should I create a different user?
NM -- Nathan Mealey Director of Operations Cycle-Smart, Inc. P.O. Box 1482 Northampton, MA 01061-1482 [EMAIL PROTECTED] (413) 587-3133 (413) 210-7984 Mobile (512) 681-7043 Fax
[GENERAL] test -- please ignore
This is a test -- please ignore
[GENERAL] psycopg help
First, apologies in advance for this somewhat OT post ... I'm looking for a source of information on using the psycopg interface to postgresql. A mailing list would be ideal. I've poked at their wiki, but not found what I'm looking for. Also, new to both postgresql and psycopg, my questions appear too basic for the doc/examples they provide with their installation. i.e.: - What's the appropriate syntax for specifying a primary key? - What is the data type text? And how does it differ from CHAR(len), etc.? My very simple table creation test (based on their first.py example) is failing ... Here's what I'm trying. Non-pythonated syntax works in psql.

No go:

curs.execute(CREATE TABLE key_test ( key_col CHAR(9) PRIMARY KEY, nother_col CHAR(256)))

Pure joy:

cs_test=# CREATE TABLE key_test (
cs_test(# key_col CHAR(9) PRIMARY KEY,
cs_test(# nother_col CHAR(256)
cs_test(# );

Thanks! Scott
Re: [GENERAL] install problem
If I understand your description of the problem sufficiently, I believe you're running into trouble at the su postgres step. Although I'm quite new to postgres, I do know that creating a user account named postgres is recommended, if not required, by the installation process. Entering su postgres on the cmd-line switches the user (su) in that shell to username: postgres -- it's not invoking a program, per se. Then logging in, even via su, requires password authentication. Hope this helps - Scott

On Oct 9, 2004, at 7:29 AM, wayne schlemitz wrote: I am using SuSE 8.0 Professional Linux and installing postgresql 7.4 by placing the tar files in /temp and untar and unzip and followed the short install procedure. Log in as root

./configure
gmake
gmake install
useradd postgres
mkdir /usr/local/pgsql/data
chown postgres /usr/local/pgsql/data
su postgres
/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
same as above >logfile 2>&1

At this point I checked the last 2 lines to enter and was told I do not have the password for postgres. Since it is not a person but a program it should not ask for the password. Have not had success to call up the program or to finish the last 2 lines.

/usr/local/pgsql/bin/createdb test
/usr/local/pgsql/bin/psql test

What can I do to finish the last 2 lines and call up postgres or psql?
[GENERAL] interfaces for python
I'd like to know if anyone has recommendations for which Python DB-API 2.0 interface to use with PostgreSQL-7.4.5. The database and tools to interact with it will be hosted on a MacOS 10.3.x machine. The db schema represents a small production studio environment with only a handful of users. db connection will be intermittent. pyPgSQL? PyGreSQL? Anything I should consider? Thanks in advance! Scott
Re: [GENERAL] interfaces for python -- thanks
Thanks for the responses! I have selected psycopg based on: - your recommendations (though massively parallel connections aren't currently likely in my environment) - my success in building the target (with kudos to their install documentation) - my ability to pronounce its name ;) Now back to the schema ... Thanks Scott On Oct 6, 2004, at 9:13 PM, Scott Frankel wrote: I'd like to know if anyone has recommendations for which Python DB-API 2.0 interface to use with PostGreSQL-7.4.5. The database and tools to interact with it will be hosted on a MacOS 10.3.x machine. The db schema represents a small production studio environment with only a handful of users. db connection will be intermittent. pyPgSQL? PyGreSQL? Anything I should consider? Thanks in advance! Scott
[GENERAL] postgres.h location?
I'm looking for the file postgres.h in my recent install of postgres-7.4.5 on a MacOS 10.3.5 system. I'm attempting to build PyGreSQL-3.5, which appears to require the postgres include dir. My build of postgres-7.4.5 did produce an include dir, /usr/local/pgsql/include; but it does not contain postgres.h. Curiously, it does contain a file called postgres_ext.h. I'm sure they're not one-and-the-same ... Suggestions? Thanks in advance! Scott
Re: [GENERAL] postgres.h location?
Default now installs all headers ... with v8.0 of postgres, right? New to postgres (and db's for that matter), I just downloaded the tarball for postgres-7.4.5. Invoking the make install-all-headers cmd did indeed produce the server subdir under include. Unfortunately, my attempts to build PyGreSQL still return a crudjillion error msgs. But that's a matter for another list ;) Thanks for the feedback - Scott

On Oct 6, 2004, at 3:26 PM, Bruce Momjian wrote: The option install-all-headers was removed in the past few days because install by default now installs all headers. --- Tom Lane wrote: Michael Fuhr [EMAIL PROTECTED] writes: On Wed, Oct 06, 2004 at 03:27:39PM -0400, Tom Lane wrote: You need to do make install-all-headers while installing to install the server-side headers there. BTW, did this just change in 8.0? There was a proposal to change it, but I dunno if it was committed. regards, tom lane

-- Bruce Momjian | http://candle.pha.pa.us [EMAIL PROTECTED] | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup. | Newtown Square, Pennsylvania 19073
Re: [GENERAL] newby question
Additional note: configure warned of an old version of bison when I attempted an install of postgresql-7.4.5 the other day. Seems the version that comes with OSX 10.3.x is too old for postgres. Only trick I encountered in installing bison from tarball was locating the install product in /usr/local/bin and copying it to /usr/bin (either because that's where postgres wants to find it, or because my path does not include /usr/local). Scott

On Oct 1, 2004, at 7:26 AM, Kevin Barnard wrote: Apache/PHP are already prebuilt on Mac OS X. All you need to do is download the postgres tarball, config and compile. Small note on the compile: when you compile postgres, either turn off readline support or download readline from GNU and compile the static lib. I don't remember off hand but I think an older version of Postgres is compiled into the standard Mac PHP. If not, it is fairly easy to recompile PHP on the Mac. If you have any further questions about compiling, ask; I can walk you through it if you need.

On Fri, 01 Oct 2004 11:45:18 +0100, Richard Huxton [EMAIL PROTECTED] wrote: Bernd Buldt wrote: Howdy! I'd like to set up a database (mostly a bibliography), which I'd like to connect to a webpage such that simple queries to the database can be made by visitors of my homepage. I seem to remember that FileMaker allows for this, but I'd prefer a UNIX-based solution (under MacOS X). Hence my question (before I start digging): Can anyone on the list confirm that this is doable (w/o too many hassles) with PostgreSQL? Thx for your time! Best, Bernd

Can't say I've done it with MacOS X, but Apache+PHP+PostgreSQL are a common combination. Worth checking sourceforge.net / freshmeat.net and see if there are any projects doing what you want before starting your own though. -- Richard Huxton Archonet Ltd
Re: [GENERAL] newby question
Good to know ... Scott On Oct 1, 2004, at 9:08 AM, Doug McNaught wrote: Scott Frankel [EMAIL PROTECTED] writes: Additional note: configure warned of an old version of bison when I attempted an install of postgresql7.4.5 the other day. Seems the version that comes with OSX 10.3.x is too old for postgres. You only actually need Bison if you are building from CVS--the release tarballs are pre-Bisonated, so you can ignore that warning. :) -Doug -- Let us cross over the river, and rest under the shade of the trees. --T. J. Jackson, 1863
[GENERAL] readline install questions
I am attempting to install postgresql7.4.5 on OSX 10.3.5 and have questions about readline. For use with Python, I have readline.so installed in /Library/Python/2.3/. - Is this manifestation of readline sufficient/appropriate for postgres? - If not, what do I need and where do I get it from? - If readline.so is sufficient/appropriate, where should I copy it to so that configure can find it? - Conversely, where is the path specified so I could point it to the file that already exists? Thanks in advance! Scott
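For later readers: the readline.so under /Library/Python is a compiled Python extension module, not the GNU readline C library that PostgreSQL's configure script looks for, so it won't satisfy the check. A sketch of the usual configure options, assuming GNU readline has been built and installed under /usr/local (the standard prefix for a from-source readline build):

```shell
# Point configure at a GNU readline installed under /usr/local
# (headers in /usr/local/include, libreadline.* in /usr/local/lib).
./configure --with-includes=/usr/local/include \
            --with-libraries=/usr/local/lib

# Or skip readline support entirely:
./configure --without-readline
```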