Re: [GENERAL] speeding up big query lookup
On fös, 2006-08-25 at 18:34 -0400, Silvela, Jaime (Exchange) wrote:
> This is a question on speeding up some types of queries. I have a very big table that catalogs measurements of some objects over time. Measurements can be of several (~10) types. It keeps the observation date in a field, and indicates the type of measurement in another field. I often need to get the latest measurement of type A for object X. The table is indexed by object_id.

One popular way is to create a composite index:

CREATE INDEX object_val_id_type_date
    ON object_val (object_id, object_val_type_id, observation_date);

Then you could:

SELECT * FROM object_val
 WHERE object_id = ? AND object_val_type_id = ?
 ORDER BY observation_date DESC LIMIT 1;

Hope this helps,
gnari

---(end of broadcast)---
TIP 6: explain analyze is your friend
Re: [GENERAL] alternative to using a sequence
snacktime [EMAIL PROTECTED] writes:
> I have an application that processes financial transactions. Each of these transactions needs to be sent with a sequence number. It starts at 1 and resets to 1 once it hits 8000. I'm trying to think of the most elegant solution without having to create a sequence for each user (there are hundreds). There is a table that holds the configuration parameters for each merchant, so a field in that table to hold the sequence number would be ideal. In the past I've used sequences as well as just a field which I query then update. Any other ideas?

Take a look at http://www.varlena.com/GeneralBits/ -- there are some ideas there. Of course, gapless sequences perform somewhat worse than real sequences. There was a discussion here on the mailing list last week as well, so you'll find even more ideas in the archives.

--
Jorge Godoy [EMAIL PROTECTED]
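A minimal sketch of the field-in-the-merchant-table approach described above (the table and column names here are hypothetical): a single UPDATE ... RETURNING both advances and reads the counter, and its row lock serializes concurrent callers for the same merchant.

```sql
-- Hypothetical per-merchant counter kept in the merchant config table.
-- (seq_no % 8000) + 1 counts 1, 2, ..., 8000 and then wraps back to 1.
UPDATE merchant_config
   SET seq_no = (seq_no % 8000) + 1
 WHERE merchant_id = $1
RETURNING seq_no;
```

This is gapless, but as Jorge notes it performs worse than a real sequence: each caller holds the row lock until its transaction commits, so requests for the same merchant serialize.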
[GENERAL] Win2000 professional / Error message while installing PostgreSQL Failed to create process: 2! .
Hi all,

I just tried to install PostgreSQL 8.1 on Windows 2000 Professional. After selecting the language option, clicking the NEXT button gives the message "Failed to create process: 2!" and exits. On the PostgreSQL site it's filed under bug no 1872 with no solution found. http://archives.postgresql.org/pgsql-bugs/2005-09/msg00059.php

Thanks in advance. :)
Purusothaman A
Re: [GENERAL] alternative to using a sequence
On 8/26/06, snacktime [EMAIL PROTECTED] wrote:
> I have an application that processes financial transactions. Each of these transactions needs to be sent with a sequence number. It starts at 1 and resets to 1 once it hits 8000. I'm trying to think of the most elegant solution without having to create a sequence for each user (there are hundreds). There is a table that holds the configuration parameters for each merchant, so a field in that table to hold the sequence number would be ideal. In the past I've used sequences as well as just a field which I query then update. Any other ideas?

How many sequences are we talking about?

merlin
Re: [GENERAL] Shared Objects (Dynamic loading)
Hi,

Do we have any concept of shared objects in Perl? Just wondering how we dynamically load something written in Perl into PostgreSQL.

Thanks,
Jas

On 8/24/06, Michael Fuhr [EMAIL PROTECTED] wrote:
> On Thu, Aug 24, 2006 at 03:46:00PM -0400, Jasbinder Bali wrote:
> > The way we use ECPG for the database related activities while working with C, what do I need to look up if I'm using Perl?
>
> For information about writing server-side Perl functions see the PL/Perl documentation (adjust the link if you're using a version other than 8.1): http://www.postgresql.org/docs/8.1/interactive/plperl.html If the function needs to execute SQL statements in the same backend in which it's running then see the "Database Access from PL/Perl" section. If it needs to connect to a different database then you could use the DBI module.
>
> > As a lot of people have pointed out, Perl is the best language to use when dealing with email parsing.
>
> "Best" is a matter of opinion; others might prefer Ruby, Python, or something else. Perl is good at text handling, it has a large number of third-party modules (CPAN), and there's a lot of material written about it, so it's a popular choice.
>
> > We have the Perl code ready to parse the email. Just wondering what would be the best method to deal with database (PostgreSQL) calls from the Perl code.
>
> See the aforementioned PL/Perl documentation. You could also use PL/Perl just for parsing and use PL/pgSQL for working with the database.
>
> --
> Michael Fuhr
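As a rough sketch of the PL/Perl route Michael describes (the function name, regex, and senders table are made up for illustration), a parser that also stores its result via SPI might look like:

```sql
-- Illustrative PL/Perl function: pull the From: header out of a raw
-- email and record it using spi_exec_query (server-side SQL access).
CREATE OR REPLACE FUNCTION record_sender(raw_mail text) RETURNS text AS $$
    my ($mail) = @_;
    my ($from) = $mail =~ /^From:\s*(.+?)\s*$/m;
    return undef unless defined $from;
    $from =~ s/'/''/g;   # naive quoting, good enough for a sketch
    spi_exec_query("INSERT INTO senders (addr) VALUES ('$from')");
    return $from;
$$ LANGUAGE plperl;
```

Following the last suggestion above, the same parsing function could instead just return the parsed value and leave the INSERT to a PL/pgSQL caller.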
Re: [GENERAL] Can I read data load files without loading in db?
Peter Eisentraut wrote:
> On Friday, 25 August 2006 13:51, Martijn van Oosterhout wrote:
> > It really depends on the format they send you. It's either plain text, in which case the COPY commands will tell you which fields. If it's the custom format you can use tar to extract it IIRC.
> You can also use pg_restore to extract binary dump formats to plain text.

I used less to look at one of the files. In this case it's named 'db-private.pg-dump.1' and is in a subdirectory called 'database' on the CD. less shows much familiar data in the file; I would have assumed it was in text format. I don't notice any COPY commands in the file as I'm looking at it via less, so I try to locate some via grep, and it tells me that there are matches, but that it's a binary file so it won't show me the actual occurrences. !! I guess I'll need to try your suggestion of using pg_restore to extract them to plain text. Apparently I'll need to add software to this Fedora Core 3 installation, since pg_restore isn't installed currently. Do you recall the switches needed?

Thanks!
-bC
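For the switches asked about above: run without a database connection, pg_restore writes the archive out as plain SQL on stdout, and -l just lists the archive's table of contents. A sketch, using the filename from the message:

```
# List what's inside the archive first
pg_restore -l db-private.pg-dump.1

# Emit the whole dump as plain SQL (schema plus COPY data) on stdout
pg_restore db-private.pg-dump.1 > db-private.sql
```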
Re: [GENERAL] Can I read data load files without loading in db?
Thanks for your reply Martijn. I do not know for certain that it's in pg_dump format, not having dealt with it specifically before. I will examine the files more closely to look for COPY statements. If it's not in that format, what else could it be that could be shoved right into a PostgreSQL database? In any case, would an associated file (schema type?) tell me the field-name arrangements that the data files are loaded with? Or if COPY statements are used, would they name the fields being targeted? Thanks again.

On 8/25/06, Martijn van Oosterhout kleptog@svana.org wrote:
> On Wed, Aug 23, 2006 at 12:27:09PM -0700, [EMAIL PROTECTED] wrote:
> > We have an awkward situation. An affiliate organization periodically sends us a stack of CDs. On the first one there are a couple of small scripts to handle installing the data and/or upgrading the database schema. Most of the CD's contents are large data files to be used to update a postgresql database.
>
> I assume from the question that the data is in pg_dump format? i.e. a whole pile of COPY statements with data? In that case just write a perl script to extract the data. Or specify more precisely if that's not the problem.
>
> Have a nice day,
> --
> Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to litigate.
Re: [GENERAL] How do i store arbitrary questions and answers in SQL?
I agree with Tim that your option 3 is really not all that hard, and I suggest it would provide you with the best solution. I'm assuming, however, that your application asks the question and checks the user-supplied answer against the answer in the db (and not Jeopardy style). I might add a 'join' or 'answertype' field in the questions (qu) table to assist when drawing the application and to immediately know which table to verify the supplied answer against. Otherwise, I think you might need a query to check the type of the answer the user supplies and make a possible few passes (through the numeric and integer tables, for example, when the user's response is 42) -- or some combination of outer joins to find the non-null value field.

For what it's worth, I'd probably also include a 'type' table to hold my application-specific mask for the user's reply, and perhaps an attribute to hold the joining table name (I'm thinking about the future requirement to add a new type without many coding changes on the application side).

So, now I'm interested in this, and I could see a use for something similar in the near future, so I'll put the following out there for comment. Obviously I haven't put it into practice, but perhaps it could work. One of the differences from your original post is the absence of a serial field on the responses. I've only put a primary key, which would link back to the questions.qid field (so maybe it should be a fk? see my note about my pg newness in a moment). I'm assuming this is not a multiple-choice type of situation. Each question has one answer (although not currently enforced in the db layer here, since there is nothing stopping you from placing an answer in more than one of the response tables -- I'm relatively new to pg and not sure of any way to deal with this).
It's early, haven't had my first cup yet, but I would start with something like the following:

CREATE TABLE questions -- holds the questions or challenges
(
  qid serial NOT NULL,
  qchallenge text,
  qtype int2, -- linking to types.tid
  CONSTRAINT questions_pkey PRIMARY KEY (qid)
) WITHOUT OIDS;

CREATE TABLE types -- mostly to assist application development
(
  tid serial NOT NULL,
  tdescription varchar(25),
  tmask varchar(25), -- just a thought, could be useful when building a web app
  tjoin varchar, -- again, just thinking about ease of new additions
  CONSTRAINT types_pkey PRIMARY KEY (tid)
) WITHOUT OIDS;

CREATE TABLE response_numeric
(
  rnqid int2 NOT NULL, -- linking to questions.qid
  rnvalue numeric,
  CONSTRAINT response_numeric_pkey PRIMARY KEY (rnqid)
) WITHOUT OIDS;

CREATE TABLE response_integer
(
  riqid int2 NOT NULL,
  rivalue int4,
  CONSTRAINT response_integer_pkey PRIMARY KEY (riqid)
) WITHOUT OIDS;

CREATE TABLE response_text
(
  rtqid int2 NOT NULL,
  rtvalue text,
  CONSTRAINT response_text_pkey PRIMARY KEY (rtqid)
) WITHOUT OIDS;

CREATE TABLE response_date
(
  rdqid int2 NOT NULL,
  rdvalue date,
  CONSTRAINT response_date_pkey PRIMARY KEY (rdqid)
) WITHOUT OIDS;

Tim Allen wrote:
> > 3. Different answer tables each with answer types - same problem as 2 but even harder.
> This is the other option I mentioned above. It's not hard at all.
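On the one-answer-per-question worry (nothing stops an answer from landing in several response tables), one sketch is a trigger that checks the question's declared qtype before accepting the insert. This uses the tables defined above and assumes a convention for which types.tid means "integer":

```sql
-- Sketch: refuse an integer answer unless the question's qtype matches.
-- The literal 2 is an assumed tid for the 'integer' row in types.
CREATE OR REPLACE FUNCTION check_integer_response() RETURNS trigger AS $$
BEGIN
    IF NOT EXISTS (SELECT 1 FROM questions
                   WHERE qid = NEW.riqid AND qtype = 2) THEN
        RAISE EXCEPTION 'question % does not take integer answers', NEW.riqid;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER response_integer_type_check
    BEFORE INSERT OR UPDATE ON response_integer
    FOR EACH ROW EXECUTE PROCEDURE check_integer_response();
```

A matching trigger on each of the other response tables would complete the enforcement.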
Re: [GENERAL] How do i store arbitrary questions and answers in SQL?
Dang. I re-read your post and see that you are not checking for the correct answer; you simply want to store the responses, like a questionnaire. Well, I still think a different table for each type of response would be beneficial, obviously adding a serial field like you have done and the foreign key linking to the qid. I had a completely different problem on my mind (testing a user's knowledge). Sorry about that.

macgillivary wrote:
> [full quote of the previous message trimmed]
[GENERAL] Content of pg_class.relacl
Can someone tell me how the content of the "relacl" column of the "pg_class" system table is interpreted? The documentation simply states "Access permissions. See the descriptions of GRANT and REVOKE for details." Much of the content of this column is obvious from the GRANT description -- "r" = SELECT, "a" = INSERT, etc. But there are other parts of the content that are not described. For example, there are asterisks in the permissions string for some users. These appear to represent a permission with grant option; is this correct? I also can't find any information on the data following the slash. For example, in the "relacl" value "testuser=arwdRxt/csi", what does the "/csi" represent? Is this the owner of the table? The grantor? Is this discussed anywhere in the PostgreSQL documentation? Thanks in advance for any replies.
[GENERAL] copy command
Hi! Fedora Core 5: I cannot copy a text file into a table called datetime, yet the permissions seem OK:

copy datetime from '/home/jb/Desktop/DateTimeData.txt' with csv;
ERROR: could not open file "/home/jb/Desktop/DateTimeData.txt" for reading: Permission denied

-rw-rw-r-- 1 jb jb 83 Aug 25 14:30 DateTimeData.txt

Linux log-in is user jb; I su to root for /sbin/service postgresql start (it was installed using the default user postgres and the servers still run as postgres). This might be more of a Linux question, but for some reason the back-end PostgreSQL server can't get at the needed text file. Ideas appreciated.
[GENERAL] Can't populate database using Copy
copy tabledaily from '/home/jb/FinalData.txt' with csv;
ERROR: could not open file "/home/jb/FinalData.txt" for reading: Permission denied

Yet everything else works, including populating tables with INSERT queries. All the permissions were changed for world access (perhaps a bad idea, but out of frustration; and I am the only person on this computer):

drwxrwxrwx 3 root root 4096 Aug 21 10:19 home
drwxrwxrwx 25 jb jb 4096 Aug 26 11:18 jb
-rwxrwxrwx 1 jb jb 255 Aug 25 18:15 FinalData.txt

I see other people with this same problem on the mailing lists, but no example solutions are offered. Can you get the newbies started? Thanks.
[GENERAL] missing chunk
I've got a problem with a conversion. The database is running on pg 8.1.3. I have 2 tables; in one there is a mediabinary column of bytea type. The conversion to another table only moves data from timestamp type to timestamptz, but mediabinary in both is the same. During the query that moves the data I receive 'MISSING CHUNK NUMBER 0 FOR TOAST VALUE ...'. I've re-indexed this table and vacuumed, and nothing; every time I get this error. Please give me some advice on how to fix this, or even how to dump the data to a file. I don't know; any suggestion?
[GENERAL] implementing a read lock
I have an application that processes credit card transactions, and contains a table called authorizations. The authorizations table contains information returned by the bank necessary to capture the transaction. Nothing should block the application from inserting new rows into the authorizations table. When the authorizations are captured, one or more rows will be fetched, captured, and if successful the rows will be deleted. No updates are done on the table, only inserts and deletes. Naturally I want to prevent different instances of the same application from trying to select the same rows to capture, resulting in duplicate charges. I can't lock the whole table because new authorizations are constantly coming in. Is creating a separate table that I use just as a lock table the best approach?
Re: [GENERAL] missing chunk
I had this problem a few weeks ago. What I did was isolate it to the individual row that was causing the problem, then go over the fields one by one until I located the free-text field that was responsible. Then I repopulated it with the value from the latest backup I could find. I wasn't able to find any better techniques on the web. For locating the faulty row, you might want to write a procedural routine that queries rows sequentially; I did it in a very crude way by using LIMIT statements till I hit the mark. As to why this happens, the suggestion I got here was faulty hardware. Good luck.

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Andrzej Folek
Sent: Saturday, August 26, 2006 3:47 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] missing chunk

> [original message quoted in full, trimmed]
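The crude LIMIT hunt described above can be sketched like this (the table and column names are invented; the idea is to force detoasting of the suspect column over successive windows until one errors out):

```sql
-- Each window forces the bytea column to be read (and detoasted).
-- Shrink the window around whichever range raises the missing-chunk
-- error until a single row is left.
SELECT id, length(mediabinary)
  FROM media
 ORDER BY id
 LIMIT 1000 OFFSET 0;   -- then OFFSET 1000, 2000, ... until it fails
```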
Re: [GENERAL] Shared Objects (Dynamic loading)
On Sat, Aug 26, 2006 at 03:32:37PM -0400, Jasbinder Bali wrote:
> Do we have any concept of shared objects in perl. Just wondering, how do we dynamically load something written in perl in postgresql.

A PL/Perl function can load external code with use, require, or do. Since those are potentially dangerous operations you'll need to create the function with plperlu, which means you'll need to create the function as a database superuser. See "Trusted and Untrusted PL/Perl" in the documentation for more information. http://www.postgresql.org/docs/8.1/interactive/plperl-trusted.html

Regarding use, require, and do see the Perl documentation, in particular the perlfunc and perlmod manual pages.

--
Michael Fuhr
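A minimal plperlu sketch of what Michael describes -- the module choice is just an example (MIME::Base64 ships with Perl), and the function must be created by a superuser:

```sql
-- 'use' would be rejected in trusted plperl; plperlu allows it.
CREATE OR REPLACE FUNCTION decode_b64(t text) RETURNS text AS $$
    use MIME::Base64;
    return decode_base64($_[0]);
$$ LANGUAGE plperlu;
```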
Re: [GENERAL] Content of pg_class.relacl
On Fri, Aug 25, 2006 at 09:25:34AM -0400, Ed Brown wrote:
> Can someone tell me how the content of the relacl column of the pg_class system table is interpreted? [...] For example, there are asterisks in the permissions string for some users. These appear to represent permission with grant option, is this correct?

The GRANT documentation says:

  * -- grant option for preceding privilege

> I also can't find any information on the data following the slash. For example, in the relacl value testuser=arwdRxt/csi, what does the /csi represent? Is this the owner of the table? The grantor? Is this discussed anywhere in the PostgreSQL documentation?

The GRANT documentation says:

  / -- user who granted this privilege

Did you overlook those, or did you see them and have questions about what they mean?

--
Michael Fuhr
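Putting the two answers together, the example entry from the question decodes as follows (a reading aid, not output from a real system):

```sql
-- testuser=arwdRxt/csi  means:
--   testuser has  a (INSERT), r (SELECT), w (UPDATE), d (DELETE),
--                 R (RULE), x (REFERENCES), t (TRIGGER)
--   and each privilege was granted by user csi
SELECT relname, relacl FROM pg_class WHERE relname = 'testtable';
```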
Re: [GENERAL] implementing a read lock
snacktime [EMAIL PROTECTED] writes:
> I have an application that processes credit card transactions, and contains a table called authorizations. [...] Is creating a separate table that I use just as a lock table the best approach?

I'm not quite sure why SELECT FOR UPDATE wouldn't work for you. The capturing process would SELECT candidate transactions (where the status is 'NEW', say) FOR UPDATE, mark their status as 'PROCESSING', then COMMIT and try to process each transaction. Once the outcome is known it can mark the row as 'DONE' or 'FAILED'. SELECT FOR UPDATE doesn't block inserts of new rows.

-Doug
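A sketch of the claim step Doug describes, assuming a status column has been added to the authorizations table (the column and its values come from his description; the rest of the layout is invented):

```sql
BEGIN;
-- Lock the candidate rows; a second worker's identical query blocks
-- here and, after our commit, no longer sees these rows as 'NEW'.
SELECT auth_id FROM authorizations
 WHERE status = 'NEW'
 FOR UPDATE;

UPDATE authorizations SET status = 'PROCESSING'
 WHERE status = 'NEW';
COMMIT;

-- ... capture each claimed row, then record the outcome:
UPDATE authorizations SET status = 'DONE' WHERE auth_id = $1;
```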
Re: [GENERAL] Inserting Data
Michael,

This works perfectly. Thanks very much for your help. What is the reason for redefining type_ as device_type?

Bob

----- Original Message -----
From: Michael Fuhr [EMAIL PROTECTED]
To: Bob Pawley [EMAIL PROTECTED]
Cc: Postgresql pgsql-general@postgresql.org
Sent: Friday, August 25, 2006 12:31 PM
Subject: Re: [GENERAL] Inserting Data

On Fri, Aug 25, 2006 at 09:35:21AM -0700, Bob Pawley wrote:
> The library.devices table holds the static information on each of the devices that are available to the user.

Is library.devices.device_number a unique attribute? That is, for a given device_number, is there at most one row in library.devices? Or can a given device_number have multiple rows with different attributes? If multiple rows, then is device_number at least unique with respect to the type_ column?

> What I want to do is transfer the device_id (serial) identification of each of the devices entered in the device_number column into different tables. By comparing the p_id device_number to the library device_number I should be able to identify whether a device is a monitor (mon) or end-device (end).

If device_number is unique then you could get the device type without querying library.devices multiple times. For example:

CREATE OR REPLACE FUNCTION loop_association() RETURNS trigger AS $$
DECLARE
    device_type varchar;
BEGIN
    SELECT type_ INTO device_type
      FROM library.devices
     WHERE device_number = NEW.device_number;

    IF device_type = 'end' THEN
        INSERT INTO p_id.association (devices_id) VALUES (NEW.devices_id);
    ELSIF device_type = 'mon' THEN
        INSERT INTO p_id.loops (monitor) VALUES (NEW.devices_id);
    END IF;

    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

However, this might not work as written if I understand what you say here:

> The following is my attempt to compare the device_number with the library.devices to determine the device type. This doesn't seem to narrow the field down to a single return. If I use INSERT with SELECT I get multiple rows of identical information or, with 'primary key', the transaction is rejected.

Are you saying that a query like the following might return more than one row?

SELECT * FROM library.devices WHERE device_number = 1 AND type_ = 'end';

Or have I misunderstood what you mean by "This doesn't seem to narrow the field down to a single return"?

--
Michael Fuhr
Re: [GENERAL] Inserting Data
On Sat, Aug 26, 2006 at 01:22:49PM -0700, Bob Pawley wrote:
> What is the reason for redefining type_ as device_type ???

The function doesn't redefine type_; it selects the value of the type_ column into a local variable named device_type to avoid querying library.devices more than once. That local variable must have a different name to avoid syntax errors that would result from the ambiguity of having a variable and a table column with the same name.

--
Michael Fuhr
Re: [GENERAL] Can't populate database using Copy
On Sat, Aug 26, 2006 at 11:35:56AM -0700, Jim wrote:
> copy tabledaily from '/home/jb/FinalData.txt' with csv;
> ERROR: could not open file "/home/jb/FinalData.txt" for reading: Permission denied.
>
> Yet everything else works, including populating tables with Insert queries. All the permissions were changed for world access (perhaps bad idea but out of frustration; and am the only person on this computer).
> drwxrwxrwx 3 root root 4096 Aug 21 10:19 home
> drwxrwxrwx 25 jb jb 4096 Aug 26 11:18 jb
> -rwxrwxrwx 1 jb jb 255 Aug 25 18:15 FinalData.txt

Might this be an SELinux problem?

--
Michael Fuhr
Re: [GENERAL] Something blocking in libpq_gettext?
Tom Lane [EMAIL PROTECTED] writes:
> If this is a common result from short-lived network problems then you have a beef with the TCP stack at one end or the other ... TCP is supposed to be more robust than that.

Or a beef with some firewall or router along the way. NAT routers are particularly prone to breaking TCP's robustness guarantees.

--
greg
Re: [GENERAL] Inserting Data
Michael,

Thanks again for your help.

Bob

----- Original Message -----
From: Michael Fuhr [EMAIL PROTECTED]
To: Bob Pawley [EMAIL PROTECTED]
Cc: Postgresql pgsql-general@postgresql.org
Sent: Saturday, August 26, 2006 1:36 PM
Subject: Re: [GENERAL] Inserting Data

> [previous message quoted in full, trimmed]
Re: [GENERAL] copy command
Jim Bryan [EMAIL PROTECTED] writes:
> /home/jb/Desktop/DateTimeData.txt for reading: Permission denied
> -rw-rw-r-- 1 jb jb 83 Aug 25 14:30 DateTimeData.txt

I suspect it doesn't have x permission on some parent directory. What does

    ls -ld /home /home/jb /home/jb/Desktop

print?

--
greg
[GENERAL] Cutting the Gborg throat
Hello,

I have been looking at the migration of Gborg lately. It looks like the only two active projects on that site are Slony and pljava; libpqxx has recently moved to its own Trac site. I would like to suggest the following course of action:

1. We set a read-only date of 12/31/06. This means that all login capability, registration capability, mailing lists, bug tracking, cvs, etc. will be shut off. The site will only be up for reference.

2. We set a shut-down date of 03/31/07. This means it is down -- rm -rf, whatever it takes.

Why? Because past performance on the migration of Gborg to pgFoundry has shown that it will not get done, regardless of who volunteers or who *thinks* they have time to do it.

So what will this really take?

1. The respective project members need to register their projects with pgFoundry.
2. The respective project members need to migrate their tickets if they see fit.
3. The respective members need to register the email lists they want.
4. There needs to be communication with one project at a time to arrange a cut-over of their mailing lists. We have the ability to import mbox files into mailman, so their mail archives will be preserved.

Question: How is CVS handled on Gborg? Do the projects have their own repos? If so, can we just move them to pgFoundry?

Except for the pending screaming and yelling... is there really anything wrong with this? To cut a few things off at the pass: I think it is completely reasonable that we ask fellow community members to help in their own migration, as this is a gratis service. I also think that if those members were asked, they would be happy to help. I am willing to have Devrim and Darcy help with this.

Sincerely,

Joshua D. Drake

--
=== The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive PostgreSQL solutions since 1997
http://www.commandprompt.com/
Re: [GENERAL] copy command
Gregory Stark [EMAIL PROTECTED] writes:
> Jim Bryan [EMAIL PROTECTED] writes:
> > /home/jb/Desktop/DateTimeData.txt for reading: Permission denied
> > -rw-rw-r-- 1 jb jb 83 Aug 25 14:30 DateTimeData.txt
> I suspect it doesn't have x permission on some parent directory,

Yeah, user home directories are not normally world-readable in Fedora. Even if you've made yours readable, the default SELinux policy in FC5 prevents daemons such as the PG postmaster from accessing random parts of the filesystem. Consider using psql's \copy instead.

regards, tom lane
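For the original COPY problem, the \copy variant Tom suggests reads the file with the psql client's own permissions, so the server process never has to reach into the home directory:

```sql
-- Run inside psql; same syntax as COPY, but the file is opened client-side
\copy datetime from '/home/jb/Desktop/DateTimeData.txt' with csv
```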
[GENERAL] jabber.postgresql.org is up
Hello,

The community jabber server is now up. We are using the Wildfire server from Jive Software, backed by a PostgreSQL database (of course). Our currently enabled features are:

1. Server-side storage (for static groups etc.) -- we currently have a Slaves_to_WWW group, for example.
2. MUC (basically IRC for jabber) -- all MUC conversations are logged to PostgreSQL and in the long run will likely be searchable by members of the jabber service.
3. TLS or SSL is required.

We do not support auto-registration at this time -- the idea behind this server is to allow active project members to communicate without having to use public channels. We do not support communication with other jabber servers at this time -- I don't really see a benefit to this right now, but it is easy to set up, so it is a consideration for the future.

If you are a project member (Gforge admin, web team member, committer, etc.) and would like an account, please let me know.

Sincerely,

Joshua D. Drake

--
=== The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive PostgreSQL solutions since 1997
http://www.commandprompt.com/