Re: [GENERAL] 'NOW' in UTC with no timezone
Tom Lane [EMAIL PROTECTED] writes:
> Stuart Bishop [EMAIL PROTECTED] writes:
>> I'm trying to determine the best way of saying 'The current time in UTC with no time zone information'.
> Isn't that a contradiction in terms?

Not if you're used to the Unix concept of storing seconds since the epoch. In that model the quantity you're storing is entirely time zone agnostic.

> But anyone who is worried about timezones and yet is storing his data in timestamp-without-time-zone columns probably needs to reconsider exactly what his data represents.

The SQL approach of storing a time zone with the timestamp makes things very confusing. For Unix people it requires a time zone in precisely the opposite circumstances from when they expect to use one. And it means two timestamps representing the same point in time can have subtly different behaviours if they're stored with different time zones.

I think what this user wants is to store a TIMESTAMP WITH TIME ZONE and always store his time in UTC. That lets him store timestamps using the time-since-epoch mentality, but print them accurately in whatever time zone he wants. If you stored them without time zone, Postgres wouldn't let him easily display them in non-UTC time zones; it considers them to be a particular time of a particular day in whatever time zone you're in.

That said, representing 3pm in your local time zone can be useful for some purposes. For example, I'm using it to represent the expiry time of specials, since they expire on a particular date in your local time zone: if you transport the printout from one time zone to another, the expiry time actually changes. In practice I would have been just as happy storing UTC and then printing using AT TIME ZONE 'UTC'.

-- greg

---(end of broadcast)---
TIP 6: Have you searched our list archives? http://archives.postgresql.org
[GENERAL] update query confusion
The following query updated all the rows in the AssembliesBatch table, not just those where BatchID=5. There are 2 rows in the AssembliesBatch table with a batch ID of 5, and I wanted to update both of them with their price, based on the data in the FROM clause. One row has 105 units and the other row has 2006 units. The active price in both rows is 6.6 and the price differential is 0. My expectation is that the first row would be updated to 693 and the second to 13239.6. Instead, every row in the table was updated to 693.

This syntax works in MS SQL Server to update exactly as I expected, with the difference that there you have to use the alias name after the UPDATE keyword, and PostgreSQL does not allow that. If anyone can help, I would greatly appreciate it.

update AssembliesBatch
set BuildPrice = a.units * (coalesce(ActivePrice,0) + coalesce(PriceDifferential,0))
from AssembliesBatch a
  join assemblies b on a.AssemblyID=b.assemblyID
  left join qry_AssemblyPrices c on c.AssemblyID=b.assemblyID
  left join ProductQuantityPrice d on d.ProductID=b.ProductID
  inner join qry_TotalBatchProductCards e on e.ProductID=b.ProductID
    and e.BatchID=a.BatchID
    and e.TotalCards between minquantity and maxquantity
where a.BatchID=5;

Thank You
Sim Zacks
IT Manager
CompuLab
04-829-0145 - Office
04-832-5251 - Fax
Re: [GENERAL] 'NOW' in UTC with no timezone
>>> I'm trying to determine the best way of saying 'The current time in UTC with no time zone information'.
>> Isn't that a contradiction in terms?
> Not if you're used to the Unix concept of storing seconds since the epoch. In that model the quantity you're storing is entirely time zone agnostic.

But then one is storing an interval, not a point in time.

Karsten
--
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346
Re: [GENERAL] 'NOW' in UTC with no timezone
On Oct 12, 2004, at 9:43 PM, Karsten Hilbert wrote:
>>> I'm trying to determine the best way of saying 'The current time in UTC with no time zone information'.
>> Not if you're used to the Unix concept of storing seconds since the epoch. In that model the quantity you're storing is entirely time zone agnostic.
> But then one is storing an interval, not a point in time.

By that logic, all times are intervals. '2004-10-12 22:09' is 2004 years, 10 months, 12 days, 22 hours, 9 minutes since 0.

Michael Glaesemann
grzm myrealbox com
[GENERAL] Rule uses wrong value
(I thought I posted this yesterday from Google Groups, but it doesn't appear to have taken.)

I'm having a problem with a rule designed to log new rows inserted into one table. The base table is very volatile; rows are inserted from various places, including both application code and triggers. Then they are read out by another application (the table is used as a job queue) and deleted. My rule attempts to re-record the rows in another table for audit and debug purposes.

Here's the important bits of the base table:

          Table "public.job_queue"
  Column  |  Type   |                       Modifiers
 ---------+---------+--------------------------------------------------------
  job_id  | integer | not null default nextval('job_queue_job_id_seq'::text)
  ...

The rule looks like this:

    rul_job_queue_trace_log AS
        ON INSERT TO job_queue DO
        INSERT INTO job_queue_trace (job_id, ...)
        VALUES (new.job_id, ...)

It appears that the rule is inserting the row copies into job_queue_trace with a job_id value that is one higher than the job_id from the original row. Almost as though it was re-evaluating the sequence ...

--
(Posted from an account used as a SPAM dump. If you really want to get in touch with me, dump the 'jboes' and substitute 'mur'.)
Jeffery Boes [EMAIL PROTECTED]
[GENERAL] Confused with LABEL and LOOP
Hi,

I'm a bit confused with the use of labels for loops in PostgreSQL. I'm using PostgreSQL 8.0 Beta2 on a WIN32 platform.

CREATE OR REPLACE FUNCTION TEST( ) RETURNS int4 AS '
BEGIN
    <<LOOP1>>
    LOOP
        RAISE NOTICE ''Loop 1'';
        EXIT LOOP1;
    END LOOP;
    RETURN 0;
END' LANGUAGE 'plpgsql';

When I call SELECT TEST(), I get the error message:

    ERROR: control reached end of function without RETURN
    CONTEXT: PL/pgSQL function "test"

If I just change EXIT LOOP1 to EXIT, everything is OK. I think we previously had the possibility to indicate the label in an EXIT statement...

Regards,
Patrick

---
Patrick Fiche
email : [EMAIL PROTECTED]
---
Protected by Polesoft Lockspam http://www.polesoft.com/refer.html
[GENERAL] Return resultset from a function
Hi Every1,

I'm fairly new to Postgres and ran into a problem quite soon. Until now I used MS SQL 2000, and with it I was able to write a simple stored procedure that returned a resultset (for example: 'select * from table1'). But I don't know how to do that in Postgres, because I can't define a function that returns a resultset. I know it's a lame question, but I need some hints to solve this problem.

Thanks in advance!
Neumann Péter
Hungary
Re: [GENERAL] OS not good for database
Simon Windsor wrote:
> Hi
> Can you provide a link to the interview?

Oops! I forgot it :-(

Here it is: http://www.alwayson-network.com/comments.php?id=6186_0_4_0_C

Regards
Gaetano Mendola
Re: [GENERAL] 'NOW' in UTC with no timezone
Greg Stark [EMAIL PROTECTED] writes:
> Tom Lane [EMAIL PROTECTED] writes:
>> Stuart Bishop [EMAIL PROTECTED] writes:
>>> I'm trying to determine the best way of saying 'The current time in UTC with no time zone information'.
>> Isn't that a contradiction in terms?
> Not if you're used to the Unix concept of storing seconds since the epoch. In that model the quantity you're storing is entirely time zone agnostic.

Not at all. In my worldview, the Unix concept is seconds since midnight 1/1/1970 00:00 UTC, and therefore it is essentially UTC time, because (a) its absolute meaning doesn't change depending on your local timezone, but (b) unless you are in UTC, you have to rotate it to your local timezone for display. For comparison, various not-Unix operating systems get this wrong, and store seconds since local-time midnight, simplifying display at the price of not knowing what time it Really Is.

> The SQL approach of storing a time zone with the timestamp makes things very confusing. For unix people it requires a time zone in precisely the opposite circumstances from when they expect to use one.

Yes, obviously you are confused ;-)

Postgres implements TIMESTAMP WITH TIME ZONE as the Unix concept: what is stored internally is seconds since the UTC epoch. We rotate to or from local timezone for input/display. TIMESTAMP WITHOUT TIME ZONE is essentially the other idea: it stores seconds since a local-midnight epoch in an unspecified time zone. No timezone adjustment is done during input or display.

If timezones are at all significant in terms of your application, you almost certainly want to be storing your data as TIMESTAMP WITH TIME ZONE, which amounts to asserting that you know what time the values Really Are in global terms. Otherwise the rotation facilities are going to be fighting you every step of the way. (Note that this is arguably not what the SQL standard means by TIMESTAMP WITH TIME ZONE, but it's what Postgres implements.)
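Tom's distinction between the two timestamp types can be seen directly in a psql session. A minimal sketch (table and column names are illustrative, not from the thread):

```sql
-- timestamptz stores an absolute instant; plain timestamp stores a
-- wall-clock reading with no zone attached.
CREATE TABLE ts_demo (
    with_tz    timestamp with time zone,
    without_tz timestamp without time zone
);

SET timezone = 'UTC';
INSERT INTO ts_demo VALUES (now(), now());

-- Viewed from another zone, only the timestamptz column rotates:
SET timezone = 'US/Eastern';
SELECT with_tz, without_tz FROM ts_demo;

-- "The current time in UTC with no time zone information":
SELECT now() AT TIME ZONE 'UTC';   -- yields a timestamp without time zone
```

The last SELECT is the idiom the original poster was after: AT TIME ZONE applied to a timestamptz strips the zone after rotating to UTC.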
> It could be useful to represent 3pm in your local time zone, which can be useful for some purposes.

TIME WITHOUT TIME ZONE?

regards, tom lane
Re: [GENERAL] update query confusion
Ok. I got it working by adding

    and AssembliesBatch.AssembliesBatchID = a.AssembliesBatchID

to the where clause. This seems a bit awkward syntactically. Is there a cleaner way of doing it?

Thank You
Sim Zacks
IT Manager
CompuLab
04-829-0145 - Office
04-832-5251 - Fax
Re: [GENERAL] update query confusion
Sim Zacks [EMAIL PROTECTED] writes:
> This syntax works in MS SQL Server to update exactly as I expected, with the difference that you have to use the alias name after the update keyword and postgresql does not allow that. If anyone can help, I would greatly appreciate it.
>
> update AssembliesBatch
> set BuildPrice=a.units*(coalesce(ActivePrice,0) + coalesce(PriceDifferential,0))
> from AssembliesBatch a
>   join assemblies b on a.AssemblyID=b.assemblyID
>   left join qry_AssemblyPrices c on c.AssemblyID=b.assemblyID
>   left join ProductQuantityPrice d on d.ProductID=b.ProductID
>   inner join qry_TotalBatchProductCards e on e.ProductID=b.ProductID
>     and e.BatchID=a.BatchID and e.TotalCards between minquantity and maxquantity
> where a.BatchID=5;

I believe that SQL Server identifies the target table (AssembliesBatch) with "AssembliesBatch a", whereas Postgres does not, turning this into an unconstrained self-join. You need to do something more like

update AssembliesBatch
set BuildPrice = AssembliesBatch.units * (coalesce(ActivePrice,0) + coalesce(PriceDifferential,0))
from assemblies b
  left join qry_AssemblyPrices c on c.AssemblyID=b.assemblyID
  left join ProductQuantityPrice d on d.ProductID=b.ProductID
  inner join qry_TotalBatchProductCards e on e.ProductID=b.ProductID
    and e.BatchID=AssembliesBatch.BatchID
    and e.TotalCards between minquantity and maxquantity
where AssembliesBatch.AssemblyID=b.assemblyID
  and AssembliesBatch.BatchID=5;

If we supported an alias for the update target table you could write this as

update AssembliesBatch a
set BuildPrice = a.units * (coalesce(ActivePrice,0) + coalesce(PriceDifferential,0))
from assemblies b
  left join qry_AssemblyPrices c on c.AssemblyID=b.assemblyID
  left join ProductQuantityPrice d on d.ProductID=b.ProductID
  inner join qry_TotalBatchProductCards e on e.ProductID=b.ProductID
    and e.BatchID=a.BatchID
    and e.TotalCards between minquantity and maxquantity
where a.AssemblyID=b.assemblyID
  and a.BatchID=5;

which is a bit less typing but not fundamentally different.
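The unconstrained-self-join pitfall Tom describes can be reproduced with a hypothetical two-row table (names are illustrative, not from the thread):

```sql
-- Toy table reproducing the pitfall.
CREATE TABLE t (id int, val int);
INSERT INTO t VALUES (1, 10), (2, 20);

-- In Postgres, "t" in the FROM clause is a *second*, independent scan of
-- the target table.  With no join condition tying it back to the target,
-- every target row is paired with an arbitrary alias row, so all rows end
-- up computed from the same source row:
UPDATE t SET val = a.val * 2 FROM t a;               -- wrong

-- Joining the target back to the alias restricts the update as intended:
UPDATE t SET val = a.val * 2 FROM t a WHERE t.id = a.id;
```

This is the same fix as adding AssembliesBatch.AssemblyID=b.assemblyID above: the target table must be explicitly joined to the FROM-clause tables.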
However, the SQL spec does not allow an alias there and at present we have not decided to extend the spec in this particular direction.

regards, tom lane
Re: [GENERAL] query gone haywire :)
On Fri, 2004-10-08 at 10:11 -0400, Tom Lane wrote:
> Robin Ericsson [EMAIL PROTECTED] writes:
>> ->  Index Scan using idx_d_entered on data  (cost=0.00..18024.04 rows=50360 width=16) (actual time=0.210..0.247 rows=1 loops=1)
>>       Index Cond: (((('now'::text)::timestamp(6) with time zone)::timestamp without time zone - '00:01:00'::interval) < entered)
>
> You're running into the well-known problem that the planner can't make good estimates for index conditions that involve non-constant terms (such as CURRENT_TIMESTAMP). Lacking a decent estimate, it guesses that this scan will produce many more rows than it really will, and so it tends to favor plans that would be good in that scenario, but are not optimal for retrieving just a couple of rows. One workaround is to do the date arithmetic on the client side; another is to cheat by hiding the arithmetic in a function like ago(interval) that you lyingly claim is IMMUTABLE. See the pgsql-performance archives.

I did run a new explain analyze on the query and found the attached result.

status=# EXPLAIN ANALYZE
status-# SELECT
status-#     data.entered,
status-#     data.machine_id,
status-#     datatemplate_intervals.template_id,
status-#     data_values.value
status-# FROM
status-#     data, data_values, datatemplate_intervals
status-# WHERE
status-#     datatemplate_intervals.id = data_values.template_id AND
status-#     data_values.data_id = data.id AND
status-#     data.machine_id IN (2,3) AND
status-#     current_timestamp::timestamp - interval '60 seconds' < data.entered;

It seems very strange that it does a full index scan on idx_dv_data_id.
Regards, Robin

 Hash Join  (cost=28646.01..274260.15 rows=555706 width=24) (actual time=102323.087..102323.196 rows=5 loops=1)
   Hash Cond: ("outer".template_id = "inner".id)
   ->  Merge Join  (cost=28644.09..265922.62 rows=555706 width=24) (actual time=102322.632..102322.709 rows=5 loops=1)
         Merge Cond: ("outer".data_id = "inner".id)
         ->  Index Scan using idx_dv_data_id on data_values  (cost=0.00..205034.19 rows=9580032 width=16) (actual time=17.503..86263.130 rows=9596747 loops=1)
         ->  Sort  (cost=28644.09..28870.83 rows=90697 width=16) (actual time=0.829..0.835 rows=1 loops=1)
               Sort Key: data.id
               ->  Index Scan using idx_d_entered on data  (cost=0.00..20202.81 rows=90697 width=16) (actual time=0.146..0.185 rows=1 loops=1)
                     Index Cond: (((('now'::text)::timestamp(6) with time zone)::timestamp without time zone - '00:01:00'::interval) < entered)
                     Filter: ((machine_id = 2) OR (machine_id = 3))
   ->  Hash  (cost=1.74..1.74 rows=74 width=8) (actual time=0.382..0.382 rows=0 loops=1)
         ->  Seq Scan on datatemplate_intervals  (cost=0.00..1.74 rows=74 width=8) (actual time=0.024..0.250 rows=74 loops=1)
 Total runtime: 102323.491 ms
(13 rows)
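For reference, the ago(interval) workaround Tom mentions can be sketched as follows; the function name follows his suggestion, but the exact definition here is illustrative:

```sql
-- Wrap the date arithmetic in a function that is (falsely) declared
-- IMMUTABLE, so the planner folds it to a constant at plan time and can
-- estimate the index condition against a known value.  The cheat is that
-- the result actually changes between plan time and run time.
CREATE OR REPLACE FUNCTION ago(interval) RETURNS timestamp AS '
    SELECT (current_timestamp - $1)::timestamp without time zone;
' LANGUAGE sql IMMUTABLE;

-- The WHERE clause then becomes:
--   WHERE data.entered > ago('60 seconds')
```

Because the "constant" is computed when the query is planned, this is only safe for queries planned at execution time, not for long-lived prepared plans.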
Re: [GENERAL] Rule uses wrong value
Jeff Boes [EMAIL PROTECTED] writes:
> It appears that the rule is inserting the row copies into job_queue_trace with a job_id value that is one higher than the job_id from the original row. Almost as though it was re-evaluating the sequence ...

No kidding. A rule is a macro and therefore has the usual risks of multiple evaluations of arguments. The only way to do what you want is with a trigger.

regards, tom lane
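A trigger-based version of the audit logging might look like this sketch (only job_id is shown; the other columns are elided as in the original post):

```sql
-- Unlike a rule, a trigger fires on the actual row being stored, so
-- new.job_id is the value the default expression already produced --
-- nextval() is not re-evaluated.
CREATE OR REPLACE FUNCTION trace_job_queue() RETURNS trigger AS '
BEGIN
    INSERT INTO job_queue_trace (job_id) VALUES (new.job_id);
    RETURN new;
END' LANGUAGE 'plpgsql';

CREATE TRIGGER trg_job_queue_trace
    AFTER INSERT ON job_queue
    FOR EACH ROW EXECUTE PROCEDURE trace_job_queue();
```

With the rule, new.job_id expanded to the default expression nextval('job_queue_job_id_seq') textually, hence the off-by-one Jeff observed.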
Re: [GENERAL] 'NOW' in UTC with no timezone
Tom Lane [EMAIL PROTECTED] writes:
> Not at all. In my worldview, the Unix concept is seconds since midnight 1/1/1970 00:00 UTC, and therefore it is essentially UTC time, because (a) its absolute meaning doesn't change depending on your local timezone, but (b) unless you are in UTC, you have to rotate it to your local timezone for display.

Well, in one sense it has no time zone, since it's just a quantity of time: the number of seconds since the epoch to a particular point in time is the same no matter where you are. In another sense it's related to UTC because the epoch is specified in UTC. That's why the user's description of "the current time in UTC with no time zone information" is applicable.

>> The SQL approach of storing a time zone with the timestamp makes things very confusing. For unix people it requires a time zone in precisely the opposite circumstances from when they expect to use one.
> Yes, obviously you are confused ;-)

Hm. Further experimentation shows I was indeed confused. I guess my confusion comes from the way postgres interprets unadorned time stamps as being in local time, and then always displays timestamps converted to local time. I thought it was remembering the time zone specified in the original input. In fact it's not doing that.

I am beginning to like the idea you suggested of leaving the server set to UTC and just manually specifying time zones whenever I want to convert to local time.

-- greg
Re: [GENERAL] Confused with LABEL and LOOP
Patrick Fiche [EMAIL PROTECTED] writes:
> I'm a bit confused with the use of labels for loops in PostgreSQL. I'm using PostgreSQL 8.0 Beta2 on a WIN32 platform.
>
> CREATE OR REPLACE FUNCTION TEST( ) RETURNS int4 AS '
> BEGIN
>     <<LOOP1>>
>     LOOP
>         RAISE NOTICE ''Loop 1'';
>         EXIT LOOP1;
>     END LOOP;
>     RETURN 0;
> END' LANGUAGE 'plpgsql';
>
> When I call SELECT TEST(), I get the error message:
>     ERROR: control reached end of function without RETURN
>     CONTEXT: PL/pgSQL function "test"

Hmm. I think this *should* work, and experimentation shows that it did work up through PG 7.2. It's broken in 7.3 and later. Jan, do you see any fine point that makes this function wrong? Sure looks like a bug to me.

regards, tom lane
Re: [GENERAL] 'NOW' in UTC with no timezone
Greg Stark [EMAIL PROTECTED] writes:
> I guess my confusion comes from the way postgres interprets unadorned time stamps as being in local time. And then always displays timestamps converted to local time. I thought it was remembering the time zone specified in the original input. In fact it's not doing that.

Indeed not. (I think that the SQL spec contemplates that TIMESTAMP WITH TIME ZONE *should* work that way, but that's not what we've done.)

regards, tom lane
[GENERAL] Message-ID as unique key?
Hi,

I am futzing around with Andrew Stuart's Catchmail program that stores emails into a PostgreSQL database. I want to avoid inserting the same email more than once... (pieces of the email actually get emplaced into several tables). Is the Message-ID header field a globally unique identifier?

I eventually want to have a cron job process my inbox and don't want successive cron tasks to keep re-entering the same email :)

Jerry
Re: [GENERAL] Rule uses wrong value
Tom Lane wrote:
> No kidding. A rule is a macro and therefore has the usual risks of multiple evaluations of arguments. The only way to do what you want is with a trigger.

But shouldn't new.job_id use the value that was already recorded in the original row? I'm not using

    INSERT INTO job_queue_trace (job_id) VALUES (nextval(...))

but

    INSERT INTO job_queue_trace (job_id) VALUES (new.job_id)

Why is the sequence involved?

--
Jeff Boes                       vox 269.226.9550 ext 24
Database Engineer               fax 269.349.9076
Nexcerpt, Inc.                  http://www.nexcerpt.com
...Nexcerpt... Extend your Expertise
[GENERAL] connection or query affected
please cc me

If I am using some server-side language to access Postgres (php, python, perl, asp) and I make a connection, do the following actions affect the connection, or the individual query that contains them?

- turn off autocommit
- start transaction
- commit transaction
- SET schema
Re: [GENERAL] Message-ID as unique key?
No, it's not a globally unique identifier. In fact you cannot even be sure it will always be there in all mails, depending on your mail processing chain. Most email clients will add one, and most mail transfer agents will also add one if it's missing, but there's no general rule for how to create the message ID. Usually it will be unique for the specific instance of the user agent or transfer agent which generated it, since it serves exactly this purpose: to identify the mail instance on that agent, but nothing more.

So: don't use it as a unique identifier.

Cheers,
Csaba.

On Tue, 2004-10-12 at 17:01, Jerry LeVan wrote:
> I am futzing around with Andrew Stuart's Catchmail program that stores emails into a postgresql database. I want to avoid inserting the same email more than once... Is the Message-ID header field a globally unique identifier?
Re: [GENERAL] Change query priority
Tom Lane wrote:
> Michael Fuhr [EMAIL PROTECTED] writes:
>> I don't know how effective this would be, but you could wrap the system call setpriority() in a user-defined function if your platform supports it. This would set the nice value of the backend process, which might serve as a crude prioritization mechanism.
>
> Every couple of months someone comes along and says "why don't you provide a way to renice a backend" ... but in point of fact it's somewhere between useless and counterproductive for most query loads. The useless part comes in because nice only affects CPU priority, not I/O priority, but I/O load is the thing that counts for most database applications. The counterproductive part comes in because of an effect called priority inversion. The niced-down process may be holding a lock that is wanted by some higher-priority process --- but the kernel scheduler knows nothing of that, and will leave the niced-down process at the bottom of the queue, and thus the high-priority process is effectively stuck at the bottom too.

Even without changing priorities we are not immune to priority inversion: for example, the way semaphores are implemented in Linux doesn't prevent you from being bitten; at least, IIRC, the Linux kernel doesn't trace lock chains. However, I'd only be worried about priority inversion if I had hard deadlines, and "hard deadline" and "database" in the same sentence is like putting "Windows" and "security" in the same sentence. I feel that renicing a backend will not kill your system.

Regards
Gaetano Mendola
Re: [GENERAL] Message-ID as unique key?
On Tue, 2004-10-12 at 17:01, Jerry LeVan wrote:
> Is the Message-ID header field a globally unique identifier?

It's supposed to be that, yes. Unless you meet a broken client or MTA.

> I eventually want to have a cron job process my inbox and don't want successive cron tasks to keep re-entering the same email :)

Why not a script which receives the mail on behalf of the user? Time to run some spam checkers too.

Regards
Tino
Re: [GENERAL] Message-ID as unique key?
Jerry LeVan [EMAIL PROTECTED] writes:
> Is the Message-ID header field a globally unique identifier?

It is intended to be, but since it is generated by either the MUA or the first MTA that sees the message, there is a lot of scope for broken software to screw things up.

-Doug
--
Let us cross over the river, and rest under the shade of the trees.
   --T. J. Jackson, 1863
Re: [GENERAL] Message-ID as unique key?
On Tue, 12 Oct 2004 11:01:08 -0400, Jerry LeVan [EMAIL PROTECTED] wrote:
> Is the Message-ID header field a globally unique identifier? I eventually want to have a cron job process my inbox and don't want successive cron tasks to keep re-entering the same email :)

In terms of Internet mail? The answer is... almost. The idea is that each mail has a unique Message-ID, but there are cases when a few different mails get the same Message-ID. Such can be the case with mailing lists, like the one you are reading right now. Suppose you are cross-sending a message:

    To: [EMAIL PROTECTED]
    Cc: [EMAIL PROTECTED]

The message will arrive here and a copy will be sent to each mailing list. Then these twin messages will be processed by the mailing list software: subjects will have [something] prepended in the case of pgsql-general, the linux-kernel copy will have a custom signature at the end of the message, and pgsql-general will have TIPs as a signature. Now suppose you are subscribed to both lists. You will receive both messages (which look slightly different) but with the same Message-ID.

Oh, and if you store sent mail in the same/similar fold^H^H^H^Htable, be warned that when this message comes back from pgsql-general or most any other mailing list it will have the same Message-ID.

So... I think you might want to discard messages with duplicate Message-IDs (losing one of the lkml or pgsql-general copies, whichever comes later), but you should do it silently. Mail should not be rejected or you're risking getting bounced off the mailing list.

HTH, HAND,
Dawid
Re: [GENERAL] Message-ID as unique key?
> I am futzing around with Andrew Stuart's Catchmail program that stores emails into a postgresql database. I want to avoid inserting the same email more than once... (pieces of the email actually get emplaced into several tables). Is the Message-ID header field a globally unique identifier?

I think you're looking for RFC 2822 (http://www.faqs.org/rfcs/rfc2822.html). I seem to recall that one of the RFCs listed a time limit of two years for uniqueness, though I'm at a loss to find which one at the moment. Pertinent sections:

    3.6.4. Identification fields

    Though optional, every message SHOULD have a "Message-ID:" field.
    [snip]
    The message identifier (msg-id) itself MUST be a globally unique
    identifier for a message.  The generator of the message identifier
    MUST guarantee that the msg-id is unique.  There are several
    algorithms that can be used to accomplish this.  Since the msg-id has
    a similar syntax to angle-addr (identical except that comments and
    folding white space are not allowed), a good method is to put the
    domain name (or a domain literal IP address) of the host on which the
    message identifier was created on the right hand side of the "@", and
    put a combination of the current absolute date and time along with
    some other currently unique (perhaps sequential) identifier available
    on the system (for example, a process id number) on the left hand
    side.  Using a date on the left hand side and a domain name or domain
    literal on the right hand side makes it possible to guarantee
    uniqueness since no two hosts use the same domain name or IP address
    at the same time.  Though other algorithms will work, it is
    RECOMMENDED that the right hand side contain some domain identifier
    (either of the host itself or otherwise) such that the generator of
    the message identifier can guarantee the uniqueness of the left hand
    side within the scope of that domain.

Regards,
Bruce Ritchie
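On the database side, one way to make duplicate inserts harmless regardless of how unique Message-IDs really are is a unique index; a sketch with illustrative table and column names (not Catchmail's actual schema):

```sql
-- A unique index makes the database reject the second copy of a mail,
-- which the loading script can catch and log rather than treat as fatal.
CREATE TABLE mail (
    message_id text,
    body       text
);
CREATE UNIQUE INDEX mail_message_id_idx ON mail (message_id);

-- The second insert of the same Message-ID fails with a unique-violation
-- error instead of silently creating a duplicate row:
INSERT INTO mail (message_id, body)
    VALUES ('<[EMAIL PROTECTED]>', '...');
```

Given the mailing-list caveats above, the script should treat a unique-violation as "already seen" and move on silently.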
Re: [GENERAL] OS not good for database
> ... at the time, everybody bought Microsoft's IIS; it was the dominant web server - around 85% share, something like that. And Apache just utterly crushed them.

What I want to know is when did Microsoft have 85% of the web server market? I sure don't remember that. Maybe he means 85% of the Windows web server market.

I really like Postgres and have found MySQL useful in the past, but I do hope that Oracle does well. From what I have seen they do have a good product and work hard at supporting it.

I can tell you that I do wish that more open source projects supported Postgres - things like PHPNuke. Yes, I know that you can make it work, but the docs for that are not easy to find. I also know that somewhere there is a version of Bugzilla that runs on Postgres, but I have not had much luck finding it. DBD::Pg for Windows needs to be fixed ASAP. I use Linux on my servers, but most of the workstations here are Windows. Frankly, better support for Python, Ruby, and any of the other scripting languages under Windows would really help; MySQL has more support in a Windows environment. I am talking client support, not server support. Frankly, running Postgres under Windows has no practical application for me, though I am sure that other people will find a lot of use for it. Other than that I love Postgres.

Bruno Wolff III wrote:
> On Sun, Oct 10, 2004 at 14:18:47 +0200, Pierre-Frédéric Caillaud [EMAIL PROTECTED] wrote:
>>> Can you provide a link to the interview?
>> http://www.alwayson-network.com/comments.php?id=6186_0_4_0_C
>
> Someone should tell Larry that Linux actually does predate Apache. His comments about Linux being backed by companies seemed strange, as MySQL is effectively owned by a company.
Re: [GENERAL] Return resultset from a function
On Tue, Oct 12, 2004 at 04:15:28PM -0700, Neumann P?ter wrote: I'm fairly new to Postgres and ran into a problem quite soon. Until now I used MS Sql2000 and with it I was able to write a simple stored procedure that returned a resultset (for example: 'select * from table1'). But I don't know the way how to do it in Postgres, cause I can't define a function that returns a resultset. I know it's a lame question but I need some hints to solve this problem. I think you're looking for a set-returning function. The following General Bits article has examples: http://www.varlena.com/varlena/GeneralBits/26.html -- Michael Fuhr http://www.fuhr.org/~mfuhr/ ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
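[Editor's note: a minimal sketch of the set-returning-function approach referred to above, with hypothetical names; it assumes a table named table1 already exists.]

```sql
-- A set-returning function comparable to an MS SQL stored
-- procedure that does "SELECT * FROM table1".
CREATE FUNCTION get_table1() RETURNS SETOF table1 AS '
    SELECT * FROM table1;
' LANGUAGE sql;

-- Unlike in MS SQL, the function is then called in the FROM clause:
SELECT * FROM get_table1();
```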
Re: [GENERAL] Message-ID as unique key?
Well, this is for my personal mail... I think I will probably give it a try. The program can log errors, so in the few cases that might occur I think I can manufacture my own message ID. I guess that the bottom line is that if it exists it is unique... Jerry On Oct 12, 2004, at 11:25 AM, Doug McNaught wrote: Jerry LeVan [EMAIL PROTECTED] writes: Hi, I am futzing around with Andrew Stuart's Catchmail program that stores emails into a postgresql database. I want to avoid inserting the same email more than once... (pieces of the email actually get emplaced into several tables). Is the Message-ID header field a globally unique identifier? It is intended to be, but since it is generated by either the MUA or the first MTA that sees the message, there is a lot of scope for broken software to screw things up. -Doug -- Let us cross over the river, and rest under the shade of the trees. --T. J. Jackson, 1863 ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] Confused with LABEL and LOOP
Tom Lane [EMAIL PROTECTED] writes: Patrick Fiche [EMAIL PROTECTED] writes: I'm a bit confused with the use of Label for loops in PostgreSQL. Sure looks like a bug to me. Ah-hah: it's a case-sensitivity problem. The construct downcases its label identifier, but EXIT forgets to do so. Spell it as EXIT loop1 and you'll be OK. I'll fix this for 8.0. regards, tom lane ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
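[Editor's note: a sketch of the workaround Tom describes, with a hypothetical function name. Until the fix, spelling the label in lower case at both the declaration and the EXIT avoids the downcasing mismatch.]

```sql
CREATE FUNCTION loop_demo() RETURNS integer AS '
DECLARE
    i integer := 0;
BEGIN
    <<loop1>>
    LOOP
        i := i + 1;
        -- Lower-case label matches the downcased <<loop1>> declaration:
        EXIT loop1 WHEN i >= 3;
    END LOOP;
    RETURN i;
END;
' LANGUAGE plpgsql;
```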
Re: [GENERAL] Message-ID as unique key?
On Tue, Oct 12, 2004 at 11:01:08AM -0400, Jerry LeVan wrote: Hi, I am futzing around with Andrew Stuart's Catchmail program that stores emails into a postgresql database. I want to avoid inserting the same email more than once... (pieces of the email actually get emplaced into several tables). Is the Message-ID header field a globally unique identifier? Not a postgresql related issue, but, yes, Message-ID: is, by definition, a globally unique identifier. If there are two messages with the same Message-ID then the sender is asserting that those two messages are identical. See RFC 2822 section 3.6.4. You will sometimes see a message generated without a Message-ID at all, but that will usually have had a Message-ID added by some MTA along the delivery route. If your MX doesn't add Message-IDs when missing then you may well see incoming email without Message-IDs (mostly spam). In practice there are varying levels of competence in the implementation of Message-ID generation, so you'll very rarely see syntactically incorrect Message-IDs that may, in theory, clash. I eventually want to have a cron job process my inbox and don't want successive cron tasks to keep re-entering the same email :) I wouldn't try to use Message-ID as a primary key, though. Give yourself a serial field. I don't use Message-ID at all in my postgresql-based mailstore. Instead I use a maildir-style spool directory for incoming mail, and the processes that import those spooled messages into the mailstore use standard maildir techniques for locking the message on disk, writing it to the DB, moving it atomically from the new/ to the cur/ directory, then committing the database write. I've pumped millions of emails through this in production with no problems. Cheers, Steve ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [GENERAL] Rule uses wrong value
Jeff Boes [EMAIL PROTECTED] writes: Tom Lane wrote: No kidding. A rule is a macro and therefore has the usual risks of multiple evaluations of arguments. But shouldn't new.job_id use the value that was already recorded in the original row? There is no value that was already recorded in the original row; if you want to think in those terms you should use a trigger. It's fundamentally wrong to think of a rule in that way. In the rule, new.job_id is effectively a macro parameter that gets replaced by the INSERT's corresponding expression, ie, nextval(...). regards, tom lane ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
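[Editor's note: a sketch of the double-evaluation hazard Tom describes, using hypothetical tables. Because new.job_id is macro-expanded into the rule's action, the serial column's implicit nextval() is evaluated once for the base INSERT and again for the rule action.]

```sql
CREATE TABLE jobs (job_id serial PRIMARY KEY, descr text);
CREATE TABLE jobs_log (job_id integer);

CREATE RULE log_job AS ON INSERT TO jobs
    DO INSERT INTO jobs_log VALUES (new.job_id);

-- jobs receives one sequence value and jobs_log a different one,
-- because nextval() runs once per expansion of new.job_id.
INSERT INTO jobs (descr) VALUES ('example');
```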
Re: [GENERAL] Change query priority
Gaetano Mendola [EMAIL PROTECTED] writes: I feel that renice a backend will not kill your system. It won't kill the system, but it probably won't accomplish what you hoped for, either. regards, tom lane ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] OS not good for database
David Siebert [EMAIL PROTECTED] writes: I also know that somewhere their is a version of bugzilla that runs on Postgres but I have not had much luck finding it. Red Hat runs their bugzilla on Postgres: http://bugzilla.redhat.com/bugzilla/ Source code is available from a link near the end of that page. regards, tom lane ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [GENERAL] Message-ID as unique key?
Jerry LeVan writes I eventually want to have a cron job process my inbox and don't want successive cron tasks to keep re-entering the same email :) That's the hard way to do it, it's easier to route messages to individual files. BTW I'm doing just that in a GPL'ed project, see the URL in my sig if you're interested. It also comes with a GUI to access the mail in the database. -- Daniel PostgreSQL-powered mail user agent and storage: http://www.manitou-mail.org ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [GENERAL] Message-ID as unique key?
On Tue, 2004-10-12 at 17:11, Csaba Nagy wrote: No, it's not a global unique identifier. [snip] Hey, you know what? Good that I read this thread :D I'm in the process of writing a Java SMTP server, and had no clue about this rule... although I have read the relevant RFC a few times. It's true that currently I don't touch the mail, I just use internal IDs which I don't mind not being globally unique, but if those IDs had made it to the final product in some mails, there would have been some chance of collisions... as some other poster said, there is a good chance of non-conforming MTAs being out there. Cheers, Csaba. ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] Rule uses wrong value
Tom Lane wrote: Jeff Boes [EMAIL PROTECTED] writes: Tom Lane wrote: No kidding. A rule is a macro and therefore has the usual risks of multiple evaluations of arguments. But shouldn't new.job_id use the value that was already recorded in the original row? There is no value that was already recorded in the original row; if you want to think in those terms you should use a trigger. It's fundamentally wrong to think of a rule in that way. In the rule, new.job_id is effectively a macro parameter that gets replaced by the INSERT's corresponding expression, ie, nextval(...). regards, tom lane Aha! Well, that certainly wasn't clear from the documentation: http://www.postgresql.org/docs/7.4/static/sql-createrule.html ... the special table names NEW and OLD may be used to refer to values in the referenced table. NEW is valid in ON INSERT and ON UPDATE rules to refer to *the new row being inserted* or updated. OLD is valid in ON UPDATE and ON DELETE rules to refer to the existing row being updated or deleted. To me, new row and old row imply what's already in the table. On the other hand, I hadn't seen this before: http://www.postgresql.org/docs/7.4/static/rules.html For INSERT commands, the target list describes the new rows that should go into the result relation. It consists of the *expressions in the VALUES clause* or the ones from the SELECT clause in INSERT ... SELECT. The first step of the rewrite process adds target list entries for any columns that were not assigned to by the original command but have defaults. Any remaining columns (with neither a given value nor a default) will be filled in by the planner with a constant null expression. -- Jeff Boes vox 269.226.9550 ext 24 Database Engineer fax 269.349.9076 Nexcerpt, Inc. http://www.nexcerpt.com ...Nexcerpt... Extend your Expertise ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] 'NOW' in UTC with no timezone
On Tue, Oct 12, 2004 at 10:43:09AM -0400, Tom Lane wrote: Greg Stark [EMAIL PROTECTED] writes: I guess my confusion comes from the way postgres interprets unadorned time stamps as being in local time. And then always displays timestamps converted to local time. I thought it was remembering the time zone specified in the original input. In fact it's not doing that. Indeed not. (I think that the SQL spec contemplates that TIMESTAMP WITH TIME ZONE *should* work that way, but that's not what we've done.) In something I'm working on at the moment I've settled on storing the timestamp and the timezone in separate columns. The reason is that it really needs to represent time in a particular timezone. The operation of adding one day to a timestamp is dependent on a particular timezone due to daylight savings. If everything is always rotated to your current timezone the results will just be wrong... Since PostgreSQL doesn't actually support daylight-savings timezones I'm going to do the processing in the application. I'd consider adding it to PostgreSQL too, except this needs to work on pre-8.0 systems. Maybe what is needed is a TIMESTAMP WITH FIXED TIME ZONE type :) Have a nice day, -- Martijn van Oosterhout [EMAIL PROTECTED] http://svana.org/kleptog/ Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a tool for doing 5% of the work and then sitting around waiting for someone else to do the other 95% so you can sue them. pgpZJehQ3YS8K.pgp Description: PGP signature
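[Editor's note: a sketch of the store-in-UTC approach discussed in this thread, with a hypothetical table name. A timestamp with time zone column stores an absolute instant, so the zone used for display can be chosen per query with AT TIME ZONE.]

```sql
CREATE TABLE events (happened_at timestamp with time zone);

SET timezone = 'UTC';
INSERT INTO events VALUES (now());

-- Render the stored instant without changing what is stored.
-- Pre-8.0, AT TIME ZONE takes fixed-offset abbreviations; the
-- 8.0 integrated zone database adds daylight-saving-aware names
-- such as 'America/New_York'.
SELECT happened_at AT TIME ZONE 'UTC' AS utc_wall_time FROM events;
```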
Re: [GENERAL] Need some advice on appropriate PL strategy... [solved/thanks]
From: Greg Stark [EMAIL PROTECTED] Subject: Re: Need some advice on appropriate PL strategy... Message-ID: [EMAIL PROTECTED] Eric D. Nielsen [EMAIL PROTECTED] writes: I guess I could alternatively just code up a simple mail function in another PL and then call that function from pl/pgsql. Is there any merit to this approach over the whole-trigger-in-another-PL method? Well, depending on your application this may be a reasonable approach. However you should at least think carefully before taking this route. It means the email processing is put into the critical path of performing the original update. I would suggest you consider another model, where you have a second process that connects to the database and checks for updates. It can either stay connected all the time, and the trigger can use NOTIFY to wake it up, or it can just check periodically. This has the advantage that you can write it in any language that has a postgres driver, including PHP. It also means you can perform your database updates without having them depend on some large external system. This is a big advantage. It means when the mail system's borked you can keep your web application running and have it catch up when things are fixed. And it means when things are slow or erroneous you have one fewer moving parts to confuse you when debugging. Hmm, very good points. Thank you. I was hoping for a quick/easy solution, but those never pan out :) Your suggestion is also much more flexible -- digesting or other similar aggregation of multiple events to single emails is much easier to implement in that scenario. Thanks again. Eric ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] connection or query affected
* Dennis Gearon [EMAIL PROTECTED] [2004-10-12 08:13:07 -0700]: turn off autocommit Per connection. start transaction commit transaction They're statements themselves that change the state of the connection. You start a transaction, run your queries, and then commit/rollback. SET schema It depends if you're setting your search path for subsequent queries, which would be tracked per connection, or you're actually prepending the schema where the table exists in the query. E.g. $dbh->query("SET search_path='my_schema'"); - or - $dbh->query('SELECT foo FROM my_schema.bar WHERE active'); -- Steven Klassen - Lead Programmer Command Prompt, Inc. - http://www.commandprompt.com/ PostgreSQL Replication Support Services, (503) 667-4564 ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] Message-ID as unique key?
On Tue, Oct 12, 2004 at 08:50:30AM -0700, Steve Atkins wrote: Is the Message-ID header field a globally unique identifier? Not a postgresql related issue, but, yes, Message-ID: is, by definition, a globally unique identifier. If there are two messages with the same Message-ID then the sender is asserting that those two messages are identical. See RFC 2822 section 3.6.4. Except that they usually recommend adding the domain as part of the message-id. This is where your problem comes in. A mail server hiding behind a NAT firewall, one that's a relay and doesn't receive incoming mail directly, is not going to have a domain. So you'll get things like localhost.localdomain or something completely fake. Indeed, two messages in this thread have this. Secondly, clients creating their own message-ids are not always that great. For example, Outlook uses the computer name as the domain. Those are not likely to be unique worldwide either. You'd still have to be pretty unlucky to get two the same, though. It's possible Microsoft mashes the ethernet MAC address in there too. The answer is, it's very close to unique, but not really guaranteed. All mail systems I'm aware of generate their own IDs anyway; look through the Received headers... Hope this helps, -- Martijn van Oosterhout [EMAIL PROTECTED] http://svana.org/kleptog/ Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a tool for doing 5% of the work and then sitting around waiting for someone else to do the other 95% so you can sue them. pgpfmPbJZX1LE.pgp Description: PGP signature
Re: [GENERAL] Reusable pl/pgsql samples ?
Roberto Mello used to maintain a PL/pgSQL Cookbook, but this link is dead, and I don't know if it's still around: http://techdocs.postgresql.org/redir.php?link=http://www.brasileiro.net/postgres/cookbook -tfo On Oct 11, 2004, at 4:05 AM, Armen Rizal wrote: Hello all, Does anybody know where I can find reusable pl/pgsql samples or a function library? Thanks, Armen ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] Cache lookup failed for relation, when trying to DROP
On Wed, Oct 06, 2004 at 05:25:58PM +0100, Mark Gibson wrote: I had to remove Slony's schema manually as I was having problems with it. I was in the process of removing all Slony related stuff, and all my slave tables when this problem occurred, and was going to start again from scratch. Did your problem happen on a replica, or on the origin? There's a current dirty, evil hack in Slony that does extremely naughty things in the catalogues on the replicas. This is slated to go away in the future, but at the moment it's possible to trip over it if you don't use Slony's own admin tools. A -- Andrew Sullivan | [EMAIL PROTECTED] The fact that technology doesn't work is no bar to success in the marketplace. --Philip Greenspun ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] connection or query affected
Steven Klassen wrote: * Dennis Gearon [EMAIL PROTECTED] [2004-10-12 08:13:07 -0700]: turn off autocommit Per connection. start transaction commit transaction They're statements themselves that change the state of the connection. You start a transaction, run your queries, and then commit/rollback. SET schema It depends if you're setting your search path for subsequent queries, which would be tracked per connection, or you're actually prepending the schema where the table exists in the query. E.g. $dbh->query("SET search_path='my_schema'"); - or - $dbh->query('SELECT foo FROM my_schema.bar WHERE active'); Thanks, I meant the first of the two schema related queries above. ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
[GENERAL] data dir permissions and ownership
My permissions on pg's data dir got changed. Largely just curious (since pg appears to be working fine after changing as below), is this what they should be? chown -R postgres.nobody data chmod -R 0700 data __ Do you Yahoo!? Yahoo! Mail - You care about security. So do we. http://promotions.yahoo.com/new_mail ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
[GENERAL] PostgreSQL v8 NAMEDATALEn =128
Hi! Sorry for my english, I am French :-) I want to test PostgreSQL v8 native Windows! Where can I get a version which allows more than 64 characters (NAMEDATALEN ~= 128) for the names of tables and columns? I generate tables automatically, and the names of some of these tables exceed 64 characters. I use PostgreSQL 7.5, which I recompiled from source after modifying NAMEDATALEN to 128! The server is very slow (e.g. a query on a table which contains 16,000 instances takes 1 minute). I don't know exactly why. Some think that the cygwin emulator is responsible and recommend that I install a PostgreSQL Windows version. Personally I think it may be a configuration problem and that tuning the server could increase performance. Can someone help me tune my PostgreSQL? Sorry again for my english!! Thank you for your help!! Hondjack...
Re: [GENERAL] data dir permissions and ownership
* CSN [EMAIL PROTECTED] [2004-10-12 11:40:52 -0700]: chown -R postgres.nobody data chmod -R 0700 data Try postgres.postgres, otherwise that looks okay AFAICT. -- Steven Klassen - Lead Programmer Command Prompt, Inc. - http://www.commandprompt.com/ PostgreSQL Replication Support Services, (503) 667-4564 ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] PostgreSQL v8 NAMEDATALEn =128
Hi, On Tue, 12.10.2004 at 20:43, DEHAINSALA Hondjack wrote: Hi! sorry for my english! I am french :-) I want to test postgreSQL v8 native Windows! Where can I get a version which allows more than 64 characters (NAMEDATALEN ~= 128) for the names of tables and columns? I generate tables automatically and the names of some of these tables exceed 64 characters. I wonder what could be the reason to do so? Is it kind of an ORM (object relational mapper)? I use Postgresql 7.5, which I recompiled from source after modifying NAMEDATALEN to 128! The server is very slow (ex. a query on a table which contains 16,000 instances takes 1 minute). I don't know exactly why? Someone thinks the cygwin emulator is responsible and recommends that I install a postgreSQL windows version. Personally I think it may be a problem of configuration of the server and tuning could increase performance. Most of the time some indices here and there help :-) You should take some example queries you find slow and use EXPLAIN or EXPLAIN ANALYZE with them. You can post the output here if you need help. Regards Tino ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
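[Editor's note: a minimal example of the EXPLAIN workflow Tino suggests, with hypothetical table and column names. If the plan shows a sequential scan dominating the runtime, an index on the filtered column often helps.]

```sql
EXPLAIN ANALYZE SELECT * FROM generated_table WHERE some_col = 42;

-- If the output shows "Seq Scan on generated_table" taking most
-- of the time over ~16,000 rows, try adding an index and
-- refreshing the planner statistics:
CREATE INDEX generated_table_some_col_idx ON generated_table (some_col);
ANALYZE generated_table;
```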
Re: [GENERAL] Message-ID as unique key?
Csaba Nagy [EMAIL PROTECTED] writes: So: don't use it as a unique identifier. On Tue, 2004-10-12 at 17:01, Jerry LeVan wrote: Is the Message-ID header field a globally unique identifer? [Isn't it awfully confusing to put your answers before the thing you're responding to?] Let me suggest a higher-level view of your design decision here. You shouldn't use the message-id as a unique identifier on the general principle that it is untrusted network data. You can never trust entities outside your control to behave the way you expect, even if there's an authority to back you up. They could be deficient or even hostile. Even if you don't trust the message-id generated by external users to provide any particular semantics it can still be useful. You can allow clients to request a particular message by message-id for example. Just make sure you apply the regular security checks and don't assume that because they know the message-id they must have access to the message. And make sure your regression tests include testing the case of having thousands of messages with identical message-ids, even though that should never arise in practice if everything works the way it's supposed to. -- greg ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[GENERAL] How Do I Change The 'Owner' of a Database?
Hello, I used a user to create a database but postgreSQL insists that the 'postgres' user is the owner. It's a bit annoying to have to change users to link sequences to counter values. How can I change the owner of the database to the proper user? Peter ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] OS not good for database
I have downloaded it but it comes with no docs that I can find. No docs directory, nothing. There are also people that have PHPNuke running with Postgres, but that is also lacking in docs. I really do not fault Postgres or the developers for this. I will say that I feel that is why MySQL gets a lot more press than Postgres. I really like using Postgres and I even wrote our call center app using it as the back end. The program does not store the audio, but it is a super call-back list. It also handles our RMAs and tracks unresolved support questions. It only handles about 100,000 calls a year, but postgres has never lost a call and the server keeps chugging away. Tom Lane wrote: David Siebert [EMAIL PROTECTED] writes: I also know that somewhere there is a version of bugzilla that runs on Postgres but I have not had much luck finding it. Red Hat runs their bugzilla on Postgres: http://bugzilla.redhat.com/bugzilla/ Source code is available from a link near the end of that page. regards, tom lane ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] How Do I Change The 'Owner' of a Database?
* pw [EMAIL PROTECTED] [2004-10-12 13:48:30 -0700]: How can I change the owner of the database to the proper user? UPDATE pg_database SET datdba = (SELECT usesysid FROM pg_user WHERE usename = 'user_name') WHERE datname = 'database_name'; There might be something you can do with 'alter database' as well. -- Steven Klassen - Lead Programmer Command Prompt, Inc. - http://www.commandprompt.com/ PostgreSQL Replication Support Services, (503) 667-4564 ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] How Do I Change The 'Owner' of a Database?
Thanks for your help, I was looking at ALTER DATABASE but the docs don't disclose any attributes so *what the heck does one ALTER?* I'll try the query that you offered. Peter Steven Klassen wrote: There might be something you can do with 'alter database' as well. ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] Change query priority
Tom Lane wrote: Gaetano Mendola [EMAIL PROTECTED] writes: I feel that renicing a backend will not kill your system. It won't kill the system, but it probably won't accomplish what you hoped for, either. That's true, but right now renicing a backend is the only way to proceed in order to *try* to slow down some queries Regards Gaetano Mendola ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] How Do I Change The 'Owner' of a Database?
* pw [EMAIL PROTECTED] [2004-10-12 14:11:51 -0700]: I was looking at ALTER DATABASE but the docs don't disclose any attributes so *what the heck does one ALTER?* The actual settings are kind of nestled in the documentation a layer or two deep. Here are the pertinent URLs and neither of them have anything to do with ownership, FWIW. ;) http://www.postgresql.org/docs/current/interactive/sql-alterdatabase.html http://www.postgresql.org/docs/current/interactive/runtime-config.html Best Regards, -- Steven Klassen - Lead Programmer Command Prompt, Inc. - http://www.commandprompt.com/ PostgreSQL Replication Support Services, (503) 667-4564 ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[GENERAL] deadlock with vacuum full on 7.4.5
I have a table that is usually really small (currently 316 rows) but goes through spasms of updates in a small time window. Therefore I have a vacuum full run every hour on this table. Last night one of these vacuum fulls deadlocked with a query on this table. Both were stuck doing nothing until I did a kill -INT on the backends doing the vacuum. So my questions: 1) What can I do to avoid this? 2) What do I do next time this happens to get more debugging info out of the situation? My postgres version: PostgreSQL 7.4.5 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2.2 20030222 (Red Hat Linux 3.2.2-5) ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [GENERAL] deadlock with vacuum full on 7.4.5
Joseph Shraibman [EMAIL PROTECTED] writes: Last night one of these vacuum fulls deadlocked with a query on this table. Both were stuck doing nothing until I did a kill -INT on the backends doing the vacuum. So my questions: 1) What can I do to avoid this? 2) What do I do next time this happens to get more debugging info out of the situation? Look in pg_locks and pg_stat_activity. I think it is highly unlikely that there was a deadlock inside the database. Far more likely that both jobs were waiting on some idle-in-transaction client whose transaction was holding a lock on the table. regards, tom lane ---(end of broadcast)--- TIP 8: explain analyze is your friend
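[Editor's note: a sketch of the kind of diagnostic query Tom suggests, using the 7.4-era catalog columns (pg_stat_activity.procpid and current_query; the latter is only populated when stats_command_string is on).]

```sql
-- Who holds or waits for which lock, and what they are running:
SELECT l.pid, l.relation, l.transaction, l.mode, l.granted,
       a.current_query
FROM pg_locks l
LEFT JOIN pg_stat_activity a ON a.procpid = l.pid
ORDER BY l.granted, l.pid;

-- Rows with granted = false show who is waiting; an
-- "<IDLE> in transaction" current_query on the pid holding the
-- conflicting lock is the usual culprit.
```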
[GENERAL] error opening pg_clog file
On Saturday, our database machine locked up hard on us due to some faulty hardware. Since then, we have been getting messages like this: ERROR: could not access status of transaction 143934068 DETAIL: could not open file /usr/local/pgsql/data/pg_clog/0089: No such file or directory Looking in /usr/local/pgsql/data/pg_clog, the 0089 file isn't there. Some investigation revealed these messages: http://archives.postgresql.org/pgsql-hackers/2004-01/msg00534.php http://www.mail-archive.com/[EMAIL PROTECTED]/msg13874.html So, I did dd if=/dev/zero of=/usr/local/pgsql/data/pg_clog/0089 bs=8k count=1. I did an ls to verify that the file existed. I started the postmaster back up, tried a VACUUM, and got: vacuumdb: vacuuming of database hyperseek failed: ERROR: could not access status of transaction 144565028 DETAIL: could not open file /usr/local/pgsql/data/pg_clog/0089: No such file or directory I looked, and the 0089 file was gone again. Is there anything I can do to save the situation? (The PG version is 7.4.2) steve wolfe ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
[GENERAL] LISTEN/NOTIFY for lightweight replication
Hi, I'm trying to come up with a relatively simple multi-master replication solution. This is for multiple databases that need to be discrete, change relatively infrequently (10-30 updates an hour), and almost never update each other's data (less than once a day). The TCL-based replication project for multi-master is troublesome to configure and seems to really impact performance. It can be assumed that the master-slave setup will not work for me, nor do we want to purchase a commercial solution, nor can we run this all from one central database. I'm considering the following, and am requesting advice and any suggestions: a. Use listen/notify to develop a notification when changes are made. This could be done between each node, or not (i.e. it could be a chain instead). b1. All of the add events are using sequences, so each node has a unique set of new records. b2. When an add, update or delete is recorded, DDL for this is passed via the notify. c. If no local event happened during this event envelope (i.e. since the last update notification but before the new event completed), perform the event. d. If there is a record-level conflict but no field-level one, perform the event. e. If there is a field-level conflict, raise an exception (TBD). There are plenty of things that might not work here, but I'm particularly interested in: 1. If this is brain-dead because of performance issues, I'd like to know upfront. 2. Is there a way to get at the system tables that would contain overall change events? Otherwise, is this information available in some log event? 3. Can this be readily translated to DDL? 4. Does anyone have some extended examples of using listen/notify, especially in any kind of distributed transaction capability? Thanks! __ Do you Yahoo!? Yahoo! Mail Address AutoComplete - You start. We finish. http://promotions.yahoo.com/new_mail ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
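[Editor's note: a minimal sketch of the notification half of step (a) above, with hypothetical names. Note that NOTIFY in this era carries no payload, so the listening daemon must still query the source node to discover what actually changed.]

```sql
-- Fired on every change to the replicated table:
CREATE FUNCTION announce_change() RETURNS trigger AS '
BEGIN
    NOTIFY replica_change;
    RETURN NULL;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER t_announce AFTER INSERT OR UPDATE OR DELETE
    ON replicated_table FOR EACH STATEMENT
    EXECUTE PROCEDURE announce_change();

-- Each peer node's daemon runs, in its own session:
LISTEN replica_change;
-- ...and on wakeup queries the source node for new rows.
```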
Re: [GENERAL] deadlock with vacuum full on 7.4.5
Why then when I did a kill -INT on the vacuuming backends did everything unfreeze? Tom Lane wrote: Joseph Shraibman [EMAIL PROTECTED] writes: Last night one of these vacuum fulls deadlocked with a query on this table. Both were stuck doing nothing until I did a kill -INT on the backends doing the vacuum. So my questions: 1) What can I do to avoid this? 2) What do I do next time this happens to get more debugging info out of the situation? Look in pg_locks and pg_stat_activity. I think it is highly unlikely that there was a deadlock inside the database. Far more likely that both jobs were waiting on some idle-in-transaction client whose transaction was holding a lock on the table. regards, tom lane ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [GENERAL] deadlock with vacuum full on 7.4.5
Joseph Shraibman [EMAIL PROTECTED] writes: Why then when I did a kill -INT on the vacuuming backends did everything unfreeze? You could have had other stuff backed up behind the VACUUM FULL lock requests. It's not impossible that you had a deadlock *outside* the database, that is some wait loop that is partially within and partially outside the DB. But if you want me to believe there's a bug in our deadlock detector, you're going to have to offer some actual evidence... regards, tom lane ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [GENERAL] deadlock with vacuum full on 7.4.5
That is what I wanted to know, how to get the evidence for next time. Tom Lane wrote: Joseph Shraibman [EMAIL PROTECTED] writes: Why then when I did a kill -INT on the vacuuming backends did everything unfreeze? You could have had other stuff backed up behind the VACUUM FULL lock requests. It's not impossible that you had a deadlock *outside* the database, that is some wait loop that is partially within and partially outside the DB. But if you want me to believe there's a bug in our deadlock detector, you're going to have to offer some actual evidence... regards, tom lane ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [GENERAL] deadlock with vacuum full on 7.4.5
Joseph Shraibman wrote: That is what I wanted to know, how to get the evidence for next time. select * from pg_locks Regards Gaetano Mendola
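For reference, the two views Tom mentioned can be joined to see at a glance who holds an ungranted lock. A minimal sketch, assuming a 7.4-era server (where pg_stat_activity's process-id column is named procpid, and current_query is only populated when stats_command_string is on):

```sql
-- Show ungranted lock requests first, together with whatever query
-- the corresponding backend is running.
SELECT l.pid, l.relation::regclass AS locked_table, l.mode, l.granted,
       a.current_query
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
ORDER BY l.granted, l.pid;
```

A backend whose query shows as "<IDLE> in transaction" while others wait on its locks is the usual culprit Tom describes.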
Re: [GENERAL] Very slow delete.
Hi Tom, Thanks for your help. I checked types and indexes, to no avail. Plain VACUUM didn't help, but VACUUM FULL did; it's now fast again. Cheers, Brock On Mon, 11 Oct 2004 23:38:49 -0400, Tom Lane [EMAIL PROTECTED] wrote: Brock Henry [EMAIL PROTECTED] writes: delete from people where id '2000' Index Scan using people_pkey on people (cost=0.00..71.68 rows=2792 width=6) (actual time=1.361..5.657 rows=2000 loops=1) Index Cond: (id 3000) Total runtime: 13.006 ms 3 row(s) Total runtime: 63,869.322 ms So 13 msec to find the rows to delete, and 63850+ msec in overhead. Which is certainly from the foreign keys that reference this table, because the referencing tables have to be checked to see if they contain copies of the key values being deleted. You either don't have indexes on the referencing columns, or there is a datatype mismatch, or possibly you need to update statistics for those tables. regards, tom lane
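Tom's diagnosis — unindexed referencing columns — is straightforward to fix. A hedged sketch with hypothetical names (a referencing table orders with a people_id column; the real table and column names will differ):

```sql
-- PostgreSQL indexes the referenced primary key automatically, but NOT
-- the referencing column, so each row deleted from people forces a
-- sequential scan of orders unless an index like this exists:
CREATE INDEX orders_people_id_idx ON orders (people_id);

-- The referencing column's type must also match people.id exactly,
-- and fresh statistics help the planner actually use the new index:
ANALYZE orders;
```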
[GENERAL] memory leak of PQmakeEmptyPGresult??
I try to detect potential memory management bugs of my program with valgrind. (PostgreSQL is at version 8.0 beta2. Operating System is Red Hat Enterprise Linux 3. Valgrind is at version 2.2.0.) After the program terminated, Valgrind reports a memory lost error as follows:

==13524== 208 bytes in 2 blocks are definitely lost in loss record 4 of 12
==13524==    at 0x1B903D1C: malloc (vg_replace_malloc.c:131)
==13524==    by 0x1B95C8B2: PQmakeEmptyPGresult (in /usr/lib/libpq.so.3.0)
==13524==    by 0x1B95D14A: (within /usr/lib/libpq.so.3.0)
==13524==    by 0x1B95D963: PQgetResult (in /usr/lib/libpq.so.3.0)

I never call the PQmakeEmptyPGresult and PQgetResult functions in my program! Then I searched for information about PQmakeEmptyPGresult: "Constructs an empty PGresult object with the given status. It is libpq's internal routine to allocate and initialize an empty PGresult object. It is exported because some applications find it useful to generate result objects (particularly objects with error status) themselves. If conn is not NULL and status indicates an error, the connection's current error message is copied into the PGresult. Note that PQclear should eventually be called on the object, just as with a PGresult returned by libpq itself." It seems some connection errors happened at the time I do some query or update operations on the database. Now I was puzzled, because the program runs normally and returns the correct result every time. And I am sure I called PQclear() every time after calling PQexec(). Here is an example from my program:

Bool bTest(void)
{
    PGresult *res;
    char acQryBuf[200];

    sprintf(acQryBuf, "%s", "select count(*) from testable ");
    res = PQexec(conn, acQryBuf);
    /* Note: a successful SELECT returns PGRES_TUPLES_OK, not
     * PGRES_COMMAND_OK, so this test rejects every successful query. */
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
    {
        printf("\n %s\n", PQresultErrorMessage(res));
        PQclear(res);
        return false;
    }
    PQclear(res);
    return true;
}

If PQclear() doesn't clear the PGresult successfully, there will be some memory leaks. Is that right? Why does PQclear() run unsuccessfully every time?
Would that have anything to do with it? Any other ideas of things to try? Thanks! Ann 2004.10.12 pm 17:00
[GENERAL] Change primary key in Postgres 7.3?
Greetings. We're running Postgres 7.3 on an Intel linux box (Redhat Enterprise server, version 3.0). We find ourselves in an awkward position: we have a database of attributes relating to students that uses as its primary key the ID number of the student. This is awkward for the following reasons. Our university used to use social-security numbers for student ID's. They stopped doing that a few years ago, but didn't force the change on existing students. Recently they've made a forced, retroactive change such that ALL students, past and present, now have a student ID that's not related to social-security number. I think this is a well-justified change, but, unfortunately for us, it makes many of the primary keys in our database invalid. This problem is compounded by the fact that the programmer that set up our Postgres databases has moved on to another job. Our current programmer would like to start from scratch, redefine the schema, rebuild the database, etc. Unfortunately, there are a number of high-profile applications that depend on the database, and many of them would surely get broken by this kind of transition. We expect that we WILL eventually rebuild the database, but right now we're looking for a quick fix. Our current programmer tells me that he can't find a way to simply change the primary key in place in Postgres. Is there a way to do this? Thanks. - Mike -- Michael Hannon mailto:[EMAIL PROTECTED] Dept. of Physics 530.752.4966 University of California 530.752.4717 FAX Davis, CA 95616-8677
Re: [GENERAL] Change primary key in Postgres 7.3?
DROP CONSTRAINT should be able to drop your pkey, and as long as your data supports your new key... you should be set. Gavin Michael Hannon wrote: Greetings. We're running Postgres 7.3 on an Intel linux box (Redhat Enterprise server, version 3.0). We find ourselves in an awkward position: we have a database of attributes relating to students that uses as its primary key the ID number of the student. [...] Our current programmer tells me that he can't find a way to simply change the primary key in place in Postgres. Is there a way to do this? Thanks. - Mike
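A minimal sketch of Gavin's suggestion, assuming a hypothetical students table whose primary-key constraint kept the default name students_pkey (check the real name with \d students in psql). Any foreign keys referencing the old key still need to be dealt with separately:

```sql
BEGIN;
ALTER TABLE students DROP CONSTRAINT students_pkey;
-- ...load or correct the new, non-SSN student IDs here...
ALTER TABLE students ADD PRIMARY KEY (student_id);
COMMIT;
```

If ADD PRIMARY KEY isn't accepted on your version, a unique index on the column plus a NOT NULL constraint achieves essentially the same effect.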
Re: [GENERAL] Reusable pl/pgsql samples ?
The link is dead. I've googled the cookbook but can't find any trace of it. No luck. Thanks anyway. -armen -Original Message- From: Thomas F.O'Connell [mailto:[EMAIL PROTECTED] Sent: Wednesday, October 13, 2004 12:15 AM To: [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Subject: Re: [GENERAL] Reusable pl/pgsql samples ? Roberto Mello used to maintain a PL/PgSQL Cookbook, but this link is dead, and I don't know if it's still around: http://techdocs.postgresql.org/redir.php?link=http://www.brasileiro.net/postgres/cookbook -tfo On Oct 11, 2004, at 4:05 AM, Armen Rizal wrote: Hello all, Is there anybody who knows where I can find reusable pl/pgsql samples or a function library? Thanks, Armen
Re: [GENERAL] Application user login/management
Thank you, both Scott and Jason, for your responses. You both brought up things I hadn't thought about. I've included snippets of their posts below. On Sun, 2004-10-03 at 22:23, Michael Glaesemann wrote: Recently I've been thinking about different methods of managing users that log into a PostgreSQL-backed application. The users I'm thinking of are not necessarily DBAs: they're application users that really shouldn't even be aware that they are being served by the world's most advanced open source database server. On Oct 4, 2004, at 1:48 PM, Scott Marlowe wrote: We built an OpenLDAP server and wrote some scripts to maintain that and allow for group editing. This structure existed completely outside of either the database or the application. Then, apache handled all the authentication through ldap authentication. snip / This allows you to scale your authentication and group management independently of any scaling issues with your application servers. Since single master / multi slave OpenLDAP is a pretty easy thing to implement, the only applications that need to access the master can be set to the ldap editing applications (group editor, update scripts, etc...) while standard old authentication can be pointed at one or more slaves. Method 2: Store username/password information as data in tables, using pgcrypto for authentication On Oct 4, 2004, at 1:53 PM, Jason Sheets wrote: If you are confident that (a.) you will either run the database server or (b.) have the authority to require that pgcrypto be installed on the database for all installations, this may be a good solution. Keep in mind you are limited to the encryption types supported by pgcrypto, and moving to another database solution may be difficult. I also can't comment on the availability of pgcrypto on Win32, but with PostgreSQL 8 just around the corner the desire might be there to run the DB on Windows at some point. 
libmcrypt is currently available in win32 but I've occasionally seen behavior differences with it on win32 vs. Unix. Also keep in mind that if you are not using encrypted database connections (using PostgreSQL's built-in SSL support, SSH tunneling, or another technique) you may be sending users' passwords across the network in plain text for the database to use. I would either ensure that all connections will be encrypted or, preferably, hash the password with at least SHA-1 on the application side and pass that as the password to the back-end; SHA-1 is available in almost all languages these days. This technique may also remove the requirement of using pgcrypto on the back-end. As with many things, there are tradeoffs. As I'm going to be running the database server, I think I'm going to push the pgcrypto solution as far as it will go. Thanks again, to both of you, for your comments. Much appreciated! Michael
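For concreteness, a sketch of the server-side pgcrypto flow Michael is leaning toward (requires the pgcrypto contrib module to be installed; the table, columns, username, and password here are all hypothetical):

```sql
CREATE TABLE app_user (
    username text PRIMARY KEY,
    pwhash   text NOT NULL   -- salted hash, never the plain password
);

-- Enrollment: gen_salt('bf') generates a random Blowfish salt each time.
INSERT INTO app_user VALUES ('alice', crypt('s3cret', gen_salt('bf')));

-- Login check: crypt() reuses the salt embedded in the stored hash,
-- so the comparison is true only for the correct password.
SELECT pwhash = crypt('s3cret', pwhash) AS authenticated
FROM app_user
WHERE username = 'alice';
```

Note that, as Jason points out above, the plain password still crosses the wire to the server in this scheme, so encrypted connections or client-side hashing remain advisable.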
Re: [GENERAL] Change primary key in Postgres 7.3?
On Tue, Oct 12, 2004 at 06:10:12PM -0700, Michael Hannon wrote: We expect that we WILL eventually rebuild the database, but right now we're looking for a quick fix. Our current programmer tells me that he can't find a way to simply change the primary key in place in Postgres. Does ALTER TABLE not work? http://www.postgresql.org/docs/7.3/static/sql-altertable.html Whatever you decide to do, hopefully you have a development system on which to test your changes. -- Michael Fuhr http://www.fuhr.org/~mfuhr/
Re: [GENERAL] Change primary key in Postgres 7.3?
Michael Fuhr wrote: On Tue, Oct 12, 2004 at 06:10:12PM -0700, Michael Hannon wrote: We expect that we WILL eventually rebuild the database, but right now we're looking for a quick fix. Our current programmer tells me that he can't find a way to simply change the primary key in place in Postgres. Does ALTER TABLE not work? http://www.postgresql.org/docs/7.3/static/sql-altertable.html Whatever you decide to do, hopefully you have a development system on which to test your changes. I'm not sure what the original poster is asking precisely, but if they have declared all foreign keys referencing the primary table's primary key with ON UPDATE CASCADE, then all they need to do is update the primary table's primary key.

[EMAIL PROTECTED] create table foo (key integer not null primary key);
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index foo_pkey for table foo
CREATE TABLE
[EMAIL PROTECTED] create table bar (other integer not null primary key, foo_key integer not null references foo(key) on delete cascade on update cascade);
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index bar_pkey for table bar
CREATE TABLE
[EMAIL PROTECTED] insert into foo values (1);
INSERT 2433708 1
[EMAIL PROTECTED] insert into bar values (100, 1);
INSERT 2433709 1
[EMAIL PROTECTED] update foo set key = 2;
UPDATE 1
[EMAIL PROTECTED] select * from bar;
 other | foo_key
-------+---------
   100 |       2
(1 row)

Is that what the original poster is trying to achieve? Mike Mascari
Re: [GENERAL] memory leak of PQmakeEmptyPGresult??
I found the reason for this question and fixed the bug :)) xiaoling he [EMAIL PROTECTED] wrote: I try to detect potential memory management bugs of my program with valgrind. (PostgreSQL is at version 8.0 beta2. Operating System is Red Hat Enterprise Linux 3. Valgrind is at version 2.2.0.) [...] Why does PQclear() run unsuccessfully every time? Would that have anything to do with it? Any other ideas of things to try? Thanks! Ann 2004.10.12 pm 17:00
[GENERAL] JDBC +CIDR
Hi, I've a table containing a CIDR field. I'm using an EJB to create a new record in this table. I don't know how to pass the CIDR value. I tried passing it as a String but I get this error: javax.ejb.FinderException: Find failed: java.sql.SQLException: ERROR: operator does not exist: cidr = text So how should I do it? Thanks in advance
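One common workaround, sketched below: the error means the JDBC driver is sending the String parameter as text, so an explicit cast in the query lets the server convert it to cidr. (The table and column names here are hypothetical, and whether you can edit the SQL your EJB container generates is another question.)

```sql
-- JDBC placeholder cast to cidr on the server side:
SELECT * FROM networks WHERE net = CAST(? AS cidr);
-- or equivalently, using PostgreSQL's shorthand cast:
-- SELECT * FROM networks WHERE net = ?::cidr;
-- Inserts work the same way:
-- INSERT INTO networks (net) VALUES (?::cidr);
```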