Re: [GENERAL] formatting the output of a function
Islam Hegazy wrote: Hi all. I created a function that returns a set of records. This function returns an integer and a float as the record fields. I have a problem in this function: it truncates the output, e.g. 1342 is displayed as 134, 456.46 is displayed as 456. In other words, it displays only the first 3 digits of a number, whether it is integer or float. I traced the function and the results are computed correctly. I use PostgreSQL 8.2.4. Following is a piece of my code. output[0] = (char*)palloc(sizeof(int)); //allocate space for a string that accepts an integer output[1] = (char*)palloc(sizeof(double)); //allocate space for a string that accepts a double I don't really do C any more, but I don't think these do what you want. I think here you're allocating space for an integer/double themselves (i.e. 4/8 bytes), not for their string representation. I'm surprised you're not just getting a crash. snprintf(output[0], sizeof(int), "%d", counter); snprintf(output[1], sizeof(float), "%.5f", result); -- Richard Huxton Archonet Ltd ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Looking for Graphical people for PostgreSQL tradeshow signage
On 6/7/07, Joshua D. Drake [EMAIL PROTECTED] wrote: Joshua D. Drake wrote: Hello, Is anyone going to try for this? If not I can have our designer do it, but I would prefer it be a community deal. We are looking to have new signage for the shows that PostgreSQL attends. The signage that we have decided on is here: I tried to find the original vector graphics for the PostgreSQL elephant logo, but couldn't find it anywhere. I think that's a prerequisite. I would not mind having a stab at it. Alexander. ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[GENERAL] Inheritance and shared sequence
Hi, as I understood it, when you create a table which inherits from some other table, the constraints and indexes do not go to the child table, hence you have to create separate ones there. That means you cannot rely on there being no duplicate IDs across both tables. Right? BUT...what if the child table and parent table both use the same sequence for the primary key. Would duplication still be an issue? Thanks, Sebastjan ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: pl/pgsql debuging, was Re: [GENERAL] debugging C functions
David Gardner wrote: As someone who would greatly benefit from this feature, is there something I can do to help out in development/testing of this feature? I have a test server I could install a cvs release of pgsql and know my way around a makefile. Hi David, At the moment it's just a case of us finding some time to pull the appropriate code from EnterpriseDB and rebundle it as a PostgreSQL add on. If you wish to test the debugger client though, you can always download a copy of EnterpriseDB to test against (get the latest 8.2 build). There will be some more changes to the client in the next few days though, so look out for an update to pgAdmin. Regards, Dave. ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Inheritance and shared sequence
Sebastjan Trepca wrote: Hi, as I understood, when you create a table which inherits some other table, the constraints and indexes do not go to the child table hence you have to create a separate ones in there. That means you cannot depend that you won't have duplicate IDs in both tables. Right? BUT...what if child table and parent table both use the same sequence for the primary key. Would the duplication still be an issue? Well, if you *always* use the sequence you'll be OK (until you run out of numbers), but it won't stop you manually supplying your own values. -- Richard Huxton Archonet Ltd ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
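A sketch of the setup under discussion (table and sequence names invented for illustration; not run here):

```sql
CREATE SEQUENCE master_id_seq;

CREATE TABLE parent (
    id      integer PRIMARY KEY DEFAULT nextval('master_id_seq'),
    payload text
);

-- The child inherits the columns (including the DEFAULT) but not the
-- primary key constraint, so it needs its own:
CREATE TABLE child () INHERITS (parent);
ALTER TABLE child ADD PRIMARY KEY (id);

INSERT INTO parent (payload) VALUES ('a');          -- draws from the shared sequence
INSERT INTO child  (payload) VALUES ('b');          -- draws from the same sequence
INSERT INTO child  (id, payload) VALUES (1, 'dup'); -- a manually supplied id
                                                    -- bypasses the sequence, as
                                                    -- Richard warns
```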
[GENERAL] Suppress checking of chmod 700 on data-dir?
Hi postgresql-listmembers, for a backup scenario I need to have access to the pgdata directory as a different shell user, but PostgreSQL refuses to start if the directory's mode is not 700. Is there a way to prevent Postgres from checking the data dir for mode 700 on startup (and while running)? Thanks for your short replies. I could not figure it out from the documentation. Regards Johannes postgresql 8.2.4 on ubuntu dapper (if this question has come up 100 times, I apologize for being unable to find it) ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] Suppress checking of chmod 700 on data-dir?
On fim, 2007-06-07 at 10:38 +0200, Johannes Konert wrote: Hi postgresql-listmembers, for a backup scenario I need to have access to the pgdata directory as a different shell user, but PostgreSQL refuses to start if the directory's mode is not 700. Is there a way to prevent Postgres from checking the data dir for mode 700 on startup (and while running)? Use sudo in your backup scenario, or run your backup as postgres. gnari ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Suppress checking of chmod 700 on data-dir?
use sudo in your backup scenario, or run your backup as postgres Thanks for your quick reply. Unfortunately running the backup via sudo is not an option due to security issues, and using the postgres user is not feasible because other data is backed up as well that the postgres user should not have access to. So your answer means that there is definitely NO way to circumvent the chmod 700 thing? It's hard to believe that. Each and every thing is configurable in Postgres, but I cannot disable or relax the directory-permissions checking? Not even with a compile option or something like that? Anyway thanks for your help. I'll keep searching for a solution. Regards Johannes ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] plperl and/or insert trigger problem
Richard Huxton wrote: I'm no guru myself... Don't underestimate yourself, after all you found the code where it goes wrong. Looks OK to me, except you'll not get any error message on the last row - the insert will be called after the fetch. I do get an error message on the last row (assuming that it caused an error of course). There are [number of rows to fetch] + 1 fetches done by execute_for_fetch. As the POD says: The execute_for_fetch() method calls $fetch_tuple_sub ... until it returns a false value So I get the error caused by the last record when execute_for_fetch fetches again and sees that there's nothing more to fetch. What happens if you change the code to take a copy of $sth->errstr too: Nothing. Same result. Which made me wonder why they take a copy of the error code anyway. So I dropped that, replacing my $err = $sth->err; push @$tuple_status, [ $err, $errstr_cache{$err} ||= $sth->errstr, $sth->state ]; by push @$tuple_status, [ $sth->err, $errstr_cache{$err} ||= $sth->errstr, That gave me the same (but still unexpected) result. Martijn van Oosterhout wrote: The reference to errstr_cache seems to imply that the code assumes there can be only one error string for any particular. Looking at the code I can't work out why that variable even exists. And he was right! There lies the real problem. It seems to me like a (faulty) way to try to return each distinct error message only once. But that wouldn't be the behaviour described in the POD. So finally I replaced my $err = $sth->err; push @$tuple_status, [ $err, $errstr_cache{$err} ||= $sth->errstr, $sth->state ]; by push @$tuple_status, [ $sth->err, $sth->errstr, $sth->state ]; That gives the result I would expect from reading the POD. Thanks to both of you for solving this problem! Bart
Re: [GENERAL] Suppress checking of chmod 700 on data-dir?
Ragnar wrote: are you planning a filesystem-level backup? are you aware that you cannot just backup the postgres data directories from under a running server, and expect the backup to be usable? gnari As far as I understood the documentation for PostgreSQL 8.2.4 (http://www.postgresql.org/docs/8.2/interactive/continuous-archiving.html section 23.3.2) you can copy the files while postgres is running (respecting pg_start_backup and pg_stop_backup). But that is not my point. The question is where I can change the enforced chmod 700 that postgresql always wants me to set. Regards Johannes :) ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
[GENERAL] Auto Vacuum question?
I set the autovacuum option to enabled, but my database size (hard-disk space) still increased from 420MB to 440MB in 8 hours. Most of the operations on this database are SELECT queries, with just a few UPDATEs or INSERTs. Why did it grow so much? After I executed the command vacuumdb -f -z testdb, the size only decreased by 1 or 2MB... what's the problem? How can I find out which commands (select, insert) put the heaviest load on the database during this time? computer environment: System: Red Hat Enterprise 4 PostgreSQL Server: 8.0.13 connected clients: about 100 PCs ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
[GENERAL] Limitations on 7.0.3?
I have been using postgres 7.0.3 (Solaris 2.9) for the past 4 years without any major problems, until about a month ago. We are now experiencing crashes on the backend (connection lost to backend) while running queries (inserts, etc). Anyway I don't want to make this too technical but I'd just like to understand if maybe I'm running into some sort of limit on the size of the database. My biggest table is currently at 1.5B tuples. Would appreciate if anyone could let me know or is aware of any limits with 7.0 version. Thx, Jose ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Large Database \d: ERROR: cache lookup failed for relation ...
Thomas F. O'Connell tf ( at ) o ( dot ) ptimized ( dot ) com writes: I'm dealing with a database where there are ~150,000 rows in information_schema.tables. I just tried to do a \d, and it came back with this: ERROR: cache lookup failed for relation [oid] Is this indicative of corruption, or is it possibly a resource issue? Greetings, This message is a follow-up to Thomas's message quoted above (we're working together on the same database). He received one response to the above message, from Tom Lane, which can be summarized as saying that this could happen if tables were being created or dropped while the \d was running in psql. Unfortunately, that wasn't the case; we have now determined that there is some corruption in our database and we are hoping some of you back-end gurus might have some suggestions. We verified the corruption by trying to reindex all of our tables, in addition to getting the same errors when running a dump this past weekend. We so far have a list of five tables for which reindex fails with the error: ERROR: could not open relation with OID (sub with the five different #s). After dropping all of the indexes on these tables (a couple didn't have any to begin with), we still cannot run reindex on them. In addition, we can't drop the tables either. We can however run alter table statements on them. So, we have scheduled a downtime for an evening later this week wherein we plan on bringing the database down for a REINDEX SYSTEM. Is there anything else anyone can think of that we can do to narrow down where the actual corruption is or how to fix it? -- Erik Jones [EMAIL PROTECTED]
[GENERAL] Jumping Weekends
Hello, (sorry for my poor english) It's my first post here, and my doubt is very simple (I guess). I have a function to populate a table; inside a WHILE loop I have the following piece of code: --Jump Weekend IF (SELECT TO_CHAR(CAST(PRODUCTION_DATE as date),'Day')) = 'Saturday' THEN PRODUCTION_DATE := PRODUCTION_DATE + 2; END IF; It's to jump the weekend by adding 2 days to the variable PRODUCTION_DATE; the type of the variable is DATE. But, I don't know why, it doesn't work properly: it doesn't jump weekends, it just continues normally. Does someone know what's happening, or what am I doing wrong? I appreciate any help. Thanks Ranieri Mazili ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] insane index scan times
This is just an example isolating the problem. Actual queries contain more tables and more joins and return a reasonable amount of data. Performance of big indexes however is appalling, with the planner always reverting to seqscan with default settings. I tried to pre-filter the data as much as possible in preceding joins to put less strain on the offending join (fewer rows to join by?) but it doesn't help. I wonder what exactly makes the index perform 100+ times slower than a seqscan - I mean even if it's performed on the HD, which it should not be given the index size, the index and table are on the same HD and the index is smaller and also sorted, isn't it? ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] NULLS and User Input WAS Re: multimaster
Richard Huxton wrote: PFC wrote: NULL usually means unknown or not applicable Andrew Sullivan wrote: Aaaargh! No, it doesn't. It means NULL. Nothing else. If it meant unknown or not applicable or anything else, then SELECT * FROM nulltbl a, othernulltbl b WHERE a.nullcol = b.nullcol would return rows where a.nullcol contained NULL and b.nullcol contained NULL. But it doesn't, because !(NULL = NULL).

(a == b) = ((a -> b) AND (b -> a))

| a | b | a->b | b->a | a==b |
|---|---|------|------|------|
| F | F |  T   |  T   |  T   |
| F | T |  T   |  F   |  F   |
| F | U |  T   |  U   |  U   |
| T | F |  F   |  T   |  F   |
| T | T |  T   |  T   |  T   |
| T | U |  U   |  T   |  U   |
| U | F |  U   |  T   |  U   |
| U | T |  T   |  U   |  U   |
| U | U |  U   |  U   |  U   |

Ergo, (UNKNOWN = UNKNOWN) is UNKNOWN. Similarly for (UNKNOWN != UNKNOWN). Where NULL differs is that (NULL = NULL) is FALSE, and (NULL != NULL) is FALSE. The similarity is that with NULL, SQL is not exactly saying (NULL = NULL) is FALSE so much as that it's not TRUE. NULL follows Zen-valued logic, not 3-valued, and that seems somehow appropriate to me. -- Lew ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
[GENERAL] failing to start posgresql.
Hi, I am a newbie to PostgreSQL. I installed PostgreSQL using an rpm. My problem is that when I issue the service postgresql start command, the database fails to start. Where can I get the error logs for the database so that I can see what the problem is? Is there another way of starting it? Thanks
[GENERAL] Postgresql 7.4.3/8.2.4 out of memory
Dear all, With the below transaction we got an out of memory error. BEGIN; ANALYZE referenceAuthorLnkTemp; INSERT INTO referenceAuthor (author) SELECT DISTINCT ON (author) author FROM referenceAuthorLnkTemp; ANALYZE referenceAuthor; UPDATE referenceAuthorLnkTemp SET idAuthor = referenceAuthor.id FROM referenceAuthor WHERE referenceAuthorLnkTemp.author = referenceAuthor.author; INSERT INTO referenceAuthorLnk (idAuthor, idReference) SELECT DISTINCT ON (referenceAuthorLnkTemp.idAuthor, referenceAuthorLnkTemp.idReference) referenceAuthorLnkTemp.idAuthor, referenceAuthorLnkTemp.idReference FROM referenceAuthorLnkTemp; END; This transaction belongs to a file with many other transactions executed through a java program and JDBC (postgresql-8.2-505.jdbc3.jar). Launching this transaction by hand in psql works. Server has 8GB of RAM and OS is RHEL 3 update 6. Linux 2.4.21-37.ELsmp #1 SMP Wed Sep 7 13:28:55 EDT 2005 i686 i686 i386 GNU/Linux The number of rows in tables are: 43360793 tuples referenceauthorlnktemp 43360791 tuples referenceauthorlnk 56990 tuples referenceauthor Thanks for help. 
TopMemoryContext: 49152 total in 5 blocks; 7576 free (23 chunks); 41576 used unnamed prepared statement: 24576 total in 2 blocks; 13672 free (1 chunks); 10904 used TopTransactionContext: 8192 total in 1 blocks; 4736 free (0 chunks); 3456 used AfterTriggerEvents: 3170885632 total in 397 blocks; 12200 free (384 chunks); 3170873432 used SPI Plan: 7168 total in 3 blocks; 4056 free (0 chunks); 3112 used SPI Plan: 3072 total in 2 blocks; 800 free (0 chunks); 2272 used SPI Plan: 3072 total in 2 blocks; 744 free (0 chunks); 2328 used SPI Plan: 3072 total in 2 blocks; 808 free (0 chunks); 2264 used SPI Plan: 7168 total in 3 blocks; 4056 free (0 chunks); 3112 used SPI Plan: 3072 total in 2 blocks; 760 free (0 chunks); 2312 used SPI Plan: 3072 total in 2 blocks; 808 free (0 chunks); 2264 used SPI Plan: 3072 total in 2 blocks; 752 free (0 chunks); 2320 used SPI Plan: 3072 total in 2 blocks; 752 free (0 chunks); 2320 used SPI Plan: 3072 total in 2 blocks; 808 free (0 chunks); 2264 used SPI Plan: 3072 total in 2 blocks; 1088 free (0 chunks); 1984 used SPI Plan: 3072 total in 2 blocks; 808 free (0 chunks); 2264 used SPI Plan: 3072 total in 2 blocks; 808 free (0 chunks); 2264 used SPI Plan: 7168 total in 3 blocks; 4056 free (0 chunks); 3112 used SPI Plan: 3072 total in 2 blocks; 808 free (0 chunks); 2264 used SPI Plan: 3072 total in 2 blocks; 808 free (0 chunks); 2264 used SPI Plan: 3072 total in 2 blocks; 752 free (0 chunks); 2320 used SPI Plan: 3072 total in 2 blocks; 808 free (0 chunks); 2264 used SPI Plan: 7168 total in 3 blocks; 4056 free (0 chunks); 3112 used SPI Plan: 3072 total in 2 blocks; 808 free (0 chunks); 2264 used SPI Plan: 3072 total in 2 blocks; 1088 free (0 chunks); 1984 used SPI Plan: 3072 total in 2 blocks; 1088 free (0 chunks); 1984 used SPI Plan: 7168 total in 3 blocks; 4056 free (0 chunks); 3112 used SPI Plan: 3072 total in 2 blocks; 1088 free (0 chunks); 1984 used SPI Plan: 3072 total in 2 blocks; 1088 free (0 chunks); 1984 used SPI Plan: 3072 total in 2 
blocks; 1088 free (0 chunks); 1984 used SPI Plan: 7168 total in 3 blocks; 4056 free (0 chunks); 3112 used SPI Plan: 3072 total in 2 blocks; 1088 free (0 chunks); 1984 used SPI Plan: 3072 total in 2 blocks; 800 free (0 chunks); 2272 used SPI Plan: 3072 total in 2 blocks; 808 free (0 chunks); 2264 used SPI Plan: 3072 total in 2 blocks; 784 free (0 chunks); 2288 used SPI Plan: 3072 total in 2 blocks; 744 free (0 chunks); 2328 used SPI Plan: 3072 total in 2 blocks; 800 free (0 chunks); 2272 used SPI Plan: 3072 total in 2 blocks; 744 free (0 chunks); 2328 used SPI Plan: 3072 total in 2 blocks; 744 free (0 chunks); 2328 used SPI Plan: 7168 total in 3 blocks; 4056 free (0 chunks); 3112 used RI query cache: 24576 total in 2 blocks; 14136 free (5 chunks); 10440 used Operator class cache: 8192 total in 1 blocks; 4872 free (0 chunks); 3320 used Type information cache: 8192 total in 1 blocks; 1800 free (0 chunks); 6392 used MessageContext: 8192 total in 1 blocks; 7840 free (0 chunks); 352 used smgr relation table: 24576 total in 2 blocks; 16080 free (4 chunks); 8496 used TransactionAbortContext: 32768 total in 1 blocks; 32752 free (0 chunks); 16 used Portal hash: 8192 total in 1 blocks; 3912 free (0 chunks); 4280 used PortalMemory: 8192 total in 1 blocks; 8040 free (0 chunks); 152 used PortalHeapMemory: 1024 total in 1 blocks; 896 free (0 chunks); 128 used ExecutorState: 24576 total in 2 blocks; 19088 free (12 chunks); 5488 used ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used TupleSort: 1417712 total in 22 blocks; 853304 free (13188 chunks); 564408 used ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used Unique: 0 total in 0 blocks; 0 free (0 chunks); 0 used Relcache by OID: 8192 total in 1 blocks; 2336 free (0 chunks); 5856 used CacheMemoryContext: 659000 total in 19
[GENERAL] using subselects
Hello everyone - I'm moving from MySQL to Postgres and I am in a situation where I am trying to use subselects properly. I have a table of projects, users, and user_projects. The table user_projects creates a many-to-many relationship between users and projects. I'm creating a select list on a web form for adding a new user to a project. I want to select users from the user table who aren't already listed in the join table under that project. Here's my select: SELECT * FROM users WHERE user_id <> $current_user_id AND user_id <> ( SELECT user_id FROM user_projects WHERE project_id = $project_id ) This query returns no rows, even on projects that have no records in the user_projects table! I am certain that I am not the $current_user_id. If I run this query: SELECT * FROM users WHERE user_id <> $current_user_id I get all the user records besides myself. What am I doing wrong? ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
[GENERAL] last analyze time in 8.1?
Is there any way of getting at the last time a table was analyzed (by autovacuum) in 8.1 or is that only recorded (in pg_stat_*_tables) since 8.2? TIA Julian ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
[GENERAL] Record Types Structure in PL/pgSQL
Hi there. Is there any way of determining the actual structure of a record variable? E.g. I've written a small script to do some calculations over some fields with a dynamically generated query. It looks like this: create function foo(text) returns void as $$ declare a_record record; my_query alias for $1; begin for a_record in execute my_query loop -- Do some calculations end loop; return; end; $$ language plpgsql; The question is: how could I possibly get the field names and other information about the record a_record? I appreciate any suggestions or tips about this. Best regards.
Re: [GENERAL] failing to start posgresql.
check under the $PGHOME/data/pg_log directory. Usually $PGHOME=/usr/local/pgsql if you have not changed this path during installation. With Regards Ashish... - Original Message - From: phillip To: pgsql-general@postgresql.org Sent: Thursday, June 07, 2007 2:21 PM Subject: [GENERAL] failing to start posgresql. hie i am a newbie to Postgresql. I installed Postgresql using an rpm. my problem is that when i issue the service postgresql start command the database is failing to start. Where can i get the error logs for the database so that i can see what is the problem. Is there a another way of starting it Thanks
Re: [GENERAL] using subselects
[EMAIL PROTECTED] wrote: I'm creating a select list on a web form for adding a new user to a project. I want to select users from the user table, who aren't already listed in the join table under that project. Here's my select: SELECT * FROM users WHERE user_id <> $current_user_id AND user_id <> ( SELECT user_id FROM user_projects WHERE project_id = $project_id ) This query returns no rows, even on projects that have no records in the user_projects table! I suppose you meant to use: AND user_id NOT IN (SELECT user_id FROM ...) Regards, -- Alban Hertroys [EMAIL PROTECTED] magproductions b.v. T: ++31(0)534346874 F: ++31(0)534346876 M: I: www.magproductions.nl A: Postbus 416 7500 AK Enschede // Integrate Your World // ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
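A sketch of the corrected query, plus the usual caveat about NOT IN (table and column names taken from the thread; not run here):

```sql
-- Alban's suggestion:
SELECT *
  FROM users
 WHERE user_id <> $current_user_id
   AND user_id NOT IN (SELECT user_id
                         FROM user_projects
                        WHERE project_id = $project_id);

-- Beware: NOT IN yields no rows at all if the subquery produces any
-- NULL user_id values. NOT EXISTS avoids that trap:
SELECT u.*
  FROM users u
 WHERE u.user_id <> $current_user_id
   AND NOT EXISTS (SELECT 1
                     FROM user_projects up
                    WHERE up.project_id = $project_id
                      AND up.user_id = u.user_id);
```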
Re: [GENERAL] failing to start posgresql.
take a look inside the rpm and see what it installed: rpm -ql rpmname On Thu, Jun 07, 2007 at 10:51:59AM +0200, phillip wrote: hie i am a newbie to Postgresql. I installed Postgresql using an rpm. my problem is that when i issue the service postgresql start command the database is failing to start. Where can i get the error logs for the database so that i can see what is the problem. Is there a another way of starting it Thanks -- Lost time is when we learn nothing from the experiences of life. Time gained is when we grow to have a wisdom that is tested in the reality of life. ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Record Types Structure in PL/pgSQL
Hello, it isn't possible in plpgsql. Try plperl or plpython instead. Regards Pavel Stehule 2007/6/6, Diego Sanchez [EMAIL PROTECTED]: Hi there. Is there any way of determining the actual structure of a record variable? E.g. I've written a small script to do some calculations over some fields with a dynamically generated query. It looks like this: create function foo(text) returns void as $$ declare a_record record; my_query alias for $1; begin for a_record in execute my_query loop -- Do some calculations end loop; return; end; $$ language plpgsql; The question is: how could I possibly get the field names and other information about the record a_record? I appreciate any suggestions or tips about this. Best regards. ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
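A rough sketch of Pavel's suggestion in PL/Perl (untested; the function name is kept from the original, everything else is illustrative). Each row comes back as a Perl hash reference, so the field names are simply the hash keys:

```sql
CREATE FUNCTION foo(text) RETURNS void AS $$
    my ($query) = @_;
    my $rv = spi_exec_query($query);
    for my $row (@{ $rv->{rows} }) {
        # The hash keys are the column names of the result row
        for my $field (sort keys %$row) {
            elog(NOTICE, "$field = $row->{$field}");
        }
    }
$$ LANGUAGE plperl;
```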
Re: [GENERAL] Limitations on 7.0.3?
ARTEAGA Jose wrote: I have been using postgres 7.0.3 (Solaris 2.9) for the past 4 years without any major problems, until about a month ago. We are now experiencing crashes on the backend (connection lost to backend) while running queries (inserts, etc). Anyway I don't want to make this too technical but I'd just like to understand if maybe I'm running into some sort of limit on the size of the database. My biggest table is currently at 1.5B tuples. Would appreciate if anyone could let me know or is aware of any limits with 7.0 version. I don't remember any specific size limitations on the 7.0 series. For more detailed help you'll have to provide some specific error messages. Is there any chance you could move to a more recent version in the near future? You'll get much better support with a more recent version (there's just that many more users). You should also notice some substantial performance improvements compared to 7.0. -- Richard Huxton Archonet Ltd ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] Limitations on 7.0.3?
Richard Huxton wrote: ARTEAGA Jose wrote: I have been using postgres 7.0.3 (Solaris 2.9) for the past 4 years without any major problems, until about a month ago. We are now experiencing crashes on the backend (connection lost to backend) while running queries (inserts, etc). Anyway I don't want to make this too technical but I'd just like to understand if maybe I'm running into some sort of limit on the size of the database. My biggest table is currently at 1.5B tuples. Would appreciate if anyone could let me know or is aware of any limits with 7.0 version. I don't remember any specific size limitations on the 7.0 series. For more detailed help you'll have to provide some specific error messages. 7.0 didn't have any protection against Xid wraparound. As soon as you hit the 4 billion transactions mark, your data suddenly disappeared. That's what I heard at least -- I didn't have much experience with such an old beast. We switched rather quickly to 7.1. -- Alvaro Herrera http://www.amazon.com/gp/registry/CTMLCN8V17R4 Before you were born your parents weren't as boring as they are now. They got that way paying your bills, cleaning up your room and listening to you tell them how idealistic you are. -- Charles J. Sykes' advice to teenagers ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
[GENERAL] setting login database
Hi, I am very new to postgres, so if I am asking a stupid question please forgive me. I installed postgres, imported the required databases, and created the necessary users. When users log in using psql, they are required to supply the database name. We are trying to access a postgres database using a data integration tool. This data integration tool has options to specify the hostname, port, userid, and passwd of a database, but there is no way to supply the database name. So, I am thinking of setting a default login database for each user so that they do not need to supply that info when they access data using the data integration tool. How can I accomplish that? Thank you. Cheers, Samatha -- Samatha Kottha Zentrum für Informationsdienste und Hochleistungsrechnen (ZIH) Technische Universität Dresden Tel: (+49) 351 463-38776 Room 1019 Fax: (+49) 351 463-38245 Noethnitzer Straße 46 01187 Dresden Germany ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] NULLS and User Input WAS Re: multimaster
Lew [EMAIL PROTECTED] writes: Where NULL differs is that (NULL = NULL) is FALSE, and (NULL != NULL) is FALSE. No, that's not true. NULL=NULL is NULL. And NULL!=NULL is NULL as well. Ie, it's exactly as your table describes. The confusion comes because WHERE clauses treat NULL the same as they treat FALSE, ie, they exclude the row. But unless you can come up with a way for a SELECT clause to not tell you whether it's including a row or not (ie, whether it includes the row is unknown) then it's got to pick one or the other. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
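A quick psql sanity check of Gregory's point (results shown as I'd expect them per the SQL spec; not run here):

```sql
SELECT NULL = NULL;             -- yields NULL, not false
SELECT NULL <> NULL;            -- also NULL
SELECT (NULL = NULL) IS NULL;   -- true

-- WHERE treats a NULL result like false, which is why the cross join
-- in the earlier example returns no rows:
SELECT 1 WHERE NULL = NULL;     -- zero rows
```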
Re: [GENERAL] setting login database
On Thu, Jun 07, 2007 at 03:38:15PM +0200, Samatha Kottha wrote: We are trying to access a postgres database using a data integration tool. This data integration tool have options to specify the hostname, port, userid, and passwd of a database. But there is no way to supply the database name. What data integration tool is it? Are you sure it doesn't have an option to specify the database? So, I am thinking to set the database name to each user (default login database) so that they do not need to supply that info when they access data using data integration tool. In libpq applications a role's default database has the same name as the role; you can override that with the PGDATABASE environment variable. http://www.postgresql.org/docs/8.2/interactive/libpq-envars.html See also PGSERVICE, PGSYSCONFDIR, and the connection service file. http://www.postgresql.org/docs/8.2/interactive/libpq-pgservice.html -- Michael Fuhr ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [pgsql-advocacy] [GENERAL] Looking for Graphical people for PostgreSQL tradeshow signage
Alexander, I tried to find the original vector graphics for the PostgreSQL elephant logo, but couldn't find it anywhere. I think that's a prerequisite. I would not mind having a stab at it. www.pgfoundry.org/projects/graphics look under docs, there are AI files. --Josh ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] insane index scan times
On Sun, Jun 03, 2007 at 11:29:07PM -0700, Sergei Shelukhin wrote: I wonder what exactly makes index perform 100+ times slower than seqscan - I mean even if it's perfromed on the HD which it should not be given the index size, index and table are on the same HD and index is smaller and also sorted, isn't it? Um, because if you scan the whole index you also have to scan the whole table, and you're going to scan the table and the index in random order, which is slower again. An index is faster for selecting a *portion* of the table, it's useless once you get to a significant percentage. However, recent versions have Bitmap index scans which are a middle ground, linear index scan, linear heap scan which has a much better worst case. So perhaps you're running a really old version of postgres, you didn't actually say what version you were running. Have a nice day, -- Martijn van Oosterhout [EMAIL PROTECTED] http://svana.org/kleptog/ From each according to his ability. To each according to his ability to litigate. signature.asc Description: Digital signature
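To see which plan the planner actually picks, and to compare the index path against the seqscan, EXPLAIN ANALYZE is the tool. A sketch (big_table and joined_id are hypothetical names, for illustration only):

```sql
EXPLAIN ANALYZE SELECT * FROM big_table WHERE joined_id < 1000;

SET enable_seqscan = off;  -- discourage seqscan, purely for comparison
EXPLAIN ANALYZE SELECT * FROM big_table WHERE joined_id < 1000;
```

On a version with bitmap scans, the second plan would typically show a Bitmap Index Scan feeding a Bitmap Heap Scan rather than a plain Index Scan.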
[GENERAL] cube for real[]
Hi, I have a table with one column of type real[]. Now if I want to make cubes out of each of these arrays, is there a way in Postgres I can do it? I guess the cube operator is not defined for real[], but can I create cubes if the column were integer[]? If yes, please may I know how. Thanks Abhang
Re: [GENERAL] Jumping Weekends
On Sun, Jun 03, 2007 at 11:11:01PM -0300, Ranieri Mazili wrote: Hello, (sorry for my poor english) It's my first post here, and my doubt is very simple (I guess). I have a function to populate a table, and inside a WHILE I have the following piece of code: To get a proper answer I think you're going to have to show the actual loop, not just these two lines. But in any case, what you're trying to do can be done more easily as: IF date_part('dow', PRODUCTION_DATE) = 6 THEN PRODUCTION_DATE := PRODUCTION_DATE + 2; END IF; (Please check the calling convention, I wrote this off the top of my head). Have a nice day, -- Martijn van Oosterhout [EMAIL PROTECTED] http://svana.org/kleptog/ From each according to his ability. To each according to his ability to litigate.
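The intent of that snippet can be illustrated with a small Python sketch (an illustration of the logic only, not the plpgsql itself; note that PostgreSQL's dow numbers Sunday as 0 and Saturday as 6, while Python's weekday() numbers Saturday as 5):

```python
from datetime import date, timedelta

def skip_saturday(production_date):
    # Mirrors: IF date_part('dow', PRODUCTION_DATE) = 6 THEN
    #              PRODUCTION_DATE := PRODUCTION_DATE + 2;
    # i.e. a date landing on Saturday jumps ahead two days to Monday.
    if production_date.weekday() == 5:  # Saturday in Python's numbering
        production_date += timedelta(days=2)
    return production_date

# 2007-06-09 was a Saturday, so it jumps to Monday 2007-06-11;
# a weekday passes through unchanged.
```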
Re: [GENERAL] Auto Vacuum question?
On Mon, Jun 04, 2007 at 04:15:12AM -0700, [EMAIL PROTECTED] wrote: after i executed the command vacuumdb -f -z testdb, the size just decreased 1 or 2MB... what's the problem? vacuumdb -f does a FULL vacuum, which is blocking and compacts the tables. If it only compacted 1 or 2 MB, then there are two possibilities: 1. You really do have that much data. You haven't told us anything about the data, what it looks like, how wide the tables are, whether you have large objects &c., whether the character set is multibyte. . . 2. You have at least one long-running transaction that is perhaps doing nothing, but that is preventing VACUUM from recovering space. What does ps -auxww | grep postgres (or something equivalent) show you? A -- Andrew Sullivan | [EMAIL PROTECTED] Users never remark, Wow, this software may be buggy and hard to use, but at least there is a lot of code underneath. --Damien Katz ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Limitations on 7.0.3?
On Wed, Jun 06, 2007 at 02:40:08PM -0500, ARTEAGA Jose wrote: I have been using postgres 7.0.3 (Solaris 2.9) for the past 4 years without any major problems, until about a month ago. We are now experiencing crashes on the backend (connection lost to backend) while running queries (inserts, etc). Anyway I don't want to make this too technical but I'd just like to understand if maybe I'm running into some sort of limit on the size of the database. My biggest table is currently at 1.5B tuples. I don't believe it's a size limit. But 7.0 is pre-WAL. Are you running with the -F switch turned on, for performance? It could well be that you are running into data corruption that crashes the database. (The lack of WAL is not the only reason you might be running into that. 7.0 is a long time ago, and there are a lot of bugs that have been squished since then.) A -- Andrew Sullivan | [EMAIL PROTECTED] However important originality may be in some fields, restraint and adherence to procedure emerge as the more significant virtues in a great many others. --Alain de Botton ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] cube for real[]
On Jun 7, 2007, at 10:27 , ABHANG RANE wrote: I have a table with one column as real[]. Now if I want to make cubes out of each of these arrays, is there a way in postgre I can do it. I guess cube operator is not defined for real[] but can I create cubes if the column was integer[]. If yes please may I know how. There is a cube contrib module (see contrib/cube). I don't know if this does what you want as I don't really understand what you're doing with it, but you might want to take a look. Michael Glaesemann grzm seespotcode net ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[GENERAL] the perfect mail archival system
Hi, Check this out: http://thread.gmane.org/gmane.comp.db.postgresql.bugs/14175 It's a thread that started on pgsql-bugs and that I later moved to pgsql-patches and later pgsql-hackers (and pgsql-bugs wasn't copied). The cool thing is that it kept the link properly, so you can see the whole thread regardless of the fact that it happened on three separate lists. How cool is that? -- Alvaro Herrera http://www.CommandPrompt.com/ PostgreSQL Replication, Consulting, Custom Development, 24x7 support ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
[GENERAL] querying the age of a row
Greetings, I've got a PostgreSQL-8.1.x database on a Linux box. I have a need to determine which rows in a specific table are less than 24 hours old. I've tried (and failed) to do this with the age() function. From what I can tell, age() only has granularity down to days, and seems to assume that anything matching today's date is less than 24 hours old, even if there are rows from yesterday's date that existed less than 24 hours ago. I've googled on this off and on for a few days, and have come up dry. At any rate, is there a reliable way of querying a table for rows which have existed for a specific period of time? -- ~ L. Friedman[EMAIL PROTECTED] LlamaLand http://netllama.linux-sxs.org ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
[GENERAL] should the postgres user have a password?
Hi, I have never given postgres a password before, but one of my clients put one on his postgres user - I suppose so he could use the postgres user from phppgadmin (not my choice!). But now I can't see how to get my backup scripts to work... can I put the password for tools like pg_dumpall (or psql, or whatever) on the command line? There is no option in the docs... Would it be better to remove the password (if so, any pointers, I couldn't find that either!) and make postgres only able to connect via pipes? Thanks again, Anton ps. I did google but obviously couldn't find the right combo of keywords...
Re: [GENERAL] should the postgres user have a password?
Sorry .pgpass :-( Anton
Re: [GENERAL] querying the age of a row
Interesting issue -- I have usually solved this by adding a specific field to each table with a default timestamp of NOW()... When you: CREATE TABLE tbl ( blah... blah create_dt TIMESTAMP NOT NULL DEFAULT NOW() ); each and every record now has a timestamp of exactly when the row was created -- then it is a simple query to select, update, or delete WHERE create_dt > (NOW() - interval '1 day')... HTH Lonni J Friedman [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED] Greetings, I've got a PostgreSQL-8.1.x database on a Linux box. I have a need to determine which rows in a specific table are less than 24 hours old. I've tried (and failed) to do this with the age() function. From what I can tell, age() only has granularity down to days, and seems to assume that anything matching today's date is less than 24 hours old, even if there are rows from yesterday's date that existed less than 24 hours ago. I've googled on this off and on for a few days, and have come up dry. At any rate, is there a reliable way of querying a table for rows which have existed for a specific period of time? -- ~ L. Friedman[EMAIL PROTECTED] LlamaLand http://netllama.linux-sxs.org ---(end of broadcast)--- TIP 6: explain analyze is your friend
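The same create_dt cutoff can be sketched outside the database (hypothetical row tuples; the cutoff arithmetic mirrors NOW() - interval '1 day'):

```python
from datetime import datetime, timedelta

def rows_newer_than_a_day(rows, now):
    # rows: iterable of (id, create_dt) pairs.
    # Equivalent of: WHERE create_dt > (NOW() - interval '1 day')
    cutoff = now - timedelta(days=1)
    return [r for r in rows if r[1] > cutoff]
```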
Re: [GENERAL] should the postgres user have a password?
Anton Melser wrote: Hi, I have never given postgres a password before, but one of my clients put one on his postgres user - I suppose so he could use the postgres user from phppgadmin (not my choice!). But now I can't see how to get my backup scripts to work... can I put the password for tools like pg_dumpall (or psql, or whatever) on the command line? There is no option in the docs... Would it be better to remove the password (if so, any pointers, I couldn't find that either!) and make postgres only able to connect via pipes? Thanks again, Anton ps. I did google but obviously couldn't find the right combo of keywords... Also, you can create a user for backups, give them superuser powers, and make an entry in your pg_hba.conf file so that user, from a specific machine/IP, can connect without a password.
Re: [GENERAL] querying the age of a row
Unfortunately, it's too late now. The database (and its tables) have been around for a while, so even if I added this column, it wouldn't help me for the thousands of pre-existing rows. Thanks though. On 6/7/07, codeWarrior [EMAIL PROTECTED] wrote: Interesting issue -- I have usually solved this by adding a specific field to each table with a default timestamp of NOW()... When you: CREATE TABLE tbl ( blah... blah create_dt TIMESTAMP NOT NULL DEFAULT NOW() ); each and every record now has a timestamp of exactly when the row was created -- then it is a simple query to select, update, or delete WHERE create_dt > (NOW() - interval '1 day')... HTH Lonni J Friedman [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED] Greetings, I've got a PostgreSQL-8.1.x database on a Linux box. I have a need to determine which rows in a specific table are less than 24 hours old. I've tried (and failed) to do this with the age() function. From what I can tell, age() only has granularity down to days, and seems to assume that anything matching today's date is less than 24 hours old, even if there are rows from yesterday's date that existed less than 24 hours ago. I've googled on this off and on for a few days, and have come up dry. At any rate, is there a reliable way of querying a table for rows which have existed for a specific period of time? -- ~ L. Friedman[EMAIL PROTECTED] LlamaLand http://netllama.linux-sxs.org
[GENERAL] Generate random password
Can anyone suggest how one might be able to do this? I want to be able to generate an 8 character random password for users in their password field. Perhaps through the default setting of the field or a trigger function. I found the following, but is there anything that can be used on both Windows and *nix or can this be used on Windows somehow? http://pgfoundry.org/forum/forum.php?forum_id=994 -- Robert ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] Generate random password
Robert Fitzpatrick wrote: Can anyone suggest how one might be able to do this? I want to be able to generate an 8 character random password for users in their password field. Perhaps through the default setting of the field or a trigger function. I found the following, but is there anything that can be used on both Windows and *nix or can this be used on Windows somehow? http://pgfoundry.org/forum/forum.php?forum_id=994 If you don't need something that's actually secure, you can do it trivially in PL/pgSQL. Here's what I use, for example: CREATE FUNCTION generate_random_password() RETURNS text AS $$ DECLARE j int4; result text; allowed text; allowed_len int4; BEGIN allowed := '23456789abcdefghjkmnpqrstuvwxyzABCDEFGHJKMNPQRSTUVWXYZ#%@'; allowed_len := length(allowed); result := ''; WHILE length(result) < 16 LOOP j := int4(random() * allowed_len); result := result || substr(allowed, j+1, 1); END LOOP; RETURN result; END; $$ LANGUAGE plpgsql; It's not fast (but how many thousands are you generating per second anyway), and it's not really secure, but it works :) (Note that the function explicitly excludes characters like I, 1 and l because they look too similar) //Magnus
Re: [GENERAL] Generate random password
Robert Fitzpatrick wrote: Can anyone suggest how one might be able to do this? I want to be able to generate an 8 character random password for users in their password field. Perhaps through the default setting of the field or a trigger function. I found the following, but is there anything that can be used on both Windows and *nix or can this be used on Windows somehow? http://pgfoundry.org/forum/forum.php?forum_id=994 Here's a simple function I've used so I can do it in the database. Written with help from this very list ;-) CREATE OR REPLACE FUNCTION gen_password() RETURNS text AS $$ DECLARE password text; chars text; BEGIN password := ''; chars := 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'; FOR i IN 1..8 LOOP password := password || SUBSTRING(chars, ceil(random()*LENGTH(chars))::integer, 1); END LOOP; return password; END; $$ LANGUAGE plpgsql; Then you can do stuff like: update people set pp_password = gen_password() where pp_password is null; Jeff ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
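For comparison, the same idea as those two functions in a Python sketch (random.choice is not cryptographically secure either, just like random() in the plpgsql versions; use a dedicated secure generator if security matters):

```python
import random

def gen_password(length=8,
                 allowed='23456789abcdefghjkmnpqrstuvwxyz'
                         'ABCDEFGHJKMNPQRSTUVWXYZ'):
    # Pick `length` characters at random from an alphabet that, like
    # the first function above, omits look-alikes such as 0/O and 1/l/I.
    return ''.join(random.choice(allowed) for _ in range(length))
```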
Re: [GENERAL] querying the age of a row
Lonni J Friedman wrote: Unfortunately, its too late now. The database (and its tables) have been around for a while, so even if I added this column, it wouldn't help me for the thousands of pre-existing rows. Thanks though. Please don't top-post. Assuming that the majority of the pre-existing rows are more than 24 hours old, there probably wouldn't be any harm in altering the table with the new column and assigning all present rows the timestamp for the moment you do this. Then just wait at least 24 hours before doing whatever it is you wish to do. Not perfect, but it doesn't seem like it would be a problem. brian ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: Creditcard Number Security was Re: [GENERAL] Encrypted column
On Tue, 2007-06-05 at 16:51 -0400, Andrew Sullivan wrote: On Tue, Jun 05, 2007 at 07:29:02PM +0100, Peter Childs wrote: Unfortunately you still need to store them somewhere, and all systems can be hacked. Yes. I agree, in principle, that don't store them is the best advice -- this is standard _Translucent Databases_ advice, too. For the least-stealable data is the data you don't have. But if there is a business case, you have to do the trade off. And security is always a tradeoff (to quote Schneier); just do it well. (Someone else's advice about hiring a security expert to audit this sort of design is really a good idea.) A Have you thought about setting up an account with PayPal, and having people pay through PayPal? Let PayPal deal with the security, and credit card info, after all it's what they do. ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] querying the age of a row
Lonni J Friedman wrote: Unfortunately, it's too late now. The database (and its tables) have been around for a while, so even if I added this column, it wouldn't help me for the thousands of pre-existing rows. Thanks though. The answer to your original question is you can't. That info isn't stored by the database. -- Alvaro Herrera http://www.CommandPrompt.com/ PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Re: [GENERAL] querying the age of a row
Lonni J Friedman wrote: I have a need to determine which rows in a specific table are less than 24 hours old. I've tried (and failed) to do this with the age() function. And on the suggestion of a timestamp column with DEFAULT NOW(): Unfortunately, it's too late now. The database (and its tables) have been around for a while, so even if I added this column, it wouldn't help me for the thousands of pre-existing rows. Er, if you have no such column, what are you testing the AGE() of? Can you take the age of a row? I can't find that in the docs. In any event, you say you need to know when a row is less than 24 hours old - that is presumably not an issue for these old rows. I would add the column as suggested, but set it to some time in the past for the existing rows. Or, you can set it to NULL, appropriate if you interpret NULL as unknown, and test the age with something like this: where age(coalesce(ts, '-infinity'::timestamp)) < '24 hours'::interval - John Burger MITRE
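The NULL-as-unknown trick can be modelled in Python, with None standing in for NULL and datetime.min standing in for '-infinity' (an illustration of the coalesce logic; the function name is hypothetical):

```python
from datetime import datetime, timedelta

def is_recent(ts, now, window=timedelta(hours=24)):
    # coalesce(ts, '-infinity') -> treat an unknown timestamp as
    # infinitely old, so it never counts as "less than 24 hours old".
    effective = ts if ts is not None else datetime.min
    return (now - effective) < window
```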
Re: Creditcard Number Security was Re: [GENERAL] Encrypted column
Guy Fraser wrote: On Tue, 2007-06-05 at 16:51 -0400, Andrew Sullivan wrote: Yes. I agree, in principle, that don't store them is the best advice -- this is standard _Translucent Databases_ advice, too. For the least-stealable data is the data you don't have. But if there is a business case, you have to do the trade off. And security is always a tradeoff (to quote Schneier); just do it well. (Someone else's advice about hiring a security expert to audit this sort of design is really a good idea.) Have you thought about setting up an account with PayPal, and having people pay through PayPal? Let PayPal deal with the security, and credit card info, after all it's what they do. at the day job, when we switched from paypal (who we found very undependable) to authorize.net, we were very pleased to discover that authorize.net would take care of the credit card numbers for us, so we didn't have to try to secure them beyond the usual requirements while the numbers are in transit. i would definitely recommend outsourcing for this if at all possible. richard ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Join field values
veejar [EMAIL PROTECTED] writes: Hello! I have such field in my table: field1 --- 1 2 3 4 5 I want to get such result from select: '1,2,3,4,5' - join all values in field1 with ',' // result must be varchar. No sense in writing your own func for this; the feature is already provided. select array_to_string(array(select * from generate_series(1,5)), ','); array_to_string - 1,2,3,4,5 (1 row) Help to write SELECT-query for this task. -- --- Jerry Sievers 732 365-2844 (work) Production Database Administrator 305 321-1144 (mobil WWW E-Commerce Consultant ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
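The Python analogue of that one-liner, for readers following along outside the database (str.join plays the role of array_to_string, range the role of generate_series):

```python
def array_to_string(values, sep):
    # Same contract as PostgreSQL's array_to_string(anyarray, text):
    # join the elements' text representations with the separator.
    return sep.join(str(v) for v in values)

# array_to_string(range(1, 6), ',') gives '1,2,3,4,5'
```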
Re: [GENERAL] Inheritance and shared sequence
On Thu, 2007-06-07 at 09:44 +0100, Richard Huxton wrote: Sebastjan Trepca wrote: Hi, as I understood, when you create a table which inherits some other table, the constraints and indexes do not go to the child table hence you have to create a separate ones in there. That means you cannot depend that you won't have duplicate IDs in both tables. Right? BUT...what if child table and parent table both use the same sequence for the primary key. Would the duplication still be an issue? Well, if you *always* use the sequence you'll be OK (until you run out of numbers), but it won't stop you manually supplying your own values. If you must keep the primary key unique across a set of tables, you need to create another table to index the keys and record which table each key is in. Use triggers to keep the index table up to date. -- Oliver Elphick [EMAIL PROTECTED] Isle of Wight http://www.lfix.co.uk/oliver GPG: 1024D/A54310EA 92C8 39E7 280E 3631 3F0E 1EC0 5664 7A2F A543 10EA Do you want to know God? http://www.lfix.co.uk/knowing_god.html ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Suppress checking of chmod 700 on data-dir?
On Thu, 2007-06-07 at 12:57 +0200, Johannes Konert wrote: use sudo in your backup scenario, or run your backup as postgres Thanks for your quick reply. Unfortunately running the backup via sudo is not an option due to security issues, and using the postgres user is not feasible because other data is backed up as well, which the postgres user should not have access to. You could run the backup as postgres and pipe the output to another program owned by the other user and with suid set in its permissions. The suid means that the receiving program would have access where you don't want postgres to go. -- Oliver Elphick [EMAIL PROTECTED] Isle of Wight http://www.lfix.co.uk/oliver GPG: 1024D/A54310EA 92C8 39E7 280E 3631 3F0E 1EC0 5664 7A2F A543 10EA Do you want to know God? http://www.lfix.co.uk/knowing_god.html
Re: [GENERAL] Suppress checking of chmod 700 on data-dir?
Johannes Konert wrote: Ragnar wrote: are you planning a filesystem-level backup? As far as I understood the documentation of PostgreSQL 8.2.4 (http://www.postgresql.org/docs/8.2/interactive/continuous-archiving.html section 23.3.2) you can copy the files while postgres is running (respecting pg_start_backup and pg_stop_backup). But that is not my point. The question is where I can change the enforced chmod 700 postgresql always wants me to set. You can't. You can however change postgresql.conf to look for files somewhere besides $PGDATA and thus you would be able to back them up. Anything else in there you should be grabbing via pg_dump anyway. Joshua D. Drake Regards Johannes :) -- === The PostgreSQL Company: Command Prompt, Inc. === Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240 Providing the most comprehensive PostgreSQL solutions since 1997 http://www.commandprompt.com/ Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate PostgreSQL Replication: http://www.commandprompt.com/products/
[GENERAL] list all columns in db
Does anyone have a trick to list all columns in a db ? I need to audit a few dbs to make sure column table names are adhering to our standard semantic syntax. i figure there has to be an old pg-admin trick out there to display a db like %(tname)s . %(cname) or some similar format // Jonathan Vanasco | - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - | CEO/Founder SyndiClick Networks | - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - | FindMeOn.com - The cure for Multiple Web Personality Disorder | Web Identity Management and 3D Social Networking | - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - | RoadSound.com - Tools For Bands, Stuff For Fans | Collaborative Online Management And Syndication Tools | - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] index vs. seq scan choice?
George Pavlov wrote: From: Tom Lane [mailto:[EMAIL PROTECTED] George Pavlov [EMAIL PROTECTED] writes: to 977ms! If I go the other way and SET STATISTICS 1 (or 0) I can bring down the list to one entry (setting to 0 seems equivalent and still keeps the one most common entry!?) and I will get the Index scan for all states except for that one most common state. But, of course, I don't want to undermine the whole stats mechanism, I just want the system to use the index that is so helpful and brings runtimes down by a factor of 4-8! What am I missing here? In those rare cases wouldn't it make more sense to just set enable_seqscan to off; run query; set enable_seqscan to on; ?? Joshua D. Drake George ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq -- === The PostgreSQL Company: Command Prompt, Inc. === Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240 Providing the most comprehensive PostgreSQL solutions since 1997 http://www.commandprompt.com/ Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate PostgreSQL Replication: http://www.commandprompt.com/products/ ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] index vs. seq scan choice?
From: Joshua D. Drake [mailto:[EMAIL PROTECTED] In those rare cases wouldn't it make more sense to just set enable_seqscan to off; run query; set enable_seqscan to on; 1. these cases are not that rare (to me); 2. setting enable_seqscan (in JDBC, say) from the application makes the whole thing quite a mess (need to do a batch of statements: each query wrapped in its enable/disable seq scan?) -- ideally, one would like to issue mostly SQL statements, not config parameters from the application; 3. if this is the recommended suggestion on how to run queries then why don't we just add HINTS to the system and be done with it... ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] index vs. seq scan choice?
From: Tom Lane [mailto:[EMAIL PROTECTED] George Pavlov [EMAIL PROTECTED] writes: I am curious what could make the PA query to ignore the index. What are the specific stats that are being used to make this decision? you don't have the column's statistics target set high enough to track all the interesting values --- or maybe just not high enough to acquire sufficiently accurate frequency estimates for them. Take a look at the pg_stats row for the column ... (The default statistics target is 10, which is widely considered too low --- you might find 100 more suitable.) Well, it seems that it would be more beneficial for me to set it LOWER than the default 10. I get better performance if the stats are less accurate because then the optimizer seems more likely to choose the index! States that are in pg_stats.most_common_vals most often result in a Seq Scan, whereas ones that are not in it definitely get the Index Scan. For all states, even the largest ones (15% of the data), the Index Scan performs better. So, for example, with SET STATISTICS 10 my benchmark query in a state like Indiana (2981 rows, ~3% of total) runs in 132ms. If I SET STATISTICS 100, Indiana gets on the most_common_vals list for the column and the query does a Seq Scan and its run time jumps to 977ms! If I go the other way and SET STATISTICS 1 (or 0) I can bring down the list to one entry (setting to 0 seems equivalent and still keeps the one most common entry!?) and I will get the Index scan for all states except for that one most common state. But, of course, I don't want to undermine the whole stats mechanism, I just want the system to use the index that is so helpful and brings runtimes down by a factor of 4-8! What am I missing here? George
Re: [GENERAL] index vs. seq scan choice?
George Pavlov wrote: From: Joshua D. Drake [mailto:[EMAIL PROTECTED] In those rare cases wouldn't it make more sense to just set enable_seqscan to off; run query; set enable_seqscan to on; 1. these cases are not that rare (to me); I find that surprising. 2. setting enable_seqscan (in JDBC, say) from the application makes the whole thing quite a mess (need to do a batch of statements: each query wrapped in its enable/disable seq scan?) -- ideally, one would like to issue mostly SQL statements, not config parameters from the application; Uh no. You do it at the beginning of the transaction, run your queries then reset it right before (or after) commit. 3. if this is the recommended suggestion on how to run queries then why don't we just add HINTS to the system and be done with it... I suggest you read the archives, twice, before suggesting hints. -- === The PostgreSQL Company: Command Prompt, Inc. === Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240 Providing the most comprehensive PostgreSQL solutions since 1997 http://www.commandprompt.com/ Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate PostgreSQL Replication: http://www.commandprompt.com/products/ ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] list all columns in db
Jonathan Vanasco wrote: Does anyone have a trick to list all columns in a db ? SELECT * FROM INFORMATION_SCHEMA.COLUMNS
Re: [GENERAL] index vs. seq scan choice?
George Pavlov [EMAIL PROTECTED] writes: From: Joshua D. Drake [mailto:[EMAIL PROTECTED] In those rare cases wouldn't it make more sense to just set enable_seqscan to off; run query; set enable_seqscan to on; 1. these cases are not that rare (to me); It strikes me that you probably need to adjust the planner cost parameters to reflect reality on your system. Usually dropping random_page_cost is the way to bias the thing more in favor of index scans. regards, tom lane ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] list all columns in db
Jonathan Vanasco wrote: Does anyone have a trick to list all columns in a db ? No trickery, just exploit the availability of the SQL standard information_schema views: select table_schema, table_name, column_name from information_schema.columns where table_schema not in ('pg_catalog','information_schema') order by 1,2,3 If you want an equivalent that uses pg_catalog (non-portable outside of PostgreSQL) you could instead do: select n.nspname as table_schema, c.relname as table_name, a.attname as column_name from pg_catalog.pg_attribute a join pg_catalog.pg_class c on (a.attrelid = c.oid) join pg_catalog.pg_namespace n on (c.relnamespace = n.oid) where c.relkind in ('r','v') and a.attnum 0 and n.nspname not in ('pg_catalog','information_schema') order by 1,2,3 -Jon -- Senior Systems Developer Media Matters for America http://mediamatters.org/ ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] index vs. seq scan choice?
From: Tom Lane George Pavlov [EMAIL PROTECTED] writes: From: Joshua D. Drake [mailto:[EMAIL PROTECTED] In those rare cases wouldn't it make more sense to just set enable_seqscan to off; run query; set enable_seqscan to on; 1. these cases are not that rare (to me); It strikes me that you probably need to adjust the planner cost parameters to reflect reality on your system. Usually dropping random_page_cost is the way to bias the thing more in favor of index scans. Thanks, Tom, I will try that. Seems better than fiddling with enable_seqscan around every query/transaction. Joshua, I fail to understand why setting and unsetting enable_seqscan on a per query/transaction basis is in any way preferable to query hints? Don't get me wrong, I don't like the idea of hints, and I have read the archives on the subject and I agree with the philosophy, but if the optimization toolkit for routine application queries is going to include setting config parameters that just smacks of hints by another name... George ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] index vs. seq scan choice?
George Pavlov wrote: From: Tom Lane George Pavlov [EMAIL PROTECTED] writes: From: Joshua D. Drake [mailto:[EMAIL PROTECTED] In those rare cases wouldn't it make more sense to just set enable_seqscan to off; run query; set enable_seqscan to on; 1. these cases are not that rare (to me); It strikes me that you probably need to adjust the planner cost parameters to reflect reality on your system. Usually dropping random_page_cost is the way to bias the thing more in favor of index scans. Thanks, Tom, I will try that. Seems better than fiddling with enable_seqscan around every query/transaction. Joshua, I fail to understand why setting and unsetting enable_seqscan on a per query/transaction basis is in any way preferable to query hints? Don't get me wrong, I don't like the idea of hints, and I have read the archives on the subject and I agree with the philosophy, but if the optimization toolkit for routine application queries is going to include setting config parameters that just smacks of hints by another name... I actually have zero opinion on hints, my comment was more about opening the wasps' nest of the hints discussion more than anything :) Joshua D. Drake George -- === The PostgreSQL Company: Command Prompt, Inc. === Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240 Providing the most comprehensive PostgreSQL solutions since 1997 http://www.commandprompt.com/ Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate PostgreSQL Replication: http://www.commandprompt.com/products/ ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] [ANNOUNCE] Advisory on possibly insecure security definer functions
Thanks for the info Ron Coffin, Lab Manager School of Computer and Engineering Technologies Miami Dade College, North Campus 11380 N.W. 27th Avenue Miami, Florida 33167 Email: [EMAIL PROTECTED] Phone: 305 237-1054 Fax: 305 237-1531 Please Note: Due to Florida's very broad public records law, most written communications to or from College employees regarding College business are public records, available to the public and media upon request. Therefore, this e-mail communication may be subject to public disclosure -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Peter Eisentraut Sent: Tuesday, February 13, 2007 6:46 PM To: [EMAIL PROTECTED] Cc: pgsql-general@postgresql.org Subject: [ANNOUNCE] Advisory on possibly insecure security definer functions It has come to the attention of the core team of the PostgreSQL project that insecure programming practice is widespread in SECURITY DEFINER functions. Many of these functions are exploitable in that they allow users that have the privilege to execute such a function to execute arbitrary code with the privileges of the owner of the function. The SECURITY DEFINER property of functions is a special non-default property that causes such functions to be executed with the privileges of their owner rather than with the privileges of the user invoking the function (the default mode, SECURITY INVOKER). Thus, this mechanism is very similar to the setuid mechanism in Unix operating systems. Because SQL object references in function code are resolved at run time, any references to SQL objects that are not schema qualified are resolved using the schema search path of the session at run time, which is under the control of the calling user. 
By installing functions or operators with appropriate signatures in other schemas, users can then redirect any function or operator call in the function code to implementations of their choice, which, in case of SECURITY DEFINER functions, will still be executed with the function owner privileges. Note that even seemingly innocent invocations of arithmetic operators are affected by this issue, so it is likely that a large fraction of all existing functions are exploitable. The proper fix for this problem is to insert explicit SET search_path commands into each affected function to produce a known safe schema search path. Note that using the default search path, which includes a reference to the $user schema, is not safe when unqualified references are intended to be found in the public schema and $user schemas exist or can be created by other users. It is also not recommended to rely on rigorously schema-qualifying all function and operator invocations in function source texts, as such measures are likely to induce mistakes and will furthermore make the source code harder to read and maintain. This problem affects all existing PostgreSQL releases since version 7.3. Because this situation is a case of poor programming practice in combination with a design mistake and inadequate documentation, no security releases of PostgreSQL will be made to address this problem at this time. Instead, all users are urged to promptly correct their code as described above. Appropriate technological fixes for this problem are being investigated for inclusion with PostgreSQL 8.3. ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: Creditcard Number Security was Re: [GENERAL] Encrypted column
Richard P. Welty wrote: Guy Fraser wrote: Have you thought about setting up an account with PayPal, and having people pay through PayPal? Let PayPal deal with the security, and credit card info, after all it's what they do. at the day job, when we switched from paypal (who we found very undependable) to authorize.net, we were very pleased to discover that authorize.net would take care of the credit card numbers for us, so we didn't have to try to secure them beyond the usual requirements while the numbers are in transit. i would definitely recommend outsourcing for this if at all possible. Paypal has a perception issue - they are perceived as being tightly linked with eBay. That's a problem in the corporate arena. If my stock broker were to tell me they do all their financial transactions through Paypal, I'd probably wonder if they were a legitimate corporation. Do any of these outsourcers indemnify corporate customers against fraud or data loss? -- Guy Rouillier ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: Creditcard Number Security was Re: [GENERAL] Encrypted column
Guy Rouillier wrote: Richard P. Welty wrote: Guy Fraser wrote: Paypal has a perception issue - they are perceived as being tightly linked with eBay. That's a problem in the corporate arena. If my stock broker were to tell me they do all their financial transactions through Paypal, I'd probably wonder if they were a legitimate corporation. Do any of these outsourcers indemnify corporate customers against fraud or data loss? The problem with paypal is it is NOT a bank and is not held by the same regulations, insurance standards or liabilities as banks. Joshua D. Drake -- === The PostgreSQL Company: Command Prompt, Inc. === Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240 Providing the most comprehensive PostgreSQL solutions since 1997 http://www.commandprompt.com/ Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate PostgreSQL Replication: http://www.commandprompt.com/products/ ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: Creditcard Number Security was Re: [GENERAL] Encrypted column
On Jun 7, 2007, at 4:03 PM, Richard P. Welty wrote: at the day job, when we switched from paypal (who we found very undependable) to authorize.net, we were very pleased to discover that authorize.net would take care of the credit card numbers for us, so we didn't have to try to secure them beyond the usual requirements while the numbers are in transit. Can you provide a pointer to the documentation where you set things up for repeated future transactions against the same card? Thanks, John DeSoi, Ph.D. http://pgedit.com/ Power Tools for PostgreSQL ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
[GENERAL] cube problem
Hi, I have a table with one column as real[]. Now if I want to make cubes out of each of these arrays, is there a way in PostgreSQL I can do it? I guess the cube operator is not defined for real[], but can I create cubes if the column were integer[]? If yes, please may I know how. Actually I would pass a real[] array to a function. I want to create a cube out of this so that, somehow, I can calculate the distance between this cube and each row in the table which has a column of real[]. So I would have to create a cube for each row in the table. I can write a function which accepts an array and calculates such a distance in C, but that's too slow since there are 10 million rows. Thanks Abhang ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
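For what it's worth, the distance contrib/cube reports between two point cubes (cube_distance) is, as far as I recall, plain Euclidean distance, so it may help to see the computation outside the database. A minimal Python sketch (the function name cube_distance here is my own stand-in, not a database call):

```python
import math

def cube_distance(a, b):
    """Euclidean distance between two equal-length point vectors,
    i.e. what contrib/cube's cube_distance() computes for point cubes."""
    if len(a) != len(b):
        raise ValueError("dimension mismatch")
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

print(cube_distance([0.0, 0.0], [3.0, 4.0]))  # → 5.0
```

On the SQL side, the cube module can, if memory serves, build a cube from a float8[] with cube(ARRAY[...]), so a real[] column would likely need a cast to float8[] first.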
Re: [GENERAL] query log corrupted-looking entries
I tried the patch and it has no effect whatsoever -- even with the patch, under the correct load the corrupted entries are coming fast and furious (I found a load profile on my app that reproduces these very clearly). Here are a few other observations for what they are worth: The problem seems very easy to reproduce on my production-like environment: 16GB memory, 4 CPUs, RedHat, only DB running on that machine, the DB is accessed by 4 appservers, running on 2 other machines, each of the 4 appservers configured with up to 20 connections in the pool, incoming connections load balanced among the appservers. Conversely, the problem is very hard (but not impossible) to reproduce on a lesser environment: 4GB mem, 2 CPUs, Fedora Core, DB and 1 appserver running on same machine (and competing for resources), appserver still configured for up to 20 connections. The problem only happens when I put a bit of a load on the application, not necessarily a lot of connections, but a steady amount of requests per second -- a few simulated users hammering on it without pauses results in at least one corrupted line every couple of seconds. So it seems to me related to how many processes are writing at the same time uninterrupted. Anything else I can do to diagnose? -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of George Pavlov Sent: Saturday, June 02, 2007 11:33 AM To: Tom Lane Cc: Ed L.; pgsql-general@postgresql.org Subject: Re: [GENERAL] query log corrupted-looking entries From: Tom Lane [mailto:[EMAIL PROTECTED] George Pavlov [EMAIL PROTECTED] writes: ... Also redirect_stderr = on. Hm. Well, that's the bit that ought to get you into the PIPE_BUF exception. There's been some speculation that a change like the attached would help. I've found that it makes no difference with my libc, but maybe yours is different --- want to try it? I will. 
I may need some time though, since I first need to find a way to reproduce the problem reliably on my test environments and right now I cannot seem to. I have seen the problem mostly under production loads (also under certain kinds of stress-testing, but I forget exactly which kind...) In the meantime I went and looked at the logs in more detail and the corrupted entries seem much more prevalent than what I originally thought. Apart from the ones pgfouine complains about there are many more such lines. For example out of a (average-load) day's log file with 17+ million lines pgfouine complains about 8 lines, but there are in fact 1400+ lines with these kinds of entries. George ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: Creditcard Number Security was Re: [GENERAL] Encrypted column
John DeSoi wrote: On Jun 7, 2007, at 4:03 PM, Richard P. Welty wrote: at the day job, when we switched from paypal (who we found very undependable) to authorize.net, we were very pleased to discover that authorize.net would take care of the credit card numbers for us, so we didn't have to try to secure them beyond the usual requirements while the numbers are in transit. Can you provide a pointer to the documentation where you set things up for repeated future transactions against the same card? i will ask the developer who set it up. i do know that we're handling monthly recurring credit card billing via authorize.net in a very reliable way with no card numbers on any of our systems. richard ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] query log corrupted-looking entries
George Pavlov [EMAIL PROTECTED] writes: I tried the patch and it has no effect whatsoever -- even with the patch, under the correct load the corrupted entries are coming fast and furious (I found a load profile on my app that reproduces these very clearly). What are the total lengths of the log entries in which you see the failure? (The length here includes all the lines belonging to a single logical entry, eg, ERROR, DETAIL, HINT.) I believe that this shouldn't be happening except in the case in which the entry-interpolated-into exceeds PIPE_BUF bytes. I'm not entirely sure what PIPE_BUF is on Linux machines, but IIRC the Posix spec says it has to be at least 512, and on older Unix kernels it tends to be 8K. regards, tom lane ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: Creditcard Number Security was Re: [GENERAL] Encrypted column
Joshua D. Drake wrote: Guy Rouillier wrote: Richard P. Welty wrote: Guy Fraser wrote: Paypal has a perception issue - they are perceived as being tightly linked with eBay. That's a problem in the corporate arena. If my stock broker were to tell me they do all their financial transactions through Paypal, I'd probably wonder if they were a legitimate corporation. Do any of these outsourcers indemnify corporate customers against fraud or data loss? The problem with paypal is it is NOT a bank and is not held by the same regulations, insurance standards or liabilities as banks. My bank is also not a bank (they say they are not FDIC insured which I think is the actual problem at hand). Do I have to be worried? -- Alvaro Herrera http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc. ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] query log corrupted-looking entries
Tom Lane wrote: George Pavlov [EMAIL PROTECTED] writes: I tried the patch and it has no effect whatsoever -- even with the patch, under the correct load the corrupted entries are coming fast and furious (I found a load profile on my app that reproduces these very clearly). What are the total lengths of the log entries in which you see the failure? (The length here includes all the lines belonging to a single logical entry, eg, ERROR, DETAIL, HINT.) I believe that this shouldn't be happening except in the case in which the entry-interpolated-into exceeds PIPE_BUF bytes. I'm not entirely sure what PIPE_BUF is on Linux machines, but IIRC the Posix spec says it has to be at least 512, and on older Unix kernels it tends to be 8K. 4096, although for some archs it may be bigger (it's the page size). -- Alvaro Herrera http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc. ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
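The PIPE_BUF value under discussion is easy to check on a given system; a quick Unix-only Python sketch (the 4096 figure Alvaro quotes is what Linux typically reports):

```python
import os
import select

# POSIX requires that a write() of at most PIPE_BUF bytes to a pipe is
# atomic; larger writes may interleave with output from other writers,
# which is exactly the log-corruption mechanism discussed above.
print("compile-time PIPE_BUF:", select.PIPE_BUF)  # POSIX guarantees >= 512

# The value for a specific pipe can also be queried at run time:
r, w = os.pipe()
print("this pipe's PIPE_BUF:", os.fpathconf(w, "PC_PIPE_BUF"))
os.close(r)
os.close(w)
```

Log entries longer than this limit are the ones that can come out interleaved when several backends write to the same stderr pipe.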
Re: Creditcard Number Security was Re: [GENERAL] Encrypted column
Alvaro Herrera wrote: My bank is also not a bank (they say they are not FDIC insured which I think is the actual problem at hand). Do I have to be worried? Depends what you use it for. If this is an online bank that you use only for online transactions and you maintain a balance of say $800, you probably don't have to worry. If on the other hand this is your only bank and you have your life savings in there, you most definitely should consider the risks to which you are exposing your finances. -- Guy Rouillier ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] list all columns in db
On Thu, Jun 07, 2007 at 06:36:07PM -0400, Jon Sime wrote: select n.nspname as table_schema, c.relname as table_name, a.attname as column_name from pg_catalog.pg_attribute a join pg_catalog.pg_class c on (a.attrelid = c.oid) join pg_catalog.pg_namespace n on (c.relnamespace = n.oid) where c.relkind in ('r','v') and a.attnum > 0 and n.nspname not in ('pg_catalog','information_schema') order by 1,2,3 Don't forget and not a.attisdropped else you might get something like

 table_schema | table_name | column_name
--------------+------------+--------------
 public       | foo        | pg.dropped.2
 public       | foo        | col1
 public       | foo        | col3
(3 rows)

-- Michael Fuhr ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] [SQL] subtract a day from the NOW function
At 5:57p -0400 on 07 Jun 2007, Michael Glaesemann wrote: It is a bit tricky. Datetime math is inherently so. So one wonders why the whole world doesn't migrate to a single timezone. There would be no more confusion between EST, CEST, GMT, +1100, etc. The trade-off, of course, would be that now you'd have to know the daylight hours of a particular part of the world or business with whom you wanted to interact, but the math sure would be easier all-round. Kevin ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] [SQL] subtract a day from the NOW function
Kevin Hunter [EMAIL PROTECTED] writes: At 5:57p -0400 on 07 Jun 2007, Michael Glaesemann wrote: It is a bit tricky. Datetime math is inherently so. So one wonders why the whole world doesn't migrate to a single timezone. Or at least get rid of daylight savings, which has to be one of the worst ideas of the last 200 years ... regards, tom lane ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] [SQL] subtract a day from the NOW function
Tom Lane wrote: Kevin Hunter [EMAIL PROTECTED] writes: At 5:57p -0400 on 07 Jun 2007, Michael Glaesemann wrote: It is a bit tricky. Datetime math is inherently so. So one wonders why the whole world doesn't migrate to a single timezone. Or at least get rid of daylight savings, which has to be one of the worst ideas of the last 200 years ... You mean we don't have to worry about defeating Jerry and the Kaiser anymore? Boggles the mind. Seriously, as long as everybody's in agreement, stick to UTC and let the local software developers worry about formatting conversions to and from for display. -- The NCP Revue -- http://www.ncprevue.com/blog ---(end of broadcast)--- TIP 6: explain analyze is your friend
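The keep-it-in-UTC advice is easy to illustrate. The thread's original question ("subtract a day from NOW") is just now() - interval '1 day' in SQL; the same arithmetic in Python, doing all the math in UTC and converting only at the display boundary (the fixed -04:00 offset is purely an example, a real application would use a proper tz database zone so DST is handled):

```python
from datetime import datetime, timedelta, timezone

# Store and compute in UTC; convert only for display.
stored = datetime(2007, 6, 7, 12, 0, tzinfo=timezone.utc)  # hypothetical row value

# "Subtract a day from NOW": plain timedelta arithmetic in UTC.
yesterday = stored - timedelta(days=1)
print(yesterday.isoformat())  # 2007-06-06T12:00:00+00:00

# Convert at the display boundary (fixed -04:00 offset as an example only).
eastern = timezone(timedelta(hours=-4))
print(yesterday.astimezone(eastern).isoformat())  # 2007-06-06T08:00:00-04:00
```

Because the arithmetic happens in UTC, it is immune to the DST transitions that make local-time math so error-prone.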
Re: Creditcard Number Security was Re: [GENERAL] Encrypted column
Joshua D. Drake wrote: Guy Rouillier wrote: Richard P. Welty wrote: Guy Fraser wrote: Paypal has a perception issue - they are perceived as being tightly linked with eBay. That's a problem in the corporate arena. If my stock broker were to tell me they do all their financial transactions through Paypal, I'd probably wonder if they were a legitimate corporation. Do any of these outsourcers indemnify corporate customers against fraud or data loss? The problem with paypal is it is NOT a bank and is not held by the same regulations, insurance standards or liabilities as banks. You should update - paypal is a bank now. But anyway CC processors are in fact liable with high penalties for CC data. You must be careful to fulfill PCI and other requirements or you lose your contracts with CC acquirers and banks. Tino ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
[GENERAL] What's the correct way to use FunctionCallInvoke()?
Hi, I have a problem calling a C function from another C function - DirectFunctionCall* cannot be used since some parameters may be NULL. This dies: FunctionCallInfoData fcinfo1; InitFunctionCallInfoData(fcinfo1, NULL, 7, NULL, NULL); /* arg[] and argnull[] are filled with correct values */ result = myfunc(&fcinfo1); So does this: FunctionCallInfoData fcinfo1; FmgrInfo flinfo1; MemSet(&flinfo1, 0, sizeof(flinfo1)); flinfo1.fn_addr = myfunc; flinfo1.fn_nargs = 7; InitFunctionCallInfoData(fcinfo1, &flinfo1, 7, NULL, NULL); /* arg[] and argnull[] are filled with correct values */ result = FunctionCallInvoke(&fcinfo1); How to do it correctly? I am on PostgreSQL 8.1.9 currently. -- Zoltán Böszörményi Cybertec Geschwinde & Schönig GmbH http://www.postgresql.at/ ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] What's the correct way to use FunctionCallInvoke()?
Zoltan Boszormenyi [EMAIL PROTECTED] writes: FmgrInfo flinfo1; MemSet(&flinfo1, 0, sizeof(flinfo1)); This is certainly not the approved way to set up an FmgrInfo. Use fmgr_info(). regards, tom lane ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/