Re: [HACKERS] Problem on PG7.2.2
Roberto Fichera [EMAIL PROTECTED] writes: At 10.40 23/09/02 -0400, you wrote: I think you've got *serious* hardware problems. Hard to tell if it's disk or memory, but get out those diagnostic programs now ... database=# DROP TABLE TS; ERROR: cannot find attribute 1 of relation ts_pkey database=# DROP INDEX TS_PKEY; ERROR: cannot find attribute 1 of relation ts_pkey Now you've got corrupted system indexes (IIRC this is a symptom of problems in one of the indexes for pg_attribute). You might be able to recover from this using a REINDEX DATABASE operation (read the man page carefully, it's a bit tricky), but I am convinced that you've got hardware problems. I would suggest that you first shut down the database and then find and fix your hardware problem --- otherwise, things will just get worse and worse. After you have a stable platform again, you can try to restore consistency to the database. regards, tom lane ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] ECPG
On Mon, Sep 23, 2002 at 09:56:59AM -0400, Tom Lane wrote: What about removing this feature that used to exist: being able to build ecpg with reasonably-standard tools? How many people do use bison themselves? Most people I talked to use the precompiled preproc.c. I think you should be setting more weight on that concern than on supporting obscure backend commands (some of which didn't even exist in 7.2, and therefore are certainly not depended on by any ecpg user...) Which of course would also mean spending quite some time to remove features that have to be added again once bison is released. I will try to get some info from the bison people about the release date. Michael -- Michael Meskes [EMAIL PROTECTED] Go SF 49ers! Go Rhein Fire! Use Debian GNU/Linux! Use PostgreSQL!
Re: [HACKERS] ECPG
Michael Meskes [EMAIL PROTECTED] writes: On Mon, Sep 23, 2002 at 09:56:59AM -0400, Tom Lane wrote: What about removing this feature that used to exist: being able to build ecpg with reasonably-standard tools? How many people do use bison themselves? *Everyone* who checks out from our CVS needs to build the bison output files. There seem to be quite a few such people; they will all be forced to upgrade their local bison installations when ecpg starts requiring a newer bison. I will try to get some info from the bison people about the release date. I just this morning got this response from Akim Demaille concerning a portability problem in bison 1.49b: | Thanks for the report, this is addressed in 1.49c. We should upload | the latter soon. So I'm guessing that a full release is not just around the corner :-( regards, tom lane
[HACKERS] making use of large TLB pages
Rohit Seth recently added support for the use of large TLB pages on Linux if the processor architecture supports them (I believe SPARC, IA32, and IA64 have hugetlb support; more archs will probably be added). The patch was merged into Linux 2.5.36, so it will more than likely be in Linux 2.6. For more information on large TLB pages and why they are generally viewed to improve database performance, see here: http://lwn.net/Articles/6535/ (the patch this refers to is an earlier implementation, I believe, but the idea is the same) http://lwn.net/Articles/10293/ (item #4) I'd like to enable PostgreSQL to use large TLB pages, if the OS and processor support them. In talking to the author of the TLB patches for Linux (Rohit Seth), he described the current API:

1) Only two system calls. These are:

    sys_alloc_hugepages(int key, unsigned long addr, unsigned long len, int prot, int flag)
    sys_free_hugepages(unsigned long addr)

key will be equal to zero if the user wants these huge pages as private. A positive int value will be used for unrelated apps to share the same physical huge pages.

addr is the user's preferred address. The kernel may decide to allocate a different virtual address (depending on availability and alignment factors).

len is the requested size of memory wanted by the user app.

prot can take the values PROT_READ, PROT_WRITE, PROT_EXEC.

flag: the only allowed value right now is IPC_CREAT, which in the case of shared hugepages (across processes) tells the kernel to create a new segment if none is already created. If this flag is not provided and there is no hugepage segment corresponding to the key, then ENOENT is returned. This is along the lines of the IPC_CREAT flag for the shmget routine.

On success sys_alloc_hugepages returns the virtual address allocated by the kernel.

So as I understand it, we would basically replace the calls to shmget(), shmdt(), etc. with these system calls.
The behavior will be slightly different, however -- I'm not sure if this API supports everything we expect the SysV IPC API to support (e.g. telling the # of clients attached to a given segment). Can anyone comment on exactly what functionality we expect when dealing with the storage mechanism of the shared buffer? Any comments would be appreciated. Cheers, Neil -- Neil Conway [EMAIL PROTECTED] || PGP Key ID: DB3C29FC
Re: [HACKERS] Default privileges for 7.3
Hello! On Mon, 23 Sep 2002, Tom Lane wrote: I am thinking that the better course might be to have newly created languages default to USAGE PUBLIC, at least for a release or two. We might also consider letting newly created functions default to EXECUTE PUBLIC. I think this is less essential, but a case could still be made for it on backwards-compatibility grounds. Hm... M$ has proven this way false. See BUGTRAQ about the sp_* stories every quarter. ;) If you don't want to hard-wire that behavior, what about a GUC variable that could be turned on while loading old dumps? Comments? That seems to be more reasonable. -- WBR, Yury Bokhoncovich, Senior System Administrator, NOC of F1 Group. Phone: +7 (3832) 106228, ext.140, E-mail: [EMAIL PROTECTED] Unix is like a wigwam -- no Gates, no Windows, and an Apache inside.
Re: [HACKERS] Web site
On Tue, Sep 24, 2002 at 03:59:33AM -0400, Vince Vielhaber wrote: On Tue, 24 Sep 2002, Gavin Sherry wrote: Hi all, It occurs to me that opening web page on www.postgresql.org, asking the user to select the mirror, is rather unprofessional. I am sure this has been discussed before but I thought I would bring it up again anyway. Your point? So, why not just redirect people to one of the mirrors listed? This could be done based on IP (yes it is inaccurate but it is close enough and has the same net effect: pushing people off the main web server) or it could be done by simply redirecting to a random mirror. Been there, done that, didn't work. Too much of a job to keep track of that many IP blocks too. I'd suggest setting a cookie, so I only see the 'pick a mirror' the first time. And provide a link to 'pick a different mirror' that resets or ignores the cookie. Ross
Re: [HACKERS] Web site
On Tue, 24 Sep 2002, Ross J. Reedstrom wrote: On Tue, Sep 24, 2002 at 03:59:33AM -0400, Vince Vielhaber wrote: On Tue, 24 Sep 2002, Gavin Sherry wrote: Hi all, It occurs to me that opening web page on www.postgresql.org, asking the user to select the mirror, is rather unprofessional. I am sure this has been discussed before but I thought I would bring it up again anyway. Your point? So, why not just redirect people to one of the mirrors listed? This could be done based on IP (yes it is inaccurate but it is close enough and has the same net effect: pushing people off the main web server) or it could be done by simply redirecting to a random mirror. Been there, done that, didn't work. Too much of a job to keep track of that many IP blocks too. I'd suggest setting a cookie, so I only see the 'pick a mirror' the first time. And provide a link to 'pick a different mirror' that resets or ignores the cookie. Or choose the mirror that works best for you and bookmark it. Vince. -- == Vince Vielhaber -- KA8CSH email: [EMAIL PROTECTED] http://www.pop4.net 56K Nationwide Dialup from $16.00/mo at Pop4 Networking http://www.camping-usa.com http://www.cloudninegifts.com http://www.meanstreamradio.com http://www.unknown-artists.com ==
Re: [HACKERS] Web site
On Tue, Sep 24, 2002 at 11:26:55AM -0400, Vince Vielhaber wrote: On Tue, 24 Sep 2002, Ross J. Reedstrom wrote: I'd suggest setting a cookie, so I only see the 'pick a mirror' the first time. And provide a link to 'pick a different mirror' that resets or ignores the cookie. Or choose the mirror that works best for you and bookmark it. Of course, that's what _I_ do, but the discussion was how to make the frontpage 'user friendly' and 'professional'. Lots of global corps. do the 'pick your geographical region' thing, but mainly for sales reasons, so it must be professional, right? Ross
Re: [HACKERS] subselect bug (was Re: [GENERAL] DBLink: interesting issue)
Joe Conway [EMAIL PROTECTED] writes: replica=# create table foo(f1 int); CREATE TABLE replica=# SELECT * FROM foo t WHERE NOT EXISTS (SELECT remoteid FROM (SELECT f1 as remoteid FROM foo WHERE f1 = t.f1) AS t1); server closed the connection unexpectedly Ick. I'm just starting to dig in to this, but was hoping for any thoughts or guidance I can get. I can look at this, unless you really want to solve it yourself ... p.s. Below is a backtrace: The debug output: TRAP: FailedAssertion(!(var->varlevelsup > 0 && var->varlevelsup < PlannerQueryLevel), File: subselect.c, Line: 81) suggests that the problem is with variable depth --- I'm guessing that we're not adjusting varlevelsup correctly at some step of the planning process. Offhand I'd expect the innermost select to be pulled up into the parent select (the argument of EXISTS) and probably something is going wrong with that. regards, tom lane
Re: [HACKERS] subselect bug (was Re: [GENERAL] DBLink: interesting
Tom Lane wrote: I'm just starting to dig in to this, but was hoping for any thoughts or guidance I can get. I can look at this, unless you really want to solve it yourself ... I'll look into it a bit for my own edification, but if you have the time to solve it, I wouldn't want to get in the way. In any case, if you think it should be fixed before beta2, I'd give you better odds than me ;-) Joe
Re: [HACKERS] Web site
Gavin Sherry [EMAIL PROTECTED] writes: It occurs to me that opening web page on www.postgresql.org, asking the user to select the mirror, is rather unprofessional. I agree; not only that, it has advertisements on it. What's the justification for that, considering that none of the mirror sites (AFAIK) have ads on them? Cheers, Neil
Re: [HACKERS] Problem on PG7.2.2
At 09.31 24/09/02 -0400, Tom Lane wrote: Roberto Fichera [EMAIL PROTECTED] writes: At 10.40 23/09/02 -0400, you wrote: I think you've got *serious* hardware problems. Hard to tell if it's disk or memory, but get out those diagnostic programs now ... database=# DROP TABLE TS; ERROR: cannot find attribute 1 of relation ts_pkey database=# DROP INDEX TS_PKEY; ERROR: cannot find attribute 1 of relation ts_pkey Now you've got corrupted system indexes (IIRC this is a symptom of problems in one of the indexes for pg_attribute). I'll run some memory checker. You might be able to recover from this using a REINDEX DATABASE operation (read the man page carefully, it's a bit tricky), but I am convinced that you've got hardware problems. I would suggest that you first shut down the database and then find and fix your hardware problem --- otherwise, things will just get worse and worse. After you have a stable platform again, you can try to restore consistency to the database. I'll try it. Roberto Fichera.
Re: Beta2 on Friday Morning (Was: Re: [HACKERS] Open 7.3 items)
I have to say that during beta testing I ALWAYS do an initdb and a reload just to make sure the pg_dumpall and pg_restore stuff works right. Plus to make sure problems that might only pop up with a new initdb are found as well. I probably burn it to the ground several times on a single beta just to test different data sets, and I prefer a clean database when doing that so I'm sure the problems I see are from just that one dataset. So, requiring an initdb for every beta wouldn't bother me one bit. To me you test a beta by migrating to it just like it was a production version, and that means a new build from the ground up, initdb and all. On 18 Sep 2002, Neil Conway wrote: Marc G. Fournier [EMAIL PROTECTED] writes: On Wed, 18 Sep 2002, Bruce Momjian wrote: We should get _all_ the known initdb-related issues into the code before we go beta2 or beta3 is going to require another initdb. Right, and? How many times in the past has it been the last beta in the cycle that forced the initdb? Are you able to guarantee that there won't be another initdb required if we wait until mid-next week? I completely agree with Bruce here. Requiring an initdb for every beta release significantly reduces the number of people who will be willing to try it out -- so initdb's between betas are not disastrous, but should be avoided if possible. Since waiting till next week significantly reduces the chance of an initdb for beta3 and has no serious disadvantage that I can see, it seems the right decision to me. Cheers, Neil
[HACKERS] contrib/earthdistance missing regression test files
The Makefile for contrib/earthdistance indicates that there should be a regression test, but the files seem to be missing from CVS. The change to the Makefile was made here: http://developer.postgresql.org/cvsweb.cgi/contrib/earthdistance/Makefile.diff?r1=1.11&r2=1.12 Was the Makefile change a mistake, or are there files missing? Joe
Re: [HACKERS] Problem on PG7.2.2
At 09.31 24/09/02 -0400, Tom Lane wrote: Roberto Fichera [EMAIL PROTECTED] writes: At 10.40 23/09/02 -0400, you wrote: I think you've got *serious* hardware problems. Hard to tell if it's disk or memory, but get out those diagnostic programs now ... database=# DROP TABLE TS; ERROR: cannot find attribute 1 of relation ts_pkey database=# DROP INDEX TS_PKEY; ERROR: cannot find attribute 1 of relation ts_pkey Now you've got corrupted system indexes (IIRC this is a symptom of problems in one of the indexes for pg_attribute). You might be able to recover from this using a REINDEX DATABASE operation (read the man page carefully, it's a bit tricky), but I am convinced that you've got hardware problems. I would suggest that you first shut down the database and then find and fix your hardware problem --- otherwise, things will just get worse and worse. After you have a stable platform again, you can try to restore consistency to the database. Below there is the first try session, and as you can see there is the same problem :-(!

bash-2.05a$ postgres -D /var/lib/pgsql/data -O -P database
DEBUG: database system was shut down at 2002-09-24 18:39:24 CEST
DEBUG: checkpoint record is at 0/2AE97110
DEBUG: redo record is at 0/2AE97110; undo record is at 0/0; shutdown TRUE
DEBUG: next transaction id: 366635; next oid: 1723171
DEBUG: database system is ready

POSTGRES backend interactive interface
$Revision: 1.245.2.2 $ $Date: 2002/02/27 23:17:01 $

backend> reindex table detail;
FATAL 2: open of /var/lib/pgsql/data/pg_clog/0504 failed: No such file or directory
DEBUG: shutting down
DEBUG: database system is shut down
bash-2.05a$

Roberto Fichera.
Re: [GENERAL] [HACKERS] PGXLOG variable worthwhile?
On 19 Sep 2002, Greg Copeland wrote: I think Marc made a pretty good case about the use of command line arguments but I think I have to vote with Tom. Many of the command line arguments you seem to be using do sorta make sense to have for easy reference or to help validate your runtime environment for each instance. The other side of that is, I completely agree with Tom in that it's a very dangerous option. It would be begging for people to shoot themselves with it. Besides, just as you can easily parse the command line, you can also parse the config file to pull out that information. Plus, it really should be a very seldom used option. When it is used, it's doubtful that you'll need the same level of dynamic control that you get by using command line options. As a rule of thumb, if an option is rarely used or is very dangerous if improperly used, I do think it should be in a configuration file to discourage adhoc use. Let's face it, specifying the XLOG location is hardly something people need to be doing on the fly. My vote is config file it and no command line option! I'd go one step further, and say that it should not be something a user should do by hand, but there should be a script to do it, and it would work this way: If there is a DIRECTORY called pg_xlog in $PGDATA, then use that. If there is a FILE called pg_xlog in $PGDATA, then that file will have the location of the directory stored in it. That file will be created when the move_pgxlog script is run, and that script will have all the logic inside it to determine how to move the pg_xlog directory safely, i.e. making sure there's room on the destination, setting permissions, etc. That way, if you're dumb as a rock or smart as a rocket scientist, you do it the same way, and the script makes sure you don't scram your database in a not too bright moment. No postgresql.conf var, no command line switch, a file or directory, and a script. Seem workable? Or am I on crack?
Re: [HACKERS] Web site
Neil Conway wrote: Gavin Sherry [EMAIL PROTECTED] writes: It occurs to me that opening web page on www.postgresql.org, asking the user to select the mirror, is rather unprofessional. I agree; not only that, it has advertisements on it. What's the justification for that, considering that none of the mirror sites (AFAIK) have ads on them? I wondered that myself. -- Bruce Momjian | http://candle.pha.pa.us [EMAIL PROTECTED] | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup. | Newtown Square, Pennsylvania 19073
Re: [HACKERS] subselect bug (was Re: [GENERAL] DBLink: interesting issue)
Joe Conway [EMAIL PROTECTED] writes: replica=# create table foo(f1 int); CREATE TABLE replica=# SELECT * FROM foo t WHERE NOT EXISTS (SELECT remoteid FROM (SELECT f1 as remoteid FROM foo WHERE f1 = t.f1) AS t1); server closed the connection unexpectedly

Got it --- this bug has been there awhile :-(, ever since we had the pull-up-subquery logic, which was in 7.1 IIRC. The pullup code neglected to adjust references to uplevel Vars. Surprising that no one reported this sooner. The attached patch is against CVS tip. It will not apply cleanly to 7.2 because pull_up_subqueries() has been modified since then, but if anyone's desperate for a fix in 7.2 it could probably be adapted.

regards, tom lane

*** src/backend/optimizer/plan/planner.c.orig	Wed Sep  4 17:30:30 2002
--- src/backend/optimizer/plan/planner.c	Tue Sep 24 14:02:54 2002
***************
*** 337,352 ****
  			/*
  			 * Now make a modifiable copy of the subquery that we can run
! 			 * OffsetVarNodes on.
  			 */
  			subquery = copyObject(subquery);
  
  			/*
! 			 * Adjust varnos in subquery so that we can append its
  			 * rangetable to upper query's.
  			 */
  			rtoffset = length(parse->rtable);
  			OffsetVarNodes((Node *) subquery, rtoffset, 0);
  
  			/*
  			 * Replace all of the top query's references to the subquery's
--- 337,358 ----
  			/*
  			 * Now make a modifiable copy of the subquery that we can run
! 			 * OffsetVarNodes and IncrementVarSublevelsUp on.
  			 */
  			subquery = copyObject(subquery);
  
  			/*
! 			 * Adjust level-0 varnos in subquery so that we can append its
  			 * rangetable to upper query's.
  			 */
  			rtoffset = length(parse->rtable);
  			OffsetVarNodes((Node *) subquery, rtoffset, 0);
+ 
+ 			/*
+ 			 * Upper-level vars in subquery are now one level closer to their
+ 			 * parent than before.
+ 			 */
+ 			IncrementVarSublevelsUp((Node *) subquery, -1, 1);
  
  			/*
  			 * Replace all of the top query's references to the subquery's
Re: [HACKERS] Default privileges for 7.3
Tom Lane writes: How do you feel about allowing functions to default to EXECUTE PUBLIC? Less excited, but if it gets us to the point of no known problems during upgrade we might as well do it. -- Peter Eisentraut [EMAIL PROTECTED]
Re: [HACKERS] PGXLOG variable worthwhile?
Tom Lane wrote: Marc G. Fournier [EMAIL PROTECTED] writes: You don't :vote: on stuff like this ... Why not, exactly? I wasn't aware that any of core had a non-vetoable right to apply any patch we liked regardless of the number and strength of the objections. AFAIK, we resolve differences of opinion by discussion, followed by a vote if the discussion doesn't produce a consensus. It was pretty clear that Thomas' original patch lost the vote, or would have lost if we'd bothered to hold a formal vote. I don't see anyone arguing against the notion of making XLOG location more easily configurable --- it was just the notion of making it depend on environment variables that scared people. And AFAICS it is scary only because screwing that up will simply corrupt your database. Thus, a simple random number (okay, and a timestamp of initdb) in two files, one in $PGDATA and one in $PGXLOG would be a totally sufficient safety mechanism to prevent starting with the wrong XLOG directory. Can we get that instead of ripping out anything? Jan -- #==# # It's easier to get forgiveness for being wrong than for being right. # # Let's break this rule - forgive me. # #== [EMAIL PROTECTED] #
Re: [HACKERS] PGXLOG variable worthwhile?
Jan Wieck wrote: Tom Lane wrote: Marc G. Fournier [EMAIL PROTECTED] writes: You don't :vote: on stuff like this ... Why not, exactly? I wasn't aware that any of core had a non-vetoable right to apply any patch we liked regardless of the number and strength of the objections. AFAIK, we resolve differences of opinion by discussion, followed by a vote if the discussion doesn't produce a consensus. It was pretty clear that Thomas' original patch lost the vote, or would have lost if we'd bothered to hold a formal vote. I don't see anyone arguing against the notion of making XLOG location more easily configurable --- it was just the notion of making it depend on environment variables that scared people. And AFAICS it is scary only because screwing that up will simply corrupt your database. Thus, a simple random number (okay, and a timestamp of initdb) in two files, one in $PGDATA and one in $PGXLOG would be a totally sufficient safety mechanism to prevent starting with the wrong XLOG directory. Can we get that instead of ripping out anything?

Well, the problem is that Thomas stopped communicating, perhaps because some were too aggressive in criticizing the patch. Once that happened, there was no way to come up with a solution, and that's why it was removed. Also, we are in the process of removing args and moving them to GUC, so I don't see why we would make WAL an exception. It isn't changed that often. FYI, I am about to do the same removal for the SSL stuff too. Bear is no longer responding. It is on the open items list now. If I can't find someone who can review the good/bad parts of our SSL changes, it might all be yanked out.

---------------------------------------------------------------------------

        P O S T G R E S Q L    7 . 3    O P E N    I T E M S

Current at ftp://momjian.postgresql.org/pub/postgresql/open_items.

Source Code Changes
-------------------
Schema handling - ready? interfaces? client apps?
Drop column handling - ready for all clients, apps?
Fix BeOS, QNX4 ports
Fix AIX large file compile failure of 2002-09-11 (Andreas)
Get bison upgrade on postgresql.org for ecpg only (Marc)
Fix vacuum btree bug (Tom)
Fix client apps for autocommit = off
Fix clusterdb to be schema-aware
Change log_min_error_statement to be off by default (Gavin)
Fix return tuple counts/oid/tag for rules
Loading 7.2 pg_dumps
	functions no longer public executable
	languages no longer public usable
Add schema dump option to pg_dump
Make SET not start a transaction with autocommit off, document it
Add GRANT EXECUTE to all /contrib functions
Revert or fix SSL change

On Going
--------
Security audit

Documentation Changes
---------------------
Document need to add permissions to loaded functions and languages
Move documentation to gborg for moved projects
Re: [HACKERS] PGXLOG variable worthwhile?
Jan Wieck [EMAIL PROTECTED] writes: And AFAICS it is scary only because screwing that up will simply corrupt your database. Thus, a simple random number (okay, and a timestamp of initdb) in two files, one in $PGDATA and one in $PGXLOG would be a totally sufficient safety mechanism to prevent starting with the wrong XLOG directory. Can we get that instead of ripping out anything? Sure, if someone wants to do that it'd go a long way towards addressing the safety issues. But given that, I think a GUC variable is the most appropriate control mechanism; as someone else pointed out, we've worked long and hard to make GUC useful and feature-ful, so it seems silly to invent new configuration items that bypass GUC. The safety concerns were the main reason I liked a symlink or separate file, but if we attack the safety problem directly then we might as well go for convenience in how you actually set the configuration value. regards, tom lane
Re: [HACKERS] contrib/earthdistance missing regression test files
On Tue, Sep 24, 2002 at 10:43:51 -0700, Joe Conway [EMAIL PROTECTED] wrote: The Makefile for contrib/earthdistance indicates that there should be a regression test, but the files seem to be missing from CVS. The change to the Makefile was made here: http://developer.postgresql.org/cvsweb.cgi/contrib/earthdistance/Makefile.diff?r1=1.11&r2=1.12 Was the Makefile change a mistake, or are there files missing? There is supposed to be a regression test. I may have forgotten to use -N or -r on the diff. If it is confirmed that the files needed for the regression test didn't make it into the submitted diff file, I can send in a diff versus current cvs.
Re: [HACKERS] Web site
Ross J. Reedstrom wrote: On Tue, Sep 24, 2002 at 03:59:33AM -0400, Vince Vielhaber wrote: On Tue, 24 Sep 2002, Gavin Sherry wrote: Hi all, It occurs to me that opening web page on www.postgresql.org, asking the user to select the mirror, is rather unprofessional. I am sure this has been discussed before but I thought I would bring it up again anyway. Your point? So, why not just redirect people to one of the mirrors listed? This could be done based on IP (yes it is inaccurate but it is close enough and has the same net effect: pushing people off the main web server) or it could be done by simply redirecting to a random mirror. Been there, done that, didn't work. Too much of a job to keep track of that many IP blocks too. I'd suggest setting a cookie, so I only see the 'pick a mirror' the first time. And provide a link to 'pick a different mirror' that resets or ignores the cookie. If I had a vote, I would vote +1 for this option. I think it's easy to implement and shouldn't have negative effects on performance. Regards, Michael Paesold
Re: [HACKERS] contrib/earthdistance missing regression test files
On Tue, Sep 24, 2002 at 15:02:20 -0500, Bruno Wolff III [EMAIL PROTECTED] wrote: There is supposed to be a regression test. I may have forgotten to use -N or -r on the diff. If it is confirmed that the files needed for the regression test didn't make it into the submitted diff file, I can send in a diff versus current cvs. I still have a copy of the diff file (at least I think it is the one I sent in) and it has the regression sql and output files defined.
Re: [HACKERS] contrib/earthdistance missing regression test files
Bruno Wolff III wrote: On Tue, Sep 24, 2002 at 15:02:20 -0500, Bruno Wolff III [EMAIL PROTECTED] wrote: There is supposed to be a regression test. I may have forgotten to use -N or -r on the diff. If it is confirmed that the files needed for the regression test didn't make it into the submitted diff file, I can send in a diff versus current cvs. I still have a copy of the diff file (at least I think it is the one I sent in) and it has the regression sql and output files defined. Yep, I missed adding earthdistance.out. Is that the only file? I usually do a 'gmake distclean' and 'cvs update' after a batch of patches to see that there aren't any new files in my CVS tree. I missed it this time. Does that fix the problem?
Re: [HACKERS] contrib/earthdistance missing regression test files
On Tue, Sep 24, 2002 at 16:10:19 -0400, Bruce Momjian [EMAIL PROTECTED] wrote: Bruno Wolff III wrote: On Tue, Sep 24, 2002 at 15:02:20 -0500, Bruno Wolff III [EMAIL PROTECTED] wrote: There is supposed to be a regression test. I may have forgotten to use -N or -r on the diff. If it is confirmed that the files needed for the regression test didn't make it into the submitted diff file, I can send in a diff versus current cvs. I still have a copy of the diff file (at least I think it is the one I sent in) and it has the regression sql and output files defined. Yep, I missed adding earthdistance.out. Is that the only file? No. The new files are:

expected/earthdistance.out (which you mention above)
sql/earthdistance.sql
Re: [HACKERS] contrib/earthdistance missing regression test files
Bruno Wolff III wrote: On Tue, Sep 24, 2002 at 16:10:19 -0400, Bruce Momjian [EMAIL PROTECTED] wrote: Bruno Wolff III wrote: On Tue, Sep 24, 2002 at 15:02:20 -0500, Bruno Wolff III [EMAIL PROTECTED] wrote: There is supposed to be a regression test. I may have forgotten to use -N or -r on the diff. If it is confirmed that the files needed for the regression test didn't make it into the submitted diff file, I can send in a diff versus current cvs. I still have a copy of the diff file (at least I think it is the one I sent in) and it has the regression sql and output files defined. Yep, I missed adding earthdistance.out. Is that the only file? No. The new files are: expected/earthdistance.out (which you mention above) sql/earthdistance.sql OK, here's what I see now in CVS: #$ pwd /pgtop/contrib/earthdistance #$ lf CVS/ README.earthdistance earthdistance.out Makefile earthdistance.c earthdistance.sql.in What should be changed? -- Bruce Momjian | http://candle.pha.pa.us [EMAIL PROTECTED] | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup. | Newtown Square, Pennsylvania 19073
Re: [HACKERS] contrib/earthdistance missing regression test files
Bruce Momjian [EMAIL PROTECTED] writes: No. The new files are: expected/earthdistance.out (which you mention above) sql/earthdistance.sql OK, here's what I see now in CVS: #$ pwd /pgtop/contrib/earthdistance #$ lf CVS/ README.earthdistance earthdistance.out Makefile earthdistance.c earthdistance.sql.in What should be changed? The earthdistance.out file should be in an expected/ subdirectory, not directly in the contrib/earthdistance directory. Also, there is a missing regression input script file earthdistance.sql (this is not related to earthdistance.sql.in), which should be in a sql/ subdirectory. regards, tom lane
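[For readers following along, the layout Tom describes can be sketched as a directory tree. The module name and file names are the ones from this thread; the touch commands merely stand in for the real file contents:]

```shell
# Expected layout for a contrib module's regression test, per Tom's message:
# the input script lives in sql/, the expected psql output in expected/.
mkdir -p contrib/earthdistance/sql contrib/earthdistance/expected
touch contrib/earthdistance/sql/earthdistance.sql       # regression input script
touch contrib/earthdistance/expected/earthdistance.out  # expected output to diff against
find contrib/earthdistance -type f | sort
```

The regression driver runs the script from sql/ and diffs the actual output against the file of the same name in expected/.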
Re: [HACKERS] pltcl.so patch
In answer to the question posed at the end of the message below: Yes, I do get similar results. A quick investigation shows that the SPI_freetuptable at the end of pltcl_SPI_exec is trying to free a tuptable of value 0x82ebe64 (which looks sensible to me) but which has a memory context of 0x7f7f7f7f (the unallocated marker). Briefly following through to check this value shows that as long as I have CLOBBER_FREED_MEMORY defined, which I presume I do having configured with --debug, this value is also consistent with the tuptable having been freed before this faulting invocation. I haven't looked too closely yet but at a glance I can't see what could be going wrong, with the exception that the tuptable is freed even if zero rows are returned by SPI_exec. That, and I'm not sure what that $T(id) thing is doing in the SQL submitted to pltcl_SPI_exec. Oh 'eck, I've been reading that test function wrong, it's got a level of nesting. Unfortunately, I am currently trying to throw together a quick demo of something at the moment so can't investigate too fully for the next day or so. If someone wants to pick this up feel free; otherwise I'll look into it later. -- Nigel J. Andrews On Tue, 24 Sep 2002, Ian Harding wrote to me: First, thank you very much for working on this issue. Pltcl is extremely important to me right now, and this memory leak is cramping my style a bit. I applied the patch you sent to my pltcl.c (I am at version 7.2.1, but it seems to apply fine...) It builds fine, psql starts fine, but my test function still blows up dramatically. 
Here is the script I am using:

drop function memleak();
create function memleak() returns int as '
    for {set i 1} {$i < 100} {incr i} {
        set sql "select ''foo''"
        spi_exec $sql
    }
' language 'pltcl';
drop table testable;
create table testable (id int, data text);
insert into testable values (1, 'foobar');
insert into testable values (2, 'foobar');
insert into testable values (3, 'foobar');
insert into testable values (4, 'foobar');
insert into testable values (5, 'foobar');
insert into testable values (6, 'foobar');
drop function memleak(int);
create function memleak(int) returns int as '
    set sql "select * From testable"
    spi_exec -array T $sql {
        for {set i 1} {$i < 100} {incr i} {
            set sql "select * from testable where id = $T(id)"
            spi_exec $sql
        }
    }
' language 'pltcl';

Here is what happens:

bash-2.05# psql -U iharding test < testfunction
DROP
CREATE
ERROR:  table testable does not exist
CREATE
INSERT 118942676 1
INSERT 118942677 1
INSERT 118942678 1
INSERT 118942679 1
INSERT 118942680 1
INSERT 118942681 1
DROP
CREATE
bash-2.05# psql -U iharding test
Welcome to psql, the PostgreSQL interactive terminal.
Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help on internal slash commands
       \g or terminate with semicolon to execute query
       \q to quit
test=# select memleak();
 memleak
---------
       0
(1 row)
test=# select memleak(1);
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!#

Here is the end of the log:

DEBUG:  server process (pid 1992) was terminated by signal 11
DEBUG:  terminating any other active server processes
DEBUG:  all server processes terminated; reinitializing shared memory and semaphores
IpcMemoryCreate: shmget(key=5432001, size=29769728, 03600) failed: Cannot allocate memory
This error usually means that PostgreSQL's request for a shared ...

Do you have similar results? 
[HACKERS] pg_dump and inherited attributes
Hi, I'm looking at pg_dump/common.c:flagInhAttrs() and suspect that it can be more or less rewritten completely, and probably should be, to get right all the cases mentioned in the past attisinherited discussion. Is this desirable for 7.3? It can probably be hacked around and the rewrite kept for 7.4, but I think it will be much simpler after the rewrite. What do people think about this? -- Alvaro Herrera (alvherre[a]atentus.com) The gods must always be fed, even when the land is dry (Orual)
Re: [HACKERS] DROP COLUMN misbehaviour with multiple inheritance
On Mon, 2002-09-23 at 18:41, Tom Lane wrote: Hannu Krosing [EMAIL PROTECTED] writes: Alvaro Herrera wrote on Mon, 23.09.2002 at 10:30: The former drops f1 from c, while the latter does not. It's inconsistent. But this is what _should_ happen. On what grounds do you claim that? I agree with Alvaro: it's inconsistent to have ONLY produce different effects depending on the order in which you issue the commands. Sorry, it took some time to thin down my thoughts ;) As the three following sets of commands (should) yield exactly the same database schema (as visible to the user): 1) create table p1 (f1 int, g1 int); create table p2 (f1 int, h1 int); create table c () inherits(p1, p2); drop column p2.f1; -- this DROP is in fact implicitly ONLY 2) create table p1 (f1 int, g1 int); create table p2 (f1 int, h1 int); create table c () inherits(p1, p2); drop only column p2.f1; 3) create table p1 (f1 int, g1 int); create table p2 (h1 int); create table c () inherits(p1, p2); --- For this schema, no matter how we arrived at it, DROP COLUMN p1.f1; should be different from DROP ONLY COLUMN p1.f1; But the ONLY modifier was implicit for all the _non-final_ DROPs. We could carve it out for users by _requiring_ ONLY if the column dropped is multiply inherited, but that would cut off the possibility that it is multiply inherited in some children and not in some others, i.e. you could not have drop column automatically remove c13.f1 but keep c12.f1 for the following schema: create table p1 (f1 int, g1 int); create table p2 (f1 int, h1 int); create table c12 () inherits(p1, p2); create table p3 (i1 int); create table c13 () inherits(p1, p3); So I'd suggest we just postulate that for multiple inheritance dropping any columns still inherited from other peers will be implicitly DROP ONLY _as far as it concerns this child_. 
then it would be clear why we have different behaviour for drop ONLY column p1.f1; drop column p2.f1; and drop ONLY column p2.f1; -- this ONLY is implicit for c by virtue of p1.f1 being still around drop ONLY column p1.f1; It is quite unreasonable to expect that order of commands makes no difference. Why? I'll agree that it's not an overriding argument, but it is something to shoot for if we can. And I'm not seeing the argument on the other side. Just to reiterate: 1. All ALTER TABLE MyTable DROP COLUMN commands assume implicit ONLY when dropping columns multiply inherited from MyTable. 2. Making the final DROP implicitly NOT-ONLY in case there have been other DROPs of the same column from other parents would make it non-deterministic whether columns from child tables will be dropped when using DROP ONLY on a schema you don't know the full history for. 2.a It will probably also not be pg_dump-transparent, i.e. doing dump/reload between the first and second drop column will get you different results. - Hannu
Re: [HACKERS] DROP COLUMN misbehaviour with multiple inheritance
Hannu Krosing [EMAIL PROTECTED] writes: 1) create table p1 (f1 int, g1 int); create table p2 (f1 int, h1 int); create table c () inherits(p1, p2); drop column p2.f1; -- this DROP is in fact implicitly ONLY Surely not? At least, I don't see why it should be thought of that way. There's always a difference between DROP and DROP ONLY. regards, tom lane
Re: [HACKERS] Default privileges for 7.3
Peter Eisentraut [EMAIL PROTECTED] writes: Tom Lane writes: How do you feel about allowing functions to default to EXECUTE PUBLIC? Less excited, but if it gets us to the point of no known problems during upgrade we might as well do it. Okay, I've changed the defaults for both languages and functions; if we think of something better we can change it again ... regards, tom lane
Re: [HACKERS] DROP COLUMN misbehaviour with multiple inheritance
On Wed, 2002-09-25 at 04:13, Tom Lane wrote: Hannu Krosing [EMAIL PROTECTED] writes: 1) create table p1 (f1 int, g1 int); create table p2 (f1 int, h1 int); create table c () inherits(p1, p2); drop column p2.f1; -- this DROP is in fact implicitly ONLY Surely not? At least, I don't see why it should be thought of that way. There's always a difference between DROP and DROP ONLY. What will be the difference in the user-visible schema? Hannu
[HACKERS] news.postgresql.org down
news.postgresql.org seems to be down (and has been for a while -- I think I tried a day or so ago and found it down then also). Joe
Re: [HACKERS] DROP COLUMN misbehaviour with multiple inheritance
Hannu Krosing wrote: On Wed, 2002-09-25 at 04:13, Tom Lane wrote: Hannu Krosing [EMAIL PROTECTED] writes: 1) create table p1 (f1 int, g1 int); create table p2 (f1 int, h1 int); create table c () inherits(p1, p2); drop column p2.f1; -- this DROP is in fact implicitly ONLY Surely not? At least, I don't see why it should be thought of that way. There's always a difference between DROP and DROP ONLY. What will be the difference in the user-visible schema? If I understand the issue correctly, this is the key point of this discussion. The user will not see a difference in schemas, no matter which way you look at it. But to the system catalogs there are two ways of representing this situation: f1 being defined locally by c (and also inherited from p1) or not (and only inherited from p1). I think the difference is purely philosophical, and there are no arguments that can convince either party that it is wrong. Anyway, there's always a set of commands that can make the user go from one representation to the other. He just has to be careful and know exactly which way the system will work. Whichever way it works, it should be clearly and carefully documented in the ALTER TABLE reference. -- Alvaro Herrera (alvherre[a]atentus.com) When there is no humility, people degrade themselves (A. Christie)
Re: [HACKERS] DROP COLUMN misbehaviour with multiple inheritance
On Wed, 2002-09-25 at 04:33, Alvaro Herrera wrote: Hannu Krosing wrote: On Wed, 2002-09-25 at 04:13, Tom Lane wrote: Hannu Krosing [EMAIL PROTECTED] writes: 1) create table p1 (f1 int, g1 int); create table p2 (f1 int, h1 int); create table c () inherits(p1, p2); drop column p2.f1; -- this DROP is in fact implicitly ONLY Surely not? At least, I don't see why it should be thought of that way. There's always a difference between DROP and DROP ONLY. What will be the difference in the user-visible schema? If I understand the issue correctly, this is the key point of this discussion. The user will not see a difference in schemas, no matter which way you look at it. But to the system catalogs there are two ways of representing this situation: f1 being defined locally by c (and also inherited from p1) or not (and only inherited from p1). Ok, I think I'm beginning to see Tom's point. So what Tom wants is that doing DROP ONLY will push the definition down the hierarchy on first possibility only as a last resort. For me it feels asymmetric (unless we will make attislocal also int instead of boolean ;). This asymmetric nature will manifest itself when we have ADD COLUMN, which can put back the DROP ONLY COLUMN, and it has to determine whether to remove the COLUMN definition from the child. What does the current model do in the following case: create table p (f1 int, g1 int); create table c (f1 int) inherits(p); drop column c.f1; Will it just set attisinh = 1 on c.f1? What would drop column p.f1; have done - would it have left c.f1 intact? I think the difference is purely philosophical, and there are no arguments that can convince either party that it is wrong. There seem to be actually 3 different possible behaviours for DROP COLUMN for hierarchies. 1. DROP ONLY - the weakest - drops the column and moves the original (or explicit, defined-here) definition down to all children if not already found there too. 2. 
DROP - mid-level - drops the column and its inherited definitions in children but stops at the first foreign definition (defined locally or inherited from other parents). 3. DROP FORCE - strongest (more or less what the current drop seems to do) - walks down the hierarchy and removes all definitions, whether inherited or local, and only leaves definitions inherited from other parents. Perhaps it should just fail in the case of a multiply inherited field? Maybe it was too early to put the DROP ONLY functionality in? Anyway, there's always a set of commands that can make the user go from one representation to the other. He just has to be careful and know exactly which way the system will work. Whichever way it works, it should be clearly and carefully documented in the ALTER TABLE reference. Amen. -- Hannu
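[For readers following along, the bookkeeping being argued over can be modeled in a few lines. attinhcount and attislocal are the real catalog column names under discussion, but the drop logic below is only a sketch of the semantics, not PostgreSQL's implementation:]

```python
# Toy model: each child column tracks how many parents supply it
# (attinhcount) and whether it is also defined locally (attislocal).
class Column:
    def __init__(self, inhcount=0, islocal=True):
        self.inhcount = inhcount
        self.islocal = islocal

    def detach_parent(self):
        """One parent no longer supplies this column; return whether
        the column survives on the child."""
        self.inhcount -= 1
        # Survives only if still inherited from another parent
        # or defined locally on the child itself.
        return self.inhcount > 0 or self.islocal

# c.f1 inherited from both p1 and p2, not defined locally:
f1 = Column(inhcount=2, islocal=False)
print(f1.detach_parent())  # DROP on p1: still inherited from p2 -> True
print(f1.detach_parent())  # DROP on p2 as well: column goes away -> False
```

The disagreement in the thread is about when (if ever) a drop should flip islocal on the child instead of simply decrementing the count.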
Re: [HACKERS] DROP COLUMN misbehaviour with multiple inheritance
Hannu Krosing wrote: For me it feels asymmetric (unless we will make attislocal also int instead of boolean ;). This asymmetric nature will manifest itself when we have ADD COLUMN, which can put back the DROP ONLY COLUMN, and it has to determine whether to remove the COLUMN definition from the child. Well, the ADD COLUMN thing is something I haven't thought about. Let's see: if I have a child with a local definition of the column I'm adding, I have to add one to its inhcount, that's clear. But do I have to reset its attislocal? What does the current model do in the following case: create table p (f1 int, g1 int); create table c (f1 int) inherits(p); drop column c.f1; Will it just set attisinh = 1 on c.f1? No, it will forbid you to drop the column. That was the intention in the first place: if a column is inherited, you shouldn't be allowed to drop or rename it. You can only do so at the top of the inheritance tree, either recursively or non-recursively. And when you do it non-recursively, the first level is marked non-inherited. There seem to be actually 3 different possible behaviours for DROP COLUMN for hierarchies. Well, I'm not too eager to discuss this kind of thing: it's possible that multiple inheritance goes away in a future release, and all these issues will possibly vanish. But I'm not sure I understand the implications of interfaces (a la Java multiple inheritance). -- Alvaro Herrera (alvherre[a]atentus.com) Accept honors and applause and you will lose your freedom
Re: [HACKERS] PGXLOG variable worthwhile?
On Tue, 24 Sep 2002, Jan Wieck wrote: And AFAICS it is scary only because screwing that up will simply corrupt your database. Thus, a simple random number (okay, and a timestamp of initdb) in two files, one in $PGDATA and one in $PGXLOG, would be a totally sufficient safety mechanism to prevent starting with the wrong XLOG directory. But still, why set up a situation where your database might not start? Why not set it up so that if you get just *one* environment or command-line variable right, you can't set another inconsistently and screw up your start anyway? Why store configuration information outside of the database data directory in a form that's not easily backed up, and not easily found by other utilities? It's almost like people *don't* want to put this in the config file or something. cjs -- Curt Sampson [EMAIL PROTECTED] +81 90 7737 2974 http://www.netbsd.org Don't you know, in this new Dark Age, we're all light. --XTC
Re: [HACKERS] Postgresql Automatic vacuum
As far as getting into the base PostgreSQL distro goes, I don't mind it being rewritten, but I have some reservations. 1) As it is, the PostgreSQL source code is huge. Adding functions to it which directly tap into its nervous system, e.g. the cache, would take far more time to perfect in all conditions. It doesn't have to make its way into the postgresql daemon itself -- in fact since some people like tuning the vacuuming, it makes more sense to make this a daemon. No, my suggestion is simply that some sort of auto-vacuumer be compiled as a stand-alone app and included in the standard postgresql tar.gz file, and the install instructions recommend the site adding it as a cron job. On Linux, it'd be good if the RPM installed it automatically (or else it ran as a mostly-asleep daemon) because so many of the Linux/PostgreSQL users we see have just no clue about PostgreSQL, and no intention of reading anything. Just an FYI, a message I received today from the postmaster at a major telco about their PostgreSQL experience: We have experienced some problems but they have generally cleared up after a database vacuum. However, sometimes I have found that the vacuum itself (especially a vacuum analyze) will go into a CPU consumption loop. -john
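[The cron-job suggestion above amounts to something like the entry below. The schedule and flags are illustrative; vacuumdb is the standard client-side wrapper around VACUUM, and --all/--analyze/--quiet are its long options:]

```shell
# Write a sample crontab fragment that vacuums (and analyzes) every
# database nightly.  Whether --analyze is wanted is site policy.
cat > pg_vacuum.cron <<'EOF'
# min hour dom mon dow  command
30   3    *   *   *     vacuumdb --all --analyze --quiet
EOF
cat pg_vacuum.cron
```

A site would install this under the postgres account with `crontab pg_vacuum.cron` rather than running it by hand.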
Re: [HACKERS] contrib/earthdistance missing regression test files
Tom Lane wrote: Bruce Momjian [EMAIL PROTECTED] writes: No. The new files are: expected/earthdistance.out (which you mention above) sql/earthdistance.sql OK, here's what I see now in CVS: #$ pwd /pgtop/contrib/earthdistance #$ lf CVS/ README.earthdistance earthdistance.out Makefile earthdistance.c earthdistance.sql.in What should be changed? The earthdistance.out file should be in an expected/ subdirectory, not directly in the contrib/earthdistance directory. Also, there is a missing regression input script file earthdistance.sql (this is not related to earthdistance.sql.in), which should be in a sql/ subdirectory. OK, done. I thought the earthdistance.sql file was derived from earthdistance.sql.in, and I thought the out was just a test file. I got them fixed now, in their proper directory. How do I run the regression tests for /contrib stuff? -- Bruce Momjian | http://candle.pha.pa.us [EMAIL PROTECTED] | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup. | Newtown Square, Pennsylvania 19073
Re: [HACKERS] contrib/earthdistance missing regression test files
Bruce Momjian [EMAIL PROTECTED] writes: How do I run the regression tests for /contrib stuff?

make
make install
make installcheck

AFAICT, earthdistance is nowhere near passing yet :-(. It looks to me like the regression test is depending on the cube-based features that we decided to hold off for 7.4. Bruno, is that right? regards, tom lane
Re: [HACKERS] pg_dump and inherited attributes
Alvaro Herrera [EMAIL PROTECTED] writes: I'm looking at pg_dump/common.c:flagInhAttrs() and suspect that it can be more or less rewritten completely, and probably should be, to get right all the cases mentioned in the past attisinherited discussion. Is this desirable for 7.3? It can probably be hacked around and the rewrite kept for 7.4, but I think it will be much simpler after the rewrite. If it's a bug then it's fair game to fix in 7.3. But keep in mind that pg_dump has to behave at least somewhat sanely when called against older servers ... will your rewrite behave reasonably if the server does not offer attinhcount values? regards, tom lane
Re: [HACKERS] making use of large TLB pages
Neil Conway [EMAIL PROTECTED] writes: I'd like to enable PostgreSQL to use large TLB pages, if the OS and processor support them. Hmm ... it seems interesting, but I'm hesitant to do a lot of work to support something that's only available on one hardware-and-OS combination. (If we were talking about a Windows-specific hack, you'd already have lost the audience, no? But I digress.) So as I understand it, we would basically replace the calls to shmget(), shmdt(), etc. with these system calls. The behavior will be slightly different, however -- I'm not sure if this API supports everything we expect the SysV IPC API to support (e.g. telling the # of clients attached to a given segment). I trust it at least supports inheriting the page mapping over a fork()? Can anyone comment on exactly what functionality we expect when dealing with the storage mechanism of the shared buffer? The only thing we use beyond the obvious "here's some memory accessible by both parent and child processes" is the #-of-clients functionality you mentioned. The reason that that is interesting is it provides a safety interlock against the case where a postmaster has crashed but left child backends running. If a new postmaster is started and starts its own collection of children then we are in very bad hot water, because the old and new backend sets will be modifying the same database files without any mutual awareness or interlocks. This *will* lead to serious, possibly unrecoverable database corruption. The SysV API provides a reliable interlock to prevent this scenario: we read the old shared memory block ID from the old postmaster's postmaster.pid file, and look to see if that block (a) still exists and (b) still has attached processes (presumably backends). If it's gone or has no attached processes, it's safe for the new postmaster to continue startup. I have little love for the SysV shmem API, but I haven't thought of an equivalently reliable interlock for this scenario without it. 
(For example, something along the lines of requiring each backend to write its PID into a file isn't very reliable at all: it leaves a window at each backend start where the backend hasn't yet written its PID, and it increases by a large factor the risk we've already seen wherein stale PID entries in lockfiles might by chance match the PIDs of other, unrelated processes.) Any ideas for better answers? regards, tom lane
Re: [HACKERS] pltcl.so patch
Nigel J. Andrews [EMAIL PROTECTED] writes: Yes, I do get the similar results. A quick investigation shows that the SPI_freetuptable at the end of pltcl_SPI_exec is trying to free a tuptable of value 0x82ebe64 (which looks sensible to me) but which has a memory context of 0x7f7f7f7f (the unallocated marker). Attached is a patch against CVS HEAD which fixes this, I believe. The problem appears to be the newly added free of the tuptable at the end of pltcl_SPI_exec(). I've added a comment to that effect:

/*
 * Do *NOT* free the tuptable here. That's because if the loop
 * body executed any SQL statements, it will have already free'd
 * the tuptable itself, so freeing it twice is not wise. We could
 * get around this by making a copy of SPI_tuptable->vals and
 * feeding that to pltcl_set_tuple_values above, but that would
 * still leak memory (the palloc'ed copy would only be free'd on
 * context reset).
 */

At least, I *think* that's the problem -- I've only been looking at the code for about 20 minutes, so I may be wrong. In any case, this makes both memleak() and memleak(1) work on my machine. Let me know if it works for you, and/or if someone knows of a better solution. I also added some SPI_freetuptable() calls in some places where Nigel didn't, and added some paranoia when dealing with statically sized buffers (snprintf() rather than sprintf(), and so on). I also didn't include Nigel's changes to some apparently unrelated PL/Python stuff -- this patch includes only the PL/Tcl changes. 
Cheers, Neil -- Neil Conway [EMAIL PROTECTED] || PGP Key ID: DB3C29FC

Index: src/pl/tcl/pltcl.c
===================================================================
RCS file: /var/lib/cvs/pgsql-server/src/pl/tcl/pltcl.c,v
retrieving revision 1.62
diff -c -r1.62 pltcl.c
*** src/pl/tcl/pltcl.c	21 Sep 2002 18:39:26 -0000	1.62
--- src/pl/tcl/pltcl.c	25 Sep 2002 04:54:57 -0000
***************
*** 39,50 ****
  #include <tcl.h>
  
- #include <stdio.h>
- #include <stdlib.h>
- #include <stdarg.h>
  #include <unistd.h>
  #include <fcntl.h>
- #include <string.h>
  #include <setjmp.h>
  
  #include "access/heapam.h"
--- 39,46 ----
***************
*** 308,313 ****
--- 304,310 ----
  	 ************************************************************/
  	spi_rc = SPI_exec("select 1 from pg_class where relname = 'pltcl_modules'", 1);
+ 	SPI_freetuptable(SPI_tuptable);
  	if (spi_rc != SPI_OK_SELECT)
  		elog(ERROR, "pltcl_init_load_unknown(): select from pg_class failed");
  	if (SPI_processed == 0)
***************
*** 334,339 ****
--- 331,337 ----
  	if (SPI_processed == 0)
  	{
  		Tcl_DStringFree(&unknown_src);
+ 		SPI_freetuptable(SPI_tuptable);
  		elog(WARNING, "pltcl: Module unknown not found in pltcl_modules");
  		return;
  	}
***************
*** 359,364 ****
--- 357,363 ----
  	}
  	tcl_rc = Tcl_GlobalEval(interp, Tcl_DStringValue(&unknown_src));
  	Tcl_DStringFree(&unknown_src);
+ 	SPI_freetuptable(SPI_tuptable);
  }
***************
*** 955,963 ****
  	 * Build our internal proc name from the functions Oid
  	 ************************************************************/
  	if (!is_trigger)
! 		sprintf(internal_proname, "__PLTcl_proc_%u", fn_oid);
  	else
! 		sprintf(internal_proname, "__PLTcl_proc_%u_trigger", fn_oid);
  
  	/************************************************************
  	 * Lookup the internal proc name in the hashtable
--- 954,964 ----
  	 * Build our internal proc name from the functions Oid
  	 ************************************************************/
  	if (!is_trigger)
! 		snprintf(internal_proname, sizeof(internal_proname),
! 				 "__PLTcl_proc_%u", fn_oid);
  	else
! 		snprintf(internal_proname, sizeof(internal_proname),
! 				 "__PLTcl_proc_%u_trigger", fn_oid);
  
  	/************************************************************
  	 * Lookup the internal proc name in the hashtable
***************
*** 1127,1133 ****
  			prodesc->arg_is_rel[i] = 1;
  			if (i > 0)
  				strcat(proc_internal_args, " ");
! 			sprintf(buf, "__PLTcl_Tup_%d", i + 1);
  			strcat(proc_internal_args, buf);
  			ReleaseSysCache(typeTup);
  			continue;
--- 1128,1134 ----
  			prodesc->arg_is_rel[i] = 1;
  			if (i > 0)
  				strcat(proc_internal_args, " ");
! 			snprintf(buf, sizeof(buf), "__PLTcl_Tup_%d", i + 1);
  			strcat(proc_internal_args, buf);
  			ReleaseSysCache(typeTup);
  			continue;
***************
*** 1140,1146 ****
  			if (i > 0)
  				strcat(proc_internal_args, " ");
! 			sprintf(buf, "%d", i + 1);
  			strcat(proc_internal_args, buf);
  
  			ReleaseSysCache(typeTup);
--- 1141,1147 ----
  			if (i > 0)
  				strcat(proc_internal_args, " ");
! 			snprintf(buf, sizeof(buf), "%d", i + 1);
  			strcat(proc_internal_args, buf);
  
  			ReleaseSysCache(typeTup);
***************
*** 1177,1183 ****
  	{
  		if (!prodesc->arg_is_rel[i])
  			continue;
! 		sprintf(buf, "array set %d $__PLTcl_Tup_%d\n", i + 1, i + 1);
Re: [HACKERS] Postgresql Automatic vacuum
It doesn't have to make its way into the postgresql daemon itself -- in fact since some people like tuning the vacuuming, it makes more sense to make this a daemon. No, my suggestion is simple that some sort of auto-vacuumer be compiled as a stand-alone app and included in the standard postgresql tar.gz file, and the install instructions recommend the site adding it as a cron job. Unless I missed something, the point of a daemon is so that we don't need to use cron. I also think that some type of daemon should be included in the pg sources, and installed with the rest of the system, and if configured to do so, the postmaster launches the auto vac daemon. I think this still makes sense even with the proposed setup (the autovac client is just a special client app).
Re: [HACKERS] Postgresql Automatic vacuum
On 25 Sep 2002 at 1:10, Matthew T. O'Connor wrote: It doesn't have to make its way into the postgresql daemon itself -- in fact since some people like tuning the vacuuming, it makes more sense to make this a daemon. No, my suggestion is simple that some sort of auto-vacuumer be compiled as a stand-alone app and included in the standard postgresql tar.gz file, and the install instructions recommend the site adding it as a cron job. unless I missed something the point of a daemon is so that we don't need to use cron. I also think that some type of daemon should be included in the pg sources, and installed with the rest of the system, and if configured to do so, the postmaster launches the auto vac daemon. I think this still makes sense even with the proposed setup (autovac client is just special client app). I would suggest adding it to pg_ctl. Best place to put it, IMO. I can make an RPM of pgavd available, but with checkinstall I guess people can have more distro-specific RPMs. One RPM is obviously not going to install on all distros. Bye Shridhar -- Moore's Constant: Everybody sets out to do something, and everybody does something, but no one does what he sets out to do.
Re: [HACKERS] Postgresql Automatic vacuum
On Tuesday, 24 September 2002 08:16, Shridhar Daithankar wrote:

> > I will play with it more and give you some more feedback.
> Awaiting that.

IMO there are still several problems with that approach, namely:

* every database gets polluted with the autovacuum table, which is undesirable
* the biggest problem is the ~/.pgavrc file. I think it should work like other postgres utils do, e.g. supporting -U, -d,
* it's not possible to use it without actively administering the config file. It should be able to work without administrator assistance.

When this is a daemon, why not store the data in memory? Even with several thousand tables the memory footprint would still be small. And it should be possible to use it for all databases without modifying a config file.

Two weeks ago I began writing a similar daemon, but have had no time yet to finish it. I've tried to avoid fixed numbers (namely, vacuum a table after 1000 updates) and tried to make my own heuristic based on the statistics data and the size of the table. The reason is that for a large table 1000 entries might be a small percentage and vacuum is not necessary, while for small tables 10 updates might be sufficient.

Best regards,
Mario Weilguni
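[Mario's suggestion of keeping the activity counters in the daemon's memory, instead of a tracking table in every database, might be sketched like this. Python is used only for illustration; the class and field names are invented, not from pgavd.]

```python
from collections import defaultdict


class TableStats:
    """In-memory per-table activity counters for an auto-vacuum daemon.

    Even with several thousand tables the footprint stays small: a few
    dozen bytes per (database, table) key, versus creating and updating
    a bookkeeping table inside every database.
    """

    def __init__(self):
        # (database, table) -> number of rows changed since last vacuum
        self.changes = defaultdict(int)

    def record(self, db, table, n_rows):
        """Accumulate rows inserted/updated/deleted, e.g. from pg_stat."""
        self.changes[(db, table)] += n_rows

    def reset(self, db, table):
        """Clear the counter after the daemon has vacuumed the table."""
        self.changes[(db, table)] = 0
```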
Re: [HACKERS] Postgresql Automatic vacuum
On 24 Sep 2002 at 8:42, Mario Weilguni wrote:

> IMO there are still several problems with that approach, namely:
> * every database gets polluted with the autovacuum table, which is undesirable

I agree, but that was the best alternative I could see (explanation follows). Besides, I didn't want to touch PG metadata.

> * the biggest problem is the ~/.pgavrc file. I think it should work like other postgres utils do, e.g. supporting -U, -d,

Shouldn't be a problem. The config stuff is working and I can add that. I would rather term it a minor issue. As a personal preference, I would just fire it up without any arguments. It's not a thing that you change daily: configure it in the config file and you're done.

> * it's not possible to use it without actively administering the config file. It should be able to work without administrator assistance.

Well, I would call that tuning. Each admin can tune it. Yes, it's an effort, but certainly not active administration.

> When this is a daemon, why not store the data in memory? Even with several thousand tables the memory footprint would still be small. And it should be possible to use it for all databases without modifying a config file.

Well, when postgresql already has the ability to deal with an arbitrary number of rows, it seemed redundant to me to duplicate all that functionality. Why write lists and arrays again and again? Let postgresql do it.

> Two weeks ago I began writing a similar daemon, but have had no time yet to finish it. I've tried to avoid fixed numbers (namely, vacuum a table after 1000 updates) and tried to make my own heuristic based on the statistics data and the size of the table. The reason is that for a large table 1000 entries might be a small percentage and vacuum is not necessary, while for small tables 10 updates might be sufficient.

Well, that fixed number is not really fixed but admin-tunable, and that per database. These are just defaults; tune them to suit your needs.
The objective of the whole exercise is to get rid of periodic vacuum, since this app ties the threshold to activity rather than time. Besides, a table should be vacuumed when it starts affecting performance. On an installation where a table has 1M rows and changing 1K rows affects performance, there will be a similar performance hit for a 1K-row update on a 100K-row table, because the overhead involved would be almost the same. (Not disk space; pgavd does not target vacuum full, but tuple size should matter.) At least methinks so.

I plan to implement per-table thresholds in addition to per-database thresholds, but right now that seems like overhead to me. Besides, there is an item in TODO to shift the unit of work from rows affected to blocks affected. I guess that takes care of some of your points.

Bye
Shridhar

--
Jones' Second Law: The man who smiles when things go wrong has thought of someone to blame it on.
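[The threshold logic being debated, a fixed floor plus a size-dependent component instead of a bare "1000 updates", could be sketched as follows. This is a hedged illustration: the function name and default numbers are invented here, not pgavd's actual settings, though the shape resembles what a size-aware heuristic would need.]

```python
def should_vacuum(n_live_rows, n_changed_rows,
                  base_threshold=1000, scale_factor=0.2):
    """Decide whether a table is due for vacuuming.

    The trigger is a fixed floor (base_threshold) plus a fraction of
    the table's current size (scale_factor * n_live_rows). This way a
    1M-row table is not vacuumed after a negligible 1000-row churn,
    while a small, hot table still gets vacuumed promptly.
    """
    return n_changed_rows > base_threshold + scale_factor * n_live_rows
```

With the defaults above, 1000 changed rows never trigger a vacuum on a 1M-row table, but 2000 changed rows on a 2000-row table do.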
Re: [HACKERS] Web site
On Tue, 24 Sep 2002, Gavin Sherry wrote:

> Hi all,
> It occurs to me that opening a web page on www.postgresql.org that asks the user to select a mirror is rather unprofessional. I am sure this has been discussed before, but I thought I would bring it up again anyway.

Your point?

> So, why not just redirect people to one of the mirrors listed? This could be done based on IP (yes, it is inaccurate, but it is close enough and has the same net effect: pushing people off the main web server), or it could be done by simply redirecting to a random mirror.

Been there, done that, didn't work. Too much of a job to keep track of that many IP blocks, too.

Vince.

--
== Vince Vielhaber -- KA8CSH  email: [EMAIL PROTECTED]  http://www.pop4.net
56K Nationwide Dialup from $16.00/mo at Pop4 Networking
http://www.camping-usa.com http://www.cloudninegifts.com
http://www.meanstreamradio.com http://www.unknown-artists.com ==
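[Gavin's random-mirror variant avoids the IP-block bookkeeping Vince describes entirely: the redirector only needs the mirror list. A sketch (the mirror URLs are placeholders, not the project's real mirror list):]

```python
import random

# Placeholder mirror list; a real deployment would load the
# maintained mirror roster instead.
MIRRORS = [
    "http://www.us.postgresql.org/",
    "http://www.de.postgresql.org/",
    "http://www.au.postgresql.org/",
]


def pick_mirror(rng=random):
    """Choose a mirror uniformly at random for an HTTP redirect.

    No per-country IP tables to maintain; load spreads across the
    mirrors in expectation, pushing traffic off the main server.
    """
    return rng.choice(MIRRORS)
```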
Re: [HACKERS] Problem on PG7.2.2
At 10.40 23/09/02 -0400, you wrote:

> Roberto Fichera [EMAIL PROTECTED] writes:
> > database=# select count(*) from detail;
> >  count
> > --------
> >  181661
> > (1 row)
> >
> > database=# select count(*) from detail;
> >  count
> > --------
> >  181660
> > (1 row)
> >
> > database=# select count(*) from detail;
> > FATAL 2: open of /var/lib/pgsql/data/pg_clog/0303 failed: No such file or directory
>
> [ blinks... ] That's with no one else modifying the table meanwhile?
> I think you've got *serious* hardware problems. Hard to tell if it's disk or memory, but get out those diagnostic programs now ...

This table is used to hold all the logs for our Radius authentication statistics, so we have only INSERTs from the radiusd server. I have had no problems at all: no crash, no panic, nothing.

database=# \d ts
       Table "ts"
 Column |     Type      | Modifiers
--------+---------------+-----------
 name   | character(15) |
 ip_int | cidr          | not null
Primary key: ts_pkey

database=# DROP TABLE TS;
ERROR: cannot find attribute 1 of relation ts_pkey
database=# DROP INDEX TS_PKEY;
ERROR: cannot find attribute 1 of relation ts_pkey
database=#

and again:

[root@foradada pgsql]# pg_dump -d -f detail.sql -t detail database
pg_dump: dumpClasses(): SQL command failed
pg_dump: Error message from server: FATAL 2: open of /var/lib/pgsql/data/pg_clog/0202 failed: No such file or directory
server closed the connection unexpectedly
        This probably means the server terminated abnormally before or while processing the request.
pg_dump: The command was: FETCH 100 FROM _pg_dump_cursor
[root@foradada pgsql]# ls -al
totale 10464
drwx------    4 postgres postgres     4096 set 23 17:44 .
drwxr-xr-x   14 root     root         4096 set 23 13:06 ..
drwx------    2 postgres postgres     4096 ago 26 20:13 backups
-rw-------    1 postgres postgres     5519 set  5 00:53 .bash_history
-rw-r--r--    1 postgres postgres      107 ago 26 20:13 .bash_profile
drwx------    6 postgres postgres     4096 set 23 18:18 data
-rw-r--r--    1 root     root      6221242 set 24 12:04 detail.sql
-rw-r--r--    1 root     root          157 giu 25 14:43 initdb.i18n
-rw-------    1 postgres postgres    10088 set  5 00:14 .psql_history
[root@foradada pgsql]#

Roberto Fichera.