Re: [HACKERS] Pg7.1beta3: connect failed: The DB System is starting up.
> With Apache mod_perl, Apache::DBI, and a stress test with apache bench (ab -n 10 -c 4), in the apache error_log I've got:
>
> [Pg7.1beta3 with standard conf files.]

And how many simultaneous connections did you use?

> ...
> [Fri Jan 12 07:48:58 2001] [error] DBI->connect(dbname=mydb) failed: The Data Base System is starting up
>
> Also messages: "DB in recovery ...".

Looks like the server crashed and is now in recovery.

Vadim
[HACKERS] AW: AW: AW: AW: Re: tinterval - operator problems on AIX
> > > How about having some #if BROKEN_TIMEZONE_DATABASE code which uses both
> > > mktime() and localtime() to derive the correct time zone? That is, call
> > > mktime() to get a time_t, then call localtime() to get the time zone info,
> > > but only on platforms which do not get a complete result from just the
> > > call to mktime(). In fact, we *could* check for tm->tm_isdst coming back
> > > "-1" for every platform, then call localtime() to make a last stab at
> > > getting a good value.
>
> How would we construct a valid time_t from the struct tm without mktime?
>
> If I understand the info you have given previously, it should be
> possible to get a valid tm->tm_isdst by the following sequence of calls:
>
> // call mktime(), which might return a "-1" for DST
> time = mktime(tm);
> // time is now a correct GMT time

Unfortunately the returned time is -1 for all dates before 1970 (on AIX and, as I understood, IRIX) :-(

(Can you send someone from IBM, that I can shout at, to relieve my anger?)

Andreas
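For illustration, here is a minimal, self-contained sketch of the fallback described in the quoted suggestion: call mktime() to normalize the struct tm and, on platforms that leave the DST flag undetermined, take a second stab with localtime() on the time_t that mktime() produced. This is only a sketch under that assumption; the names normalize_tm and BROKEN_TIMEZONE_DATABASE are hypothetical, not actual PostgreSQL code.

#include <stdio.h>
#include <time.h>

/*
 * Hypothetical sketch of the #if BROKEN_TIMEZONE_DATABASE idea: let
 * mktime() normalize the struct tm, and if the platform leaves tm_isdst
 * undetermined (-1), re-derive the fields via localtime() on the
 * resulting time_t.
 */
static int
normalize_tm(struct tm *tm)
{
    time_t  t;

    tm->tm_isdst = -1;          /* let mktime() try to determine DST */
    t = mktime(tm);
    if (t == (time_t) -1)
        return -1;              /* out of range for this platform (e.g. pre-1970 on AIX) */

#ifdef BROKEN_TIMEZONE_DATABASE
    if (tm->tm_isdst < 0)
    {
        /* mktime() gave no DST info; take a last stab with localtime() */
        struct tm *fixed = localtime(&t);

        if (fixed != NULL)
            *tm = *fixed;
    }
#endif
    return 0;
}

int
main(void)
{
    struct tm tm = {0};

    tm.tm_year = 96;            /* 1996 */
    tm.tm_mon = 11;             /* December */
    tm.tm_mday = 30;
    tm.tm_hour = 17;
    tm.tm_min = 32;
    tm.tm_sec = 1;

    if (normalize_tm(&tm) == 0)
        printf("isdst = %d, wday = %d, yday = %d\n",
               tm.tm_isdst, tm.tm_wday, tm.tm_yday);
    return 0;
}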
Re: [HACKERS] Pg7.1beta3: connect failed: The DB System is starting up.
>From: "Vadim Mikheev"
>To: "Valter Mazzola"
>Subject: Re: [HACKERS] Pg7.1beta3: connect failed: The DB System is starting up.
>Date: Fri, 12 Jan 2001 00:36:55 -0800
>
> > With Apache mod_perl, Apache::DBI, and a stress test with apache bench (ab -n 10 -c 4), in the apache error_log I've got:
> >
> > [Pg7.1beta3 with standard conf files.]
>
>And how many simultaneous connections did you use?

About 10 connections.

valter

> > ...
> > [Fri Jan 12 07:48:58 2001] [error] DBI->connect(dbname=mydb) failed: The Data Base System is starting up
> >
> > Also messages: "DB in recovery ...".
>
>Looks like the server crashed and is now in recovery.
>
>Vadim
RE: AW: [HACKERS] Re: GiST for 7.1 !!
On Thu, 11 Jan 2001, Mikheev, Vadim wrote:

> > Um, you do realize that a contrib module that gets used as part of the
> > regress tests may as well be mainstream? At least in terms of the
> > portability requirements it will have to meet?
> >
> > I'm unhappy again. Bad enough we accepted a new feature during beta;
> > now we're going to expect an absolutely virgin contrib module to work
> > everywhere in order to pass regress tests?
>
> Oops, agreed.
> And I fear that in the current code there is not a single GiST index
> implementation -:( Should we worry about regress tests? -:)

Yes, we had to write a contrib module even to test GiST. People,
I'm really confused after reading all of these messages.
GiST is just an interface, and to test any interface you need two sides.
In the current code there is only one side; the old GiST code lived
untested for years. What's the problem? It's mostly a problem of the
current regression test.
OK, we could rewrite R-Tree to use GiST and make a regression test which
will not make people nervous, but that is certainly not for 7.1 and most
probably not by us. The author of R-Tree could write this easily.
I read Bruce's interview and was really relaxed -
everything is going so well. Bruce, we need your opinion.

Oleg

> Vadim

Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83
Re: AW: [HACKERS] Re: tinterval - operator problems on AIX
Thomas Lockhart writes:
> I haven't yet actually fixed the code, but will post patches when
> I've done so (assuming that a fix is possible).

The normalization in this example program, which subtracts 34 years, seems to work OK. I've run it on AIX, IRIX, Linux and Solaris. Some examples follow.

AIX:

$ TZ=PST8PDT ago34 851995921
Local Mon Dec 30 17:32:01 1996 PST PST
      1996-12-30T17:32:01, wday=1, yday=364, isdst = 0
  UTC Tue Dec 31 01:32:01 1996 PST PST
      1996-12-31T01:32:01, wday=2, yday=365, isdst = 0
Local Sun Dec 30 17:32:01 1962 PST PST
      1962-12-30T17:32:01, wday=0, yday=363, isdst = 0
  UTC Mon Dec 31 01:32:01 1962 PST PST
      1962-12-31T01:32:01, wday=1, yday=364, isdst = 0

Linux:

$ TZ=America/Los_Angeles ago34 426475921
Local Thu Jul 07 18:32:01 1983 PDT -0700
      1983-07-07T18:32:01, wday=4, yday=187, isdst = 1
  UTC Fri Jul 08 01:32:01 1983 GMT +
      1983-07-08T01:32:01, wday=5, yday=188, isdst = 0
Local Thu Jul 07 18:32:01 1949 PST -0800
      1949-07-07T18:32:01, wday=4, yday=187, isdst = 0
  UTC Fri Jul 08 02:32:01 1949 GMT +
      1949-07-08T02:32:01, wday=5, yday=188, isdst = 0

Here is the program. The call to localtime(&t_ago) is redundant and hence the adjustment of t_ago can be skipped; it is in this program as a sanity check. As it stands, this program assumes that the input and resulting date are in the usual UNIX range of [1901, 2038]. I presume that there is code in place that checks the range of dates.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

void print_date(const char *label, const struct tm *tm);
int date_equals(const struct tm *a, const struct tm *b);

int main(int argc, char **argv)
{
    time_t t_in = 0, t_ago;
    struct tm local, utc, local2;

    if (argc == 2)
    {
        t_in = (time_t) strtol(argv[1], NULL, 0);
    }

    local = *localtime(&t_in);
    print_date("Local", &local);
    utc = *gmtime(&t_in);
    print_date("UTC", &utc);

    /* subtract an interval of 34 years */
    local.tm_year -= 34;

    /* Normalize */
    local.tm_isdst = -1;
    t_ago = mktime(&local);
    if (t_ago == -1 && local.tm_isdst == -1)
    {
        int n = (70 - local.tm_year + 27) / 28;

        local.tm_year += n * 28;
        t_ago = mktime(&local);
        if (t_ago == -1 && local.tm_isdst == -1)
        {
            printf("Warning, time may be wrong\n");   /* or call elog() */
        }
        local.tm_year -= n * 28;
        t_ago -= (365*4+1)*7*24*60*60*n;   /* assumes 1901 <= year <= 2099 */
    }

    print_date("Local", &local);

    local2 = *localtime(&t_ago);   /* this should be redundant */
    if (!date_equals(&local, &local2))
    {
        print_date("ERROR", &local2);
    }

    utc = *gmtime(&t_ago);
    print_date("UTC", &utc);

    return EXIT_SUCCESS;
}

void print_date(const char *label, const struct tm *tm)
{
    char buffer[80];

    strftime(buffer, sizeof buffer, "%a %b %d %H:%M:%S %Y %Z %z", tm);
    printf("%5s %s\n", label, buffer);
    printf("      %4d-%02d-%02dT%02d:%02d:%02d, wday=%d, yday=%d, isdst = %d\n",
           1900 + tm->tm_year, 1 + tm->tm_mon, tm->tm_mday,
           tm->tm_hour, tm->tm_min, tm->tm_sec,
           tm->tm_wday, tm->tm_yday, tm->tm_isdst);
}

/* return 1 if all fields of a are equal to those of b, otherwise 0 */
int date_equals(const struct tm *a, const struct tm *b)
{
    return a->tm_year == b->tm_year &&
           a->tm_mon == b->tm_mon &&
           a->tm_mday == b->tm_mday &&
           a->tm_hour == b->tm_hour &&
           a->tm_min == b->tm_min &&
           a->tm_sec == b->tm_sec &&
           a->tm_wday == b->tm_wday &&
           a->tm_yday == b->tm_yday &&
           a->tm_isdst == b->tm_isdst;
}

-- 
Pete Forman                       -./\.-  Disclaimer: This post is originated
WesternGeco                         -./\.-  by myself and does not represent
[EMAIL PROTECTED]                    -./\.-  opinion of Schlumberger, Baker
http://www.crosswinds.net/~petef      -./\.-  Hughes or their divisions.
AW: [HACKERS] CRCs (was Re: [GENERAL] Re: Loading optimization)
> A disk-block CRC would detect partially written blocks (ie, power drops
> after disk has written M of the N sectors in a block). The disk's own
> checks will NOT consider this condition a failure.

But physical log recovery will rewrite every page that was changed after the last checkpoint, so this is not an issue anymore.

> I'm not convinced
> that WAL will reliably detect it either (Vadim?). Certainly WAL will
> not help for corruption caused by external agents, away from any updates
> that are actually being performed/logged.

The external agent (if malevolent) could write a correct CRC anyway. If, on the other hand, the agent writes complete garbage, vacuum will notice.

Andreas
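As an aside for readers following the CRC argument, here is a toy sketch of the per-block CRC idea: the writer stores a checksum of the page contents, and a reader that recomputes it can detect a torn write that the disk's own per-sector checks would not flag. The block layout, field names and the use of CRC-32 here are purely illustrative assumptions; this is not PostgreSQL's actual page format or checksum.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLCKSZ 8192             /* illustrative block size */

/* Plain bitwise CRC-32 (reflected, polynomial 0xEDB88320); slow but simple. */
static uint32_t
crc32_buf(const unsigned char *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    size_t   i;
    int      bit;

    for (i = 0; i < len; i++)
    {
        crc ^= buf[i];
        for (bit = 0; bit < 8; bit++)
            crc = (crc & 1) ? (crc >> 1) ^ 0xEDB88320u : (crc >> 1);
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Hypothetical block: the first 4 bytes hold the CRC of the remaining payload. */
typedef struct
{
    uint32_t      crc;
    unsigned char data[BLCKSZ - sizeof(uint32_t)];
} DiskBlock;

static void
block_set_crc(DiskBlock *blk)
{
    blk->crc = crc32_buf(blk->data, sizeof(blk->data));
}

static int
block_crc_ok(const DiskBlock *blk)
{
    return blk->crc == crc32_buf(blk->data, sizeof(blk->data));
}

int
main(void)
{
    static DiskBlock blk;

    memset(blk.data, 0xAB, sizeof(blk.data));
    block_set_crc(&blk);

    /* Simulate a torn write: only the first half of the page hit the disk. */
    memset(blk.data + 4096, 0, sizeof(blk.data) - 4096);

    printf("CRC check %s\n",
           block_crc_ok(&blk) ? "passed" : "failed: torn page detected");
    return 0;
}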
RE: AW: [HACKERS] Re: GiST for 7.1 !!
OK. We found an old implementation of R-Tree using GiST (Pg95) and we'll try to implement a regression test using R-Tree; it will be a good test anyway.

Regards,
Oleg

On Fri, 12 Jan 2001, Oleg Bartunov wrote:

> On Thu, 11 Jan 2001, Mikheev, Vadim wrote:
>
> > > Um, you do realize that a contrib module that gets used as part of the
> > > regress tests may as well be mainstream? At least in terms of the
> > > portability requirements it will have to meet?
> > >
> > > I'm unhappy again. Bad enough we accepted a new feature during beta;
> > > now we're going to expect an absolutely virgin contrib module to work
> > > everywhere in order to pass regress tests?
> >
> > Oops, agreed.
> > And I fear that in the current code there is not a single GiST index
> > implementation -:( Should we worry about regress tests? -:)
>
> Yes, we had to write a contrib module even to test GiST. People,
> I'm really confused after reading all of these messages.
> GiST is just an interface, and to test any interface you need two sides.
> In the current code there is only one side; the old GiST code lived
> untested for years. What's the problem? It's mostly a problem of the
> current regression test.
> OK, we could rewrite R-Tree to use GiST and make a regression test which
> will not make people nervous, but that is certainly not for 7.1 and most
> probably not by us. The author of R-Tree could write this easily.
> I read Bruce's interview and was really relaxed -
> everything is going so well. Bruce, we need your opinion.
>
> Oleg
>
> > Vadim
>
> Regards,
> Oleg
> _
> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
> Sternberg Astronomical Institute, Moscow University (Russia)
> Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
> phone: +007(095)939-16-83, +007(095)939-23-83

Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83
[HACKERS] pgaccess: russian fonts && SQL window???
Two questions about pgaccess. I use Tcl/Tk 8.2, PostgreSQL 7.1, FreeBSD 4.0.

1. I cannot view Russian text when I use pgaccess. I set all the fonts in 'Preferences' to -cronyx-helvetica-*-*-*-*-17-*-*-*-*-*-koi8-*, but I don't see Russian letters in the 'tables' and other windows. The texts really are in Russian; DBENCODING is KOI8.

2. How do I use the 'SQL window'? I can type text, but no more. How can I send my queries to the server?

Thanks for any help.

-- 
Anatoly K. Lasareff        Email: [EMAIL PROTECTED]
http://tolikus.hq.aaanet.ru:8080        Phone: (8632)-710071
Re: [HACKERS] Suggested fix for pg_dump
At 13:29 7/01/01 -0500, Tom Lane wrote:
>The Hermit Hacker <[EMAIL PROTECTED]> writes:
>> Essentially, worst case scenario, we are going from 'broken->broken' ...
>
>No, I don't think so. The current pg_dump code is only broken if
>you've renamed a column involved in a foreign-key dependency (if I
>understood the thread correctly). But Philip is proposing to change
>pg_dump to rely on alter table add constraint for *all* PRIMARY KEY
>constructs.

I've got a version of this working now, but it raises a further question: should I also put the CHECK constraints into ALTER TABLE statements? Eventually they should go there, but maybe not for 7.1, given the concerns over ALTER TABLE ADD CONSTRAINT.

Reasons for:
- It's nicer
- It's more consistent with PK
- Can use pg_restore to restore tables without constraints

Reasons against:
- ALTER TABLE ADD CONSTRAINT is untested
- We're in beta, and the code ain't broke

I suspect the last point says it all...

Philip Warner
Albatross Consulting Pty. Ltd.  (A.B.N. 75 008 659 498)
Tel: (+61) 0500 83 82 81   Fax: (+61) 0500 83 82 82
Http://www.rhyme.com.au
PGP key available upon request, and from pgp5.ai.mit.edu:11371
Re: [HACKERS] Suggested fix for pg_dump
At 00:03 13/01/01 +1100, Philip Warner wrote:
>At 13:29 7/01/01 -0500, Tom Lane wrote:
>>The Hermit Hacker <[EMAIL PROTECTED]> writes:
>>> Essentially, worst case scenario, we are going from 'broken->broken' ...
>>
>>No, I don't think so. The current pg_dump code is only broken if
>>you've renamed a column involved in a foreign-key dependency (if I
>>understood the thread correctly). But Philip is proposing to change
>>pg_dump to rely on alter table add constraint for *all* PRIMARY KEY
>>constructs.
>
>I've got a version of this working now, but it raises a further question:

Apart from the fact that

    ALTER TABLE ADD CONSTRAINT name PRIMARY KEY(fields);

is not supported in 7.1. Oh well.

Philip Warner
Albatross Consulting Pty. Ltd.  (A.B.N. 75 008 659 498)
Tel: (+61) 0500 83 82 81   Fax: (+61) 0500 83 82 82
Http://www.rhyme.com.au
PGP key available upon request, and from pgp5.ai.mit.edu:11371
RE: AW: [HACKERS] Re: GiST for 7.1 !!
On Fri, 12 Jan 2001, Oleg Bartunov wrote:

> On Thu, 11 Jan 2001, Mikheev, Vadim wrote:
>
> > > Um, you do realize that a contrib module that gets used as part of the
> > > regress tests may as well be mainstream? At least in terms of the
> > > portability requirements it will have to meet?
> > >
> > > I'm unhappy again. Bad enough we accepted a new feature during beta;
> > > now we're going to expect an absolutely virgin contrib module to work
> > > everywhere in order to pass regress tests?
> >
> > Oops, agreed.
> > And I fear that in the current code there is not a single GiST index
> > implementation -:( Should we worry about regress tests? -:)
>
> Yes, we had to write a contrib module even to test GiST. People,
> I'm really confused after reading all of these messages.
> GiST is just an interface, and to test any interface you need two sides.
> In the current code there is only one side; the old GiST code lived
> untested for years. What's the problem? It's mostly a problem of the
> current regression test.
> OK, we could rewrite R-Tree to use GiST and make a regression test which
> will not make people nervous, but that is certainly not for 7.1 and most
> probably not by us. The author of R-Tree could write this easily.

The problem is that there is an inadequate amount of time to fully test the regression tests among all of the platforms that we support ... therefore, we can't test GiST among all of the platforms we support ... for instance, can you guarantee that _int.c will compile on *every* platform that PostgreSQL supports? That it will operate properly? Have you tested that?
Re: [HACKERS] Pg7.1beta3: connect failed: The DB System is starting up.
On Fri, 12 Jan 2001, Vadim Mikheev wrote: > > With Apache Mod Perl, Apache::DBI, stress test with apache bench (ab -n > > 10 -c 4) in apache error_log i've got: > > > > [Pg7.1beta3 with standard conf files.] > > And how many simult connections you did? > > > .. > > [Fri Jan 12 07:48:58 2001] [error] DBI->connect(dbname=mydb) failed: The > > Data Base System is starting up > > > > > > Also messages: "DB in recovery ...". > > Looks like server was crashed and now is in recovery. I'm confused about this "recovery" thing ... is it supposed to eventually start working again? I've had it happen in the past, and the only way to get things started again is to kill off the postmaster and restart it :(
Re: AW: [HACKERS] Re: GiST for 7.1 !!
Tom Lane wrote:
>
> Um, you do realize that a contrib module that gets used as part of the
> regress tests may as well be mainstream? At least in terms of the
> portability requirements it will have to meet?

_If_ we want to have a tested GiST (and not the "probably works but really has some nasty known bugs" one) we need to write _tests_. To test it we need something that makes use of it. As the only things that make use of it are extensions, we need to make use of them in tests.

So I propose the following:

1. Keep the fixed (new) gist.c in the main codebase
2. Make use of the RD-index and/or Gene's tests in contrib in regression tests
3. Tell people beforehand that it is not the end of the world if GiST _tests_ fail on their platform

> I'm unhappy again. Bad enough we accepted a new feature during beta;
> now we're going to expect an absolutely virgin contrib module to work
> everywhere in order to pass regress tests?

There can always be "expected" discrepancies in regress tests, and a failing GiST test just tells people that if they want to use GiST on their platform they must probably fix things in the core code as well. Currently they have to find that out the hard way - first a lot of work trying to "fix" their own code, and only then the bright idea that maybe it is actually broken in the core.

IMHO, giving out real test results, even negative ones, instead of leaving things untested would be the honest thing to do.

-
Hannu
Re: AW: [HACKERS] Re: GiST for 7.1 !!
Oleg Bartunov wrote:
>
> OK. We found an old implementation of R-Tree using GiST (Pg95)
> and we'll try to implement a regression test using R-Tree;
> it will be a good test anyway.

How is it different from using the RD-tree for tests?

Can you do it using already compiled-in functions, modifying things only at the SQL level?

Or is it just much simpler?

-
Hannu
Re: AW: [HACKERS] Re: GiST for 7.1 !!
On Fri, 12 Jan 2001, Hannu Krosing wrote:

> Oleg Bartunov wrote:
> >
> > OK. We found an old implementation of R-Tree using GiST (Pg95)
> > and we'll try to implement a regression test using R-Tree;
> > it will be a good test anyway.
>
> How is it different from using the RD-tree for tests?

No difference at all! It's just another implementation of R-Tree.

> Can you do it using already compiled-in functions, modifying
> things only at the SQL level?

Unfortunately not! The current postgres code has nothing connected with GiST, and this is a problem! How do you test interface code without having two sides? I understand we don't want to have another reason for complaints about a non-working regression test. I never got the regression test to pass 100% on my Linux box with almost all versions of PostgreSQL, but I could live with that. What's wrong with a warning message if the GiST test doesn't pass?

> Or is it just much simpler?

I'm interested in testing the performance of the built-in R-Tree versus R-Tree + GiST.

Oleg

> -
> Hannu

Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83
Re: AW: [HACKERS] Re: tinterval - operator problems on AIX
> Here is the program. The call to localtime(&t_ago) is redundant and
> hence the adjustment of t_ago can be skipped. It is in this program
> as a sanity check.
> As it stands, this program assumes that the input and resulting date
> are in the usual UNIX range of [1901, 2038]. I presume that there is
> code in place that checks the range of dates.

Interesting idea. I'm not sure that assuming that timezones from 1943 are the same as timezones from 2013 (they are not, at least in the US) is any more valid than just accepting the result from your system. I'd like to explore more possibilities before we settle on a solution.

Perhaps I should just add checks to assume an unspecified time zone wrt output formatting if the tm_isdst flag comes back as "-1"? I'll have to look at the ramifications for input times and for dump/restore operations.

Does your system respect the TZ or PGTZ environment variable?

- Thomas
Re: AW: [HACKERS] Re: GiST for 7.1 !!
Oleg Bartunov <[EMAIL PROTECTED]> writes:
> What's wrong with a warning message if the GiST test doesn't pass?

You're being *way* too optimistic. An output discrepancy in a test of GIST we could live with. But think about other scenarios:

1. GIST test coredumps on some platforms. This corrupts other tests (at least through the "system is starting up" failure mode), thus masking problems that we actually care about.

2. GIST test code does not compile on some platforms, causing "make check" to fail completely.

At this point my vote is to leave the GIST test in contrib for 7.1. Anyone who actually cares about GIST (to be blunt: all three of you) can run it as a separate step. I don't want it in the standard regress tests until 7.2, when we will have a reasonable amount of time to test and debug the test.

regards, tom lane
[HACKERS] Re: AW: Re: GiST for 7.1 !!
> IMHO, giving out real test results, even negative ones, instead of leaving
> things untested would be the honest thing to do.

afaict there are several concerns or decisions, and we've made a few already:

Re: gist.c patches...

1) Oleg and Hannu are committed to testing the repaired GiST as soon as it is in the main tree. They are both testing already with the patched version.

2) They will try to contact Gene to encourage testing with Gene's application, though they have no reason to suspect from their own testing that Gene's stuff will break.

3) There is a consensus that the gist.c patches should appear in the 7.1 release, to allow useful work with GiST and to enable further development. So it is OK to commit the gist.c patches based on Oleg's and Hannu's existing and future test plan.

Re: regression tests...

4) We all would like to see some regression tests of GiST. Tom Lane has (rightly) expressed concern over unforeseen breakage in the regression flow when done on other platforms.

5) Oleg has some regression-capable test code available for contrib, but has indicated that fully (re)writing the regression tests will take too much time.

6) We have at least two committed testers for the 7.1 release for the GiST features. That is two more than we've ever had before (afair Gene didn't participate in the end-stage beta cycle, but I may not be remembering correctly), so the risks that something is not right are greatly reduced, to below the risks of same on the day of release for previous versions.

8) Additional regression testing is required asap, but may not be allowed into the default 7.1 test sequence.

How about adding an optional test a la "bigtest" for GiST for this release? It could go mainstream for 7.1.x or for 7.2 as we get more experience with it. This is just a suggestion and I'm sure there are other possibilities. I'm pretty sure we agree on most of points 1-8, and that 1-3 are resolved.

Comments?

- Thomas
Re: AW: [HACKERS] Re: GiST for 7.1 !!
On Fri, 12 Jan 2001, Oleg Bartunov wrote:

> On Fri, 12 Jan 2001, Hannu Krosing wrote:
>
> > Oleg Bartunov wrote:
> > >
> > > OK. We found an old implementation of R-Tree using GiST (Pg95)
> > > and we'll try to implement a regression test using R-Tree;
> > > it will be a good test anyway.
> >
> > How is it different from using the RD-tree for tests?
>
> No difference at all! It's just another implementation of R-Tree.
>
> > Can you do it using already compiled-in functions, modifying
> > things only at the SQL level?
>
> Unfortunately not! The current postgres code has nothing connected with
> GiST, and this is a problem! How do you test interface code without
> having two sides? I understand we don't want to have another reason
> for complaints about a non-working regression test. I never got the
> regression test to pass 100% on my Linux box with almost all versions
> of PostgreSQL, but I could live with that. What's wrong with a
> warning message if the GiST test doesn't pass?

It has *nothing* to do with passing or not, it has to do with the timing of the patches ... had they come in before we went beta, this would all have been a no-brainer ... because they didn't, the problem arises ... GiST changes are included ... testing of GiST changes isn't integrated ... can we *please* drop this whole thing already, as it's really detracting from getting *real* work done with very little, to no, benefit ...
Beta4 for GiST? (Was: Re: AW: [HACKERS] Re: GiST for 7.1 !! )
On Fri, 12 Jan 2001, Tom Lane wrote:

> Oleg Bartunov <[EMAIL PROTECTED]> writes:
> > What's wrong with a warning message if the GiST test doesn't pass?
>
> You're being *way* too optimistic. An output discrepancy in a test of
> GIST we could live with. But think about other scenarios:
>
> 1. GIST test coredumps on some platforms. This corrupts other tests
> (at least through the "system is starting up" failure mode), thus
> masking problems that we actually care about.
>
> 2. GIST test code does not compile on some platforms, causing "make check"
> to fail completely.
>
> At this point my vote is to leave the GIST test in contrib for 7.1.
> Anyone who actually cares about GIST (to be blunt: all three of you)
> can run it as a separate step. I don't want it in the standard regress
> tests until 7.2, when we will have a reasonable amount of time to test
> and debug the test.

Agreed ... now let's move on to more important things, because we've spent much too long on this as it is ...

Namely, should we bundle up a beta4 this weekend, so that the GiST changes are in place for further testing, or hold off for ... ?
Re: [HACKERS] Bruce Momjian's interview in LWN.
Bruce Momjian wrote: > I announced this on Announce/General a few hours ago. > I wanted to mention that all general PostgreSQL news goes to those two > lists, on the assumption that all people are subscribed to either of > those two lists. > I don't post to hackers by default because I don't want to duplicate > these postings. Sorry to duplicate, but I had not received the post to general or announce (to both of which I am subscribed) before posting. But I will keep in mind the new postings. I also put it on the 'In The News' page on the website (thanks Vince). Now that I look through my inbox, I don't see the post anywhere. Hmmm Not in trash either, which I didn't empty yesterday. -- Lamar Owen WGCR Internet Radio 1 Peter 4:11
[HACKERS] Re: AW: Re: GiST for 7.1 !!
On Fri, 12 Jan 2001, Thomas Lockhart wrote: > How about adding an optional test a la "bigtest" for GiST for this > release? It could go mainstream for 7.1.x or for 7.2 as we get more > experience with it. This is just a suggestion and I'm sure there are > other possibilities. I'm pretty sure we agree on most of points 1-8, and > that 1-3 are resolved. Comments? make GIST_TEST=yes to include GiST testing would be cool, if it can be done ... this way Tom's worry about non-GiST users having bad regress tests is appeased ... but I do agree with Tom that mainstreaming the GiST testing would be a bad idea ... if we could somehow include it as an optional test (as you say, ala bigtest), then, if nothing else, it saves having to cd to the contrib directory and run it there ... ala one stop shopping ... *But*, for the 3 ppl we've pointed out as users of GiST, this is definitely not a priority issue ... if we can do it, great, if not, no sweat either ...
[HACKERS] Re: AW: Re: GiST for 7.1 !!
Thomas Lockhart <[EMAIL PROTECTED]> writes: > How about adding an optional test a la "bigtest" for GiST for this > release? We could do that, but it seems like rather pointless effort, compared to just telling people "go run the tests in these contrib modules if you want to test GIST". I have no objection to fully integrating some GIST test(s) for 7.2. I just don't want to deal with it at this late stage of the 7.1 cycle. We have a long list of considerably more mainstream to-do items yet to deal with ... regards, tom lane
Re: Beta4 for GiST? (Was: Re: AW: [HACKERS] Re: GiST for 7.1 !! )
The Hermit Hacker <[EMAIL PROTECTED]> writes: > Namely, should we bundle up a beta4 this weeekend, so that the GiST > changes are in place for further testing, or hold off for ... ? First I'd like to finish a couple of open items I have, like fixing the CRIT_SECTION code so that SIGTERM response will not occur when we are holding a spinlock. Should be able to get this stuff done in a day or two, if I quit arguing about GIST and get back to work... regards, tom lane
Re: AW: [HACKERS] Re: tinterval - operator problems on AIX
Thomas Lockhart writes:
> > Here is the program. The call to localtime(&t_ago) is redundant
> > and hence the adjustment of t_ago can be skipped. It is in this
> > program as a sanity check.
> > As it stands, this program assumes that the input and resulting
> > date are in the usual UNIX range of [1901, 2038]. I presume that
> > there is code in place that checks the range of dates.
>
> Interesting idea. I'm not sure that assuming that timezones from
> 1943 are the same as timezones from 2013 (they are not, at least in
> the US) is any more valid than just accepting the result from your
> system. I'd like to explore more possibilities before we settle on
> a solution.

As far as AIX and IRIX are concerned the timezones _are_ the same. No variation of rules from year to year is possible. You are not going to work out DST rules for earlier years without incorporating third-party libraries. As I understand it, PostgreSQL undertakes to calculate dates only as accurately as the underlying OS allows.

The result of mktime for year < 1970 is always -1 and the struct tm is not normalized; tm_isdst, tm_wday and tm_yday are not calculated. I can't see that being a useful result.

> Perhaps I should just add checks to assume an unspecified time zone
> wrt output formatting if the tm_isdst flag comes back as "-1"?

I'm letting the system functions worry about applying time zone and DST offsets. At no point do I use tm_isdst, except to set it to and test it for -1.

Thinking about that a bit more, I think that tm_isdst should not be written into. It would be better to set, say, tm_wday to -1 and change the test for failure to (t_ago == -1 && local.tm_wday == -1). tm_isdst should be left as 0 or 1 to help out around the times of transition to or from DST.

> I'll have to look at the ramifications for input times and for
> dump/restore operations. Does your system respect the TZ or PGTZ
> environment variable?

My code uses localtime and mktime, which depend on TZ. There is no dependency on PGTZ, unless somewhere else in postgres there is an equivalent of setenv(TZ=getenv(PGTZ)).

-- 
Pete Forman                       -./\.-  Disclaimer: This post is originated
WesternGeco                         -./\.-  by myself and does not represent
[EMAIL PROTECTED]                    -./\.-  opinion of Schlumberger, Baker
http://www.crosswinds.net/~petef      -./\.-  Hughes or their divisions.
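A small sketch of the alternative described just above: leave tm_isdst as supplied (0 or 1), and instead poison tm_wday, which mktime() always recomputes on success, as the failure sentinel. The helper name mktime_checked and the sample date are assumptions for illustration only.

#include <stdio.h>
#include <time.h>

/*
 * Alternative failure test: instead of forcing tm_isdst to -1 (which
 * throws away the caller's DST hint around the autumn transition), set
 * tm_wday to -1 before the call.  mktime() recomputes tm_wday on success,
 * so if it is still -1 afterwards and the return value is -1, the
 * conversion genuinely failed.
 */
static int
mktime_checked(struct tm *tm, time_t *out)
{
    time_t t;

    tm->tm_wday = -1;           /* sentinel; recomputed on success */
    t = mktime(tm);
    if (t == (time_t) -1 && tm->tm_wday == -1)
        return -1;              /* real failure */
    *out = t;
    return 0;
}

int
main(void)
{
    struct tm tm = {0};
    time_t    t;

    tm.tm_year = 101;           /* 2001 */
    tm.tm_mon = 9;              /* October */
    tm.tm_mday = 28;            /* end-of-DST day in the US that year */
    tm.tm_hour = 1;
    tm.tm_min = 30;
    tm.tm_isdst = 1;            /* caller asserts: the daylight-time 1:30am */

    if (mktime_checked(&tm, &t) == 0)
        printf("ok: %s", ctime(&t));
    else
        printf("conversion failed\n");
    return 0;
}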
Re: AW: [HACKERS] Re: tinterval - operator problems on AIX
Pete Forman <[EMAIL PROTECTED]> writes: > Thinking about that a bit more, I think that tm_isdst should not be > written into. IIRC, setting isdst to -1 was necessary to get the right behavior across DST boundaries on more-mainstream systems. I do not think it's acceptable to do worse on systems with good time libraries in order to improve behavior on fundamentally broken ones. regards, tom lane
AW: AW: [HACKERS] Re: tinterval - operator problems on AIX
> Pete Forman <[EMAIL PROTECTED]> writes:
> > Thinking about that a bit more, I think that tm_isdst should not be
> > written into.
>
> IIRC, setting isdst to -1 was necessary to get the right behavior across
> DST boundaries on more-mainstream systems. I do not think it's
> acceptable to do worse on systems with good time libraries in order to
> improve behavior on fundamentally broken ones.

Yes, the annoyance is that localtime works for dates before 1970 but mktime doesn't. Best would probably be to assume no DST before 1970 on AIX and IRIX.

Andreas
Re: AW: [HACKERS] Re: tinterval - operator problems on AIX
> As far as AIX and IRIX are concerned the timezones _are_ the same. No > variation of rules from year to year is possible. You are not going > to work out DST rules for earlier years without incorporating third > party libraries. As I understand it PostgreSQL undertakes to > calculate dates only as accurately as the underlying OS allows. Yes. Hence my reluctance to have code which does time-shifting to get time zones for all platforms. Perhaps it could/should be a configure test? And then we can have a "HAVE_SIMPLISTIC_TZ" (or whatever) #define in the code to enable hacks around the problem? The concern led to my suggestion that we should omit timezone fields from output -- basically do the equivalent of pre-1901 handling using GMT -- if DST is not resolved correctly (but I'm still not sure if this will pan out). > > I'll have to look at the ramifications for input times and for > > dump/restore operations. Does you system respect the TZ or PGTZ > > environment variable? > My code uses localtime and mktime which depend on TZ. There is no > dependency on PGTZ, unless somewhere else in postgres there is an > equivalent of setenv(TZ=getenv(PGTZ)). Yes there is. - Thomas
Re: AW: AW: [HACKERS] Re: tinterval - operator problems on AIX
Zeugswetter Andreas SB <[EMAIL PROTECTED]> writes: > Yes, the annoyance is, that localtime works for dates before 1970 > but mktime doesn't. Best would probably be to assume no DST before > 1970 on AIX and IRIX. That seems like a reasonable answer to me, especially since we have other platforms that behave that way. How can we do this --- just test for isdst = -1 after the call, and assume that means failure? regards, tom lane
Re: [HACKERS] Bruce Momjian's interview in LWN.
Bruce Momjian wrote: > And here I was thinking it was post''-gre-see'-quel :-)
Re: [HACKERS] Re: AW: Re: GiST for 7.1 !!
Thomas Lockhart writes: > How about adding an optional test a la "bigtest" for GiST for this > release? An optional test is like no test at all. No one runs optional tests. If the test is supposed to work then it should be mainstream. If the test might not work then you better go back and figure out what you're testing. If the test might not *compile* (which is probably the more severe problem that people are concerned about) then this idea won't help that at all unless you want to rework the regression test driver framework as well. -- Peter Eisentraut [EMAIL PROTECTED] http://yi.org/peter-e/
AW: AW: AW: [HACKERS] Re: tinterval - operator problems on AIX
> > Yes, the annoyance is that localtime works for dates before 1970
> > but mktime doesn't. Best would probably be to assume no DST before
> > 1970 on AIX and IRIX.
>
> That seems like a reasonable answer to me, especially since we have
> other platforms that behave that way. How can we do this --- just
> test for isdst = -1 after the call, and assume that means failure?

Since we have tests of the form (tm_isdst > 0) ? ... in other parts of the code, this would imho be sufficient.

BTW, I would use the above check for DST in all parts of the code. Currently we have either (tm_isdst ? ) or the above form.

Andreas
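A minimal sketch, assuming the convention just agreed on: treat only a positive tm_isdst as "DST in effect", so the -1 that broken mktime() implementations leave behind for pre-1970 dates is handled the same as "no DST". The helper name dst_label is hypothetical, not backend code.

#include <stdio.h>
#include <time.h>

/*
 * Use (tm_isdst > 0) as the single test for "DST in effect"; a value of
 * -1 (undetermined, e.g. pre-1970 on AIX/IRIX) is treated as "no DST".
 */
static const char *
dst_label(const struct tm *tm)
{
    if (tm->tm_isdst > 0)
        return "daylight time";
    if (tm->tm_isdst == 0)
        return "standard time";
    return "standard time (DST undetermined, assumed off)";
}

int
main(void)
{
    struct tm tm = {0};
    time_t    t;

    tm.tm_year = 62;            /* 1962: pre-1970, mktime() fails on AIX/IRIX */
    tm.tm_mon = 11;
    tm.tm_mday = 31;
    tm.tm_isdst = -1;

    t = mktime(&tm);
    if (t == (time_t) -1 && tm.tm_isdst < 0)
        printf("mktime() failed; %s\n", dst_label(&tm));
    else
        printf("mktime() ok; %s\n", dst_label(&tm));
    return 0;
}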
Re: [HACKERS] PostgreSQL v7.1BETA3 Bundled and Available ...
Tom Lane writes: > I think it's reasonable to make the changelog available on the > website (broken down into per-version segments, probably). I just > have doubts about forcing people to download it whether they > want it or not. I agree that the complete changelog is probably too long, I'm just against the per-version segmenting. Changelogs are usually used (well, by me) to get an overview when and how segments of code were worked on. The primary key here is not what (sub-)version the change happened in. The people that look into this sort of thing probably use cvs or snapshots, and even if not it makes the assumption that they downloaded and used *all* intermediate releases and only those, otherwise reading the beta1-to-beta3 changelog is pretty pointless in terms of finding out what happened to the code. How about maintaining a file ChangeLog that goes back one year? Another issue with ChangeLogs is that there's an implicit assumption that they are up to date. These won't be. -- Peter Eisentraut [EMAIL PROTECTED] http://yi.org/peter-e/
Re: [HACKERS] Re: Beta2 ... ?
On Wed, 10 Jan 2001, Peter Eisentraut wrote:

> Michael J Schout writes:
>
> > We would definitely beta test 7.1 beta releases on our test machines if RPMS
> > were made available. However, if rpms are not made available, it's unlikely
> > that anyone around here will get time to build the sources from scratch.
>
> Building from source takes five minutes. Reading the installation
> instructions takes maybe ten minutes. Don't tell me you don't have that
> amount of time but you still want to beta test.

*shrug*

Yes, building the sources isn't that difficult, but it definitely takes longer than:

rpm -ivh ftp://ftp.postgresql/pub/whatever/postgresql-\*.rpm

My feeling is that if we can make it as easy as possible for beta testers to get the beta releases up and running, the more likely they are to use the beta releases.

Mike
Re: [HACKERS] Bruce Momjian's interview in LWN.
> Bruce Momjian wrote: > > I announced this on Announce/General a few hours ago. > > > I wanted to mention that all general PostgreSQL news goes to those two > > lists, on the assumption that all people are subscribed to either of > > those two lists. > > > I don't post to hackers by default because I don't want to duplicate > > these postings. > > Sorry to duplicate, but I had not received the post to general or > announce (to both of which I am subscribed) before posting. But I will > keep in mind the new postings. I also put it on the 'In The News' page > on the website (thanks Vince). > > Now that I look through my inbox, I don't see the post anywhere. > Hmmm Not in trash either, which I didn't empty yesterday. That is strange. I saw it on those lists. I want to say I didn't phrase the email correctly. Sorry. First, I wanted to say thanks for saying you liked it. Second, I wanted to make sure the people on hackers who want to see general information are subscribed to either the 'general' list or the 'announce' list. -- Bruce Momjian| http://candle.pha.pa.us [EMAIL PROTECTED] | (610) 853-3000 + If your life is a hard drive, | 830 Blythe Avenue + Christ can be your backup.| Drexel Hill, Pennsylvania 19026
RPMs (was Re: [HACKERS] Re: Beta2 ... ?)
Lamar Owen writes:

> Does the python build stuff use DESTDIR these days?

It does not. You'd have to delve into the internals of the Python-provided makefiles. I might just have to do that, but if you want to look then let me know, because this should get fixed.

> The perl stuff needs some other things, unfortunately. I need to look
> in the CPAN RPM specs to get examples of how to do this portably
> without major conniptions.

The Perl build hasn't changed since 7.0 in dramatic ways.

-- 
Peter Eisentraut   [EMAIL PROTECTED]   http://yi.org/peter-e/
Re: [HACKERS] PostgreSQL v7.1BETA3 Bundled and Available ...
Peter Eisentraut <[EMAIL PROTECTED]> writes: > I agree that the complete changelog is probably too long, I'm just against > the per-version segmenting. Changelogs are usually used (well, by me) to > get an overview when and how segments of code were worked on. The primary > key here is not what (sub-)version the change happened in. The people > that look into this sort of thing probably use cvs or snapshots, Well, if you have bothered to set up cvs access then you can run cvs2cl for yourself and get exactly the (subset of) the changelog you want. My understanding of what Marc had in mind was to provide info for people who only download releases and want to know what happened since the last release they had. The whole project might be a waste of effort though, since those folks are likely to only want the digested HISTORY file and not the blow-by-blow logs ... regards, tom lane
Re: [HACKERS] Re: AW: Re: GiST for 7.1 !!
> An optional test is like no test at all. No one runs optional tests. If > the test is supposed to work then it should be mainstream. If the test > might not work then you better go back and figure out what you're testing. > If the test might not *compile* (which is probably the more severe problem > that people are concerned about) then this idea won't help that at all > unless you want to rework the regression test driver framework as well. I agree completely. This is just a transition phase to get GiST into the mainstream. - Thomas
Re: [HACKERS] Bruce Momjian's interview in LWN.
On Fri, 12 Jan 2001, Keith G. Murphy wrote: > Bruce Momjian wrote: > > > And here I was thinking it was > > post''-gre-see'-quel > > :-) > It's on the website in both WAV and MP3. Vince. -- == Vince Vielhaber -- KA8CSHemail: [EMAIL PROTECTED]http://www.pop4.net 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking Online Campground Directoryhttp://www.camping-usa.com Online Giftshop Superstorehttp://www.cloudninegifts.com ==
Re: AW: [HACKERS] Re: GiST for 7.1 !!
> At this point my vote is to leave the GIST test in contrib for 7.1. > Anyone who actually cares about GIST (to be blunt: all three of you) > can run it as a separate step. I don't want it in the standard regress > tests until 7.2, when we will have a reasonable amount of time to test > and debug the test. Agreed. I want the GIST fixes in 7.1, but adding a new test at this point is too risky. The issue is that only the GIST people will be using the GIST fixes, while adding it to the regression test will affect all users, which is too risky at this point. -- Bruce Momjian| http://candle.pha.pa.us [EMAIL PROTECTED] | (610) 853-3000 + If your life is a hard drive, | 830 Blythe Avenue + Christ can be your backup.| Drexel Hill, Pennsylvania 19026
Re: Beta4 for GiST? (Was: Re: AW: [HACKERS] Re: GiST for 7.1 !! )
> Agreed ... now let's move onto more important things, cause we've spent > much too long on this as it is ... > > Namely, should we bundle up a beta4 this weeekend, so that the GiST > changes are in place for further testing, or hold off for ... ? I would hold off. GIST people can download the snapshot. Others aren't interested in GIST. -- Bruce Momjian| http://candle.pha.pa.us [EMAIL PROTECTED] | (610) 853-3000 + If your life is a hard drive, | 830 Blythe Avenue + Christ can be your backup.| Drexel Hill, Pennsylvania 19026
Re: [HACKERS] Re: Beta2 ... ?
Speaking of which.. > > rpm -ivh ftp://ftp.postgresql/pub/whatever/postgresql-\*.rpm > Is there a clearing house for packages? I have made some for OpenBSD (www.crimelabs.net/postgresql.shtml), but I wouldn't even know where to get the rpm or deb files. Should there be a folder on the ftp server for packages for the betas? - Brandon b. palmer, [EMAIL PROTECTED] pgp: www.crimelabs.net/bpalmer.pgp5
Re: [HACKERS] Bruce Momjian's interview in LWN.
On Fri, 12 Jan 2001, Bruce Momjian wrote: > > Bruce Momjian wrote: > > > I announced this on Announce/General a few hours ago. > > > > > I wanted to mention that all general PostgreSQL news goes to those two > > > lists, on the assumption that all people are subscribed to either of > > > those two lists. > > > > > I don't post to hackers by default because I don't want to duplicate > > > these postings. > > > > Sorry to duplicate, but I had not received the post to general or > > announce (to both of which I am subscribed) before posting. But I will > > keep in mind the new postings. I also put it on the 'In The News' page > > on the website (thanks Vince). > > > > Now that I look through my inbox, I don't see the post anywhere. > > Hmmm Not in trash either, which I didn't empty yesterday. > > That is strange. I saw it on those lists. So did I. But lately I've noticed that mail sent to both hackers and general or announcements and general, one will show up a long while after the first. One took two days to show up. It was sent on the 9th (and I saw it on hackers that day) then on the 11th it showed up as posted to general. I didn't bother looking at the headers tho to see where it was hung. Vince. -- == Vince Vielhaber -- KA8CSHemail: [EMAIL PROTECTED]http://www.pop4.net 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking Online Campground Directoryhttp://www.camping-usa.com Online Giftshop Superstorehttp://www.cloudninegifts.com ==
[HACKERS] Re: AW: Re: GiST for 7.1 !!
On Fri, 12 Jan 2001, Tom Lane wrote: > Thomas Lockhart <[EMAIL PROTECTED]> writes: > > How about adding an optional test a la "bigtest" for GiST for this > > release? > > We could do that, but it seems like rather pointless effort, compared > to just telling people "go run the tests in these contrib modules if > you want to test GIST". > > I have no objection to fully integrating some GIST test(s) for 7.2. > I just don't want to deal with it at this late stage of the 7.1 cycle. > We have a long list of considerably more mainstream to-do items yet > to deal with ... Not up to us to deal with, its up to Oleg ... Oleg, if you could work on and submit patches for this before the release, that would be appreciated ... it might also serve to increase visibility of GiST if ppl know there is a regress test for it ...
Re: Beta4 for GiST? (Was: Re: AW: [HACKERS] Re: GiST for 7.1 !! )
On Fri, 12 Jan 2001, Tom Lane wrote:

> The Hermit Hacker <[EMAIL PROTECTED]> writes:
> > Namely, should we bundle up a beta4 this weekend, so that the GiST
> > changes are in place for further testing, or hold off for ... ?
>
> First I'd like to finish a couple of open items I have, like fixing
> the CRIT_SECTION code so that SIGTERM response will not occur when
> we are holding a spinlock. Should be able to get this stuff done in
> a day or two, if I quit arguing about GIST and get back to work...

Okay, let's schedule for Monday then if we can ... unless someone comes across something major like we did with the whole beta2/beta3 release :)
[HACKERS] Beta2 Vacuum and pg_dump failures and mangled databases
First I tried to dump out a database like:

frank@limedes:~ > pg_dump mpi > dump.mpi
getTables(): relation 'institute': 6 Triggers were expected, but got 0

The database mpi does contain a table 'institute' and a few foreign key constraints. Then I tried to dump another database, as in:

postgres@limedes:~ > pg_dump intranet > dumptest
getTables(): relation 'institute': 6 Triggers were expected, but got 0

The database intranet has _no_ table 'institute' and no foreign key constraints. Then I had a look via psql at intranet, and it turns out that it shows up as the database mpi mangled into the database intranet, contentwise; i.e. it doesn't only show the tables that are in intranet but also those that belong to mpi?! When I look at _any_ of the databases in this Postgres installation, they show up as mangled together with mpi?!

When I try to vacuum any of those databases, I always get:

[ . . . stuff that looks normal . . . ]
Index pg_class_oid_index: Pages 2; Tuples 138: Deleted 45. CPU 0.00s/0.00u sec.
NOTICE: Index pg_class_oid_index: NUMBER OF INDEX' TUPLES (138) IS NOT THE SAME AS HEAP' (205). Recreate the index.
NOTICE: Index pg_class_relname_index: Pages 4; Tuples 138: Deleted 44. CPU 0.00s/0.00u sec.
NOTICE: Index pg_class_relname_index: NUMBER OF INDEX' TUPLES (138) IS NOT THE SAME AS HEAP' (205). Recreate the index.
ERROR: Cannot insert a duplicate key into unique index pg_class_relname_index

However, if I use another client, i.e. not psql but a web app, then I do still have access to the contents of, for instance, the intranet database. Restarting the server didn't make a difference.

Does this make any sense to anyone?

Regards, Frank
[HACKERS] Re: FWD: bizarre behavior of 'time' data entry
> Postgresql subtracts one minute from any times I enter into a database:
>
> mydb=# create table test (timeval time);
> mydb=# insert into test values ('08:30');
> mydb=# select * from test;
> ----------
>  08:29:00
...
> In a later message he says he's running 7.0.2 on "Trustix Secure Linux
> 1.2 (RedHat based)", whatever that is.
> Thomas, did you see this thread on pg-novices? You ever seen behavior
> like this? I'm baffled.

Not sure about the distro, but it is hard to imagine that they got it more wrong wrt compiler options than has Mandrake (the other platform with rounding trouble in their default packages).

Barry, can you give more details? How have you built PostgreSQL? We'll need some help tracking this down, but afaik it will point back to a build problem on your platform.

- Thomas
Re: AW: [HACKERS] Re: tinterval - operator problems on AIX
Tom Lane writes:
> Pete Forman <[EMAIL PROTECTED]> writes:
> > Thinking about that a bit more, I think that tm_isdst should not
> > be written into.
>
> IIRC, setting isdst to -1 was necessary to get the right behavior
> across DST boundaries on more-mainstream systems. I do not think
> it's acceptable to do worse on systems with good time libraries in
> order to improve behavior on fundamentally broken ones.

A footnote in the C89 (and C99) standard says:

    Thus, a positive or zero value for tm_isdst causes the mktime
    function to presume initially that Daylight Saving Time,
    respectively, is or is not in effect for the specified time. A
    negative value causes it to attempt to determine whether Daylight
    Saving Time is in effect for the specified time.

So tm_isdst being input as 0 or 1 is not forcing the choice of what it will be on output. It can be important at the end of DST, when local times repeat and the only way to distinguish them is the setting of this flag.

That is borne out by my observations. Setting tm_isdst to -1 before calling mktime can make a difference to the result when the input and result have different DST flags.

It is fairly arbitrary what the answer to this question is: if six months is subtracted from a to give b, should a.local.hour = b.local.hour or should a.utc.hour = b.utc.hour? If you want the former then set tm_isdst = -1 before calling mktime.

I'm out of time now but I'll try and look for some guidance in the SQL standards.

-- 
Pete Forman                       -./\.-  Disclaimer: This post is originated
WesternGeco                         -./\.-  by myself and does not represent
[EMAIL PROTECTED]                    -./\.-  opinion of Schlumberger, Baker
http://www.crosswinds.net/~petef      -./\.-  Hughes or their divisions.
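To make the footnote concrete, here is a small demonstration of how the tm_isdst hint disambiguates the repeated hour at the end of DST; the program name isdst_demo and the chosen date (28 Oct 2001, the US end-of-DST day that year) are illustrative assumptions, and it is meant to be run with TZ set to a US zone such as PST8PDT.

#include <stdio.h>
#include <time.h>

/*
 * At the end of DST the local time 01:30 occurs twice.  Passing
 * tm_isdst = 1 selects the first (daylight) occurrence, tm_isdst = 0 the
 * second (standard) occurrence, and -1 asks mktime() to guess.
 * Try: TZ=PST8PDT ./isdst_demo
 */
static void
try_isdst(int isdst)
{
    struct tm tm = {0};
    time_t    t;

    tm.tm_year = 101;           /* 2001 */
    tm.tm_mon = 9;              /* October */
    tm.tm_mday = 28;
    tm.tm_hour = 1;
    tm.tm_min = 30;
    tm.tm_isdst = isdst;

    t = mktime(&tm);
    printf("isdst in %2d -> time_t %ld, isdst out %d\n",
           isdst, (long) t, tm.tm_isdst);
}

int
main(void)
{
    try_isdst(1);
    try_isdst(0);
    try_isdst(-1);
    return 0;
}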
Re: [HACKERS] Bruce Momjian's interview in LWN.
-announce is moderated, which means it has to be approved by myself ... I *try* to get in nightly and approve what is in the queue, targetting -announce items first, but sometimes time doesn't permit :( On Fri, 12 Jan 2001, Vince Vielhaber wrote: > On Fri, 12 Jan 2001, Bruce Momjian wrote: > > > > Bruce Momjian wrote: > > > > I announced this on Announce/General a few hours ago. > > > > > > > I wanted to mention that all general PostgreSQL news goes to those two > > > > lists, on the assumption that all people are subscribed to either of > > > > those two lists. > > > > > > > I don't post to hackers by default because I don't want to duplicate > > > > these postings. > > > > > > Sorry to duplicate, but I had not received the post to general or > > > announce (to both of which I am subscribed) before posting. But I will > > > keep in mind the new postings. I also put it on the 'In The News' page > > > on the website (thanks Vince). > > > > > > Now that I look through my inbox, I don't see the post anywhere. > > > Hmmm Not in trash either, which I didn't empty yesterday. > > > > That is strange. I saw it on those lists. > > So did I. But lately I've noticed that mail sent to both hackers and > general or announcements and general, one will show up a long while > after the first. One took two days to show up. It was sent on the > 9th (and I saw it on hackers that day) then on the 11th it showed up > as posted to general. I didn't bother looking at the headers tho to > see where it was hung. > > Vince. > -- > == > Vince Vielhaber -- KA8CSHemail: [EMAIL PROTECTED]http://www.pop4.net > 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking > Online Campground Directoryhttp://www.camping-usa.com >Online Giftshop Superstorehttp://www.cloudninegifts.com > == > > > > Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy Systems Administrator @ hub.org primary: [EMAIL PROTECTED] secondary: scrappy@{freebsd|postgresql}.org
Re: [HACKERS] Re: Beta2 ... ?
bpalmer writes: > Is there a clearing house for packages? I have made some for OpenBSD > (www.crimelabs.net/postgresql.shtml), but I wouldn't even know where to > get the rpm or deb files. Should there be a folder on the ftp server for > packages for the betas? The RPMs are on the FTP server. In general I feel that packaging is left up to the operating system distributor. So your OpenBSD packages should be sent to the respective port maintainer. The RPMs are treated somewhat differently because they are platform independent and a lot of people are interested in getting betas in RPM form -- and not least importantly also because someone has taken the time to do it on a regular basis. The RPMs that are on the various Linux distribution CDs are still customized by the respective vendor. -- Peter Eisentraut [EMAIL PROTECTED] http://yi.org/peter-e/
Re: AW: [HACKERS] Re: tinterval - operator problems on AIX
Pete Forman <[EMAIL PROTECTED]> writes: > It is fairly arbitrary what the answer to this question is: if six > months is subtracted from a to give b, should a.local.hour = > b.local.hour or should a.utc.hour = b.utc.hour? If you want the > former then set tm_isdst = -1 before calling mktime. It's not arbitrary: we want the former. Anything else generates bug reports from people who expect "9/1/2000 - six months" to produce "3/1/2000", not sometime late in the evening of 2/29/2000. regards, tom lane
[HACKERS] Re: Beta2 ... ?
> > Is there a clearing house for packages? I have made some for OpenBSD > > (www.crimelabs.net/postgresql.shtml), but I wouldn't even know where to > > get the rpm or deb files. Should there be a folder on the ftp server for > > packages for the betas? > The RPMs are on the FTP server. > In general I feel that packaging is left up to the operating system > distributor. So your OpenBSD packages should be sent to the respective > port maintainer. The RPMs are treated somewhat differently because they > are platform independent and a lot of people are interested in getting > betas in RPM form -- and not least importantly also because someone has > taken the time to do it on a regular basis. The RPMs that are on the > various Linux distribution CDs are still customized by the respective > vendor. Although packages should of course be sent to the port maintainer, I'm sure that in general that they would be welcome on ftp.postgresql.org also. The RPMs have greater visibility because we have taken the time to evolve the packaging (and because the packaging needed a good bit of work on the system I had at the time, so what the heck ;) and certainly they have benefited from Lamar's attention over the last months. Other packages would potentially be in the same circumstances: if they are packaged by someone familiar with the package itself, they will become better than if they are packaged by someone only familiar with packaging. Certainly every package done by someone active on these lists (e.g. Oliver with Debian) is of high quality and has benefited from their knowledge of PostgreSQL. I'm not sure what the case is for OpenBSD specifically, but you may want to talk more with the "official maintainer" if that isn't yourself... - Thomas
Re: [HACKERS] Re: Beta2 ... ?
bpalmer wrote: >Is there a clearing house for packages? I have made some for OpenBSD >(www.crimelabs.net/postgresql.shtml), but I wouldn't even know where to >get the rpm or deb files. Should there be a folder on the ftp server for >packages for the betas? Typically, deb files are obtained from ftp.debian.org or one of its mirrors. That's how you know you're getting an official Debian package. I'm in the process of making debs for the current beta, and will host them somewhere for people to grab for experimenting. I won't put anything into the Debian site until 7.1 is finally released. I'll announce where to get beta debs when the first set is done. However, that may be a day or two yet, because I have to redo a lot of packaging. -- Oliver Elphick[EMAIL PROTECTED] Isle of Wight http://www.lfix.co.uk/oliver PGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47 GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C "For the LORD is good; his mercy is everlasting; and his truth endureth to all generations." Psalms 100:5
Re: [HACKERS] Re: Beta2 ... ?
bpalmer writes: > This traffic does not seem necessary for the list, but here are my > thoughts. I think it is. > I don't begin to disagree with this for a second. I know that there are a > lot of RPM users out there that would like the RPM. I'm aware that there > would be a lesser demand for the OBSD packages, but it's still worth > putting up there. Definitely. > I have talked to the maintainer and am working with him on this. With > luck, if I/we keep up on the betas, when 7.1 comes out for real, we > will be able to make the changed then too. That's even better. Maintaining separate tracks of packages would be a source of confusion at best. > What I am gathering from all this conversation is that there is no > repository for packages. I would love to test the FBSD package too, but > I don't know where it is, nor if it's being worked on. If it's not, I > may be interested in working on that too! Well, in the light of the openpackages.org effort it seems you have just signed yourself up to create a BSD-independent package. ;-) Asking the relevant maintainer might be a first step, though. -- Peter Eisentraut [EMAIL PROTECTED] http://yi.org/peter-e/
Re: [HACKERS] Beta2 Vacuum and pg_dump failures and mangled databases
On Fri, Jan 12, 2001 at 06:05:09PM +0100, Frank Joerdens wrote: [ . . . ] > Does this make any sense to anyone? Are questions related to 7.1 beta versions best directed to hackers or to general? - Frank
Re: [HACKERS] Beta2 Vacuum and pg_dump failures and mangled databases
Frank Joerdens wrote: [ . . . ] > Restarting the server didn't make a difference. I upgraded to beta3 just now and the problem persists. I didn't do an initdb obviously cuz I cannot save the data via pg_dump. Beta3 will read beta2 data OK (I guess this means that an initdb is not required when going from beta2 to beta3?!) but I can't vacuum or dump on any database. - Frank
RE: [HACKERS] Beta2 Vacuum and pg_dump failures and mangled databases
> [ . . . ] > > Restarting the server didn't make a difference. > > I upgraded to beta3 just now and the problem persists. I > didn't do an initdb obviously cuz > I cannot save the data via pg_dump. Beta3 will read beta2 > data OK (I guess this means that > an initdb is not required when going from beta2 to beta3?!) > but I can't vacuum or dump on > any database. So, server doesn't restart? Could add wal_debug = 1 to postgresql.conf, start postmaster and send me stderr output? Vadim
Re: [HACKERS] Beta2 Vacuum and pg_dump failures and mangled databases
Frank Joerdens <[EMAIL PROTECTED]> writes: > Then I had a look via psql at intranet and it turns out that it shows > up as the database mpi mangled into the database intranet, > contentwise; i.e. it doesn't only show the tables that are in intranet > but also those that belong to mpi? I think you've been bit by the RelFileNodeEquals bug we found on Monday. One of the known possible effects of that bug is that a VACUUM can write blocks of system-catalog tables into the same catalog of a different database. You should probably write off your databases as toast ... update to beta3 and do an initdb. Sorry about that ... regards, tom lane
Re: [HACKERS] Beta2 Vacuum and pg_dump failures and mangled databases
Frank Joerdens <[EMAIL PROTECTED]> writes: > Are questions related to 7.1 beta versions best directed to hackers or to > general? hackers is the proper place for discussing any unreleased version, I'd say. Or you can file a bug report on pgsql-bugs, if that seems more appropriate. regards, tom lane
RE: [HACKERS] Beta2 Vacuum and pg_dump failures and mangled databases
> You should probably write off your databases as toast ... update to > beta3 and do an initdb. Sorry about that ... And try to reproduce bug. Sorry. Vadim
Re: [HACKERS] Bruce Momjian's interview in LWN.
On Fri, Jan 12, 2001 at 10:00:40AM -0600, Keith G. Murphy wrote: > And here I was thinking it was > > post''-gre-see'-quel I pronounce it "postgres". (I suspect that everybody else does too, whenever possible.) In English there's no problem with the spelling differing from the pronunciation. Making the name hard to pronounce doesn't help anybody but Oracle. Nathan Myers [EMAIL PROTECTED]
Re: [HACKERS] Beta2 Vacuum and pg_dump failures and mangled databases
"Mikheev, Vadim" wrote: > > > [ . . . ] > > > Restarting the server didn't make a difference. > > > > I upgraded to beta3 just now and the problem persists. I > > didn't do an initdb obviously cuz > > I cannot save the data via pg_dump. Beta3 will read beta2 > > data OK (I guess this means that > > an initdb is not required when going from beta2 to beta3?!) > > but I can't vacuum or dump on > > any database. > > So, server doesn't restart? Yes, it does restart, that is not the problem (did I explain that properly?). > Could add > > wal_debug = 1 > > to postgresql.conf, start postmaster and send me stderr output? I did add wal_debug = 1 to postgresql.conf. Starting up is OK, when I then try a vacuum verbose on a database, it goes: -- start log -- DEBUG: database system is shut down DEBUG: starting up DEBUG: database system was shut down at 2001-01-12 20:11:37 DEBUG: CheckPoint record at (0, 11629776) DEBUG: Redo record at (0, 11629776); Undo record at (0, 0); Shutdown TRUE DEBUG: NextTransactionId: 8284; NextOid: 98635 DEBUG: database system is in production state NOTICE: --Relation pg_type-- NOTICE: Pages 2: Changed 0, reaped 1, Empty 0, New 0; Tup 131: Vac 0, Keep/VTL 0/0, Crash 0, UnUsed 2, MinLen 106, MaxLen 109; Re-using: Free/Avail. Space 1428/0; EndEmpty/Avail. Pages 0/0. CPU 0.00s/0.00u sec. NOTICE: Index pg_type_oid_index: Pages 2; Tuples 131: Deleted 0. CPU 0.00s/0.00u sec. NOTICE: Index pg_type_typname_index: Pages 2; Tuples 131: Deleted 0. CPU 0.00s/0.00u sec. INSERT @ 0/11629840: prev 0/11629776; xprev 0/0; xid 8291; bkpb 1: Heap - clean: node 95464/1247; blk 1 XLogFlush: rqst 0/11638108; wrt 0/0; flsh 0/0 INSERT @ 0/11638108: prev 0/11629840; xprev 0/11629840; xid 8291: Transaction - commit: 2001-01-12 20:12:51 XLogFlush: rqst 0/11638144; wrt 0/11638108; flsh 0/11638108 NOTICE: --Relation pg_attribute-- NOTICE: Pages 9: Changed 0, reaped 1, Empty 0, New 0; Tup 649: Vac 0, Keep/VTL 0/0, Crash 0, UnUsed 18, MinLen 98, MaxLen 98; Re-using: Free/Avail. Space 5500/0; EndEmpty/Avail. Pages 0/0. CPU 0.00s/0.00u sec. NOTICE: Index pg_attribute_relid_attnam_index: Pages 10; Tuples 649: Deleted 0. CPU 0.01s/0.00u sec. NOTICE: Index pg_attribute_relid_attnum_index: Pages 5; Tuples 649: Deleted 0. CPU 0.00s/0.00u sec. INSERT @ 0/11638144: prev 0/11638108; xprev 0/0; xid 8292; bkpb 1: Heap - clean: node 95464/1249; blk 8 XLogFlush: rqst 0/11646412; wrt 0/11638144; flsh 0/11638144 INSERT @ 0/11646412: prev 0/11638144; xprev 0/11638144; xid 8292: Transaction - commit: 2001-01-12 20:12:52 XLogFlush: rqst 0/11646448; wrt 0/11646412; flsh 0/11646412 NOTICE: --Relation pg_class-- NOTICE: Pages 7: Changed 0, reaped 6, Empty 2, New 0; Tup 155: Vac 29, Keep/VTL 0/0, Crash 0, UnUsed 81, MinLen 115, MaxLen 160; Re-using: Free/Avail. Space 37060/30444; EndEmpty/Avail. Pages 0/5. CPU 0.00s/0.00u sec. 
INSERT @ 0/11646448: prev 0/11646412; xprev 0/0; xid 8293; bkpb 1: Btree - delete: node 95464/17118; tid 1/81 INSERT @ 0/11654720: prev 0/11646448; xprev 0/11646448; xid 8293: Btree - delete: node 95464/17118; tid 1/81 INSERT @ 0/11654768: prev 0/11654720; xprev 0/11654720; xid 8293: Btree - delete: node 95464/17118; tid 1/81 INSERT @ 0/11654816: prev 0/11654768; xprev 0/11654768; xid 8293: Btree - delete: node 95464/17118; tid 1/82 INSERT @ 0/11654864: prev 0/11654816; xprev 0/11654816; xid 8293: Btree - delete: node 95464/17118; tid 1/83 INSERT @ 0/11654912: prev 0/11654864; xprev 0/11654864; xid 8293: Btree - delete: node 95464/17118; tid 1/84 INSERT @ 0/11654960: prev 0/11654912; xprev 0/11654912; xid 8293: Btree - delete: node 95464/17118; tid 1/85 INSERT @ 0/11655008: prev 0/11654960; xprev 0/11654960; xid 8293: Btree - delete: node 95464/17118; tid 1/86 INSERT @ 0/11655056: prev 0/11655008; xprev 0/11655008; xid 8293: Btree - delete: node 95464/17118; tid 1/87 INSERT @ 0/11655104: prev 0/11655056; xprev 0/11655056; xid 8293: Btree - delete: node 95464/17118; tid 1/88 NOTICE: Index pg_class_oid_index: Pages 2; Tuples 88: Deleted 10. CPU 0.00s/0.00u sec. NOTICE: Index pg_class_oid_index: NUMBER OF INDEX' TUPLES (88) IS NOT THE SAME AS HEAP' (155). Recreate the index. INSERT @ 0/11655152: prev 0/11655104; xprev 0/11655104; xid 8293; bkpb 1: Btree - delete: node 95464/17121; tid 1/1 INSERT @ 0/11663424: prev 0/11655152; xprev 0/11655152; xid 8293: Btree - delete: node 95464/17121; tid 1/1 INSERT @ 0/11663472: prev 0/11663424; xprev 0/11663424; xid 8293: Btree - delete: node 95464/17121; tid 1/1 INSERT @ 0/11663520: prev 0/11663472; xprev 0/11663472; xid 8293: Btree - delete: node 95464/17121; tid 1/71 INSERT @ 0/11663568: prev 0/11663520; xprev 0/11663520; xid 8293: Btree - delete: node 95464/17121; tid 1/72 INSERT @ 0/11663616: prev 0/11663568; xprev 0/11663568; xid 8293: Btree - delete: node 95464/17121; tid 1/73 INSERT @ 0/11663664: prev 0/11663616; xprev 0/11663616; xid 8293: Btree - delete: node 95464
RE: [HACKERS] Beta2 Vacuum and pg_dump failures and mangled databases
> ERROR: Cannot insert a duplicate key into unique index > pg_class_oid_index > -- start log > -- > > Which makes me pause . . . are OIDs unique per database or > per PostgreSQL installation? I think per database. Therefore per installation Vadim
Re: [HACKERS] Bruce Momjian's interview in LWN.
Bruce Momjian wrote: >Lamar Owen wrote: > > Now that I look through my inbox, I don't see the post anywhere. > > Hmmm Not in trash either, which I didn't empty yesterday. > That is strange. I saw it on those lists. I've been seeing some odd e-mail propagation lately. > I want to say I didn't phrase the email correctly. Sorry. Oh, not a problem. You're famous for, er, non-verbosity. > First, I wanted to say thanks for saying you liked it. Second, I > wanted to make sure the people on hackers who want to see general > information are subscribed to either the 'general' list or the > 'announce' list. First, you're welcome. Second, understood and agreed. :-) -- Lamar Owen WGCR Internet Radio 1 Peter 4:11
Re: [HACKERS] Bruce Momjian's interview in LWN.
> Bruce Momjian wrote: > >Lamar Owen wrote: > > > Now that I look through my inbox, I don't see the post anywhere. > > > Hmmm Not in trash either, which I didn't empty yesterday. > > > That is strange. I saw it on those lists. > > I've been seeing some odd e-mail propagation lately. > > > I want to say I didn't phrase the email correctly. Sorry. > > Oh, not a problem. You're famous for, er, non-verbosity. I am. Hmm... -- Bruce Momjian| http://candle.pha.pa.us [EMAIL PROTECTED] | (610) 853-3000 + If your life is a hard drive, | 830 Blythe Avenue + Christ can be your backup.| Drexel Hill, Pennsylvania 19026
Re: RPMs (was Re: [HACKERS] Re: Beta2 ... ?)
Peter Eisentraut wrote: > Lamar Owen writes: > > Does the python build stuff use DESTDIR these days? > It does not. You'd have to delve into the internals of the > Python-provided makefiles. I might just have to do that, but if you want > to look then let me know because this should get fixed. Hmmm. Then I may just keep the python build I have now, as it should still work, and it is a 'full-manual' build. > > The perl stuff needs some other things, unfortunately. I need to look > > in the CPAN RPM spec's to get examples of how to do this portably > > without major connarptions. > The Perl build hasn't changed since 7.0 in dramatic ways. Well, it's pretty dramatic to get the starred box saying that I don't have permissions to install to where I want to install it when I'm running as root. Or, to put it more tersely, the 7.0 build worked in the RPM build context -- the 7.1 build does not with the same build technique. The root cause is an if [ -w check for the installdir, which is set to an entirely inappropriate place. So, there are differences (I think the new way is going to be smoother, personally, once I get it working again). -- Lamar Owen WGCR Internet Radio 1 Peter 4:11
Re: [HACKERS] Re: Beta2 ... ?
Oliver Elphick wrote: > because I have to redo a lot of packaging. I know how you feel. :-) -- Lamar Owen WGCR Internet Radio 1 Peter 4:11
Re: [HACKERS] Beta2 Vacuum and pg_dump failures and mangled databases
Frank Joerdens <[EMAIL PROTECTED]> writes: > I managed to rescue my data via COPY Oh, good. > but if this is a 7.1-related error and not > Frank-confusedness, then it looks like an evil issue indeed. Evil it was. The haste with which beta3 appeared should've tipped you off that beta2 was badly broken :-(. What's puzzling us, though, is that this bug was in the WAL code from day one, and no one noticed it till this week. Seems like someone should have reported trouble with beta1, if not before. regards, tom lane
RE: [HACKERS] Beta2 Vacuum and pg_dump failures and mangled databases
> Evil it was. The haste with which beta3 appeared should've tipped you > off that beta2 was badly broken :-(. What's puzzling us, though, is > that this bug was in the WAL code from day one, and no one noticed it Just for accuracy - this bug is not related to WAL anyhow. This bug was in new file naming code, which was committed in Oct. Vadim
Re: RPMs (was Re: [HACKERS] Re: Beta2 ... ?)
Lamar Owen wrote: > Well, it's pretty dramatic to get the starred box saying that I don't > have permissions to install to where I want to install it when I'm > running as root. You'd think that, as a native English speaker, I could structure a sentence more effectively than that. Let me rephrase: It's pretty dramatic to get the 'You don't have permissions to install' message from the perl 'make install' when I am performing the build (and the make install) as root. Particularly when 7.0's perl 'make install' worked semi-properly. I say semi-properly because the packing list had to be rewritten -- but at least the install did its job to the proper build-root'ed location. -- Lamar Owen WGCR Internet Radio 1 Peter 4:11
[HACKERS] CRCs
Vadim wrote: > Tom wrote: > > Bruce wrote: > > > ... If the CRC on > > > the WAL log checks for errors that are not checked anywhere else, > > > then fine, but I thought disk CRC would just duplicate the I/O > > > subsystem/disk. > > > > A disk-block CRC would detect partially written blocks (ie, > > power drops after disk has written M of the N sectors in a > > block). The disk's own checks will NOT consider this condition a > > failure. I'm not convinced that WAL will reliably detect it either > > (Vadim?). > > Idea proposed by Andreas about "physical log" is implemented! Now WAL > saves whole data blocks on first after checkpoint modification. This > way on recovery modified data blocks will be first restored *as a > whole*. Isn't it much better than just detection of partially writes? This seems to protect against some partial writes, but see below. > > Certainly WAL will not help for corruption caused by external agents, > > away from any updates that are actually being performed/logged. > > What do you mean by "external agents"? External agents include RAM bit drops and noise on cables when blocks are (read and re-) written. Every time data is moved, there is a chance of an undetected error being introduced. The disk only promises (within limits) to deliver the sector that was written; it doesn't promise that what was written is what you meant to write. Errors of this sort accumulate unless caught by end-to-end checks. External agents include bugs in database code, bugs in OS code, bugs in disk controller firmware, and bugs in disk firmware. Each can result in clobbered data, blocks being written in the wrong place, blocks said to be written but not, and any number of other variations. All this code is written by humans, and even the most thorough testing cannot cover even the majority of code paths. External agents include sector errors not caught by the disk CRC: the disk only promises to keep the number of errors delivered to a reasonably low (and documented) level. It's up to the user to notice the errors that slip through. and Andreas wrote: > > A disk-block CRC would detect partially written blocks (ie, power > > drops after disk has written M of the N sectors in a block). The > > disk's own checks will NOT consider this condition a failure. > > But physical log recovery will rewrite every page that was changed > after last checkpoint, thus this is not an issue anymore. No. That assumes that when the drive _says_ the block is written, it is really on the disk. That is not true for IDE drives. It is true for SCSI drives only when the SCSI spec is implemented correctly, but implementing the spec correctly interferes with favorable benchmark results. > > I'm not convinced that WAL will reliably detect it either > > (Vadim?). Certainly WAL will not help for corruption caused by > > external agents, away from any updates that are actually being > > performed/logged. > > The external agent (if malvolent) could write a correct CRC anyway > If on the other hand the agent writes complete garbage, vacuum will > notice. Vacuum does not check most of the bits in the blocks it reads. (Bad bits in metadata will cause a crash only if you're lucky. If not, they result in more corruption.) A database is unusual among computer applications in that an error introduced today can sit unnoticed on the disk, and then result in an unnoticed wrong answer six months later. We need to be able to detect bad bits as soon as possible, before the backups have been overwritten. 
CRCs are how we can detect cumulative corruption from all sources. Nathan Myers [EMAIL PROTECTED]
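To make the end-to-end check concrete, here is a minimal sketch in C of a block-level CRC: a standard table-driven CRC-32 computed over a page before it is written and re-checked when the page is read back. It is illustrative only -- the function names, the 8K page size, and the idea of recording the checksum alongside the page are assumptions for this sketch, not the checksum scheme PostgreSQL actually adopted.

/* Illustrative block-level CRC check; not PostgreSQL source code.
 * Standard reflected CRC-32 (polynomial 0xEDB88320), computed over a
 * page before it is written and verified when it is read back, so that
 * bit drops introduced anywhere in between are detected. */
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 8192              /* assumed 8K data page */

static uint32_t crc_table[256];

static void
crc32_init(void)                    /* call once at startup */
{
    for (uint32_t i = 0; i < 256; i++)
    {
        uint32_t c = i;

        for (int k = 0; k < 8; k++)
            c = (c & 1) ? (0xEDB88320u ^ (c >> 1)) : (c >> 1);
        crc_table[i] = c;
    }
}

static uint32_t
crc32_buf(const unsigned char *buf, size_t len)
{
    uint32_t c = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++)
        c = crc_table[(c ^ buf[i]) & 0xFF] ^ (c >> 8);
    return c ^ 0xFFFFFFFFu;
}

/* Verify a page against the CRC recorded for it; where that CRC should
 * live (in the page, in the WAL, or both) is exactly the open question
 * in this thread. */
static int
page_crc_ok(const unsigned char *page, uint32_t expected)
{
    return crc32_buf(page, PAGE_SIZE) == expected;
}

Used this way, the check catches the class of damage Nathan describes -- bits flipped in RAM or on a cable, and sector errors that slip past the drive's own ECC -- though, as the follow-ups note, it cannot by itself guard against writes that simply never happened.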
RE: [HACKERS] CRCs
> > But physical log recovery will rewrite every page that was changed > > after last checkpoint, thus this is not an issue anymore. > > No. That assumes that when the drive _says_ the block is written, > it is really on the disk. That is not true for IDE drives. It is > true for SCSI drives only when the SCSI spec is implemented correctly, > but implementing the spec correctly interferes with favorable > benchmark results. You know - this is *core* assumption. If drive lies about this then *nothing* will help you. Do you remember core rule of WAL? "Changes must be logged *before* changed data pages written". If this rule will be broken then data files will be inconsistent after crash recovery and you will not notice this, w/wo CRC in data blocks. I agreed that CRCs could help to detect other errors but probably it's too late for 7.1 Vadim
Re: [HACKERS] CRCs
On Fri, Jan 12, 2001 at 01:07:56PM -0800, Mikheev, Vadim wrote: > > > But physical log recovery will rewrite every page that was changed > > > after last checkpoint, thus this is not an issue anymore. > > > > No. That assumes that when the drive _says_ the block is written, > > it is really on the disk. That is not true for IDE drives. It is > > true for SCSI drives only when the SCSI spec is implemented correctly, > > but implementing the spec correctly interferes with favorable > > benchmark results. > > You know - this is *core* assumption. If drive lies about this then > *nothing* will help you. Do you remember core rule of WAL? > "Changes must be logged *before* changed data pages written". > If this rule will be broken then data files will be inconsistent > after crash recovery and you will not notice this, w/wo CRC in > data blocks. You can include the data blocks' CRCs in the log entries. > I agreed that CRCs could help to detect other errors but probably > it's too late for 7.1. 7.2 is not too far off. I'm hoping to see it then. Nathan Myers [EMAIL PROTECTED]
[HACKERS] CVS updates on committers list...
Has anyone ever thought of asking the FreeBSD folks for their CVS commit message generator? They generate ONE message with more info in it for multi-directory commits than we do with ours. Thanks... -- Larry Rosenman http://www.lerctr.org/~ler Phone: +1 972-414-9812 E-Mail: [EMAIL PROTECTED] US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749
Re: [HACKERS] CRCs
On Fri, Jan 12, 2001 at 12:35:14PM -0800, Nathan Myers wrote: > Vadim wrote: > > What do you mean by "external agents"? > > External agents include RAM bit drops and noise on cables when > blocks are (read and re-) written. Every time data is moved, > there is a chance of an undetected error being introduced. The > disk only promises (within limits) to deliver the sector that > was written; it doesn't promise that what was written is what > you meant to write. Errors of this sort accumulate unless > caught by end-to-end checks. > > External agents include bugs in database code, bugs in OS code, > bugs in disk controller firmware, and bugs in disk firmware. > Each can result in clobbered data, blocks being written in the > wrong place, blocks said to be written but not, and any number > of other variations. All this code is written by humans, and > even the most thorough testing cannot cover even the majority > of code paths. > > External agents include sector errors not caught by the disk CRC: > the disk only promises to keep the number of errors delivered to a > reasonably low (and documented) level. It's up to the user to > notice the errors that slip through. Interestingly, right after I posted this I noticed that cron noticed a corrupt inode in /dev on my machine. The disk is happy with it, but I'm not... Nathan Myers [EMAIL PROTECTED]
[HACKERS] SIGTERM -> elog(FATAL) -> proc_exit() is probably a bad idea
I have just finished trudging through a bunch of code and trying to make it secure against being interrupted by die() at arbitrary instants. However, I am under no illusion that I have succeeded in making the world safe for SIGTERM, and you shouldn't be either. There is just way too much code that is potentially invoked during proc_exit; even if we fixed every line of our code, there's C library code that's not under our control. For example, malloc/free are not interrupt-safe on many platforms, last I heard. Do you want to put START/END_CRIT_SECTION around every memory allocation operation? I don't. I think we'd be lots better off to abandon the notion that we can exit directly from the SIGTERM interrupt handler, and instead treat SIGTERM the same way we treat QueryCancel: set a flag that is inspected at specific places where we know we are in a good state. Comments? regards, tom lane
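To make the alternative concrete, here is a minimal sketch of the flag-based style being proposed. It is not the actual backend code; the flag names, the handler names, and the use of SIGINT as the cancel signal are assumptions for illustration. The handler only records that the signal arrived; the mainline acts on the flags at points it knows are safe.

#include <signal.h>

/* Set asynchronously by the handlers, inspected synchronously. */
volatile sig_atomic_t QueryCancelPending = 0;
volatile sig_atomic_t ProcDiePending = 0;

static void
die_handler(int signo)
{
    /* No elog(), no proc_exit(), no malloc/free here -- just a flag. */
    (void) signo;
    ProcDiePending = 1;
}

static void
cancel_handler(int signo)
{
    (void) signo;
    QueryCancelPending = 1;
}

void
setup_backend_signals(void)
{
    signal(SIGTERM, die_handler);       /* exit when next safe to do so   */
    signal(SIGINT, cancel_handler);     /* cancel the query, keep running */
}

Where the flags get tested, and how critical sections suppress the tests, is spelled out in the more detailed proposal later in this thread.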
Re: [HACKERS] CVS updates on committers list...
Well there is cvs2cl and there is a utility I use: pgsql/src/tools/pgcvslog > > Has anyone ever thought of asking the FreeBSD folks for > their CVS COmmit message generator? They generate ONE message > with more info in it for multi-directory commits than we > do with ours. > > Thanks... > -- > Larry Rosenman http://www.lerctr.org/~ler > Phone: +1 972-414-9812 E-Mail: [EMAIL PROTECTED] > US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749 > -- Bruce Momjian| http://candle.pha.pa.us [EMAIL PROTECTED] | (610) 853-3000 + If your life is a hard drive, | 830 Blythe Avenue + Christ can be your backup.| Drexel Hill, Pennsylvania 19026
RE: [HACKERS] CVS updates on committers list...
I'm referring to the actual commit messages. It would be in the CVS server config -Original Message- From: Bruce Momjian [mailto:[EMAIL PROTECTED]] Sent: Friday, January 12, 2001 4:03 PM To: Larry Rosenman Cc: PostgreSQL Hackers List Subject: Re: [HACKERS] CVS updates on committers list... Well there is cvs2cl and there is a utility I use: pgsql/src/tools/pgcvslog > > Has anyone ever thought of asking the FreeBSD folks for > their CVS COmmit message generator? They generate ONE message > with more info in it for multi-directory commits than we > do with ours. > > Thanks... > -- > Larry Rosenman http://www.lerctr.org/~ler > Phone: +1 972-414-9812 E-Mail: [EMAIL PROTECTED] > US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749 > -- Bruce Momjian| http://candle.pha.pa.us [EMAIL PROTECTED] | (610) 853-3000 + If your life is a hard drive, | 830 Blythe Avenue + Christ can be your backup.| Drexel Hill, Pennsylvania 19026
RE: [HACKERS] SIGTERM -> elog(FATAL) -> proc_exit() is probably a bad idea
> I think we'd be lots better off to abandon the notion that we can exit > directly from the SIGTERM interrupt handler, and instead treat SIGTERM > the same way we treat QueryCancel: set a flag that is inspected at > specific places where we know we are in a good state. > > Comments? This will be much cleaner. Vadim
Re: [HACKERS] CVS updates on committers list...
[ Charset ISO-8859-1 unsupported, converting... ] > I'm referring to the actual commit messages. > > It would be in the CVS server config Oh, yes, I understand now. -- Bruce Momjian| http://candle.pha.pa.us [EMAIL PROTECTED] | (610) 853-3000 + If your life is a hard drive, | 830 Blythe Avenue + Christ can be your backup.| Drexel Hill, Pennsylvania 19026
RE: [HACKERS] CRCs
> > You know - this is *core* assumption. If drive lies about this then > > *nothing* will help you. Do you remember core rule of WAL? > > "Changes must be logged *before* changed data pages written". > > If this rule will be broken then data files will be inconsistent > > after crash recovery and you will not notice this, w/wo CRC in > > data blocks. > > You can include the data blocks' CRCs in the log entries. How could it help? Vadim
Re: [HACKERS] CVS updates on committers list...
"Larry Rosenman" <[EMAIL PROTECTED]> writes: > I'm referring to the actual commit messages. It *would* be awfully nice if the pgsql-committers traffic were one message per commit, instead of one per directory touched per commit. This has been suggested before, but nothing got done ... regards, tom lane
Re: [HACKERS] CRCs
On Fri, Jan 12, 2001 at 02:16:07PM -0800, Mikheev, Vadim wrote: > > > You know - this is *core* assumption. If drive lies about this then > > > *nothing* will help you. Do you remember core rule of WAL? > > > "Changes must be logged *before* changed data pages written". > > > If this rule will be broken then data files will be inconsistent > > > after crash recovery and you will not notice this, w/wo CRC in > > > data blocks. > > > > You can include the data blocks' CRCs in the log entries. > > How could it help? It wouldn't help you recover, but you would be able to report that you cannot recover. To be more specific, if the blocks referenced in the log are partially written, their CRCs will (probably) be wrong. If they are not physically written at all, their CRCs will be correct but will not match what is in the log. In either case the user will know immediately that the database has been corrupted, and must fall back on a failover image or backup. It would be no bad thing to include the CRC of the block referenced wherever in the file format that a block reference lives. Nathan Myers [EMAIL PROTECTED]
Re: [HACKERS] CVS updates on committers list...
Tom Lane <[EMAIL PROTECTED]> writes: > "Larry Rosenman" <[EMAIL PROTECTED]> writes: > > I'm referring to the actual commit messages. > > It *would* be awfully nice if the pgsql-committers traffic were one > message per commit, instead of one per directory touched per commit. > This has been suggested before, but nothing got done ... This is easy to set up, for an appropriate definition of ``easy''. I have included two files below, commit_prep and log_accum. Do cvs co CVSROOT Copy the files into the CVSROOT directory. Check the definition of $MAILER near the top of log_accum. Everything else should be fine. Do this: cvs add commit_prep cvs add log_accum Edit the file checkoutlist, and add these lines: commit_prep Won't be able to do mail logging. log_accum Won't be able to do mail logging. Edit the file commitinfo, and add something like this line: DEFAULT /usr/bin/perl $CVSROOT/CVSROOT/commit_prep -r Edit the file loginfo, and add something like this line (replace MAILINGLIST with the mailing address to which you want log messages to be sent): DEFAULT /usr/bin/perl $CVSROOT/CVSROOT/log_accum -m MAILINGLIST -s %s Then: cvs commit Then test it. Good luck. I didn't write these scripts, but I've used them in a number of places. Ian commit_prep: == #!/usr/bin/perl # -*-Perl-*- # # $Id: commit_prep,v 1.1 1998/12/01 03:18:27 ian Exp $ # # Perl filter to handle pre-commit checking of files. This program # records the last directory where commits will be taking place for # use by the log_accum.pl script. For new files, it forces the # existence of a RCS "Id" keyword in the first ten lines of the file. # For existing files, it checks version number in the "Id" line to # prevent losing changes because an old version of a file was copied # into the direcory. # # Possible future enhancements: # #Check for cruft left by unresolved conflicts. Search for #"^<<<$", "^---$", and "^>>>$". # #Look for a copyright and automagically update it to the #current year. [[ bad idea! -- woods ]] # # # Contributed by David Hampton <[EMAIL PROTECTED]> # # Hacked on lots by Greg A. Woods <[EMAIL PROTECTED]> # # Configurable options # # Constants (remember to protect strings from RCS keyword substitution) # $LAST_FILE = "/tmp/#egcscvs.lastdir"; # must match name in log_accum.pl $ENTRIES = "CVS/Entries"; # Patterns to find $Log keywords in files # $LogString1 = "\\\$\\Log: .* \\\$"; $LogString2 = "\\\$\\Log\\\$"; $NoLog = "%s - contains an RCS \$Log keyword. It must not!\n"; # pattern to match an RCS Id keyword line with an existing ID # $IDstring = "\"@\\(#\\)[^:]*:.*\\\$\Id: .*\\\$\""; $NoId = " %s - Does not contain a properly formatted line with the keyword \"Id:\". I.e. no lines match \"" . $IDstring . "\". Please see the template files for an example.\n"; # pattern to match an RCS Id keyword line for a new file (i.e. un-expanded) # $NewId = "\"@(#)[^:]*:.*\\$\Id\\$\""; $NoName = " %s - The ID line should contain only \"@(#)module/path:\$Name\$:\$\Id\$\" for a newly created file.\n"; $BadName = " %s - The file name '%s' in the ID line does not match the actual filename.\n"; $BadVersion = " %s - How dare you!!! You replaced your copy of the file '%s', which was based upon version %s, with an %s version based upon %s. 
Please move your '%s' out of the way, perform an update to get the current version, and them merge your changes into that file, then try the commit again.\n"; # # Subroutines # sub write_line { local($filename, $line) = @_; open(FILE, ">$filename") || die("Cannot open $filename, stopped"); print(FILE $line, "\n"); close(FILE); } sub check_version { local($i, $id, $rname, $version); local($filename, $cvsversion) = @_; open(FILE, "<$filename") || return(0); @all_lines = (); $idpos = -1; $newidpos = -1; for ($i = 0; ; $i++) { chop; push(@all_lines, $_); if ($_ =~ /$IDstring/) { $idpos = $i; } if ($_ =~ /$NewId/) { $newidpos = $i; } } if (grep(/$LogString1/, @all_lines) || grep(/$LogString2/, @all_lines)) { print STDERR sprintf($NoLog, $filename); return(1); } if ($debug != 0) { print STDERR sprintf("file = %s, version = %d.\n", $filename, $cvsversion{$filename}); } if ($cvsversion{$filename} == 0) { if ($newidpos != -1 && $all_lines[$newidpos] !~ /$NewId/) { print STDERR sprintf($NoName, $filename); return(1); } return(0); } if ($idpos == -1) { print STDERR sprintf($NoId, $filename); return(1); } $line = $all_lines[$idpos]; $pos = index($line, "Id: "); if ($debug != 0) { print STDERR sprintf("%d in '%s'.\n", $pos, $line); } ($id
Re: [HACKERS] CRCs
[EMAIL PROTECTED] (Nathan Myers) writes: >> "Changes must be logged *before* changed data pages written". >> If this rule will be broken then data files will be inconsistent >> after crash recovery and you will not notice this, w/wo CRC in >> data blocks. You can include the data blocks' CRCs in the log entries. >> >> How could it help? > It wouldn't help you recover, but you would be able to report that > you cannot recover. How? The scenario Vadim is pointing out is where the disk drive writes a changed data block in advance of the WAL log entry describing the change. Then power drops and the WAL entry never gets made. At restart, how will you realize that that data block now contains data you don't want? There's not even a log entry telling you you need to look at it, much less one that tells you what should be in it. AFAICS, disk-block CRCs do not guard against mishaps involving intended writes. They will help guard against data corruption that might creep in due to outside factors, however. regards, tom lane
RE: [HACKERS] CRCs
> > It wouldn't help you recover, but you would be able to report that > > you cannot recover. > > How? The scenario Vadim is pointing out is where the disk > drive writes a changed data block in advance of the WAL log entry > describing the change. Then power drops and the WAL entry never gets > made. At restart, how will you realize that that data block now > contains data you don't want? There's not even a log entry telling > you you need to look at it, much less one that tells you what should > be in it. > > AFAICS, disk-block CRCs do not guard against mishaps involving intended > writes. They will help guard against data corruption that might creep > in due to outside factors, however. I couldn't describe better -:) Vadim
Re: [HACKERS] CRCs
On Fri, Jan 12, 2001 at 06:06:21PM -0500, Tom Lane wrote: > [EMAIL PROTECTED] (Nathan Myers) writes: > >> "Changes must be logged *before* changed data pages written". > >> If this rule will be broken then data files will be inconsistent > >> after crash recovery and you will not notice this, w/wo CRC in > >> data blocks. > > You can include the data blocks' CRCs in the log entries. > >> > >> How could it help? > > > It wouldn't help you recover, but you would be able to report that > > you cannot recover. > > How? The scenario Vadim is pointing out is where the disk drive writes > a changed data block in advance of the WAL log entry describing the > change. Then power drops and the WAL entry never gets made. At > restart, how will you realize that that data block now contains data you > don't want? There's not even a log entry telling you you need to look > at it, much less one that tells you what should be in it. OK. In that case, recent transactions that were acknowledged to user programs just disappear. The database isn't corrupt, but it doesn't contain what the user believes is in it. The only way I can think of to guard against that is to have a sequence number in each acknowledgement sent to users, and also reported when the database recovers. If users log their ACK numbers, they can be compared when the database comes back up. Obviously it's better to configure the disk so that it doesn't lie about what's been written. > AFAICS, disk-block CRCs do not guard against mishaps involving intended > writes. They will help guard against data corruption that might creep > in due to outside factors, however. Right. Nathan Myers [EMAIL PROTECTED]
RE: [HACKERS] SIGTERM -> elog(FATAL) -> proc_exit() is probably a bad idea
> -Original Message- > From: Mikheev, Vadim > > > I think we'd be lots better off to abandon the notion that we can exit > > directly from the SIGTERM interrupt handler, and instead treat SIGTERM > > the same way we treat QueryCancel: set a flag that is inspected at > > specific places where we know we are in a good state. > > > > Comments? > > This will be much cleaner. > Hmm, CancelQuery isn't so urgent an operation currently. For example, VACUUM checks QueryCancel flag only once per table. Regards. Hiroshi Inoue
Re: [HACKERS] SIGTERM -> elog(FATAL) -> proc_exit() is probably a bad idea
>> I think we'd be lots better off to abandon the notion that we can exit >> directly from the SIGTERM interrupt handler, and instead treat SIGTERM >> the same way we treat QueryCancel: set a flag that is inspected at >> specific places where we know we are in a good state. >> >> Comments? > This will be much cleaner. OK, here's a more detailed proposal. We'll eliminate elog() directly from the signal handlers, except in the extremely limited case where the main line is waiting for a lock (the existing "cancel wait for lock" mechanism for QueryCancel can be used for SIGTERM as well, I think). Otherwise, both die() and QueryCancelHandler() will just set flags and return. handle_warn() (the SIGQUIT handler) should probably go away entirely; it does nothing that's not done better by QueryCancel. I'm inclined to make SIGQUIT invoke die() the same as SIGTERM does, unless someone has a better idea what to do with that signal. I believe we should keep the "critical section count" mechanism that Vadim already created, even though it is no longer needed to discourage the signal handlers themselves from aborting processing. With the critical section counter, it is OK to have flag checks inside subroutines that are safe to abort from in some contexts and not others --- the contexts that don't want an abort just have to do START/END_CRIT_SECTION to ensure that QueryCancel/ProcDie won't happen in routines they call. So the basic flag checks are like "if (InterruptFlagSet && CritSectionCounter == 0) elog(...);". Having done that, the $64 question is where to test the flags. Obviously we can check for interrupts at all the places where QueryCancel is tested now, which is primarily the outer loops. I suggest we also check for interrupts in s_lock's wait loop (ie, we can cancel/die if we haven't yet got the lock AND we are not in a crit section), as well as in END_CRIT_SECTION. I intend to devise a macro CHECK_FOR_INTERRUPTS() that embodies all the test code, rather than duplicating these if-tests in many places. Note I am assuming that it's always reasonable to check for QueryCancel and ProcDie at the same places. I do not see any hole in that theory, but if someone finds one, we could introduce additional flags/counters to distinguish safe-to-cancel from safe-to-die states. Comments? regards, tom lane
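A sketch of the checking side of this proposal, building on the flag-setting handlers sketched earlier in the thread. The names mirror the ones used in the message above; the bodies are illustrative guesses rather than the eventual implementation, and ProcessInterrupts() is a hypothetical helper that would do the actual elog(ERROR)/elog(FATAL).

#include <signal.h>

extern volatile sig_atomic_t QueryCancelPending;    /* set by the cancel handler  */
extern volatile sig_atomic_t ProcDiePending;        /* set by the SIGTERM handler */

/* Hypothetical: reports via elog(ERROR) or elog(FATAL) as appropriate. */
extern void ProcessInterrupts(void);

static volatile int CritSectionCount = 0;

/* Honor pending interrupts only outside critical sections. */
#define CHECK_FOR_INTERRUPTS() \
    do { \
        if ((ProcDiePending || QueryCancelPending) && CritSectionCount == 0) \
            ProcessInterrupts(); \
    } while (0)

#define START_CRIT_SECTION()    (CritSectionCount++)

/* Leaving the outermost critical section is itself a safe point, so an
 * interrupt that arrived while it was held gets serviced promptly. */
#define END_CRIT_SECTION() \
    do { \
        if (--CritSectionCount == 0) \
            CHECK_FOR_INTERRUPTS(); \
    } while (0)

Sprinkling CHECK_FOR_INTERRUPTS() through the outer loops, the s_lock() wait loop, and the wait for client input would then give roughly the coverage described, without ever calling elog() from inside a signal handler.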
Re: [HACKERS] SIGTERM -> elog(FATAL) -> proc_exit() is probably a bad idea
"Hiroshi Inoue" <[EMAIL PROTECTED]> writes: > Hmm, CancelQuery isn't so urgent an operation currently. > For example, VACUUM checks QueryCancel flag only > once per table. Right, we'll need to check in more places. See my just-posted proposal. Checking at any spinlock grab should ensure that we check reasonably often. I just realized I forgot to mention the case of SIGTERM while the main line is waiting for input from the frontend --- obviously we want to quit immediately in that case, too. regards, tom lane
[HACKERS] Re: still no log
On Wed 10 Jan 2001 20:55, Alfonso Peniche wrote: > There is a file called postmaster.init in the postgres directory; in that > directory you specify whether or not you want it to use the logs. Have you > checked that file already? No such file exists, but I have already solved it. Thanks anyway. ;-) -- System Administration: It's a dirty job, but someone told I had to do it. - Martín Marqués email: [EMAIL PROTECTED] Santa Fe - Argentina http://math.unl.edu.ar/~martin/ Systems administrator at math.unl.edu.ar -
[HACKERS] problems with pg_geqo
I'm trying to retrieve a big query (bah, not so big, but with a dozen joins) and all I get is this: 010112.17:22:06.050 [5387] DEBUG: Rel trabajos_docentes: Pages: 3 --> 2. 010112.17:30:48.458 [5412] DEBUG: geqo_params: ga parameter file '/var/lib/pgsql/data/pg_geqo' does not exist or permissions are not setup correctly 010112.17:30:48.475 [5412] DEBUG: geqo_params: no pool size specified; using computed value of 1024 010112.17:30:48.475 [5412] DEBUG: geqo_params: no optimization effort specified; using value of 40 010112.17:30:48.475 [5412] DEBUG: geqo_params: no number of trials specified; using computed value of 400 010112.17:30:48.475 [5412] DEBUG: geqo_params: no random seed specified; using computed value of 979331448 010112.17:30:48.475 [5412] DEBUG: geqo_params: no selection bias specified; using default value of 2.00 010112.17:30:57.936 [5412] DEBUG: geqo_main: using edge recombination crossover [ERX] Now, when I go to /var/lib/pgsql/data/ I see this: total 100 -rw--- 1 postgres postgres 4 Dec 5 12:10 PG_VERSION drwx-- 5 postgres postgres 1024 Dec 19 17:41 base/ -rw--- 1 postgres postgres 8192 Jan 12 15:30 pg_control -rw--- 1 postgres postgres 8192 Jan 12 17:22 pg_database -r 1 postgres postgres 3407 Dec 5 12:10 pg_geqo.sample -rw--- 1 postgres postgres 0 Dec 5 12:10 pg_group -rw--- 1 postgres postgres 16384 Dec 5 12:10 pg_group_name_index -rw--- 1 postgres postgres 16384 Dec 5 12:10 pg_group_sysid_index -r 1 postgres postgres 5729 Dec 5 12:10 pg_hba.conf -rw--- 1 postgres postgres 16384 Jan 12 17:22 pg_log -rw--- 1 postgres postgres 58 Dec 16 17:03 pg_pwd -rw--- 1 postgres postgres 0 Dec 16 17:03 pg_pwd.reload -rw--- 1 postgres postgres 8192 Dec 16 17:03 pg_shadow -rw--- 1 postgres postgres 8192 Jan 12 17:22 pg_variable drwx-- 2 postgres postgres 1024 Dec 5 12:10 pg_xlog/ -rw--- 1 postgres postgres 87 Jan 12 15:30 postmaster.opts -r 1 postgres postgres 4 Dec 5 12:10 postmaster.opts.default -rw--- 1 postgres postgres 4 Jan 12 15:30 postmaster.pid as you can see, there is a pg_geqo.sample, but not a pg_geqo. Should I rename it for it to work? I have been making lots of queries (inserts and selects) with no problem at all. Any help will be appreciated. -- System Administration: It's a dirty job, but someone told I had to do it. - Martín Marqués email: [EMAIL PROTECTED] Santa Fe - Argentina http://math.unl.edu.ar/~martin/ Systems administrator at math.unl.edu.ar -
[GENERAL] Re: [HACKERS] problems with pg_geqo
"Martin A. Marques" <[EMAIL PROTECTED]> writes: > as you can see, there is a pg_geqo.sample, but not a pg_geqo. Should I > rename it for it to work? Only if you want to mess with the default GEQO parameters. The debug messages you show don't seem to indicate that anything's wrong, so I'm not at all clear on what your complaint is. regards, tom lane
Re: [HACKERS] CRCs
* Nathan Myers <[EMAIL PROTECTED]> [010112 15:49] wrote: > On Fri, Jan 12, 2001 at 06:06:21PM -0500, Tom Lane wrote: > > [EMAIL PROTECTED] (Nathan Myers) writes: > > >> "Changes must be logged *before* changed data pages written". > > >> If this rule will be broken then data files will be inconsistent > > >> after crash recovery and you will not notice this, w/wo CRC in > > >> data blocks. > > > > You can include the data blocks' CRCs in the log entries. > > >> > > >> How could it help? > > > > > It wouldn't help you recover, but you would be able to report that > > > you cannot recover. > > > > How? The scenario Vadim is pointing out is where the disk drive writes > > a changed data block in advance of the WAL log entry describing the > > change. Then power drops and the WAL entry never gets made. At > > restart, how will you realize that that data block now contains data you > > don't want? There's not even a log entry telling you you need to look > > at it, much less one that tells you what should be in it. > > OK. In that case, recent transactions that were acknowledged to user > programs just disappear. The database isn't corrupt, but it doesn't > contain what the user believes is in it. > > The only way I can think of to guard against that is to have a sequence > number in each acknowledgement sent to users, and also reported when the > database recovers. If users log their ACK numbers, they can be compared > when the database comes back up. > > Obviously it's better to configure the disk so that it doesn't lie about > what's been written. I thought WAL+fsync wasn't supposed to allow this to happen? -- -Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]] "I have the heart of a child; I keep it in a jar on my desk."
RE: [HACKERS] CRCs
> > How? The scenario Vadim is pointing out is where the disk > > drive writes a changed data block in advance of the WAL log > > entry describing the change. Then power drops and the WAL > > entry never gets made. At restart, how will you realize that > > that data block now contains data you don't want? There's not > > even a log entry telling you you need to look at it, much less > > one that tells you what should be in it. > > OK. In that case, recent transactions that were acknowledged to user > programs just disappear. The database isn't corrupt, but it doesn't > contain what the user believes is in it. Example. 1. Tuple was inserted into index. 2. Looking for free buffer bufmgr decides to write index block. 3. Following WAL core rule bufmgr first calls XLogFlush() to write and fsync log record related to index tuple insertion. 4. *Believing* that log record is on disk now (after successful fsync) bufmgr writes index block. If log record was not really flushed on disk in 3. but on-disk image of index block was updated in 4. and system crashed after this then after restart recovery you'll have unlawful index tuple pointing to where? Who knows! No guarantee that corresponding heap tuple was flushed on disk. Isn't database corrupted now? Vadim
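For readers following the example, here is a minimal sketch of the ordering rule in steps 3 and 4. The names (log_flush, flush_dirty_page) and the flat-file layout are hypothetical stand-ins rather than the real XLogFlush()/bufmgr interfaces; the point is only the order of the two writes and the assumption buried in the first one.

#include <sys/types.h>
#include <unistd.h>

/* Hypothetical stand-ins for the WAL segment and a data file. */
static int log_fd;      /* WAL already write()n up to at least page_lsn */
static int data_fd;     /* heap or index file                           */

/* Step 3: force the log out through the record that last touched the page. */
static void
log_flush(off_t page_lsn)
{
    (void) page_lsn;    /* a real implementation tracks how far is already flushed */
    fsync(log_fd);      /* only as good as the drive's honesty about completion    */
}

/* Core WAL rule: the log record describing a change must be on disk
 * before the changed data page is. */
static void
flush_dirty_page(const char *page, off_t page_offset, off_t page_lsn)
{
    log_flush(page_lsn);                              /* step 3 */
    (void) pwrite(data_fd, page, 8192, page_offset);  /* step 4 */
}

If the fsync() in step 3 returns while the record is still sitting in the drive's write cache, the data page in step 4 can reach the platter first -- exactly the failure described here, and one that no per-page CRC can flag, since the page's contents are internally consistent; they just describe a change whose log record (and possibly whose heap tuple) never made it to disk.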
Re: [HACKERS] CVS updates on committers list...
I keep meaning to work with Alfred on this ... just keeps getting backburnered :( Let me take a look at her this weekend ... On Fri, 12 Jan 2001, Larry Rosenman wrote: > > Has anyone ever thought of asking the FreeBSD folks for > their CVS COmmit message generator? They generate ONE message > with more info in it for multi-directory commits than we > do with ours. > > Thanks... > -- > Larry Rosenman http://www.lerctr.org/~ler > Phone: +1 972-414-9812 E-Mail: [EMAIL PROTECTED] > US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749 > Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy Systems Administrator @ hub.org primary: [EMAIL PROTECTED] secondary: scrappy@{freebsd|postgresql}.org
Re: [HACKERS] CRCs
"Mikheev, Vadim" <[EMAIL PROTECTED]> writes: > If log record was not really flushed on disk in 3. but on-disk image of > index block was updated in 4. and system crashed after this then after > restart recovery you'll have unlawful index tuple pointing to where? > Who knows! No guarantee that corresponding heap tuple was flushed on > disk. This example doesn't seem very convincing. Wouldn't the XLOG entry describing creation of the heap tuple appear in the log before the one for the index tuple? Or are you assuming that both these XLOG entries are lost due to disk drive malfeasance? regards, tom lane
Re: [HACKERS] CRCs
On Fri, Jan 12, 2001 at 04:10:36PM -0800, Alfred Perlstein wrote: > Nathan Myers <[EMAIL PROTECTED]> [010112 15:49] wrote: > > > > Obviously it's better to configure the disk so that it doesn't > > lie about what's been written. > > I thought WAL+fsync wasn't supposed to allow this to happen? It's an OS and hardware configuration matter; you only get correct WAL+fsync semantics if the underlying system is configured right. IDE disks are almost always configured wrong, to spoof benchmarks; SCSI disks sometimes are. If they're configured wrong, then (now that we have a CRC in the log entry) in the event of a power outage the database might come back with recently-acknowledged transaction results discarded. That's a lot better than a corrupt database, but it's not industrial-grade semantics. (Use a UPS.) Nathan Myers [EMAIL PROTECTED]
Re: [HACKERS] CVS updates on committers list...
Done ... just tried adding white space to three files, in three directories, and it seems to go through as one commit ... let's see what happens next time Tom makes a big change :) On 12 Jan 2001, Ian Lance Taylor wrote: > Tom Lane <[EMAIL PROTECTED]> writes: > > > "Larry Rosenman" <[EMAIL PROTECTED]> writes: > > > I'm referring to the actual commit messages. > > > > It *would* be awfully nice if the pgsql-committers traffic were one > > message per commit, instead of one per directory touched per commit. > > This has been suggested before, but nothing got done ... > > This is easy to set up, for an appropriate definition of ``easy''. > > I have included two files below, commit_prep and log_accum. Do > cvs co CVSROOT > Copy the files into the CVSROOT directory. Check the definition of > $MAILER near the top of log_accum. Everything else should be fine. > Do this: > cvs add commit_prep > cvs add log_accum > > Edit the file checkoutlist, and add these lines: > > commit_prep Won't be able to do mail logging. > log_accum Won't be able to do mail logging. > > Edit the file commitinfo, and add something like this line: > > DEFAULT/usr/bin/perl $CVSROOT/CVSROOT/commit_prep -r > > Edit the file loginfo, and add something like this line (replace > MAILINGLIST with the mailing address to which you want log messages to > be sent): > > DEFAULT/usr/bin/perl $CVSROOT/CVSROOT/log_accum -m MAILINGLIST -s %s > > Then: > cvs commit > > Then test it. > > Good luck. I didn't write these scripts, but I've used them in a > number of places. > > Ian > > commit_prep: > == > #!/usr/bin/perl > # -*-Perl-*- > # > # $Id: commit_prep,v 1.1 1998/12/01 03:18:27 ian Exp $ > # > # Perl filter to handle pre-commit checking of files. This program > # records the last directory where commits will be taking place for > # use by the log_accum.pl script. For new files, it forces the > # existence of a RCS "Id" keyword in the first ten lines of the file. > # For existing files, it checks version number in the "Id" line to > # prevent losing changes because an old version of a file was copied > # into the direcory. > # > # Possible future enhancements: > # > #Check for cruft left by unresolved conflicts. Search for > #"^<<<$", "^---$", and "^>>>$". > # > #Look for a copyright and automagically update it to the > #current year. [[ bad idea! -- woods ]] > # > # > # Contributed by David Hampton <[EMAIL PROTECTED]> > # > # Hacked on lots by Greg A. Woods <[EMAIL PROTECTED]> > > # > # Configurable options > # > > # Constants (remember to protect strings from RCS keyword substitution) > # > $LAST_FILE = "/tmp/#egcscvs.lastdir"; # must match name in log_accum.pl > $ENTRIES = "CVS/Entries"; > > # Patterns to find $Log keywords in files > # > $LogString1 = "\\\$\\Log: .* \\\$"; > $LogString2 = "\\\$\\Log\\\$"; > $NoLog = "%s - contains an RCS \$Log keyword. It must not!\n"; > > # pattern to match an RCS Id keyword line with an existing ID > # > $IDstring = "\"@\\(#\\)[^:]*:.*\\\$\Id: .*\\\$\""; > $NoId = " > %s - Does not contain a properly formatted line with the keyword \"Id:\". > I.e. no lines match \"" . $IDstring . "\". > Please see the template files for an example.\n"; > > # pattern to match an RCS Id keyword line for a new file (i.e. 
un-expanded) > # > $NewId = "\"@(#)[^:]*:.*\\$\Id\\$\""; > > $NoName = " > %s - The ID line should contain only \"@(#)module/path:\$Name\$:\$\Id\$\" > for a newly created file.\n"; > > $BadName = " > %s - The file name '%s' in the ID line does not match > the actual filename.\n"; > > $BadVersion = " > %s - How dare you!!! You replaced your copy of the file '%s', > which was based upon version %s, with an %s version based > upon %s. Please move your '%s' out of the way, perform an > update to get the current version, and them merge your changes > into that file, then try the commit again.\n"; > > # > # Subroutines > # > > sub write_line { > local($filename, $line) = @_; > open(FILE, ">$filename") || die("Cannot open $filename, stopped"); > print(FILE $line, "\n"); > close(FILE); > } > > sub check_version { > local($i, $id, $rname, $version); > local($filename, $cvsversion) = @_; > > open(FILE, "<$filename") || return(0); > > @all_lines = (); > $idpos = -1; > $newidpos = -1; > for ($i = 0; ; $i++) { > chop; > push(@all_lines, $_); > if ($_ =~ /$IDstring/) { > $idpos = $i; > } > if ($_ =~ /$NewId/) { > $newidpos = $i; > } > } > > if (grep(/$LogString1/, @all_lines) || grep(/$LogString2/, @all_lines)) { > print STDERR sprintf($NoLog, $filename); > return(1); > } > > if ($debug != 0) { > print STDERR sprintf("file = %s, version = %d.\n", $filename, >$cvsversion{$filename}); > } > > if
RE: [HACKERS] CRCs
> > If log record was not really flushed on disk in 3. but > > on-disk image of index block was updated in 4. and system > > crashed after this then after restart recovery you'll have > > unlawful index tuple pointing to where? Who knows! > > No guarantee that corresponding heap tuple was flushed on > > disk. > > This example doesn't seem very convincing. Wouldn't the XLOG entry > describing creation of the heap tuple appear in the log before the one > for the index tuple? Or are you assuming that both these XLOG entries > are lost due to disk drive malfeasance? Yes, that was assumed. When UNDO will be implemented and uncommitted tuples will be removed by the rollback part of after-crash recovery, we'll get a corrupted database without that assumption. Vadim