C/C++/Java [was Re: [GENERAL] State of Beta 2]
On Sunday 28 September 2003 09:36, Ron Johnson wrote: On Sat, 2003-09-27 at 22:19, Dennis Gearon wrote: Ron Johnson wrote: There's always the general point that C has more pitfalls (mainly from pointers/free()/malloc()), and HLLs do more for you, thus you have to code less and, consequently, there are fewer bugs. Someday, they're going to make a language called CBC, C Bounds Checked: no buffer overflows, and all memory allocs and mallocs create a memory object that self-expands or contracts as necessary, or issues an exception if it tries to go past a limit you put as an argument to a malloc. With gigabytes of real memory and 100 gigabytes plus of virtual memory, the programmer should not handle memory management any more. Consumers and software users expect programmers to give up their pride and let go of total control of the memory model (like they have it now). The only exception might be hardware drivers. Some would say that that's what Java and C++ are for. I'd do more Java programming if it didn't have an API the size of Montana, no, make that Alaska and a good chunk of Siberia. But still, multiple pointers being able to point to the same chunk of the heap will doom any solution to inefficiency. IMNSHO, only the kernel and *high-performance* products should be written in C. Everything else should be written in HLLs. Anything from COBOL (still a useful language), FORTRAN, modern BASICs, to pointer-less Pascal, Java, Smalltalk, Lisp, and scripting languages. Note that I did *not* mention C++. Duh. I would say smart pointers in C++ take care of memory errors without adding the inefficiencies and latency of garbage collection. There are plenty of examples floating around on the net. It's not about C's ability to provide built-in bounds checking. It's about programmers following discipline, abstraction and design. It's just that C makes those errors apparent in a very rude and blunt way..:-) I hate Java except for the unified APIs it provides. 
Compensating for programmers' mistakes by throwing additional resources at the problem is not my idea of a good product. But unfortunately most people are more concerned about getting a product out the door than giving it the attention due a robust product (like the one I work on.. 10 years old and still going strong..:-)). The business of software development has commoditized itself.. it's just a sad side effect of it.. Shridhar ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: Rewriting pg_upgrade (was Re: [GENERAL] State of Beta 2)
On Sat, Sep 27, 2003 at 10:42:02PM -0300, Marc G. Fournier wrote: On Sat, 27 Sep 2003, Ron Johnson wrote: Isn't Perl pretty ubiquitous on Unix now, though? Except maybe Unixware I know that Solaris now has it included by default ... FWIW, FreeBSD just removed it (in the 5.x versions). Of course you can still easily install it from ports. -- Jim C. Nasby, Database Consultant [EMAIL PROTECTED] Member: Triangle Fraternity, Sports Car Club of America Give your computer some brain candy! www.distributed.net Team #1828 Windows: Where do you want to go today? Linux: Where do you want to go tomorrow? FreeBSD: Are you guys coming, or what?
Rewriting pg_upgrade (was Re: [GENERAL] State of Beta 2)
On Sat, 2003-09-27 at 16:50, Nigel J. Andrews wrote: On Sat, 27 Sep 2003, Bruce Momjian wrote: Tom Lane wrote: Bruce Momjian [EMAIL PROTECTED] writes: With all the discussion of pg_upgrade, I saw no one offer to work on it. Does someone want to convert it to Perl? I think that would be a better language than shell script for this purpose, and C is too low-level. The reason that it needs to be rewritten in C is that it needs access to internal stuff that the backend doesn't expose. (For example, the transaction counter, end-of-WAL pointer, etc.) I don't think Perl would offer anything except creating an entirely new dependency for Postgres. Also, C code would be easier to keep in sync with the backend code that accesses the same stuff. Isn't Perl pretty ubiquitous on Unix now, though? Except maybe Unixware True, but doing all that text manipulation in C is going to be very hard to do and maintain. What about using embedded Perl? I've never done it before, but the mention of it in manpages has flashed past my eyes a couple of times, so I know it's possible. Did the discussion decide on what was required for this? Last I noticed there was a distinction being made between system and user tables, but I don't recall seeing a 'requirements' summary. What about Perl w/ C modules? Of course, there's my favorite: Python. It's got a good facility for writing C modules, and I think it's better for writing s/w that needs to be constantly updated. (I swear, it's just circumstance that this particular .signature came up at this time, but it is apropos.) -- - Ron Johnson, Jr. [EMAIL PROTECTED] Jefferson, LA USA YODA: Code! Yes. A programmer's strength flows from code maintainability. But beware of Perl. Terse syntax... more than one way to do it... default variables. The dark side of code maintainability are they. Easily they flow, quick to join you when code you write. If once you start down the dark path, forever will it dominate your destiny, consume you it will. 
Re: Rewriting pg_upgrade (was Re: [GENERAL] State of Beta 2)
On Sat, 27 Sep 2003, Ron Johnson wrote: Isn't Perl pretty ubiquitous on Unix now, though? Except maybe Unixware I know that Solaris now has it included by default ...
Re: Rewriting pg_upgrade (was Re: [GENERAL] State of Beta 2)
perl ships on UnixWare (5.005, but that will change in UP3). LER --On Saturday, September 27, 2003 22:42:02 -0300 Marc G. Fournier [EMAIL PROTECTED] wrote: On Sat, 27 Sep 2003, Ron Johnson wrote: Isn't Perl pretty ubiquitous on Unix now, though? Except maybe Unixware I know that Solaris now has it included by default ... -- Larry Rosenman http://www.lerctr.org/~ler Phone: +1 972-414-9812 E-Mail: [EMAIL PROTECTED] US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749
Re: [GENERAL] State of Beta 2
$$$ -- I wasn't looking to purchase a programmer. :-) Well, sometimes it takes money to get things done. Personally I don't see a big need for pg_upgrade, but there were enough people making noise about it that it made sense to make the proposal. Several people did come back and offer to cough up a little bit, but not enough to get the project done. My preference is to see all that work going into pg_dump, pg_dumpall and pg_restore. Sincerely, Joshua Drake
Re: [GENERAL] State of Beta 2
On Saturday 27 September 2003 09:45 pm, Joshua D. Drake wrote: $$$ -- I wasn't looking to purchase a programmer. :-) Well, sometimes it takes money to get things done. Personally I don't see a big need for pg_upgrade, but there were enough people making noise about it that it made sense to make the proposal. Several people did come back and offer to cough up a little bit, but not enough to get the project done. I could always forward you my fan mail (context for the following message is that I was extolling the group of people that help me build the various RPM sets as an example of how backports of Fedora Core packages could be done for 'Fedora Legacy' stuff (many thanks to those who help me, BTW)): === Re: I volunteer From: Chuck Wolber [EMAIL PROTECTED] To: [EMAIL PROTECTED] I as PostgreSQL RPM maintainer for the PostgreSQL Global Development Group do something similar to this using a loose group of volunteers. TROLL Ahhh, so you're the one. Perhaps you could write a postgreSQL RPM with upgrade functionality that actually works? /TROLL -Chuck -- Quantum Linux Laboratories - ACCELERATING Business with Open Technology * Education | -=^ Ad Astra Per Aspera ^=- * Integration | http://www.quantumlinux.com * Support | [EMAIL PROTECTED] = You know, I don't mind owning up to my own bugs. But this bug ain't mine. -- Lamar Owen Director of Information Technology Pisgah Astronomical Research Institute 1 PARI Drive Rosman, NC 28772 (828)862-5554 www.pari.edu
Re: Rewriting pg_upgrade (was Re: [GENERAL] State of Beta 2)
On Sat, 27 Sep 2003, Larry Rosenman wrote: perl ships on UnixWare (5.005, but that will change in UP3). In what way? :) It won't ship anymore ... or upgraded? LER --On Saturday, September 27, 2003 22:42:02 -0300 Marc G. Fournier [EMAIL PROTECTED] wrote: On Sat, 27 Sep 2003, Ron Johnson wrote: Isn't Perl pretty ubiquitous on Unix now, though? Except maybe Unixware I know that Solaris now has it included by default ... -- Larry Rosenman http://www.lerctr.org/~ler Phone: +1 972-414-9812 E-Mail: [EMAIL PROTECTED] US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749
Re: Rewriting pg_upgrade (was Re: [GENERAL] State of Beta 2)
--On Sunday, September 28, 2003 00:14:18 -0300 Marc G. Fournier [EMAIL PROTECTED] wrote: On Sat, 27 Sep 2003, Larry Rosenman wrote: perl ships on UnixWare (5.005, but that will change in UP3). In what way? :) It won't ship anymore ... or upgraded? upgraded to 5.8.0 (sorry, should have been more clear :-)) LER -- Larry Rosenman http://www.lerctr.org/~ler Phone: +1 972-414-9812 E-Mail: [EMAIL PROTECTED] US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749
Re: [GENERAL] State of Beta 2
Ron Johnson wrote: There's always the general point that C has more pitfalls (mainly from pointers/free()/malloc()), and HLLs do more for you, thus you have to code less and, consequently, there are fewer bugs. Someday, they're going to make a language called CBC, C Bounds Checked: no buffer overflows, and all memory allocs and mallocs create a memory object that self-expands or contracts as necessary, or issues an exception if it tries to go past a limit you put as an argument to a malloc. With gigabytes of real memory and 100 gigabytes plus of virtual memory, the programmer should not handle memory management any more. Consumers and software users expect programmers to give up their pride and let go of total control of the memory model (like they have it now). The only exception might be hardware drivers. Nobody say C#, OK? An Msoft-imposed solution that integrates all their products, mistakes, football-stadium-sized APIs, and private backdoors is not the answer.
Re: [GENERAL] State of Beta 2
On Sat, 2003-09-27 at 22:19, Dennis Gearon wrote: Ron Johnson wrote: There's always the general point that C has more pitfalls (mainly from pointers/free()/malloc()), and HLLs do more for you, thus you have to code less and, consequently, there are fewer bugs. Someday, they're going to make a language called CBC, C Bounds Checked: no buffer overflows, and all memory allocs and mallocs create a memory object that self-expands or contracts as necessary, or issues an exception if it tries to go past a limit you put as an argument to a malloc. With gigabytes of real memory and 100 gigabytes plus of virtual memory, the programmer should not handle memory management any more. Consumers and software users expect programmers to give up their pride and let go of total control of the memory model (like they have it now). The only exception might be hardware drivers. Some would say that that's what Java and C++ are for. I'd do more Java programming if it didn't have an API the size of Montana, no, make that Alaska and a good chunk of Siberia. But still, multiple pointers being able to point to the same chunk of the heap will doom any solution to inefficiency. IMNSHO, only the kernel and *high-performance* products should be written in C. Everything else should be written in HLLs. Anything from COBOL (still a useful language), FORTRAN, modern BASICs, to pointer-less Pascal, Java, Smalltalk, Lisp, and scripting languages. Note that I did *not* mention C++. Nobody say C#, OK? An Msoft-imposed solution that integrates all their products, mistakes, football-stadium-sized APIs, and private backdoors is not the answer. natch! -- - Ron Johnson, Jr. [EMAIL PROTECTED] Jefferson, LA USA they love our milk and honey, but preach about another way of living Merle Haggard, The Fighting Side Of Me
Re: Rewriting pg_upgrade (was Re: [GENERAL] State of Beta 2)
Ron Johnson [EMAIL PROTECTED] writes: Tom Lane wrote: The reason that it needs to be rewritten in C is that it needs access to internal stuff that the backend doesn't expose. (For example, the transaction counter, end-of-WAL pointer, etc.) I don't think Perl would offer anything except creating an entirely new dependency for Postgres. Also, C code would be easier to keep in sync with the backend code that accesses the same stuff. What about Perl w/ C modules? Of course, there's my favorite: Python. Fwiw, it's pretty easy to call out to C functions from perl code these days. bash-2.05b$ perl -e 'use Inline C => q{ int a(int i, int j) { return i+j; } }; print(a(1,2), "\n")' 3 That said, I don't know if this is really such a good approach. I don't see why you would need much string manipulation at all. The C code can just construct directly whatever data structures it needs and call directly whatever functions it needs. Doing string manipulation to construct dynamic SQL code and then hoping it gets interpreted and executed the way it's expecting seems a roundabout way to go about getting things done. -- greg
Re: need for in-place upgrades (was Re: [GENERAL] State of Beta 2)
On Thu, Sep 18, 2003 at 06:49:56PM -0300, Marc G. Fournier wrote: Hadn't thought of it that way ... but, what would prompt someone to upgrade, then use something like erserver to roll back? All I can think of is that the upgrade caused a lot of problems with the application itself, but in a case like that, would you have the time to be able to 're-replicate' back to the old version? The trick is to have your former master set up as slave before you turn your application back on. The lack of a rollback strategy in PostgreSQL upgrades is a major barrier for corporate use. One can only do so much testing, and it's always possible you've missed something. You need to be able to go back to some known-working state. A -- Andrew Sullivan 204-4141 Yonge Street Liberty RMS Toronto, Ontario Canada [EMAIL PROTECTED] M2P 2A8 +1 416 646 3304 x110
Re: [GENERAL] State of Beta 2
Marc G. Fournier wrote: On Mon, 15 Sep 2003, Joshua D. Drake wrote: I'm not going to rehash the arguments I have made before; they are all archived. Suffice to say you are simply wrong. The number of complaints over the years shows that there IS a need. I at no point suggested that there was not a need. I only suggest that the need may not be as great as some suspect or feel. To be honest -- if your arguments were the need that everyone had... it would have been implemented somehow. It hasn't yet, which would suggest that the number of people who have the need at your level is not as great as the number of people who have different needs from PostgreSQL. Just to add to this ... Bruce *did* start pg_upgrade, but I don't recall anyone else looking at extending it ... if the *need* was so great, someone would have stepped up and looked into adding to what was already there ... I was thinking of working on pg_upgrade for 7.4, but other things seemed more important. -- Bruce Momjian | http://candle.pha.pa.us [EMAIL PROTECTED] | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup. | Newtown Square, Pennsylvania 19073
Re: [GENERAL] State of Beta 2
Also, to be blunt: if pg_dump still has problems after all the years we've put into it, what makes you think that in-place upgrade will magically work reliably? Fair enough. On another front then... would all this energy we are talking about with pg_upgrade be better spent on pg_dump/pg_dumpall/pg_restore? This I am hoping changes in 7.4 as we moved to a pure C implementation. You're right, that was a mistype. I was very tired and reading three different threads at the same time. Sincerely, Joshua Drake Eh? AFAIR, pg_dump has always been in C. regards, tom lane -- Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC Postgresql support, programming shared hosting and dedicated hosting. +1-503-222-2783 - [EMAIL PROTECTED] - http://www.commandprompt.com The most reliable support for the most reliable Open Source database.
Re: [GENERAL] State of Beta 2
Tom Lane wrote: Kaare Rasmussen [EMAIL PROTECTED] writes: Not sure about your position here. You claimed that it would be a good idea to freeze the on-disk format for at least a couple of versions. I said it would be a good idea to freeze the format of user tables (and indexes) across multiple releases. Indexes aren't as big a deal. Reindexing is less painful than dump/restore. It could still lead to significant downtime for very large databases (at least for the tables that are being reindexed), but not nearly as much.
Re: [GENERAL] State of Beta 2
Joshua D. Drake [EMAIL PROTECTED] writes: Fair enough. On another front then... would all this energy we are talking about with pg_upgrade be better spent on pg_dump/pg_dumpall/pg_restore? Well, we need to work on pg_dump too. But I don't foresee it ever getting fast enough to satisfy the folks who want zero-downtime upgrades. So pg_upgrade is also an important project. regards, tom lane
Re: [GENERAL] State of Beta 2
No can do, unless your intent is to force people to work on pg_upgrade and nothing else (a position I for one would ignore ;-)). With such a policy and no pg_upgrade we'd be unable to apply any catalog changes at all, which would pretty much mean that 7.5 would look exactly like 7.4. Not sure about your position here. You claimed that it would be a good idea to freeze the on-disk format for at least a couple of versions. Do you argue here that this cycle shouldn't start with the next version, or did you reverse your thought? If the former, I think you're right. There are some big changes close to being made, if I have read this list correctly. Table spaces and PITR would certainly change it. But if the freeze could start after 7.5 and last two-three years, it might help things. -- Kaare Rasmussen--Linux, spil,--Tlf:3816 2582 Kaki Data tshirts, merchandize Fax:3816 2501 Howitzvej 75 Åben 12.00-18.00 Email: [EMAIL PROTECTED] 2000 Frederiksberg Lørdag 12.00-16.00 Web: www.suse.dk
Re: [GENERAL] State of Beta 2
Joshua D. Drake [EMAIL PROTECTED] writes: The reality of pg_dump is not a good one. It is buggy and not very reliable. I think everyone acknowledges that we have more work to do on pg_dump. But we have to do that work anyway. Spreading ourselves thinner by creating a whole new batch of code for in-place upgrade isn't going to improve the situation. The thing I like about the pg_upgrade approach is that it leverages a lot of code we already have and will need to continue to maintain in any case. Also, to be blunt: if pg_dump still has problems after all the years we've put into it, what makes you think that in-place upgrade will magically work reliably? This I am hoping changes in 7.4 as we moved to a pure C implementation. Eh? AFAIR, pg_dump has always been in C. regards, tom lane
Catalog vs. user table format (was Re: [GENERAL] State of Beta 2)
On Sat, 2003-09-20 at 11:17, Tom Lane wrote: Marc G. Fournier [EMAIL PROTECTED] writes: No, I'm not suggesting no catalog changes ... wait, I might be wording this wrong ... there are two changes that right now requires a dump/reload, changes to the catalogs and changes to the data structures, no? Or are these effectively inter-related? Oh, what you're saying is no changes in user table format. Yeah, we Whew, we're finally on the same page! So, some definitions we can agree on? catalog change: CREATE or ALTER a pg_* table. on-disk structure, a.k.a. user table format: the way that the tables/fields are actually stored on disk. So, a catalog change should *not* require a dump/restore, but an ODS/UTF change should. Agreed? -- - Ron Johnson, Jr. [EMAIL PROTECTED] Jefferson, LA USA they love our milk and honey, but preach about another way of living Merle Haggard, The Fighting Side Of Me
Re: need for in-place upgrades (was Re: [GENERAL] State of Beta 2)
On Thu, 2003-09-18 at 16:29, Andrew Sullivan wrote: On Sat, Sep 13, 2003 at 10:52:45AM -0500, Ron Johnson wrote: So instead of 1TB of 15K fiber channel disks (and the requisite controllers, shelves, RAID overhead, etc), we'd need *two* TB of 15K fiber channel disks (and the requisite controllers, shelves, RAID overhead, etc) just for the 1 time per year when we'd upgrade PostgreSQL? Nope. You also need it for the time when your vendor sells controllers or chips or whatever with known flaws, and you end up having hardware that falls over 8 or 9 times in a row. -- - Ron Johnson, Jr. [EMAIL PROTECTED] Jefferson, LA USA A C program is like a fast dance on a newly waxed dance floor by people carrying razors. Waldi Ravens
Re: [GENERAL] State of Beta 2
Marc G. Fournier [EMAIL PROTECTED] writes: hmmm ... k, is it feasible to go a release or two at a time without on-disk changes? if so, pg_upgrade might not be as difficult to maintain, since, unless someone can figure out a way of doing it, 'on disk change releases' could still require dump/reloads, with a period of stability in between? Yeah, for the purposes of this discussion I'm just taking pg_upgrade to mean something that does what Bruce's old script does, namely transfer the schema into the new installation using pg_dump -s and then push the user tables and indexes physically into place. We could imagine that pg_upgrade would later get some warts added to it to handle some transformations of the user data, but that might or might not ever need to happen. I think we could definitely adopt a policy of on-disk changes not oftener than every X releases if we had a working pg_upgrade, even without doing any extra work to allow updates. People who didn't want to wait for the next incompatible release could have their change sooner if they were willing to do the work to provide an update path. *Or* ... as we've seen more with this dev cycle than previous ones, how much could be easily back-patched to the previous version(s) relatively easily, without requiring on-disk changes? It's very difficult to back-port anything beyond localized bug fixes. We change the code too much --- for instance, almost no 7.4 patch will apply exactly to 7.3 or before because of the elog-to-ereport changes. But the real problem IMHO is we don't have the manpower to do adequate testing of back-branch changes that would need to be substantially different from what gets applied to HEAD. I think it's best to leave that activity to commercial support outfits, rather than put community resources into it. (Some might say I have a conflict of interest here, since I work for Red Hat which is one of said commercial support outfits. 
But I really do think it's more reasonable to let those companies do this kind of gruntwork than to expect the community hackers to do it.) regards, tom lane
Re: [GENERAL] State of Beta 2
On Fri, 19 Sep 2003 18:51:00 -0400, Tom Lane [EMAIL PROTECTED] wrote: transfer the schema into the new installation using pg_dump -s and then push the user tables and indexes physically into place. I'm more in favour of in-place upgrade. This might seem risky, but I think we can expect users to backup their PGDATA directory before they start the upgrade. I don't trust pg_dump because . it doesn't help when the old postmaster binaries are no longer available . it does not always produce scripts that can be loaded without manual intervention. Sometimes you create a dump and cannot restore it with the same postmaster version. RTA. Servus Manfred
Re: [GENERAL] State of Beta 2
On Fri, 19 Sep 2003 20:06:39 -0400, Tom Lane [EMAIL PROTECTED] wrote: Perhaps you should go back and study what pg_upgrade actually did. Thanks for the friendly invitation. I did that. It needed only minimal assumptions about the format of either old or new catalogs. The reason is that it mostly relied on portability work done elsewhere (in pg_dump, for example). I was hoping that you had a more abstract concept in mind when you said pg_upgrade; not that particular implementation. I should have been more explicit that I'm not a friend of that pg_dump approach, cf. my other mail. Rod's adddepend is a good example. I don't think it's representative. ... I wouldn't call it perfect ... in other words, it doesn't work and can't be made to work. Hmm, not perfect == can't be made to work. Ok. If you want to see it this way ... Servus Manfred
Re: [GENERAL] State of Beta 2
On Fri, 19 Sep 2003, Tom Lane wrote: I think we could definitely adopt a policy of on-disk changes not oftener than every X releases if we had a working pg_upgrade, even without doing any extra work to allow updates. People who didn't want to wait for the next incompatible release could have their change sooner if they were willing to do the work to provide an update path. 'K, but let's put the horse in front of the cart ... adopt the policy so that the work on a working pg_upgrade has a chance of succeeding ... if we said no on-disk changes for, let's say, the next release, then that would provide an incentive (I think!) for someone(s) to pick up the ball and make sure that pg_upgrade would provide a non-dump/reload upgrade for it ... But the real problem IMHO is we don't have the manpower to do adequate testing of back-branch changes that would need to be substantially different from what gets applied to HEAD. I think it's best to leave that activity to commercial support outfits, rather than put community resources into it. What would be nice is if we could create a small QA group ... representative of the various supported platforms, who could be called upon for testing purposes ... any bugs reported get fixed; it's finding the bugs ...
Re: [GENERAL] State of Beta (2)
Hi, Command Prompt will set up an escrow account online at www.escrow.com. When the Escrow account totals 2000.00 and is released, Command Prompt will dedicate a programmer for one month to debugging, documenting, reviewing, digging, crying, screaming, begging and bleeding with the code. At the end of the month and probably during depending on how everything goes Command Prompt will release its findings. The findings will include a project plan on moving forward over the next 5 months (if that is what it takes) to produce the first functional pg_upgrade. If the project is deemed as moving in the right direction by the community members and specifically the core members we will setup milestone payments for the project. What does everyone think? Sounds good. It provides a safe way for people to fund this development. I can't promise anything yet on behalf of my company, but I'll donate at least $50,- personally. Sander.
Re: [GENERAL] State of Beta (2)
Sounds good to me. I can throw in $500 to start. On Wednesday, September 17, 2003, at 12:06 PM, Joshua D. Drake wrote: Hello, O.k. here are my thoughts on how this could work: Command Prompt will set up an escrow account online at www.escrow.com. When the Escrow account totals 2000.00 and is released, Command Prompt will dedicate a programmer for one month to debugging, documenting, reviewing, digging, crying, screaming, begging and bleeding with the code. At the end of the month and probably during depending on how everything goes Command Prompt will release its findings. The findings will include a project plan on moving forward over the next 5 months (if that is what it takes) to produce the first functional pg_upgrade. If the project is deemed as moving in the right direction by the community members and specifically the core members we will setup milestone payments for the project. What does everyone think? Sincerely, Joshua D. Drake Dennis Gearon wrote: I had already committed $50/mo. Robert Creager wrote: Once upon a time (Tue, 16 Sep 2003 21:26:05 -0700) Dennis Gearon [EMAIL PROTECTED] uttered something amazingly similar to: Robert Creager wrote: Once upon a time (Tue, 16 Sep 2003 12:59:37 -0700) Joshua D. Drake [EMAIL PROTECTED] uttered something amazingly similar to: If someone is willing to pony up 2000.00 per month for a period of at Well, if you're willing to set up some sort of escrow, I'll put in $100. I Is that $100 times once, or $100 x 6 months anticipated development time? That's $100 once. And last I looked, there are well over 1800 subscribers on this list alone. On the astronomically small chance every one of them did what I'm doing, it would cover more than 6 months of development time ;-) This strikes me as like supporting public radio. The individuals do some, and the corporations do a bunch. I'm just putting my money toward a great product, rather than complaining that it's not done. Just like Joshua is doing. 
You cannot hire a competent programmer for $24k a year, so he is putting up some money on this also. There have been a couple of other bites from small businesses, so who knows! You game? Cheers, Rob -- Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC Postgresql support, programming shared hosting and dedicated hosting. +1-503-222-2783 - [EMAIL PROTECTED] - http://www.commandprompt.com The most reliable support for the most reliable Open Source database. Andrew Rawnsley President The Ravensfield Digital Resource Group, Ltd. (740) 587-0114 www.ravensfield.com
Re: [GENERAL] State of Beta 2
Marc G. Fournier wrote: And that has nothing to do with user need as a whole, since the care level I mentioned is predicated by the developer interest level. While I know, Marc, how the whole project got started (I have read the first posts), and I appreciate that you, Bruce, Thomas, and Vadim started the original core team because you were and are users of PostgreSQL, I sincerely believe that in this instance you are out of touch with this need of many of today's userbase. Huh? I have no disagreement that upgrading is a key feature that we are lacking ... but, if there are any *on disk* changes between releases, how do you propose 'in place upgrades'? RTA. It's been hashed, rehashed, and hashed again. I've asked twice if eRserver can replicate a 7.3 database onto a 7.4 server (or a 7.2 onto a 7.3); that question has yet to be answered. If it can do this, then I would be a much happier camper. I would be happy for a migration tool that could read the old format _without_a_running_old_backend_ and convert it to the new format _without_a_running_backend_. That's always been my beef, that the new backend is powerless to recover the old data. OS upgrades where PostgreSQL is part of the OS, FreeBSD ports upgrades (according to a user report on the lists a few months back), and RPM upgrades are absolutely horrid at this point. *You* might be able to stand it; some cannot. Granted, if it's just changes to the system catalogs and such, pg_upgrade should be able to be taught to handle it ... I haven't seen anyone step up to do so, and for someone spending so much time pushing for an upgrade path, I haven't seen you pony up the time ... I believe I pony up quite a bit of time already, Marc. Not as much as some, by any means, but I am not making one red cent doing what I do for the project. And one time I was supposed to have gotten paid for a related project, I didn't. I did get paid by Great Bridge for RPM work as a one-shot deal, though. The time I've already spent on this is too much.
I've probably put several hundred hours of my time into this issue in one form or another; what I don't have time to do is climb the steep slope Tom mentioned earlier. I actually need to feed my family, and my employer has more for me to do than something that should have already been done. Just curious here ... but, with all the time you've spent pushing for an easy upgrade path, have you looked at the other RDBMSs and how they deal with upgrades? I think it's going to be a sort of apples-to-oranges thing, since I imagine that most of the 'big ones' don't change their disk formats anymore ... I don't use the others; thus I don't care how they do it; only how we do it. But even MySQL has a better system than we -- they allow you to migrate table by table, gaining the new features of the new format when you migrate. Tom and I pretty much reached consensus that the reason we have a problem with this is the integration of features in the system catalogs, and the lack of separation between 'system' information in the catalogs and 'feature' or 'user' information in the catalogs. It's all in the archives that nobody seems willing to read over again. Why do we even have archives if they're not going to be used? If bugfixes were consistently backported, and support was provided for older versions running on newer OS's, then this wouldn't be as much of a problem. But we orphan our code after one version cycle; 7.0.x is completely unsupported, for instance, while even 7.2.x is virtually unsupported. My hat's off to Red Hat for backporting the buffer overflow fixes to all their supported versions; we certainly wouldn't have done it. And 7.3.x will be unsupported once we get past 7.4 release, right? So in order to get critical bug fixes, users must upgrade to a later codebase, and go through the pain of upgrading their data. K, looking back through that it almost sounds like a ramble ... hopefully you understand what I'm asking ... *I* should complain about a ramble?
:-) -- Lamar Owen Director of Information Technology Pisgah Astronomical Research Institute Formerly of WGCR Internet Radio, and the PostgreSQL RPM maintainer since 1999. ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [GENERAL] State of Beta 2
On Thu, 18 Sep 2003, Lamar Owen wrote: Huh? I have no disagreement that upgrading is a key feature that we are lacking ... but, if there are any *on disk* changes between releases, how do you propose 'in place upgrades'? RTA. It's been hashed, rehashed, and hashed again. I've asked twice if eRserver can replicate a 7.3 database onto a 7.4 server (or a 7.2 onto a 7.3); that question has yet to be answered. 'K, I had already answered it as part of this thread when I suggested doing exactly that ... in response to which several ppl questioned the feasibility of setting up a duplicate system with 1TB of disk space to do the replication over to ... See: http://archives.postgresql.org/pgsql-general/2003-09/msg00886.php ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] State of Beta 2
On Thursday, September 18, 2003, at 12:11 PM, Lamar Owen wrote: RTA. It's been hashed, rehashed, and hashed again. I've asked twice if eRserver can replicate a 7.3 database onto a 7.4 server (or a 7.2 onto a 7.3); that question has yet to be answered. If it can do this, then I would be a much happier camper. I would be happy for a migration tool that could read the old format _without_a_running_old_backend_ and convert it to the new format _without_a_running_backend_. That's always been my beef, that the new backend is powerless to recover the old data. OS upgrades where PostgreSQL is part of the OS, FreeBSD ports upgrades (according to a user report on the lists a few months back), and RPM upgrades are absolutely horrid at this point. *You* might be able to stand it; some cannot. eRserver should be able to migrate the data. If you make heavy use of sequences, schemas and other such things it won't help you for those. It's not a bad idea to do it that way, if you aren't dealing with large or very complex databases. The first thing it's going to do when you add a slave is do a dump/restore to create the replication target. If you can afford the disk space and time, that will migrate the data. By itself that isn't any different than doing that by hand. Where eRserver may help is keeping the data in sync while you work the other things out. Sequences and schemas are the two things it doesn't handle at the moment. I've created a patch and some new client apps to manage the schema part, but I haven't had the chance to send them off to someone to see if they'll fit in. Sequences are on my list of things to do next. Time time time time. Using eRserver may help you work around the problem, given certain conditions. It doesn't solve it. I think if we can get Mr. Drake's initiative off the ground we may at least figure out if there is a solution. Andrew Rawnsley President The Ravensfield Digital Resource Group, Ltd.
(740) 587-0114 www.ravensfield.com ---(end of broadcast)--- TIP 8: explain analyze is your friend
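The dump/restore step Andrew describes (the same one eRserver performs when a slave is first added) amounts to a plain cross-version migration done by hand. A minimal sketch, with entirely illustrative ports and install paths (old 7.3 cluster on 5432, new 7.4 cluster on 5433; none of these values come from the thread):

```shell
# Dump the old cluster using the NEW release's pg_dumpall, which emits
# SQL the new backend accepts. Paths, ports, and versions are hypothetical.
/usr/local/pgsql-7.4/bin/pg_dumpall -p 5432 > /backup/cluster.sql

# Create and start a fresh cluster with the new binaries on a second port.
/usr/local/pgsql-7.4/bin/initdb -D /usr/local/pgsql-7.4/data
/usr/local/pgsql-7.4/bin/pg_ctl -D /usr/local/pgsql-7.4/data -o '-p 5433' start

# Reload everything into the new cluster.
/usr/local/pgsql-7.4/bin/psql -p 5433 -f /backup/cluster.sql template1
```

As Andrew notes, this costs a second copy of the disk space plus downtime or sync lag, which is exactly the objection raised earlier in the thread for the 1TB case.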
Re: [GENERAL] State of Beta 2
Andrew Rawnsley wrote: eRserver should be able to migrate the data. If you make heavy use of sequences, schemas and other such things it won't help you for those. snip Using eRserver may help you work around the problem, given certain conditions. It doesn't solve it. I think if we can get Mr. Drake's initiative off the ground we may at least figure out if there is a solution. So a replication application IS a method to migrate OR CAN BE MADE to do it somewhat AND is a RELATED project to the migration tool. Again, I wonder what on the TODO's or any other roadmap is related and should be part of a comprehensive plan to drain the swamp and not just club alligators over the head? ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] State of Beta 2
If bugfixes were consistently backported, and support was provided for older versions running on newer OS's, then this wouldn't be as much of a problem. But we orphan our code after one version cycle; 7.0.x is completely unsupported, for instance, while even 7.2.x is virtually unsupported. My hat's off to Red Hat for backporting the buffer overflow fixes to all their supported versions; we certainly wouldn't have done it. And 7.3.x will be unsupported once we get past 7.4 release, right? So in order to get critical bug fixes, users must upgrade to a later codebase, and go through the pain of upgrading their data. Command Prompt is supporting the 7.3 series until 2005 and that includes backporting certain features and bug fixes. The reality is that most (with the exception of the Linux kernel and maybe Apache) open source projects don't support back releases. That is the point of commercial releases such as RedHat DB and Mammoth. We will support the older releases for some time. If you want to have continued support for an older rev, purchase a commercial version. I am not trying to push my product here, but frankly I think your argument is weak. There is zero reason for the community to support previous versions of code. Maybe until 7.4 reaches 7.4.1 or something but longer? Why? The community should be focusing on generating new, better, faster, cleaner code. That is just my .02. Joshua Drake K, looking back through that it almost sounds like a ramble ... hopefully you understand what I'm asking ... *I* should complain about a ramble? :-) -- Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC Postgresql support, programming shared hosting and dedicated hosting. +1-503-222-2783 - [EMAIL PROTECTED] - http://www.commandprompt.com The most reliable support for the most reliable Open Source database. ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] State of Beta 2
Joshua D. Drake [EMAIL PROTECTED] writes: If you want to have continued support for an older rev, purchase a commercial version. I am not trying to push my product here, but frankly I think your argument is weak. There is zero reason for the community to support previous version of code. Maybe until 7.4 reaches 7.4.1 or something but longer? Why? The community should be focusing on generating new, better, faster, cleaner code. I tend to agree on this point. Red Hat is also in the business of supporting back-releases of PG, and I believe PG Inc, SRA, and others will happily do it too. I don't think it's the development community's job to do that. [ This does not, however, really bear on the primary issue, which is how can we make upgrading less unpleasant for people with large databases. We do need to address that somehow. ] regards, tom lane ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] State of Beta 2
I have no doubt that a competent programmer could learn the Postgres innards well enough to do the job; as someone pointed out earlier in this thread, none of the core committee was born knowing Postgres. I do, however, doubt that it can be done in six months if one has any significant learning curve to climb up first. Hello, This is a completely reasonable statement. However we have three full time programmers right now that are fairly familiar with the internals of PostgreSQL. They are the programmers that are currently coding our transactional replication engine (which is going beta in about 3 weeks), plPHP, and also did the work on S/ODBC, S/JDBC and PgManage. I am not going to say that we are necessarily Tom Lane material ;) but my programmers are quite good and learning more every day. They have been in the guts of PostgreSQL for 9 months straight, 40 hours a week now. Sincerely, Joshua Drake regards, tom lane -- Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC Postgresql support, programming shared hosting and dedicated hosting. +1-503-222-2783 - [EMAIL PROTECTED] - http://www.commandprompt.com The most reliable support for the most reliable Open Source database. ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[GENERAL] State of Beta (2)
Hello, O.k. here are my thoughts on how this could work: Command Prompt will set up an escrow account online at www.escrow.com. When the Escrow account totals 2000.00 and is released, Command Prompt will dedicate a programmer for one month to debugging, documenting, reviewing, digging, crying, screaming, begging and bleeding with the code. At the end of the month and probably during depending on how everything goes Command Prompt will release its findings. The findings will include a project plan on moving forward over the next 5 months (if that is what it takes) to produce the first functional pg_upgrade. If the project is deemed as moving in the right direction by the community members and specifically the core members we will set up milestone payments for the project. What does everyone think? Sincerely, Joshua D. Drake Dennis Gearon wrote: I had already committed $50/mo. Robert Creager wrote: Once upon a time (Tue, 16 Sep 2003 21:26:05 -0700) Dennis Gearon [EMAIL PROTECTED] uttered something amazingly similar to: Robert Creager wrote: Once upon a time (Tue, 16 Sep 2003 12:59:37 -0700) Joshua D. Drake [EMAIL PROTECTED] uttered something amazingly similar to: If someone is willing to pony up 2000.00 per month for a period of at Well, if you're willing to set up some sort of escrow, I'll put in $100. I Is that $100 times once, or $100 X 6mos anticipated development time? That's $100 once. And last I looked, there are well over 1800 subscribers on this list alone. On the astronomically small chance every one of them did what I'm doing, it would cover more than 6 months of development time ;-) This strikes me as like supporting public radio. The individuals do some, and the corporations do a bunch. I'm just putting my money toward a great product, rather than complaining that it's not done. Just like Joshua is doing. You cannot hire a competent programmer for $24k a year, so he is putting up some money on this also.
There have been a couple of other bytes from small businesses, so who knows! You game? Cheers, Rob -- Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC Postgresql support, programming shared hosting and dedicated hosting. +1-503-222-2783 - [EMAIL PROTECTED] - http://www.commandprompt.com The most reliable support for the most reliable Open Source database. ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [GENERAL] State of Beta 2
And that has nothing to do with user need as a whole, since the care level I mentioned is predicated by the developer interest level. While I know, Marc, how the whole project got started (I have read the first posts), and I appreciate that you, Bruce, Thomas, and Vadim started the original core team because you were and are users of PostgreSQL, I sincerely believe that in this instance you are out of touch with this need of many of today's userbase. And I say that with full knowledge of PostgreSQL Inc.'s support role. If given the choice between upgrading capability, PITR, and Win32 support, my vote would go to upgrading. Then migrating to PITR won't be a PITN. If someone is willing to pony up 2000.00 per month for a period of at least 6 months, I will dedicate one of my programmers to the task. So if you want it bad enough there it is. I will donate all changes, patches etc.. to the project and I will cover the additional costs that are over and above the 12,000. If we get it done quicker, all the better. Sincerely, Joshua Drake -- Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC Postgresql support, programming shared hosting and dedicated hosting. +1-503-222-2783 - [EMAIL PROTECTED] - http://www.commandprompt.com The most reliable support for the most reliable Open Source database. ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [GENERAL] State of Beta 2
Hmmm, ok, I can't retask any of my people or reallocate funds for this year but I can personally do 5 to 10% of that monthly cost. Some more people and a project plan - then the ball could roll :) Quoting Joshua D. Drake [EMAIL PROTECTED]: And that has nothing to do with user need as a whole, since the care level I mentioned is predicated by the developer interest level. While I know, Marc, how the whole project got started (I have read the first posts), and I appreciate that you, Bruce, Thomas, and Vadim started the original core team because you were and are users of PostgreSQL, I sincerely believe that in this instance you are out of touch with this need of many of today's userbase. And I say that with full knowledge of PostgreSQL Inc.'s support role. If given the choice between upgrading capability, PITR, and Win32 support, my vote would go to upgrading. Then migrating to PITR won't be a PITN. If someone is willing to pony up 2000.00 per month for a period of at least 6 months, I will dedicate one of my programmers to the task. So if you want it bad enough there it is. I will donate all changes, patches etc.. to the project and I will cover the additional costs that are over and above the 12,000. If we get it done quicker, all the better. Sincerely, Joshua Drake -- Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC Postgresql support, programming shared hosting and dedicated hosting. +1-503-222-2783 - [EMAIL PROTECTED] - http://www.commandprompt.com The most reliable support for the most reliable Open Source database. ---(end of broadcast)--- TIP 8: explain analyze is your friend -- Keith C. Perry Director of Networks Applications VCSN, Inc. http://vcsn.com This email account is being hosted by: VCSN, Inc : http://vcsn.com ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] State of Beta 2
Let me run some numbers. I'm interested in the idea, and I think I can push one of my clients on it. Do the core folk (Tom/Bruce/Jan/etc) think this is doable with that sort of time commitment? Is it maintainable over time? Or are we pissing in the wind? On Tuesday, September 16, 2003, at 03:59 PM, Joshua D. Drake wrote: And that has nothing to do with user need as a whole, since the care level I mentioned is predicated by the developer interest level. While I know, Marc, how the whole project got started (I have read the first posts), and I appreciate that you, Bruce, Thomas, and Vadim started the original core team because you were and are users of PostgreSQL, I sincerely believe that in this instance you are out of touch with this need of many of today's userbase. And I say that with full knowledge of PostgreSQL Inc.'s support role. If given the choice between upgrading capability, PITR, and Win32 support, my vote would go to upgrading. Then migrating to PITR won't be a PITN. If someone is willing to pony up 2000.00 per month for a period of at least 6 months, I will dedicate one of my programmers to the task. So if you want it bad enough there it is. I will donate all changes, patches etc.. to the project and I will cover the additional costs that are over and above the 12,000. If we get it done quicker, all the better. Sincerely, Joshua Drake -- Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC Postgresql support, programming shared hosting and dedicated hosting. +1-503-222-2783 - [EMAIL PROTECTED] - http://www.commandprompt.com The most reliable support for the most reliable Open Source database. ---(end of broadcast)--- TIP 8: explain analyze is your friend Andrew Rawnsley President The Ravensfield Digital Resource Group, Ltd. (740) 587-0114 www.ravensfield.com ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [GENERAL] State of Beta 2
And that has nothing to do with user need as a whole, since the care level I mentioned is predicated by the developer interest level. While I know, Marc, how the whole project got started (I have read the first posts), and I appreciate that you, Bruce, Thomas, and Vadim started the original core team because you were and are users of PostgreSQL, I sincerely believe that in this instance you are out of touch with this need of many of today's userbase. Huh? I have no disagreement that upgrading is a key feature that we are lacking ... but, if there are any *on disk* changes between releases, how do you propose 'in place upgrades'? Granted, if it's just changes to the system catalogs and such, pg_upgrade should be able to be taught to handle it .. I haven't seen anyone step up to do so, and for someone spending so much time pushing for an upgrade path, I haven't seen you pony up the time ... Just curious here ... but, with all the time you've spent pushing for an easy upgrade path, have you looked at the other RDBMSs and how they deal with upgrades? I think it's going to be a sort of apples-to-oranges thing, since I imagine that most of the 'big ones' don't change their disk formats anymore ... What I'd be curious about is how badly we compare as far as major releases are concerned ... I don't believe we've had a x.y.z release yet that required a dump/reload (and if so, it was a very very special circumstance), but what about x.y releases? In Oracle's case, I don't think they do x.y.z releases, do they? Only X and x.y? K, looking back through that it almost sounds like a ramble ... hopefully you understand what I'm asking ... I know when I was at the University, and they dealt with Oracle upgrades, the guys planned for a weekend ... ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] State of Beta 2
Lamar Owen wrote: And that has nothing to do with user need as a whole, since the care level I mentioned is predicated by the developer interest level. While I know, Marc, how the whole project got started (I have read the first posts), and I appreciate that you, Bruce, Thomas, and Vadim started the original core team because you were and are users of PostgreSQL, I sincerely believe that in this instance you are out of touch with this need of many of today's userbase. And I say that with full knowledge of PostgreSQL Inc.'s support role. If given the choice between upgrading capability, PITR, and Win32 support, my vote would go to upgrading. Then migrating to PITR won't be a PITN. Ouch. I'd like to see an easy upgrade path, but I'd rather have a 7.5 with PITR than an in-place upgrade. Perhaps the demand for either is associated with the size of the db vs. the fear associated with an inability to restore to a point-in-time. My fear of an accidental: DELETE FROM foo; is greater than my loathing of the upgrade process. What good are great features if it's a PITN to get upgraded to them? What good is an in-place upgrade without new features? (I'm kinda joking here) ;-) Mike Mascari [EMAIL PROTECTED] ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] State of Beta 2
It'd be EXTREMELY cool if there was some relationship between the code for PITR and in-place upgrades. Any possibility of overlaps? Mike Mascari wrote: Lamar Owen wrote: And that has nothing to do with user need as a whole, since the care level I mentioned is predicated by the developer interest level. While I know, Marc, how the whole project got started (I have read the first posts), and I appreciate that you, Bruce, Thomas, and Vadim started the original core team because you were and are users of PostgreSQL, I sincerely believe that in this instance you are out of touch with this need of many of today's userbase. And I say that with full knowledge of PostgreSQL Inc.'s support role. If given the choice between upgrading capability, PITR, and Win32 support, my vote would go to upgrading. Then migrating to PITR won't be a PITN. Ouch. I'd like to see an easy upgrade path, but I'd rather have a 7.5 with PITR than an in-place upgrade. Perhaps the demand for either is associated with the size of the db vs. the fear associated with an inability to restore to a point-in-time. My fear of an accidental: DELETE FROM foo; is greater than my loathing of the upgrade process. What good are great features if it's a PITN to get upgraded to them? What good is an in-place upgrade without new features? (I'm kinda joking here) ;-) Mike Mascari [EMAIL PROTECTED] ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] State of Beta 2
As I understand it, changes that require the dump/restore fall into two categories, catalog changes, and on disk format changes. If that's the case (I'm as likely wrong as right here, I know) then it could be that most upgrades (say 7.4 to 7.5) could be accomplished more easily than the occasional ones that require actual disk format changes (i.e. 7.5 to 8.0) If that's the case, I'd imagine that as postgresql gets more mature, on disk upgrades should become easier to implement, and dump/restore would only be required for major version upgrades at some point. Is that about right, and if so, would it make maintaining this kind of program simpler if it only had to handle catalog changes? On Tue, 16 Sep 2003, Andrew Rawnsley wrote: Let me run some numbers. I'm interested in the idea, and I think I can push one of my clients on it. Do the core folk (Tom/Bruce/Jan/etc) think this is doable with that sort of time commitment? Is it maintainable over time? Or are we pissing in the wind? On Tuesday, September 16, 2003, at 03:59 PM, Joshua D. Drake wrote: And that has nothing to do with user need as a whole, since the care level I mentioned is predicated by the developer interest level. While I know, Marc, how the whole project got started (I have read the first posts), and I appreciate that you, Bruce, Thomas, and Vadim started the original core team because you were and are users of PostgreSQL, I sincerely believe that in this instance you are out of touch with this need of many of today's userbase. And I say that with full knowledge of PostgreSQL Inc.'s support role. If given the choice between upgrading capability, PITR, and Win32 support, my vote would go to upgrading. Then migrating to PITR won't be a PITN. If someone is willing to pony up 2000.00 per month for a period of at least 6 months, I will dedicate one of my programmers to the task. So if you want it bad enough there it is. I will donate all changes, patches etc..
to the project and I will cover the additional costs that are over and above the 12,000. If we get it done quicker, all the better. Sincerely, Joshua Drake -- Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC Postgresql support, programming shared hosting and dedicated hosting. +1-503-222-2783 - [EMAIL PROTECTED] - http://www.commandprompt.com The most reliable support for the most reliable Open Source database. ---(end of broadcast)--- TIP 8: explain analyze is your friend Andrew Rawnsley President The Ravensfield Digital Resource Group, Ltd. (740) 587-0114 www.ravensfield.com ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED]) ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] State of Beta 2
On Tuesday, September 16, 2003, at 04:51 PM, Marc G. Fournier wrote: Just curious here ... but, with all the time you've spent pushing for an easy upgrade path, have you looked at the other RDBMSs and how they deal with upgrades? I think it's going to be a sort of apples-to-oranges thing, since I imagine that most of the 'big ones' don't change their disk formats anymore ... That's probably the thing - they've written the on-disk stuff in stone by now. DB2 has a lot of function rebinding to do, but that's probably a different issue. Tying in to my last post, concerning Joshua's offer to put up the labor if we can put up the dough, given the fact that Postgres is still in flux, do you think it's even possible to do some sort of in-place upgrade, not knowing what may come up when you're writing 7.6? In other words, if we pony up and get something written now, will it need further development every time an x.y release comes up? What I'd be curious about is how badly we compare as far as major releases are concerned ... I don't believe we've had a x.y.z release yet that required a dump/reload (and if so, it was a very very special circumstance), but what about x.y releases? In Oracle's case, I don't think they do x.y.z releases, do they? Only X and x.y? Lord, who knows what they're up to. They do (or did) x.y.z releases (I'm using 8.1.6), but publicly they're calling everything 8i,9i,10g yahdah yahdah yahdah. I certainly will concede that (to me), upgrading Postgres is easier than Oracle, as I can configure, compile, install, do an initdb, and generate an entire large DDL in the time it takes the abysmal Oracle installer to even start. Then try to install/upgrade it on an 'unsupported' linux, like Slack...but I don't have to do anything with the data. To a PHB/PHC (pointy-haired-client), saying 'Oracle' is like giving them a box of Depends, even though it doesn't save them from a fire hose. They feel safe. K, looking back through that it almost sounds like a ramble ...
hopefully you understand what I'm asking ... I know when I was at the University, and they dealt with Oracle upgrades, the guys planned for a weekend ... ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster Andrew Rawnsley President The Ravensfield Digital Resource Group, Ltd. (740) 587-0114 www.ravensfield.com ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
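The build-and-reload cycle Andrew contrasts with Oracle's installer amounts to a handful of commands. A sketch only, with made-up version numbers and paths, and assuming the schema DDL lives in a hypothetical schema.sql:

```shell
# Build and install a new release alongside the old one (paths illustrative).
./configure --prefix=/usr/local/pgsql-7.4
make
make install

# Initialize a fresh data directory and start the new postmaster.
/usr/local/pgsql-7.4/bin/initdb -D /usr/local/pgsql-7.4/data
/usr/local/pgsql-7.4/bin/pg_ctl -D /usr/local/pgsql-7.4/data start

# Regenerate the schema from DDL in the new cluster.
/usr/local/pgsql-7.4/bin/psql -f schema.sql template1
```

Installing under a version-specific prefix keeps the old binaries and data untouched, which is what makes the side-by-side dump/reload approach possible at all; the data still has to be moved separately, which is Andrew's caveat.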
Re: [GENERAL] State of Beta 2
Hello, I would imagine that it would be maintainable but it would be something that would have to be constantly maintained from release to release. It would have to become part of the actual project or it would die. The reason I chose six months is that I figure it will be 30 days of full time just dinking around to make sure that we have a solid handle on how things are done for this part of the code. Then we would know what we think it would take. It was a gut theory but I believe it can be done or at least a huge jump on it. Sincerely, Joshua Drake Andrew Rawnsley wrote: Let me run some numbers. I'm interested in the idea, and I think I can push one of my clients on it. Do the core folk (Tom/Bruce/Jan/etc) think this is doable with that sort of time commitment? Is it maintainable over time? Or are we pissing in the wind? On Tuesday, September 16, 2003, at 03:59 PM, Joshua D. Drake wrote: And that has nothing to do with user need as a whole, since the care level I mentioned is predicated by the developer interest level. While I know, Marc, how the whole project got started (I have read the first posts), and I appreciate that you, Bruce, Thomas, and Vadim started the original core team because you were and are users of PostgreSQL, I sincerely believe that in this instance you are out of touch with this need of many of today's userbase. And I say that with full knowledge of PostgreSQL Inc.'s support role. If given the choice between upgrading capability, PITR, and Win32 support, my vote would go to upgrading. Then migrating to PITR won't be a PITN. If someone is willing to pony up 2000.00 per month for a period of at least 6 months, I will dedicate one of my programmers to the task. So if you want it bad enough there it is. I will donate all changes, patches etc.. to the project and I will cover the additional costs that are over and above the 12,000. If we get it done quicker, all the better.
Sincerely, Joshua Drake -- Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC PostgreSQL support, programming, shared hosting and dedicated hosting. +1-503-222-2783 - [EMAIL PROTECTED] - http://www.commandprompt.com The most reliable support for the most reliable Open Source database.
Re: [GENERAL] State of Beta 2
Tying this to my last post, concerning Joshua's offer to put up the labor if we can put up the dough: given the fact that Postgres is still in flux, do you think it's even possible to do some sort of in-place upgrade, not knowing what may come up when you're writing 7.6? In other words, if we pony up and get something written now, will it need further development every time an x.y release comes up? There is probably no question that it will need further development. However, I would imagine that once the initial grunt work is done it would be much easier to migrate the code (especially if it is continually maintained) to newer releases. My thought process is that we would start with the 7.4 codebase and as it migrates to 7.5 move the work directly to 7.5, and if possible release for 7.5 (although that really may be pushing it). J What I'd be curious about is how badly we compare as far as major releases are concerned ... I don't believe we've had an x.y.z release yet that required a dump/reload (and if so, it was a very very special circumstance), but what about x.y releases? In Oracle's case, I don't think they do x.y.z releases, do they? Only x and x.y? Lord, who knows what they're up to. They do (or did) x.y.z releases (I'm using 8.1.6), but publicly they're calling everything 8i, 9i, 10g yahdah yahdah yahdah. I certainly will concede that (to me) upgrading Postgres is easier than Oracle, as I can configure, compile, install, do an initdb, and generate an entire large DDL in the time it takes the abysmal Oracle installer to even start. Then try to install/upgrade it on an 'unsupported' Linux, like Slack...but I don't have to do anything with the data. To a PHB/PHC (pointy-haired-client), saying 'Oracle' is like giving them a box of Depends, even though it doesn't save them from a fire hose. They feel safe. K, looking back through that it almost sounds like a ramble ... hopefully you understand what I'm asking ...
I know when I was at the University, and they dealt with Oracle upgrades, the guys planned for a weekend ... Andrew Rawnsley President The Ravensfield Digital Resource Group, Ltd. (740) 587-0114 www.ravensfield.com
Re: [GENERAL] State of Beta 2
Once upon a time (Tue, 16 Sep 2003 12:59:37 -0700) Joshua D. Drake [EMAIL PROTECTED] uttered something amazingly similar to: If someone is willing to pony up $2,000.00 per month for a period of at least 6 months, I will dedicate one of my programmers to the task. So if you want it badly enough, there it is. I will donate all changes, patches etc. to the project and I will cover the additional costs that are over and above the $12,000. If we get it done quicker, all the better. Well, if you're willing to set up some sort of escrow, I'll put in $100. I don't do db's except for play, but I hate the dump/restore part. I've lost data two times fat-fingering the upgrade, trying to use two running installations on the same machine. I'm that good... Cheers, Rob -- 21:28:34 up 46 days, 14:03, 4 users, load average: 2.00, 2.00, 2.00
Re: [GENERAL] State of Beta 2
Robert Creager wrote: Once upon a time (Tue, 16 Sep 2003 12:59:37 -0700) Joshua D. Drake [EMAIL PROTECTED] uttered something amazingly similar to: If someone is willing to pony up $2,000.00 per month for a period of at least 6 months, I will dedicate one of my programmers to the task. So if you want it badly enough, there it is. I will donate all changes, patches etc. to the project and I will cover the additional costs that are over and above the $12,000. If we get it done quicker, all the better. Well, if you're willing to set up some sort of escrow, I'll put in $100. I don't do db's except for play, but I hate the dump/restore part. I've lost data two times fat-fingering the upgrade, trying to use two running installations on the same machine. I'm that good... Cheers, Rob Is that $100 once, or $100 x 6 months of anticipated development time?
Re: [GENERAL] State of Beta 2
Andrew Rawnsley [EMAIL PROTECTED] writes: On Tuesday, September 16, 2003, at 03:59 PM, Joshua D. Drake wrote: If someone is willing to pony up $2,000.00 per month for a period of at least 6 months, I will dedicate one of my programmers to the task. Do the core folk (Tom/Bruce/Jan/etc) think this is doable with that sort of time commitment? While I dislike staring gift horses in the mouth, I have to say that the people I think could do it (a) are getting paid more than $24K/yr, and (b) are names already seen regularly in the PG commit logs. If there's anyone in category (b) who works for Command Prompt, I missed the connection. I have no doubt that a competent programmer could learn the Postgres innards well enough to do the job; as someone pointed out earlier in this thread, none of the core committee was born knowing Postgres. I do, however, doubt that it can be done in six months if one has any significant learning curve to climb up first. regards, tom lane
Re: [GENERAL] State of Beta 2
On Mon, 2003-09-15 at 13:24, Joshua D. Drake wrote: Strawmen. If we provide a good upgrade capability, we would just simply have to think about upgrades before changing features like that. The upgrade code could be cognizant of these sorts of things; and should be, in fact. Sure, but IMHO it would be more important to fix bugs like the parser not correctly using indexes on bigint unless the value is quoted... I think everyone would agree that not having to use initdb would be nice, but I think there are much more important things to focus on. Besides, if you are upgrading PostgreSQL in a production environment I would assume there would be an extremely valid reason. If the reason is big enough to do a major version upgrade then an initdb shouldn't be all that bad of a requirement. Hmmm. A (US-oriented) hypothetical: BOSS: The app works now. Why rock the boat? DBA: The new version has features that will save 20% disk space, and speed up certain operations by 75% every day. BOSS: Fantastic! How long will it take to upgrade? DBA: 18 hours. BOSS: 18 hours!! We can only take that much downtime on Thanksgiving weekend, or 3-day July 4th, Christmas or New Year's weekends. -- - Ron Johnson, Jr. [EMAIL PROTECTED] Jefferson, LA USA (Women are) like compilers. They take simple statements and make them into big productions. Pitr Dubovitch
Re: [GENERAL] State of Beta 2
On Mon, 2003-09-15 at 15:23, Joshua D. Drake wrote: I'm not going to rehash the arguments I have made before; they are all archived. Suffice to say you are simply wrong. The number of complaints over the years shows that there IS a need. I at no point suggested that there was not a need. I only suggest that the need may not be as great as some suspect or feel. To be honest -- if your arguments were the need that everyone had... it would have been implemented somehow. It hasn't yet, which would suggest that the number of people that have the need at your level is not as great as the number of people who have different needs from PostgreSQL. But the problem is that as more and more people put larger and larger datasets, that are mission-critical, into PostgreSQL, the need will grow larger and larger. Of course, we understand the finite resources issue, and are not badgering/complaining. Simply, we are trying to make our case that this is something that should go on the TODO list, and be kept in the back of developers' minds. -- - Ron Johnson, Jr. [EMAIL PROTECTED] Jefferson, LA USA You ask us the same question every day, and we give you the same answer every day. Someday, we hope that you will believe us... U.S. Secretary of Defense Donald Rumsfeld, to a reporter
Re: [GENERAL] State of Beta 2
Hmmm. A (US-oriented) hypothetical: BOSS: The app works now. Why rock the boat? DBA: The new version has features that will save 20% disk space, and speed up certain operations by 75% every day. BOSS: Fantastic! How long will it take to upgrade? DBA: 18 hours. BOSS: 18 hours!! We can only take that much downtime on Thanksgiving weekend, or 3-day July 4th, Christmas or New Year's weekends. Sounds like you just found several weekends a year that you can do the upgrade with ;). Yes, that was a joke. Sincerely, Joshua Drake
Re: [GENERAL] State of Beta 2
Not that I know anything about the internal workings of PG, but it seems like a big part of the issue is the on-disk representation of the database. I've never had a problem with the whole dump/restore process, and in fact anyone who has been doing this long enough will remember when that process was gospel for db upgrades. However, with 24x7 operations, or in general for anyone who simply cannot tolerate the downtime to do an upgrade, I'm wondering if there is perhaps a way to abstract the on-disk representation of PG data so that 1) future upgrades do not have to maintain the same structure if another representation is deemed better, and 2) upgrades could be done in place. The abstraction I am talking about would be a logical layer that would handle disk I/O, including the format of that data (let's call this the ADH). By abstracting that information, the upgrade question *could* become: if I upgrade from, say, 7.2.x to 7.3.x or 7.4.x, do I *want* to take advantage of the new disk representation? If yes, then you would have to go through the necessary process of upgrading the database, which would always default to the most current representation. If not, then because the ADH is abstract to the application, it could run in a 7.2.x or 7.3.x compatibility mode so that you would not *need* to do the dump and restore. Again, I am completely ignorant of how this really works (and I don't have time to read through the code), but what I think I'm getting at is a DBI/DBD type scenario.
As a result, there would be another layer of complexity, and I would think some performance loss as well, but how much complexity and performance loss is, to me, the question. When you juxtapose that against the ability to do upgrades without the dump/restore, I would think many organizations would say: ok, I'll take the x% performance hit and wait until I have the resources to upgrade the disk representation. One of the things I'm involved with in Philadelphia is providing IT services to social service programs for outsourced agencies of the local government. In particular, there have been and are active moves in PA to have these social service data warehouses go up. Even though it will probably take years to actually realize this, by that time, once you aggregate all the local agency databases together, we're going to be talking about very large datasets. That means that (at least for social service programs) IT is going to have to take into account this whole upgrade question from what I think will be a standpoint of availability. In short, I don't think it is too far off to consider that the little guys will need to do reliable in-place upgrades with 100% confidence. Hopefully, I was clear on my macro-concept even if I got the micro-concepts wrong. Quoting Tom Lane [EMAIL PROTECTED]: Kaare Rasmussen [EMAIL PROTECTED] writes: interesting category. It is in the category of things that will only happen if people pony up money to pay someone to do uninteresting work. And for all the ranting, I've not seen any ponying. Just for the record now that there's an argument that big companies need 24x7 - could you or someone else with knowledge of what's involved give a guesstimate of how many ponies we're talking. Is it one man month, one man year, more, or what?
Well, the first thing that needs to happen is to redesign and reimplement pg_upgrade so that it works with current releases and is trustworthy for enterprise installations (the original script version depended far too much on being run by someone who knew what they were doing, I thought). I guess that might take, say, six months for one well-qualified hacker. But it would be an open-ended commitment, because pg_upgrade only really solves the problem of installing new system catalogs. Any time we do something that affects the contents or placement of user table and index files, someone would have to figure out and implement a migration strategy. Some examples of things we have done recently that could not be handled without much more work: modifying heap tuple headers to conserve storage, changing the on-disk representation of array values, fixing hash indexes. Examples of probable future changes that will take work: adding tablespaces, adding point-in-time recovery, fixing the interval datatype, generalizing locale support so you can have more than one locale per installation. It could be that once pg_upgrade exists in a production-ready form, PG developers will voluntarily do that extra work themselves. But I doubt it (and if it did happen that way, it would mean a significant slowdown in the rate of development). I think someone will have to commit to doing the extra work, rather than just telling other people what they ought to do. It could be a permanent full-time task ... at least until we stop finding reasons we need to change the on-disk data representation, which may or may not ever happen.
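Tom's point that pg_upgrade only really installs new system catalogs, while any release that touches the user-table layout needs separate migration work, can be sketched as a simple version gate. Everything below is invented for illustration (the release list and its layout-change flags are hypothetical); this is not pg_upgrade's actual logic.

```python
# Illustrative sketch, not pg_upgrade's real implementation: an upgrade
# tool could keep a table of releases flagged by whether they changed
# the on-disk *user data* layout, and allow a fast in-place catalog
# swap only when no such release lies between source and target.
# The version data below is invented for the example.

RELEASES = [
    ("7.2", False),
    ("7.3", True),   # pretend: heap tuple header change
    ("7.4", False),
    ("7.5", True),   # pretend: tablespaces change file placement
]

def needs_data_migration(src: str, dst: str) -> bool:
    """True if any layout-changing release lies after src, up to dst."""
    order = [v for v, _ in RELEASES]
    i, j = order.index(src), order.index(dst)
    return any(flag for _, flag in RELEASES[i + 1:j + 1])

def plan_upgrade(src: str, dst: str) -> str:
    if needs_data_migration(src, dst):
        return "rewrite user tables (or dump/restore)"
    return "in-place: install new system catalogs only"
```

The open-ended commitment Tom describes is exactly the maintenance of that table (and the rewrite code behind each True flag) for every future release.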
Re: [GENERAL] State of Beta 2
Quoting Tom Lane [EMAIL PROTECTED]: Network Administrator [EMAIL PROTECTED] writes: The abstraction I am talking about would be a logical layer that would handle disk I/O including the format of that data (lets call this the ADH). This sounds good in the abstract, but I don't see how you would define such a layer in a way that was both thin and able to cope with large changes in representation. In a very real sense, handle disk I/O including the format of the data describes the entire backend. To create an abstraction layer that will actually give any traction for maintenance, you'd have to find a way to slice it much more narrowly than that. *nod* I thought that would probably be the case. The thickness of that layer would be directly related to how the backend was sliced. However, it seems to me that right now this might not be possible while the backend is changing between major releases. Perhaps once that doesn't fluctuate as much it might be feasible to create this layer so that it is not too fat. Maybe the goal is too aggressive. To ask (hopefully) a simpler question: would it be possible to choose the on-disk representation at compile time? I'm not sure, but I think that might reduce the complexity, since the abstraction would only exist before the application is built. Once compiled, there would be no ambiguity in what representation is chosen. Even if the approach can be made to work, defining such a layer and then revising all the existing code to go through it would be a huge amount of work. Ultimately there's no substitute for hard work :-( regards, tom lane True, which is why I've never been bothered about going through a process to maintain my database's integrity and performance. However, over time, across my entire client base, I will eventually reach a point where I will need to do an in-place upgrade, or at least limit database downtime to a 60-minute window - or less. -- Keith C. Perry Director of Networks Applications VCSN, Inc.
http://vcsn.com This email account is hosted by: VCSN, Inc : http://vcsn.com
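The ADH idea from this exchange can be made concrete as a version-tagged storage interface: the rest of the system codes against an abstract format, and each on-disk layout supplies an implementation. This is a toy sketch under invented assumptions (the class names and byte layouts are made up); it is emphatically not how the PostgreSQL backend is structured, which is exactly Tom's objection — in the real backend this interface would have to cover far more than row encoding.

```python
# Hypothetical "ADH"-style abstraction: a StorageFormat interface with
# one implementation per on-disk layout.  Formats and layouts here are
# invented toys, purely to illustrate the compatibility-mode idea.

class StorageFormat:
    """Interface the rest of the system would code against."""
    version = None

    def read_row(self, raw: bytes) -> tuple:
        raise NotImplementedError

    def write_row(self, row: tuple) -> bytes:
        raise NotImplementedError

class FormatV72(StorageFormat):
    """Pretend 7.2-era layout: fields joined with ':'."""
    version = "7.2"

    def read_row(self, raw):
        return tuple(raw.decode().split(":"))

    def write_row(self, row):
        return ":".join(row).encode()

class FormatV74(StorageFormat):
    """Pretend 7.4-era layout: length-prefixed fields."""
    version = "7.4"

    def read_row(self, raw):
        fields, i = [], 0
        while i < len(raw):
            n = raw[i]                         # one-byte length prefix
            fields.append(raw[i + 1:i + 1 + n].decode())
            i += 1 + n
        return tuple(fields)

    def write_row(self, row):
        out = b""
        for f in row:
            b = f.encode()
            out += bytes([len(b)]) + b
        return out

FORMATS = {f.version: f() for f in (FormatV72, FormatV74)}

def reencode(raw: bytes, old: str, new: str) -> bytes:
    """An in-place upgrade becomes: read with the old format, write
    with the new one -- no logical dump/restore of the data."""
    return FORMATS[new].write_row(FORMATS[old].read_row(raw))
```

Running in a "7.2 compatibility mode" would then just mean keeping `FormatV72` as the active implementation; the cost is exactly the extra indirection (and performance hit) discussed above.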
Re: [GENERAL] State of Beta 2
Network Administrator [EMAIL PROTECTED] writes: ... However it seems to me that right now this might not be possible while the backend is changing between major releases. Perhaps once that doesn't fluctuate as much it might be feasible to create this layer so that it is not too fat. Yeah, that's been in the back of my mind also. Once we have tablespaces and a couple of the other basic features we're still missing, it might be a more reasonable proposition to freeze the on-disk representation. At the very least we could quantize it a little more --- say, group changes that affect user table representation into every third or fourth release. But until we have a production-quality pg_upgrade this is all moot. regards, tom lane
Re: [GENERAL] State of Beta 2
He is right, it might be a good idea to head this problem 'off at the pass'. I am usually pretty good at predicting technological trends. I've Well, the only solution I can see is to make an inline conversion tool that knows about every step from earlier versions. I believe this has been discussed before, but it does not seem to be a small or an easy task to implement. -- Kaare Rasmussen -- Linux, spil -- Tlf: 3816 2582 Kaki Data tshirts, merchandize Fax: 3816 2501 Howitzvej 75 Åben 12.00-18.00 Email: [EMAIL PROTECTED] 2000 Frederiksberg Lørdag 12.00-16.00 Web: www.suse.dk
Re: [GENERAL] State of Beta 2
Ron Johnson wrote: On Fri, 2003-09-12 at 10:50, Andrew Rawnsley wrote: Small soapbox moment here... ANYTHING that can be done to eliminate having to do an initdb on version changes would make a lot of people do cartwheels. 'Do a dump/reload' sometimes comes across a bit casually on the lists (my apologies if it isn't meant to be), but it can be incredibly onerous to do on a large production system. That's probably why you run across people running stupid-old versions. And this will become even more of an issue as PG's popularity grows with large and 24x7 databases. He is right, it might be a good idea to head this problem 'off at the pass'. I am usually pretty good at predicting technological trends. I've made some money at it. And I predict that Postgres will eclipse MySQL and be in the top 5 of all databases deployed. But it does have some Achilles tendons.
need for in-place upgrades (was Re: [GENERAL] State of Beta 2)
On Fri, 2003-09-12 at 17:48, Joshua D. Drake wrote: Hello, The initdb is not always a bad thing. In reality the idea of just being able to upgrade is not a good thing. Just think about the differences between 7.2.3 and 7.3.x... The most annoying (although appropriate) one being that integers can no longer be ''. But that's just not going to cut it if PostgreSQL wants to be a serious player in the enterprise space, where 24x7 systems are common, and you just don't *get* 12/18/24/whatever hours to dump/restore a 200GB database. For example, there are some rather large companies whose factories are run 24x365 on rather old versions of VAX/VMS and Rdb/VMS, because the DBAs can't even get the 3 hours to do in-place upgrades to Rdb, much less the time the SysAdmin needs to upgrade VAX/VMS to VAX/OpenVMS. In our case, we have systems that have multiple 300+GB databases (working in concert as one big system), and dumping all of them, then restoring (which includes creating indexes on tables with row-counts in the low 9 digits, and one which has gone as high as 2+ billion records) is just totally out of the question. If we provide the ability to do a wholesale upgrade many things would just break. Heck, even the connection protocol is different for 7.4. But what does a *closed* database care about changed communications protocols? When you reopen the database after an upgrade, the postmaster and client libs start yakking away in a slightly different language, but so what? Dennis Gearon wrote: Ron Johnson wrote: On Fri, 2003-09-12 at 10:50, Andrew Rawnsley wrote: Small soapbox moment here... ANYTHING that can be done to eliminate having to do an initdb on version changes would make a lot of people do cartwheels. 'Do a dump/reload' sometimes comes across a bit casually on the lists (my apologies if it isn't meant to be), but it can be incredibly onerous to do on a large production system. That's probably why you run across people running stupid-old versions.
And this will become even more of an issue as PG's popularity grows with large and 24x7 databases. He is right, it might be a good idea to head this problem 'off at the pass'. I am usually pretty good at predicting technological trends. I've made some money at it. And I predict that Postgres will eclipse MySQL and be in the top 5 of all databases deployed. But it does have some Achilles tendons. -- - Ron Johnson, Jr. [EMAIL PROTECTED] Jefferson, LA USA The UN couldn't break up a cookie fight in a Brownie meeting. Larry Miller
Re: [GENERAL] State of Beta 2
Hi Yes, it's been discussed to death, and it isn't easy. See the archives That's what I thought. interesting category. It is in the category of things that will only happen if people pony up money to pay someone to do uninteresting work. And for all the ranting, I've not seen any ponying. Just for the record now that there's an argument that big companies need 24x7 - could you or someone else with knowledge of what's involved give a guesstimate of how many ponies we're talking. Is it one man month, one man year, more, or what? Just in case there is a company with enough interest in this matter. Next question would of course be if anyone would care to do it even though they're paid, but one hypothetical question at a time :-) -- Kaare Rasmussen -- Linux, spil -- Tlf: 3816 2582 Kaki Data tshirts, merchandize Fax: 3816 2501 Howitzvej 75 Åben 12.00-18.00 Email: [EMAIL PROTECTED] 2000 Frederiksberg Lørdag 12.00-16.00 Web: www.suse.dk
Re: need for in-place upgrades (was Re: [GENERAL] State of Beta 2)
On Fri, 12 Sep 2003, Ron Johnson wrote: On Fri, 2003-09-12 at 17:48, Joshua D. Drake wrote: Hello, The initdb is not always a bad thing. In reality the idea of just being able to upgrade is not a good thing. Just think about the differences between 7.2.3 and 7.3.x... The most annoying (although appropriate) one being that integers can no longer be ''. But that's just not going to cut it if PostgreSQL wants to be a serious player in the enterprise space, where 24x7 systems are common, and you just don't *get* 12/18/24/whatever hours to dump/restore a 200GB database. For example, there are some rather large companies whose factories are run 24x365 on rather old versions of VAX/VMS and Rdb/VMS, because the DBAs can't even get the 3 hours to do in-place upgrades to Rdb, much less the time the SysAdmin needs to upgrade VAX/VMS to VAX/OpenVMS. In our case, we have systems that have multiple 300+GB databases (working in concert as one big system), and dumping all of them, then restoring (which includes creating indexes on tables with row-counts in the low 9 digits, and one which has gone as high as 2+ billion records) is just totally out of the question. 'k, but is it out of the question to pick up a duplicate server, and use something like eRServer to replicate the databases between the two systems, with the new system having the upgraded database version running on it, and then cutting over once it's all in sync?
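Marc's suggestion can be written out as an explicit cutover plan. The sketch below only models the ordering constraint that makes it attractive — writes continue during the long initial sync, and the only outage is the short window between pausing writes and switching clients. All step names are invented; nothing here drives eRServer or any real replication tool.

```python
# Dry-run model of a replication-based upgrade cutover (per the
# suggestion above).  Step names are invented; this captures only the
# ordering: the expensive copy happens with writes still flowing, and
# downtime is limited to the pause_writes..switch window.

CUTOVER_PLAN = [
    ("build_new_server", "install the new PG version on a second machine"),
    ("initial_copy",     "replicate databases to the new server (writes continue)"),
    ("catch_up",         "apply changes until the replica is nearly in sync"),
    ("pause_writes",     "brief write outage begins"),
    ("final_sync",       "drain the last changes to the new server"),
    ("verify",           "compare row counts / checksums"),
    ("switch",           "point clients at the new server; outage ends"),
]

def downtime_steps(plan):
    """Return only the steps that fall inside the write-outage window."""
    names = [name for name, _ in plan]
    return names[names.index("pause_writes"):]
```

The trade-off Ron raises next is the real catch: the plan assumes you can afford the duplicate hardware in the first place.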
Re: need for in-place upgrades (was Re: [GENERAL] State of Beta 2)
On Sat, 13 Sep 2003, Ron Johnson wrote: So instead of 1TB of 15K fiber channel disks (and the requisite controllers, shelves, RAID overhead, etc), we'd need *two* TB of 15K fiber channel disks (and the requisite controllers, shelves, RAID overhead, etc) just for the 1 time per year when we'd upgrade PostgreSQL? Ah, see, the post that I was responding to dealt with 300GB of data, which a disk array for is relatively cheap ... :) But even with 1TB of data, do you not have a redundant system? If you can't afford 3 hours to dump/reload, can you really afford the cost of the server itself going poof any better?
Re: [GENERAL] State of Beta 2
Kaare Rasmussen [EMAIL PROTECTED] writes: interesting category. It is in the category of things that will only happen if people pony up money to pay someone to do uninteresting work. And for all the ranting, I've not seen any ponying. Just for the record now that there's an argument that big companies need 24x7 - could you or someone else with knowledge of what's involved give a guesstimate of how many ponies we're talking. Is it one man month, one man year, more, or what? Well, the first thing that needs to happen is to redesign and reimplement pg_upgrade so that it works with current releases and is trustworthy for enterprise installations (the original script version depended far too much on being run by someone who knew what they were doing, I thought). I guess that might take, say, six months for one well-qualified hacker. But it would be an open-ended commitment, because pg_upgrade only really solves the problem of installing new system catalogs. Any time we do something that affects the contents or placement of user table and index files, someone would have to figure out and implement a migration strategy. Some examples of things we have done recently that could not be handled without much more work: modifying heap tuple headers to conserve storage, changing the on-disk representation of array values, fixing hash indexes. Examples of probable future changes that will take work: adding tablespaces, adding point-in-time recovery, fixing the interval datatype, generalizing locale support so you can have more than one locale per installation. It could be that once pg_upgrade exists in a production-ready form, PG developers will voluntarily do that extra work themselves. But I doubt it (and if it did happen that way, it would mean a significant slowdown in the rate of development). I think someone will have to commit to doing the extra work, rather than just telling other people what they ought to do. It could be a permanent full-time task ... 
at least until we stop finding reasons we need to change the on-disk data representation, which may or may not ever happen. regards, tom lane
Re: need for in-place upgrades (was Re: [GENERAL] State of Beta 2)
'k, but is it out of the question to pick up a duplicate server, and use something like eRServer to replicate the databases between the two systems, with the new system having the upgraded database version running on it, and then cutting over once it's all in sync? That's just what I was thinking. It might be an easy way around the whole problem, for a while, to set up the replication to be as version-independent as possible.
Re: need for in-place upgrades (was Re: [GENERAL] State of Beta 2)
On Sat, 2003-09-13 at 11:21, Marc G. Fournier wrote: On Sat, 13 Sep 2003, Ron Johnson wrote: So instead of 1TB of 15K fiber channel disks (and the requisite controllers, shelves, RAID overhead, etc), we'd need *two* TB of 15K fiber channel disks (and the requisite controllers, shelves, RAID overhead, etc) just for the 1 time per year when we'd upgrade PostgreSQL? Ah, see, the post that I was responding to dealt with 300GB of data, which a disk array for is relatively cheap ... :) But even with 1TB of data, do you not have a redundant system? If you can't afford 3 hours to dump/reload, can you really afford the cost of the server itself going poof any better? We've survived all h/w issues so far w/ minimal downtime, running in degraded mode (i.e., having to yank out a CPU or RAM board) until HP could come out and install a new one. We also have dual-redundant disk and storage controllers, even though it's been a good long time since I've seen one of them die. And I strongly dispute the notion that it would only take 3 hours to dump/restore a TB of data. This seems to point to a downside of MVCC: the inability to do page-level database backups, which allow for rapid restores, since all of the index structures are part of the backup, and don't have to be created, in serial, as part of the pg_restore. -- - Ron Johnson, Jr. [EMAIL PROTECTED] Jefferson, LA USA ...always eager to extend a friendly claw
Re: need for in-place upgrades (was Re: [GENERAL] State of Beta 2)
On Sat, 2003-09-13 at 10:10, Marc G. Fournier wrote: On Fri, 12 Sep 2003, Ron Johnson wrote: On Fri, 2003-09-12 at 17:48, Joshua D. Drake wrote: Hello, The initdb is not always a bad thing. In reality the idea of just being able to upgrade is not a good thing. Just think about the differences between 7.2.3 and 7.3.x... The most annoying (although appropriate) one being that integers can no longer be ''. But that's just not going to cut it if PostgreSQL wants to be a serious player in the enterprise space, where 24x7 systems are common, and you just don't *get* 12/18/24/whatever hours to dump/restore a 200GB database. For example, there are some rather large companies whose factories are run 24x365 on rather old versions of VAX/VMS and Rdb/VMS, because the DBAs can't even get the 3 hours to do in-place upgrades to Rdb, much less the time the SysAdmin needs to upgrade VAX/VMS to VAX/OpenVMS. In our case, we have systems that have multiple 300+GB databases (working in concert as one big system), and dumping all of them, then restoring (which includes creating indexes on tables with row-counts in the low 9 digits, and one which has gone as high as 2+ billion records) is just totally out of the question. 'k, but is it out of the question to pick up a duplicate server, and use something like eRServer to replicate the databases between the two systems, with the new system having the upgraded database version running on it, and then cutting over once it's all in sync? So instead of 1TB of 15K fiber channel disks (and the requisite controllers, shelves, RAID overhead, etc), we'd need *two* TB of 15K fiber channel disks (and the requisite controllers, shelves, RAID overhead, etc) just for the 1 time per year when we'd upgrade PostgreSQL? Not a chance. -- - Ron Johnson, Jr. [EMAIL PROTECTED] Jefferson, LA USA Thanks to the good people in Microsoft, a great deal of the data that flows is dependent on one company. That is not a healthy ecosystem.
The issue is that creativity gets filtered through the business plan of one company. Mitchell Baker, Chief Lizard Wrangler at Mozilla ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: need for in-place upgrades (was Re: [GENERAL] State of Beta 2)
Ron Johnson [EMAIL PROTECTED] writes: And I strongly dispute the notion that it would only take 3 hours to dump/restore a TB of data. This seems to point to a downside of MVCC: the inability to do page-level database backups, which allow for rapid restores, since all of the index structures are part of the backup and don't have to be created, in serial, as part of the pg_restore. If you have a filesystem capable of atomic snapshots (Veritas offers this, I think), you *should* be able to do this fairly safely--take a snapshot of the filesystem and back up the snapshot. On a restore of the snapshot, transactions in progress when the snapshot happened will be rolled back, but everything that committed before then will be there (the same thing PG does when it recovers from a crash). Of course, if you have your database cluster split across multiple filesystems, this might not be doable. Note: I haven't done this, but it should work, and I've seen it talked about before. I think Oracle does this at the storage manager level when you put a database in backup mode; doing the same in PG would probably be a lot of work. This doesn't help with the upgrade issue, of course... -Doug ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [GENERAL] State of Beta 2
On Thu, 11 Sep 2003 14:24:25 -0700, Sean Chittenden [EMAIL PROTECTED] wrote: Agreed, but if anyone has a table with close to 1600 columns in a table... This 1600 column limit has nothing to do with block size. It is caused by the fact that a heap tuple header cannot be larger than 255 bytes, so there is a limited number of bits in the null bitmap. Servus Manfred ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
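Manfred's explanation can be checked with some back-of-the-envelope arithmetic. This sketch assumes the 7.x-era sizes: the tuple header's `t_hoff` field is 8 bits, so the whole header (fixed part plus null bitmap) must fit in 255 bytes, and the fixed part of `HeapTupleHeaderData` is 23 bytes. With one null-bitmap bit per column, that bounds the column count well below what any block size would allow, and PostgreSQL caps the limit at a round 1600 under that bound:

```python
# Arithmetic behind the 1600-column limit described above.
# Assumptions (7.x-era values): t_hoff is an 8-bit field, so the tuple
# header can be at most 255 bytes; the fixed header part is 23 bytes.
MAX_HEADER = 255    # largest value representable in the 8-bit t_hoff
FIXED_HEADER = 23   # fixed part of HeapTupleHeaderData, in bytes

bitmap_bytes = MAX_HEADER - FIXED_HEADER   # room left for the null bitmap
max_bitmap_columns = bitmap_bytes * 8      # one bitmap bit per column

print(bitmap_bytes)        # 232
print(max_bitmap_columns)  # 1856 -- PostgreSQL sets the actual cap lower, at 1600
```

Since 1856 bits is the hard ceiling regardless of block size, raising BLCKSZ cannot raise the column limit, which is Manfred's point.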
Re: [GENERAL] State of Beta 2
Kaare Rasmussen [EMAIL PROTECTED] writes: I believe this has been discussed before, but it does not seem to be a small or an easy task to implement. Yes, it's been discussed to death, and it isn't easy. See the archives for Lamar Owen's eloquent rants on the subject, and various hackers' followups as to the implementation issues. What it comes down to IMHO is that (a) there are still a lot of bad, incomplete, or shortsighted decisions embedded in Postgres, which cannot really be fixed in 100% backwards compatible ways; (b) there are not all that many people competent to work on improving Postgres, and even fewer who are actually being paid to do so; and (c) those who are volunteers are likely to work on things they find interesting to fix. Finding ways to maintain backwards compatibility without dump/reload is not in the interesting category. It is in the category of things that will only happen if people pony up money to pay someone to do uninteresting work. And for all the ranting, I've not seen any ponying. regards, tom lane ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] State of Beta 2
On Fri, 12 Sep 2003, Ron Johnson wrote: On Fri, 2003-09-12 at 10:50, Andrew Rawnsley wrote: Small soapbox moment here... ANYTHING that can be done to eliminate having to do an initdb on version changes would make a lot of people do cartwheels. 'Do a dump/reload' sometimes comes across a bit casually on the lists (my apologies if it isn't meant to be), but it can be incredibly onerous to do on a large production system. That's probably why you run across people running stupid-old versions. And this will become even more of an issue as PG's popularity grows with large and 24x7 databases. And dump/reload isn't always such a casual operation to do. I initialise a database from a dump, but I have to fiddle the SQL on the reload to make it work. The odd thing is I never thought it a bug, just something to work around, until someone else on the list has been pursuing it as one (it's the create schema thing). -- Nigel J. Andrews
Re: [GENERAL] State of Beta 2
On Fri, Sep 12, 2003 at 03:48:48PM -0700, Joshua D. Drake wrote: The initdb is not always a bad thing. In reality the idea of just being able to upgrade is not a good thing. Just think about the differences between 7.2.3 and 7.3.x... The most annoying (although appropriate) one being that integers can no longer be ''. But it would be much easier if one wasn't forced to create a dump and then restore it. One would still need to change the applications, but that doesn't force downtime. If we provide the ability to do a wholesale upgrade many things would just break. Heck even the connection protocol is different for 7.4. But the new client libpq _can_ talk to older servers. -- Alvaro Herrera (alvherre[a]dcc.uchile.cl) FOO MANE PADME HUM
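Alvaro's point about the new libpq talking to older servers rests on version fallback: the 7.4 client first offers the new 3.0 wire protocol and, if the server rejects it, reconnects speaking the older 2.0 protocol. The sketch below is purely illustrative of that negotiation idea, not libpq source; the `negotiate` function and its server model are hypothetical:

```python
# Illustrative sketch of backward-compatible protocol negotiation:
# the client offers its versions newest-first and settles for the first
# one the server supports. Version numbers follow the 7.4-era scheme
# (3.0 new, 2.0 old); this is a model, not the actual libpq code.
def negotiate(client_versions, server_max):
    """Return the first client-offered protocol version the server accepts."""
    for version in client_versions:          # newest first
        if version <= server_max:
            return version
    raise ConnectionError("no common protocol version")

# A 7.4 client (speaks 3.0, falls back to 2.0) against a 7.3 server (max 2.0):
print(negotiate([3.0, 2.0], server_max=2.0))  # 2.0
# ...and against a 7.4 server:
print(negotiate([3.0, 2.0], server_max=3.0))  # 3.0
```

The asymmetry Alvaro notes follows directly: a new client can step down to an old server's protocol, but an old client has nothing newer to offer, so only the client side of the pair is freely upgradable.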
Upgrading (was Re: [GENERAL] State of Beta 2)
On Fri, 2003-09-12 at 17:01, Kaare Rasmussen wrote: He is right, it might be a good idea to head this problem 'off at the pass'. I am usually pretty good at predicting technological trends. I've Well, the only solution I can see is to make an inline conversion tool that knows about every step from earlier versions. I believe this has been discussed before, but it does not seem to be a small or an easy task to implement. Does the on-disk structure really change that much between major versions? -- - Ron Johnson, Jr. [EMAIL PROTECTED] Jefferson, LA USA Vanity, my favorite sin. Larry/John/Satan, The Devil's Advocate
Re: [GENERAL] State of Beta 2
Hello, The initdb is not always a bad thing. In reality the idea of just being able to upgrade is not a good thing. Just think about the differences between 7.2.3 and 7.3.x... The most annoying (although appropriate) one being that integers can no longer be ''. If we provide the ability to do a wholesale upgrade many things would just break. Heck even the connection protocol is different for 7.4. J Dennis Gearon wrote: Ron Johnson wrote: On Fri, 2003-09-12 at 10:50, Andrew Rawnsley wrote: Small soapbox moment here... ANYTHING that can be done to eliminate having to do an initdb on version changes would make a lot of people do cartwheels. 'Do a dump/reload' sometimes comes across a bit casually on the lists (my apologies if it isn't meant to be), but it can be incredibly onerous to do on a large production system. That's probably why you run across people running stupid-old versions. And this will become even more of an issue as PG's popularity grows with large and 24x7 databases. He is right, it might be a good idea to head this problem 'off at the pass'. I am usually pretty good at predicting technological trends. I've made some money at it. And I predict that Postgres will eclipse MySQL and be in the top 5 of all databases deployed. But it does have some Achilles tendons. -- Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC Postgresql support, programming, shared hosting and dedicated hosting. +1-503-222-2783 - [EMAIL PROTECTED] - http://www.commandprompt.com The most reliable support for the most reliable Open Source database.
Re: [GENERAL] State of Beta 2
MGF == Marc G Fournier [EMAIL PROTECTED] writes: MGF Without a fair amount of testing, especially on other platforms, it most MGF likely won't happen in the distribution itself ... one of the things that MGF was bantered around for after v7.4 is released is seeing how increasing it MGF on the various platforms fairs, and possibly just raising the default to MGF 16k or 32k (Tatsuo mentioned a 15% improvement at 32k) ... MGF But, we'll need broader testing before that happens ... Well... if we had a good load generator (many threads; many small, medium, large transactions; many inserts; many reads) I'd run it to death on my idle server until 7.4 is released, at which point that server won't be idle anymore. I tried building one of the OSDL DB benchmark, but after installing the dependencies which are only announced by the failure of configure to run, it errored out with a C syntax error... at that point I gave up. ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
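The load generator Vivek describes (many threads, a weighted mix of small, medium, and large transactions) has a simple skeleton. The sketch below is a stand-in, not any existing benchmark: the transaction mix weights are made up, and the "work" is just a tally update, where a real run would have each worker open its own database connection and execute an INSERT/SELECT mix inside BEGIN/COMMIT:

```python
# Minimal load-generator skeleton: N worker threads each run a weighted
# random mix of transaction types; totals are tallied under a lock.
# The transaction bodies are stand-ins for real BEGIN ... COMMIT work.
import random
import threading
from collections import Counter

TX_MIX = [("small", 0.70), ("medium", 0.25), ("large", 0.05)]  # assumed mix

def run_worker(n_transactions, tally, lock):
    rng = random.Random()
    for _ in range(n_transactions):
        r, acc = rng.random(), 0.0
        for name, weight in TX_MIX:
            acc += weight
            if r < acc:
                chosen = name
                break
        else:
            chosen = TX_MIX[-1][0]   # guard against float rounding at the tail
        with lock:                   # stand-in for: BEGIN; ...; COMMIT;
            tally[chosen] += 1

def run_load(n_threads=4, tx_per_thread=1000):
    tally, lock = Counter(), threading.Lock()
    threads = [threading.Thread(target=run_worker, args=(tx_per_thread, tally, lock))
               for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return tally

if __name__ == "__main__":
    print(run_load())
```

Swapping the tally update for real statements against a test server turns this into the kind of burn-in tool the post asks for; the thread and mix structure stays the same.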
Re: [GENERAL] State of Beta 2
I haven't had a chance to sit down and do any exhaustive testing yet and don't think I will for a while. That said, once 7.4 goes gold, I'm going to provide databases/postgresql-devel with a tunable that will allow people to choose what block size they would like (4K, 8K, 16K, 32K, or 64K) when they build the port. If you do this, you *have* to put in a very very big warning that databases created with non-PostgreSQL-standard block sizes may not be transferable to a standard-PostgreSQL install ... that is Tom's major problem: cross-platform/system dump/restores may not work if the database schema was designed with a 16k block size in mind ... Agreed, but anyone who has close to 1600 columns in a table... is either nuts or knows what they're doing. If someone has 1600 columns, that is an issue, but isn't one that I think can be easily fended off without the backend being able to adapt on the fly to different block sizes, which seems like something that could be done with a rewrite of some of this code when table spaces are introduced. -sc -- Sean Chittenden ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
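One way to see what the block-size tunable buys (or costs) is to estimate how many heap tuples of a given width fit on one page at each candidate size. The constants below are rough 7.x-era approximations, not exact catalog values: a ~20-byte page header, a 4-byte line pointer per tuple, and a 23-byte tuple header padded to 24 for alignment:

```python
# Rough tuples-per-page arithmetic for the candidate block sizes above.
# Constants are approximations of the 7.x-era on-disk layout, for
# illustration only.
PAGE_HEADER = 20     # ~sizeof(PageHeaderData)
LINE_POINTER = 4     # one ItemId per tuple
TUPLE_HEADER = 24    # 23-byte header, MAXALIGN-padded

def tuples_per_page(block_size, data_bytes_per_row):
    per_tuple = LINE_POINTER + TUPLE_HEADER + data_bytes_per_row
    return (block_size - PAGE_HEADER) // per_tuple

for bs in (4096, 8192, 16384, 32768, 65536):
    print(bs, tuples_per_page(bs, data_bytes_per_row=100))
```

The per-page count scales almost linearly with block size, so the tunable mostly trades I/O granularity against per-page overhead; it does nothing for the column limit, which, as noted elsewhere in the thread, comes from the tuple header, not the block size.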
Re: [GENERAL] State of Beta 2
I haven't had a chance to sit down and do any exhaustive testing yet and don't think I will for a while. That said, once 7.4 goes gold, I'm going to provide databases/postgresql-devel with a tunable that will allow people to choose what block size they would like (4K, 8K, 16K, 32K, or 64K) when they build the port. If you do this, you *have* to put in a very very big warning that databases created with non-PostgreSQL-standard block sizes may not be transferable to a standard-PostgreSQL install ... that is Tom's major problem: cross-platform/system dump/restores may not work if the database schema was designed with a 16k block size in mind ...
Re: [GENERAL] State of Beta 2
AR == Andrew Rawnsley [EMAIL PROTECTED] writes: AR Anyone out there using beta 2 in production situations? Comments on AR stability? I am rolling out a project in the next 4 weeks, and really AR don't want to go through an upgrade soon after it's released on an AR Unsuspecting Client, so I would LIKE to start working with 7.4. I'm pondering doing the same, but I'm not 100% sure there won't be any dump/restore-required changes to it before it goes gold. From my tuning tests I've been running on it, it appears to be extremely fast and stable. -- =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Vivek Khera, Ph.D. Khera Communications, Inc. Internet: [EMAIL PROTECTED] Rockville, MD +1-240-453-8497 AIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/
Re: [GENERAL] State of Beta 2
On Wed, 10 Sep 2003, Vivek Khera wrote: AR == Andrew Rawnsley [EMAIL PROTECTED] writes: AR Anyone out there using beta 2 in production situations? Comments on AR stability? I am rolling out a project in the next 4 weeks, and really AR don't want to go through an upgrade soon after it's released on an AR Unsuspecting Client, so I would LIKE to start working with 7.4. I'm pondering doing the same, but I'm not 100% sure there won't be any dump/restore-required changes to it before it goes gold. From my tuning tests I've been running on it, it appears to be extremely fast and stable. Yeah, right now it's looking like the only thing you'll have to do is reindex hash indexes between beta2 and beta3.