Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
People seemed to like the idea: Add a script to ask system configuration questions and tune postgresql.conf. --- Bruce Momjian wrote: [full proposal quoted; snipped here -- it appears verbatim as Bruce's original message later in this digest]
-- Bruce Momjian | http://candle.pha.pa.us | (610) 359-1001
---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
Nutshell: "Easy to install but is horribly slow." or "Took a couple of minutes to configure and it rocks!" Since when is it easy to install on win32? The easiest way I know of is through Cygwin, then you have to worry about installing the IPC service (and getting the right version, too!). I've installed versions 6.1 to 7.1, but I almost gave up on the Windows install. At least in 6.x you had a very comprehensive installation guide with a TOC. Versus the competition, which are you going to choose if you're a wanna-be DBA? The one with all the hoops to jump through, or the one that comes with a setup.exe? Now I actually am in support of making it more aggressive, but it should wait until we too have a setup.exe for the native Windows port. (Changing it on *n*x platforms is of little benefit because most benchmarks seem to run it on w32 anyway :-( ) Just my $.02. I reserve the right to be wrong. -J
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
Josh Berkus wrote: Uh ... do we have a basis for recommending any particular sets of parameters for these different scenarios? This could be a good idea in the abstract, but I'm not sure I know enough to fill in the details. Sure.

Mostly-read database, few users, good hardware, complex queries = high shared buffers and sort mem, high GEQO and join collapse thresholds, moderate FSM settings, defaults for WAL.
Same as above with many users and simple queries (webserver) = same as above, except lower sort mem and higher connection limit.
High-transaction database = moderate shared buffers and sort mem, high FSM settings, increased WAL files and buffers.
Workstation = moderate to low shared buffers and sort mem, moderate FSM, defaults for WAL, etc.
Low-impact server = current defaults, more or less.

Okay, but there should probably be one more, called Benchmark. The real problem is what values to use for it. :-) -- Kevin Brown [EMAIL PROTECTED]
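The scenario list above is essentially a lookup table from workload to settings. A minimal sketch in Python, where every numeric value is a placeholder guess rather than a tested recommendation (the parameter names are real postgresql.conf settings of that era, e.g. sort_mem):

```python
# Illustrative only: the values below are invented placeholders for each
# scenario from the discussion, not benchmarked advice.
PROFILES = {
    "mostly_read_complex": {
        "shared_buffers": 4096,     # high
        "sort_mem": 8192,           # high (kB per sort)
        "max_fsm_pages": 50000,     # moderate FSM
    },
    "webserver": {
        "shared_buffers": 4096,
        "sort_mem": 1024,           # lower sort mem
        "max_connections": 256,     # higher connection limit
    },
    "high_transaction": {
        "shared_buffers": 2048,
        "max_fsm_pages": 200000,    # high FSM
        "checkpoint_segments": 16,  # more WAL files
    },
    "workstation": {
        "shared_buffers": 1000,
        "sort_mem": 1024,
    },
    "low_impact": {},               # current defaults, more or less
}

def render(profile):
    """Emit postgresql.conf override lines for a chosen profile."""
    return "\n".join(f"{k} = {v}" for k, v in sorted(PROFILES[profile].items()))

print(render("webserver"))
```

A "benchmark" entry would just be one more key in the table, which is exactly the problem Kevin notes: someone has to decide what values go in it.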
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
Bruce Momjian wrote: We could prevent the postmaster from starting unless they run pg_tune or if they have modified postgresql.conf from the default. Of course, that's pretty drastic.

If you're going to do that, then you may as well make the defaults something that will perform reasonably well under the widest circumstances possible and let the postmaster fail when it can't acquire the resources those defaults demand.

What I'd do is go ahead and make the defaults something reasonable, and if the postmaster can't allocate, say, enough shared memory pages, then it should issue an error message saying not only that it wasn't able to allocate enough shared memory, but also which parameter to change and (if it's not too much trouble to implement) what it can be changed to in order to get past that part of the initialization (this means that the postmaster has to figure out how much shared memory it can actually allocate, via a binary-search allocate/free method). It should also warn that by lowering the value, the resulting performance may be much less than satisfactory, and that the alternative (increasing SHMMAX, in this example) should be used if good performance is desired.

That way, someone whose only concern is to make it work will be able to do so without having to do a lot of experimentation, and will get plenty of warning that the result isn't likely to work very well. And we end up getting better benchmarks in the cases where people don't have to touch the default config. :-) -- Kevin Brown [EMAIL PROTECTED]
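The binary-search probe Kevin proposes could be outlined as follows. This is a sketch only: `try_alloc` stands in for a real shmget()/shmdt() attempt, simulated here by a simple size check against a pretend SHMMAX:

```python
def max_allocatable(try_alloc, lo=0, hi=1 << 32):
    """Binary-search the largest shared-memory request that succeeds.
    try_alloc(nbytes) -> bool stands in for an actual allocate/free probe."""
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if try_alloc(mid):
            lo = mid          # mid succeeded; try larger
        else:
            hi = mid - 1      # mid failed; try smaller
    return lo

# Simulated kernel with SHMMAX = 32 MB:
SHMMAX = 32 * 1024 * 1024
probe = lambda n: n <= SHMMAX
print(max_allocatable(probe))  # converges on the kernel limit
```

The postmaster could then report this number in its error message ("shared memory request was X, but only Y could be allocated; either lower shared_buffers or raise SHMMAX"), which is the one-stop diagnostic Kevin is asking for.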
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
Tom Lane wrote: Bruce Momjian [EMAIL PROTECTED] writes: So, my idea is to add a message at the end of initdb that states people should run the pgtune script before running a production server. Do people read what initdb has to say? IIRC, the RPM install scripts hide initdb's output from the user entirely. I wouldn't put much faith in such a message as having any real effect on people... Yes, that is a problem. We could show something in the server logs if pg_tune hasn't been run. Not sure what else we can do, but it would give folks a one-stop thing to run to deal with performance configuration. We could prevent the postmaster from starting unless they run pg_tune or if they have modified postgresql.conf from the default. Of course, that's pretty drastic. -- Bruce Momjian | http://candle.pha.pa.us | (610) 359-1001
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
On Tue, Feb 11, 2003 at 05:25:29PM -0700, Rick Gigger wrote: The type of person who can't configure it or doesn't think to try is probably not doing a project that requires any serious performance. I have piles of email, have fielded thousands of phone calls, and have had many conversations which prove that claim false. People think that computers are magic. That they don't think the machines require a little bit of attention is nowise an indication that they don't need the system to come with reasonable defaults. A -- Andrew Sullivan | Liberty RMS | 204-4141 Yonge Street, Toronto, Ontario, Canada M2P 2A8 | +1 416 646 3304 x110
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
Tom Lane writes: Well, as I commented later in that mail, I feel that 1000 buffers is a reasonable choice --- but I have to admit that I have no hard data to back up that feeling. I know you like it in that range, and 4 or 8 MB of buffers by default should not be a problem. But personally I think if the optimal buffer size does not depend on both the physical RAM you want to dedicate to PostgreSQL and the nature and size of the database, then we have achieved a medium revolution in computer science. ;-) -- Peter Eisentraut [EMAIL PROTECTED]
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
Peter Eisentraut [EMAIL PROTECTED] writes: I know you like it in that range, and 4 or 8 MB of buffers by default should not be a problem. But personally I think if the optimal buffer size does not depend on both the physical RAM you want to dedicate to PostgreSQL and the nature and size of the database, then we have achieved a medium revolution in computer science. ;-) But this is not about optimal settings. This is about pretty good settings. As long as we can get past the knee of the performance curve, I think we've done what should be expected of a default parameter set. I believe that 1000 buffers is enough to get past the knee in most scenarios. Again, I haven't got hard evidence, but that's my best guess. regards, tom lane
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
Peter Eisentraut wrote: Tom Lane writes: Well, as I commented later in that mail, I feel that 1000 buffers is a reasonable choice --- but I have to admit that I have no hard data to back up that feeling. I know you like it in that range, and 4 or 8 MB of buffers by default should not be a problem. But personally I think if the optimal buffer size does not depend on both the physical RAM you want to dedicate to PostgreSQL and the nature and size of the database, then we have achieved a medium revolution in computer science. ;-)

I have thought about this and I have an idea. Basically, increasing the default values may get us closer, but it will discourage some from tweaking, and it will cause problems with some OS's that have small SysV params. So, my idea is to add a message at the end of initdb that states people should run the pgtune script before running a production server. The pgtune script will basically allow us to query the user, test the OS version and perhaps parameters, and modify postgresql.conf with reasonable values. I think this is the only way to cleanly get folks close to where they should be.

For example, we can ask them how many rows and tables they will be changing, on average, between VACUUM runs. That will allow us to set the FSM params. We can ask them about using 25% of their RAM for shared buffers. If they have other major apps running on the server or have small tables, we can make no changes. We can basically ask them questions and use that info to set values. We can even ask about sort usage maybe and set sort memory. We can even control checkpoint_segments this way if they say they will have high database write activity and don't worry about disk space usage. We may even be able to compute some random page cost estimate.

Seems a script is going to be the best way to test values and assist folks in making reasonable decisions about each parameter. Of course, they can still edit the file, and we can ask them if they want assistance to set each parameter or leave it alone. I would restrict the script to only deal with tuning values, and tell people they still need to review that file for other useful parameters. Another option would be to make a big checklist or web page that asks such questions and computes proper values, but it seems a script would be easiest. We can even support '?', which would explain why the question is being asked and how it affects the value. -- Bruce Momjian | http://candle.pha.pa.us | (610) 359-1001
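In skeleton form, the interview-then-write-config flow Bruce describes might look like this. The 25%-of-RAM rule and the rows-between-VACUUMs question come from the proposal above, but the formulas and constants here are placeholders for illustration, not tested tuning advice:

```python
def suggest_settings(total_ram_mb, rows_changed_between_vacuums,
                     other_major_apps=False, block_size_kb=8):
    """Turn the interview answers into postgresql.conf suggestions.
    All formulas are placeholder sketches of the proposal, not real tuning."""
    settings = {}
    if not other_major_apps:
        # "ask them about using 25% of their RAM for shared buffers"
        ram_for_buffers_kb = total_ram_mb * 1024 // 4
        settings["shared_buffers"] = ram_for_buffers_kb // block_size_kb
    # FSM needs roughly a slot per page freed between VACUUM runs;
    # using the row count directly is a crude stand-in.
    settings["max_fsm_pages"] = max(10000, rows_changed_between_vacuums)
    return settings

# A machine with 256 MB of RAM, 50k rows changed between VACUUMs:
print(suggest_settings(total_ram_mb=256, rows_changed_between_vacuums=50000))
```

A real pgtune would then rewrite postgresql.conf in place and, per the proposal, answer '?' at each prompt with an explanation of the parameter being set.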
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
On Tue, 2003-02-11 at 10:20, Tom Lane wrote: Merlin Moncure [EMAIL PROTECTED] writes: May I make a suggestion that maybe it is time to start thinking about tuning the default config file, IMHO its just a little bit too conservative, It's a lot too conservative. I've been thinking for awhile that we should adjust the defaults.

The original motivation for setting shared_buffers = 64 was so that Postgres would start out-of-the-box on machines where SHMMAX is 1 meg (64 buffers = 1/2 meg, leaving 1/2 meg for our other shared data structures). At one time SHMMAX=1M was a pretty common stock kernel setting. But our other data structures blew past the 1/2 meg mark some time ago; at default settings the shmem request is now close to 1.5 meg. So people with SHMMAX=1M have already got to twiddle their postgresql.conf settings, or preferably learn how to increase SHMMAX. That means there is *no* defensible reason anymore for defaulting to 64 buffers.

We could retarget to try to stay under SHMMAX=4M, which I think is the next boundary that's significant in terms of real-world platforms (isn't that the default SHMMAX on some BSDen?). That would allow us 350 or so shared_buffers, which is better, but still not really a serious choice for production work.

What I would really like to do is set the default shared_buffers to 1000. That would be 8 meg worth of shared buffer space. Coupled with more-realistic settings for FSM size, we'd probably be talking a shared memory request approaching 16 meg. This is not enough RAM to bother any modern machine from a performance standpoint, but there are probably quite a few platforms out there that would need an increase in their stock SHMMAX kernel setting before they'd take it. So what this comes down to is making it harder for people to get Postgres running for the first time, versus making it more likely that they'll see decent performance when they do get it running.
It's worth noting that increasing SHMMAX is not nearly as painful as it was back when these decisions were taken. Most people have moved to platforms where it doesn't even take a kernel rebuild, and we've acquired documentation that tells how to do it on all(?) our supported platforms. So I think it might be okay to expect people to do it.

The alternative approach is to leave the settings where they are, and to try to put more emphasis in the documentation on the fact that the factory-default settings produce a toy configuration that you *must* adjust upward for decent performance. But we've not had a lot of success spreading that word, I think. With SHMMAX too small, you do at least get a pretty specific error message telling you so. Comments?

I'd personally rather have people stumble trying to get PostgreSQL running, up front, rather than allowing the lowest common denominator more easily run PostgreSQL only to be disappointed with it and move on. After it's all said and done, I would rather someone simply say, "it's beyond my skill set", and attempt to get help or walk away. That seems better than them being able to run it and say, "it's a dog", spreading word-of-mouth as such after they left PostgreSQL behind. Worse yet, those that do walk away and claim it performs horribly are probably doing more harm to the PostgreSQL community than expecting someone to be able to install software ever can. Nutshell: "Easy to install but is horribly slow." or "Took a couple of minutes to configure and it rocks!" Seems fairly cut-n-dry to me. ;) Regards, -- Greg Copeland | Copeland Computer Consulting | [EMAIL PROTECTED]
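The sizes Tom quotes follow directly from the default 8 kB buffer page size (BLCKSZ); a quick sanity check of the arithmetic:

```python
BLCKSZ = 8192  # default PostgreSQL block size, in bytes

def buffers_to_kb(n):
    """Shared buffer space, in kB, for n shared_buffers."""
    return n * BLCKSZ // 1024

# 64 buffers -> the historical 1/2 meg that fit under SHMMAX=1M
print(buffers_to_kb(64))    # 512 kB
# 1000 buffers -> the proposed "8 meg worth of shared buffer space"
print(buffers_to_kb(1000))  # 8000 kB
```

Note this counts only the buffer pool; the total shmem request Tom mentions (roughly 1.5 meg at old defaults, approaching 16 meg with bigger FSM settings) also includes the other shared data structures.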
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
Tom Lane wrote: "Merlin Moncure" [EMAIL PROTECTED] writes: May I make a suggestion that maybe it is time to start thinking about tuning the default config file, IMHO its just a little bit too conservative, It's a lot too conservative. I've been thinking for awhile that we should adjust the defaults. One of the things I did on my Windows install was to have a number of default configuration files: postgresql.conf.small, postgresql.conf.medium, postgresql.conf.large. Rather than choose one, in the "initdb" script, ask for or determine the amount of shared memory, memory, etc. Another pet peeve I have is forcing the configuration files to be in the database directory. We had this argument in 7.1 days, and I submitted a patch that allowed a configuration file to be specified as a command-line parameter. One of the things that Oracle does better is separating the "configuration" from the data. It is an easy patch to allow PostgreSQL to use a separate configuration directory, and specify the data directory within the configuration file (the way any logical application works), and, NO, symlinks are not a solution, they are a kludge.
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
Tom Lane wrote: [snip] What I would really like to do is set the default shared_buffers to 1000. That would be 8 meg worth of shared buffer space. Coupled with more-realistic settings for FSM size, we'd probably be talking a shared memory request approaching 16 meg. This is not enough RAM to bother any modern machine from a performance standpoint, but there are probably quite a few platforms out there that would need an increase in their stock SHMMAX kernel setting before they'd take it. [snip]

Totally agree with this. We really, really, really, really need to get the default to a point where we have _decent_ default performance.

The alternative approach is to leave the settings where they are, and to try to put more emphasis in the documentation on the fact that the factory-default settings produce a toy configuration that you *must* adjust upward for decent performance. But we've not had a lot of success spreading that word, I think. With SHMMAX too small, you do at least get a pretty specific error message telling you so. Comments?

Yep. Here's an *unfortunately very common* scenario, that again unfortunately, a _seemingly large_ number of people fall for.

a) Someone decides to benchmark database XYZ vs PostgreSQL vs other databases.
b) Said benchmarking person knows very little about PostgreSQL, so they install the RPM's, packages, or whatever, and it works. Then they run whatever benchmark they've downloaded, or designed, or whatever.
c) PostgreSQL, being practically unconfigured, runs at the pace of a slow, mostly-disabled snail.
d) Said benchmarking person gets better performance from the other databases (also set to their default settings) and thinks PostgreSQL has lots of features, and it's free, but it's Too Slow. Yes, this kind of testing shouldn't even _pretend_ to have any real-world credibility.
e) Said benchmarking person tells everyone they know, _and_ everyone they meet, about their results.
Some of them even create nice-looking or professional-looking web pages about it.
f) People who know even _less_ than the benchmarking person hear about the test, or read the result, and don't know any better than to believe it at face value. So, they install whatever system was recommended.
g) Over time, the benchmarking person gets the hang of their chosen database more and writes further articles about it, and doesn't generally look any further afield than it for, say... a couple of years. By this time, they've already influenced a couple of thousand people in the non-optimal direction.
h) Arrgh.

With better defaults, our next release would _appear_ to be a lot faster to quite a few people, just because they have no idea about tuning. So, as sad as this scenario is, better defaults will probably encourage a lot more newbies to get involved, and that'll eventually translate into a lot more experienced users, and a few more coders to assist. ;-)

Personally I'd be a bunch happier if we set the buffers so high that we definitely have decent performance, and the people that want to run PostgreSQL are forced to make the choice of either:
1) Adjust their system settings to allow PostgreSQL to run properly, or
2) Manually adjust the PostgreSQL settings to run memory-constrained.

This way, PostgreSQL either runs decently, or they are _aware_ that they're limiting it. That should cut down on the false benchmarks (hopefully). :-) Regards and best wishes, Justin Clift -- "My grandfather once told me that there are two kinds of people: those who work and those who take the credit. He told me to try to be in the first group; there was less competition there." - Indira Gandhi
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
Greg Copeland wrote: I'd personally rather have people stumble trying to get PostgreSQL running, up front, rather than allowing the lowest common denominator more easily run PostgreSQL only to be disappointed with it and move on. After it's all said and done, I would rather someone simply say, "it's beyond my skill set", and attempt to get help or walk away. That seems better than them being able to run it and say, "it's a dog", spreading word-of-mouth as such after they left PostgreSQL behind. Worse yet, those that do walk away and claim it performs horribly are probably doing more harm to the PostgreSQL community than expecting someone to be able to install software ever can.

RANT And that, my friends, is why PostgreSQL is still relatively obscure. This attitude sucks. If you want a product to be used, you must put the effort into making it usable. It is a no-brainer to make the default configuration file suitable for the majority of users. It is lunacy to create a default configuration which provides poor performance for over 90% of the users, but which allows the lowest common denominator to work. A product must not perform poorly out of the box, period. A good product manager would choose one of two possible configurations: (a) a high-speed, fairly optimized system from the get-go, or (b) it does not run unless you create the configuration file. Option (c), out of the box it works like crap, is not an option. This is why open source gets such a bad reputation. Outright contempt for the user who may not know the product as well as those developing it. This attitude really sucks and it turns people off. We want people to use PostgreSQL; to do that we must make PostgreSQL usable. Usability IS important. /RANT
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
On Tue, 2003-02-11 at 12:10, Steve Crawford wrote: A quick-'n'-dirty first step would be more comments in postgresql.conf. Most [...] This will not solve the issue with the large number of users who have no interest in looking at the config file -- but are interested in publishing their results. -- Rod Taylor [EMAIL PROTECTED] PGP Key: http://www.rbt.ca/rbtpub.asc
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
What if we supplied several sample .conf files, and let the user choose which to copy into the database directory? We could have a high read performance profile, and a transaction database profile, and a workstation profile, and a low impact profile. Exactly my first thought when reading the proposal for a setting suited for performance tests. And a .benchmark profile :-) We could even supply a Perl script that would adjust SHMMAX and SHMALL on platforms where this can be done from the command line. Or maybe configuration could be adjusted with ./configure if SHMMAX can be determined at that point? -- Kaare Rasmussen | Kaki Data | Howitzvej 75, 2000 Frederiksberg | [EMAIL PROTECTED] | www.suse.dk
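A hypothetical postgresql.conf.medium along the lines suggested might look like the fragment below; the file name and every value are invented for illustration, not recommendations:

```
# postgresql.conf.medium -- hypothetical sample profile (placeholder values)
shared_buffers = 1000        # 8 MB of buffer cache
sort_mem = 2048              # kB per sort operation
max_fsm_pages = 50000
max_connections = 64
checkpoint_segments = 8
```

The accompanying script would then copy the chosen file into the data directory and, on platforms that allow it, raise SHMMAX/SHMALL from the command line before the postmaster starts.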
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
mlw [EMAIL PROTECTED] writes: This attitude sucks. If you want a product to be used, you must put the effort into making it usable. [snip] AFAICT, you are flaming Greg for recommending the exact same thing you are recommending. Please calm down and read again. regards, tom lane
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
On Tue, 2003-02-11 at 12:08, Justin Clift wrote: b) Said benchmarking person knows very little about PostgreSQL, so they install the RPM's, packages, or whatever, and it works. Then they run whatever benchmark they've downloaded, or designed, or whatever Out of curiosity, how feasible is it for the rpm/package/deb/exe maintainers to modify their supplied postgresql.conf settings when building said distribution? AFAIK the minimum default SHMMAX setting on Red Hat 8.0 is 32MB; seems like bumping shared buffers to work with that amount would be acceptable inside the 8.0 rpm's. Robert Treat
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
Apology After Mark calms down and, in fact, sees that Greg was saying the right thing after all, chagrin is the only word. I'm sorry. Greg Copeland wrote: On Tue, 2003-02-11 at 11:23, mlw wrote: Greg Copeland wrote: I'd personally rather have people stumble trying to get PostgreSQL running, up front, rather than allowing the lowest common denominator more easily run PostgreSQL only to be disappointed with it and move on. After it's all said and done, I would rather someone simply say, "it's beyond my skill set", and attempt to get help or walk away. That seems better than them being able to run it and say, "it's a dog", spreading word-of-mouth as such after they left PostgreSQL behind. Worse yet, those that do walk away and claim it performs horribly are probably doing more harm to the PostgreSQL community than expecting someone to be able to install software ever can. RANT And that my friends is why PostgreSQL is still relatively obscure. This attitude sucks. If you want a product to be used, you must put the effort into making it usable. Ah..okay It is a no-brainer to make the default configuration file suitable for the majority of users. It is lunacy to create a default configuration which provides poor performance for over 90% of the users, but which allows the lowest common denominator to work. I think you read something into my email which I did not imply. I'm certainly not advocating a default configuration file assuming 512M of share memory or some such insane value. Basically, you're arguing that they should keep doing exactly what they are doing. It's currently known to be causing problems and propagating the misconception that PostgreSQL is unable to perform under any circumstance. I'm arguing that who cares if 5% of the potential user base has to learn to properly install software. Either they'll read and learn, ask for assistance, or walk away. 
All of which are better than a Johnny-come-lately offering up a meaningless benchmark which others are happy to eat with rather large spoons. A product must not perform poorly out of the box, period. A good product manager would choose one of two possible configurations, (a) a high speed fairly optimized system from the get-go, or (b) it does not run unless you create the configuration file. Option (c) out of the box it works like crap, is not an option. That's the problem. Option (c) is what we currently have. I'm amazed that you even have a problem with option (a), as that's what I'm suggesting. The problem is, potentially for some minority of users, it may not run out of the box. As such, I'm more than happy with this situation than 90% of the user base being stuck with a crappy default configuration. Oddly enough, your option (b) is even worse than what you are ranting at me about. Go figure. This is why open source gets such a bad reputation. Outright contempt for the user who may not know the product as well as those developing it. This attitude really sucks and it turns people off. We want people to use PostgreSQL, to do that we must make PostgreSQL usable. Usability IS important. /RANT There is no contempt here. Clearly you've read your own bias into this thread. If you go back and re-read my posting, I think it's VERY clear that it's entirely about usability. Regards,
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
My other pet peeve is the default max connections setting. This should be higher if possible, but of course, there's always the possibility of running out of file descriptors. Apache has a default max children of 150, and if using PHP or another language that runs as an apache module, it is quite possible to use up all the pgsql backend slots before using up all the apache child slots. Is setting the max connections to something like 200 reasonable, or likely to cause too many problems?
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
scott.marlowe [EMAIL PROTECTED] writes: Is setting the max connections to something like 200 reasonable, or likely to cause too many problems? That would likely run into number-of-semaphores limitations (SEMMNI, SEMMNS). We do not seem to have as good documentation about changing that as we do about changing the SHMMAX setting, so I'm not sure I want to buy into the "it's okay to expect people to fix this before they can start Postgres the first time" argument here. Also, max-connections doesn't silently skew your testing: if you need to raise it, you *will* know it. regards, tom lane
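For a rough sense of the semaphore cost Tom alludes to, assume PostgreSQL allocates one semaphore set per 16 backends, with one extra semaphore per set for bookkeeping (the exact formula varies by version, so treat this strictly as an illustrative sketch):

```python
import math

def semaphore_requirements(max_connections, per_set=16):
    """Rough lower bounds on SEMMNI (sets) and SEMMNS (total semaphores),
    under the assumed one-set-per-16-backends, 17-semaphores-per-set scheme.
    Illustrative only; check the docs for the formula your version uses."""
    sets_needed = math.ceil(max_connections / per_set)       # SEMMNI
    semaphores_needed = sets_needed * (per_set + 1)          # SEMMNS
    return sets_needed, semaphores_needed

# Raising max_connections from the old default toward 200:
for conns in (32, 200):
    semmni, semmns = semaphore_requirements(conns)
    print(f"max_connections={conns}: SEMMNI >= {semmni}, SEMMNS >= {semmns}")
```

Under those assumptions, max_connections = 200 needs on the order of 13 semaphore sets and 221 semaphores, which is where stock SEMMNI/SEMMNS kernel limits can start to bite.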
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
Tom Lane wrote: I think that what this discussion is really leading up to is that we are going to decide to apply the same principle to performance. The out-of-the-box settings ought to give reasonable performance, and if your system can't handle it, you should have to take explicit action to acknowledge the fact that you aren't going to get reasonable performance. What I don't understand is why this is such a huge issue. Set it to a reasonable level (be it 4M or whatever the consensus is) and let the packagers worry about it if that's not appropriate. Isn't it their job to provide a good out-of-the-package experience? Won't they have better knowledge of what the system limits are for the platforms they package for? Worst case, couldn't they have a standard conf package and a special high-performance conf package in addition to all the base packages? After all, it's the users of the RPMs that are the real problem, not usually the people that compile it on their own. If you were having problems with the compile-it-yourself audience, couldn't you just hit them over the head three or four times (configure, install, initdb, failed startup, to name a few) reminding them to change it if it wasn't appropriate? What more can you really do? At some point, the end user has to bear some responsibility... -- Jeff Hoffmann PropertyKey.com
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
On Tuesday 11 February 2003 13:03, Robert Treat wrote: On Tue, 2003-02-11 at 12:08, Justin Clift wrote: b) Said benchmarking person knows very little about PostgreSQL, so they install the RPM's, packages, or whatever, and it works. Then they run whatever benchmark they've downloaded, or designed, or whatever Out of curiosity, how feasible is it for the rpm/package/deb/exe maintainers to modify their supplied postgresql.conf settings when building said distribution? AFAIK the default SHMMAX setting on Red Hat 8.0 is 32MB; it seems like bumping shared buffers to work within that amount would be acceptable inside the 8.0 rpm's. Yes, this is easy to do. But what is a sane default? I can patch any file I'd like to, but my preference is to patch as little as possible, as I'm trying to be generic here. I can't assume Red Hat 8 in the source RPM, and my binaries are to be preferred only if the distributor doesn't have updated ones. -- Lamar Owen WGCR Internet Radio 1 Peter 4:11
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
On Tue, 2003-02-11 at 10:20, Tom Lane wrote: Merlin Moncure [EMAIL PROTECTED] writes: May I make a suggestion that maybe it is time to start thinking about tuning the default config file? IMHO it's just a little bit too conservative. It's a lot too conservative. I've been thinking for awhile that we should adjust the defaults. The original motivation for setting shared_buffers = 64 was so that Postgres would start out-of-the-box on machines where SHMMAX is 1 meg (64 buffers = 1/2 meg, leaving 1/2 meg for our other shared data structures). At one time SHMMAX=1M was a pretty common stock kernel setting. But our other data structures blew past the 1/2 meg mark some time ago; at default settings the shmem request is now close to 1.5 meg. So people with SHMMAX=1M have already got to twiddle their postgresql.conf settings, or preferably learn how to increase SHMMAX. That means there is *no* defensible reason anymore for defaulting to 64 buffers. We could retarget to try to stay under SHMMAX=4M, which I think is the next boundary that's significant in terms of real-world platforms (isn't that the default SHMMAX on some BSDen?). That would allow us 350 or so shared_buffers, which is better, but still not really a serious choice for production work. What I would really like to do is set the default shared_buffers to 1000. That would be 8 meg worth of shared buffer space. Coupled with more-realistic settings for FSM size, we'd probably be talking a shared memory request approaching 16 meg. This is not enough RAM to bother any modern machine from a performance standpoint, but there are probably quite a few platforms out there that would need an increase in their stock SHMMAX kernel setting before they'd take it. So what this comes down to is making it harder for people to get Postgres running for the first time, versus making it more likely that they'll see decent performance when they do get it running.
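The arithmetic behind those numbers is simple enough to sketch: shared_buffers are 8 kB pages, plus a fixed chunk for the other shared structures. The overhead constant below is a placeholder for illustration, not a measured value:

```python
def shmem_request_bytes(shared_buffers: int, block_size: int = 8192,
                        overhead: int = 1 << 20) -> int:
    """Ballpark shared-memory request: buffer pool plus a rough
    fixed overhead for PostgreSQL's other shared structures.
    The 1 MB overhead figure is an illustrative assumption."""
    return shared_buffers * block_size + overhead

# 64 buffers: a 0.5 MB buffer pool, which with today's overhead
# already overshoots the old SHMMAX=1M target, as noted above
print(shmem_request_bytes(64) / (1 << 20))

# 1000 buffers: an 8 MB buffer pool before FSM and friends
print(shmem_request_bytes(1000) / (1 << 20))
```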
It's worth noting that increasing SHMMAX is not nearly as painful as it was back when these decisions were taken. Most people have moved to platforms where it doesn't even take a kernel rebuild, and we've acquired documentation that tells how to do it on all(?) our supported platforms. So I think it might be okay to expect people to do it. The alternative approach is to leave the settings where they are, and to try to put more emphasis in the documentation on the fact that the factory-default settings produce a toy configuration that you *must* adjust upward for decent performance. But we've not had a lot of success spreading that word, I think. With SHMMAX too small, you do at least get a pretty specific error message telling you so. Comments? I'd personally rather have people stumble trying to get PostgreSQL running, up front, rather than allowing the lowest common denominator to more easily run PostgreSQL only to be disappointed with it and move on. After it's all said and done, I would rather someone simply say, "it's beyond my skill set", and attempt to get help or walk away. That seems better than them being able to run it and say, "it's a dog", spreading word-of-mouth as such after they've left PostgreSQL behind. Worse yet, those that do walk away and claim it performs horribly are probably doing more harm to the PostgreSQL community than expecting someone to be able to install software ever can. Nutshell: "Easy to install, but horribly slow." or "Took a couple of minutes to configure and it rocks!" Seems fairly cut-n-dry to me. ;) The type of person who can't configure it or doesn't think to try is probably not doing a project that requires any serious performance. As long as you are running it on decent hardware postgres will run fantastic for anything but a very heavy load. I think there may be many people out there who have little experience but want an RDBMS to manage their data. Those people need something very, very easy.
Look at the following that mysql gets despite how poor a product it is. It's very, very easy. Mysql works great for many people's needs, but then when they need to do something real they need to move to a different database entirely. I think there is a huge advantage to having a product that can be set up very quickly out of the box. Those who need serious performance, hopefully for their employers' sake, will be more likely to take a few minutes to do some quick performance tuning. Rick Gigger
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
On Tue, 11 Feb 2003, Tom Lane wrote: It's a lot too conservative. I've been thinking for awhile that we should adjust the defaults. Some of these issues could be made to Just Go Away with some code changes. For example, using mmap rather than SysV shared memory would automatically optimize your memory usage, and get rid of the double-buffering problem as well. If we could find a way to avoid using semaphores proportional to the number of connections we have, then you wouldn't have to worry about that configuration parameter, either. In fact, some of this stuff might well improve our portability, too. For example, mmap is a POSIX standard, whereas shmget is only an X/Open standard. That makes me suspect that mmap is more widely available on non-Unix platforms. (But I could be wrong.) cjs -- Curt Sampson [EMAIL PROTECTED] +81 90 7737 2974 http://www.netbsd.org Don't you know, in this new Dark Age, we're all light. --XTC
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
On Tue, 11 Feb 2003, Rick Gigger wrote: The type of person who can't configure it or doesn't think to try is probably not doing a project that requires any serious performance. As long as you are running it on decent hardware postgres will run fantastic for anything but a very heavy load. I think there may be many people out there who have little experience but want an RDBMS to manage their data. Those people need something very, very easy. Look at the following that mysql gets despite how poor a product it is. It's very, very easy. Mysql works great for many people's needs but then when they need to do something real they need to move to a different database entirely. I think there is a huge advantage to having a product that can be set up very quickly out of the box. Those who need serious performance, hopefully for their employers' sake, will be more likely to take a few minutes to do some quick performance tuning. Very good point. I'm pushing for changes that will NOT negatively impact joe beginner on the major platforms (Linux, BSD, Windows) in terms of install. I figure anyone installing on big iron already knows enough about their OS that we don't have to worry about shared buffers being too big for that machine. So, a compromise of faster performance out of the box, with little or no negative user impact, seems the sweet spot here. I'm thinking a good knee setting for each one, where not too much memory / semaphores / file handles get gobbled up, but the database isn't pokey. The poor performance of Postgresql in its current default configuration HAS cost us users, trust me, I know a few we've almost lost where I work that I converted after some quick tweaking of their database. In its stock form Postgresql is very slow at large simple queries, like 'select * from table1 t1 natural join table2 t2 where t1.field='a';' where you get back something like 10,000 rows. The real bottleneck here is sort_mem.
A simple bump up to 8192 or so makes the database much more responsive. If we're looking at changing default settings for 7.4, then we should look at changing ALL of them that matter, since we'll have the most time to shake out problems if we do them early, and we won't have four or five rounds of setting different defaults over time and finding the limitations of the HOST OSes one at a time.
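Pulling the concrete numbers from this sub-thread together, a starting postgresql.conf tweak along these lines might look like the following. These are the values floated in the discussion, not blessed defaults:

```
# Values suggested in this thread -- starting points, not official defaults
shared_buffers = 1000     # ~8 MB of buffer cache (8 kB pages); stock is 64
sort_mem = 8192           # in kB; helps the large join/sort queries above
```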
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
On Wed, 12 Feb 2003, Curt Sampson wrote: On Tue, 11 Feb 2003, Tom Lane wrote: It's a lot too conservative. I've been thinking for awhile that we should adjust the defaults. Some of these issues could be made to Just Go Away with some code changes. For example, using mmap rather than SysV shared memory would automatically optimize your memory usage, and get rid of the double-buffering problem as well. If we could find a way to avoid using semaphores proportional to the number of connections we have, then you wouldn't have to worry about that configuration parameter, either. In fact, some of this stuff might well improve our portability, too. For example, mmap is a POSIX standard, whereas shmget is only an X/Open standard. That makes me suspect that mmap is more widely available on non-Unix platforms. (But I could be wrong.) I'll vote for mmap. I use the mm libs with apache/openldap/authldap and it is very fast and pretty common nowadays. It seems quite stable as well.
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
On Tue, Feb 11, 2003 at 17:42:06 -0700, scott.marlowe [EMAIL PROTECTED] wrote: The poor performance of Postgresql in its current default configuration HAS cost us users, trust me, I know a few we've almost lost where I work that I converted after some quick tweaking of their database. About two years ago I talked some people into trying it at work to use with IMP/Horde, which had been having some corruption problems while using MySQL (though it wasn't necessarily a problem with MySQL). I told them to be sure to use 7.1. When they tried it out it couldn't keep up with the load. I asked the guys what they tried and found out they couldn't find 7.1 rpms and didn't want to compile from source and so ended up using 7.0.?. Also, as far as I could tell from talking to them, they didn't do any tuning at all. They weren't interested in taking another look at it after that. We are still using MySQL with that system today. One of our DBAs is using it for some trial projects (including one for me) even though we have a site license for Oracle.
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
After it's all said and done, I would rather someone simply say, "it's beyond my skill set", and attempt to get help or walk away. That seems better than them being able to run it and say, "it's a dog", spreading word-of-mouth as such after they've left PostgreSQL behind. Worse yet, those that do walk away and claim it performs horribly are probably doing more harm to the PostgreSQL community than expecting someone to be able to install software ever can. RANT And that my friends is why PostgreSQL is still relatively obscure. Dude - I hang out on PHPBuilder's database forums and you wouldn't believe how often the "oh, don't use Postgres, it has a history of database corruption problems" thing is mentioned. Chris
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
On Tuesday 11 Feb 2003 10:56 pm, you wrote: Josh Berkus [EMAIL PROTECTED] writes: What if we supplied several sample .conf files, and let the user choose which to copy into the database directory? We could have a high read performance profile, and a transaction database profile, and a workstation profile, and a low impact profile. Uh ... do we have a basis for recommending any particular sets of parameters for these different scenarios? This could be a good idea in the abstract, but I'm not sure I know enough to fill in the details. Let's take a very simple scenario for supplying pre-configured postgresql.conf files. Assume that SHMMAX = total memory / 2 and supply different config files for 64MB/128MB/256MB/512MB and above. Is it simple enough? Shridhar
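Shridhar's rule of thumb could be sketched as a small selector; the bucket boundaries and file names here are purely illustrative:

```python
def pick_conf_profile(total_ram_mb: int) -> str:
    """Choose a canned postgresql.conf under the rule of thumb above:
    assume SHMMAX = total RAM / 2 and bucket by machine size.
    Bucket boundaries and file names are hypothetical."""
    shmmax_mb = total_ram_mb // 2
    for bound_mb, conf_name in [(32, "postgresql.conf.64mb"),
                                (64, "postgresql.conf.128mb"),
                                (128, "postgresql.conf.256mb"),
                                (256, "postgresql.conf.512mb")]:
        if shmmax_mb <= bound_mb:
            return conf_name
    return "postgresql.conf.large"

# A 128 MB machine gets the 128 MB profile (SHMMAX assumed ~64 MB):
print(pick_conf_profile(128))
```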
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
Tom Lane writes: We could retarget to try to stay under SHMMAX=4M, which I think is the next boundary that's significant in terms of real-world platforms (isn't that the default SHMMAX on some BSDen?). That would allow us 350 or so shared_buffers, which is better, but still not really a serious choice for production work. What is a serious choice for production work? And what is the ideal choice? The answer probably involves some variables, but maybe we should get values for those variables in each case and work from there. -- Peter Eisentraut [EMAIL PROTECTED]
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
Tom, Justin, What I would really like to do is set the default shared_buffers to 1000. That would be 8 meg worth of shared buffer space. Coupled with more-realistic settings for FSM size, we'd probably be talking a shared memory request approaching 16 meg. This is not enough RAM to bother any modern machine from a performance standpoint, but there are probably quite a few platforms out there that would need an increase in their stock SHMMAX kernel setting before they'd take it. What if we supplied several sample .conf files, and let the user choose which to copy into the database directory? We could have a high read performance profile, and a transaction database profile, and a workstation profile, and a low impact profile. We could even supply a Perl script that would adjust SHMMAX and SHMALL on platforms where this can be done from the command line. -- Josh Berkus Aglio Database Solutions San Francisco
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
Josh Berkus [EMAIL PROTECTED] writes: What if we supplied several sample .conf files, and let the user choose which to copy into the database directory? We could have a high read performance profile, and a transaction database profile, and a workstation profile, and a low impact profile. Uh ... do we have a basis for recommending any particular sets of parameters for these different scenarios? This could be a good idea in the abstract, but I'm not sure I know enough to fill in the details. A lower-tech way to accomplish the same result is to document these alternatives in postgresql.conf comments and encourage people to review that file, as Steve Crawford just suggested. But first we need the raw knowledge. regards, tom lane
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
Josh Berkus wrote: Tom, Justin, snip What if we supplied several sample .conf files, and let the user choose which to copy into the database directory? We could have a high read performance profile, and a transaction database profile, and a workstation profile, and a low impact profile. We could even supply a Perl script that would adjust SHMMAX and SHMALL on platforms where this can be done from the command line. This might have value as the next step in the process of: a) Are we going to have better defaults? or b) Let's stick with the current approach. If we decide to go with better (changed) defaults, we may also be able to figure out a way of having profiles that could optionally be chosen from. As a longer term thought, it would be nice if the profiles weren't just hard-coded example files, but more of: pg_autotune --setprofile=xxx Or similar utility, and it did all the work. Named profiles being one capability, and other tuning measurements (i.e. cpu costings, disk performance profiles, etc) being the others. Regards and best wishes, Justin Clift -- My grandfather once told me that there are two kinds of people: those who work and those who take the credit. He told me to try to be in the first group; there was less competition there. - Indira Gandhi
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
Tom Lane wrote: snip Uh ... do we have a basis for recommending any particular sets of parameters for these different scenarios? This could be a good idea in the abstract, but I'm not sure I know enough to fill in the details. A lower-tech way to accomplish the same result is to document these alternatives in postgresql.conf comments and encourage people to review that file, as Steve Crawford just suggested. But first we need the raw knowledge. Without too much hacking around, you could pretty easily adapt the pg_autotune code to do proper profiles of a system with different settings. i.e. increment one setting at a time, run pgbench on it with some decent amount of transactions and users, stuff the results into a different database. Aggregate data over time kind of thing. Let it run for a week, etc. If it's helpful, there's a 100% spare Athlon 1.6GHz box around with (choose your OS) + Adaptec 29160 + 512MB RAM + 2 x 9GB Seagate Cheetah 10k rpm drives hanging around. No stress to set that up and let it run any long-term tests you'd like, plus send back results. Regards and best wishes, Justin Clift
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
Tom, Justin, Uh ... do we have a basis for recommending any particular sets of parameters for these different scenarios? This could be a good idea in the abstract, but I'm not sure I know enough to fill in the details. Sure. Mostly-Read database, few users, good hardware, complex queries: = High shared buffers and sort mem, high geqo and join collapse thresholds, moderate fsm settings, defaults for WAL. Same as above with many users and simple queries (webserver) = same as above, except lower sort mem and higher connection limit. High-Transaction Database = Moderate shared buffers and sort mem, high FSM settings, increase WAL files and buffers. Workstation = Moderate to low shared buffers and sort mem, moderate FSM, defaults for WAL, etc. Low-Impact server = current defaults, more or less. While none of these settings will be *perfect* for anyone, they will be considerably better than what's shipping with postgresql. And, based on my Learning Perl knowledge, I'm pretty sure I could write the program. All we'd need to do is argue out, on the PERFORMANCE list, what's a good value for each profile. That's the tough part. The Perl script is easy. A lower-tech way to accomplish the same result is to document these alternatives in postgresql.conf comments and encourage people to review that file, as Steve Crawford just suggested. But first we need the raw knowledge. That's also not a bad approach ... the CONF file should be more heavily commented, period, regardless of what approach we take. I volunteer to work on this with other participants. Without too much hacking around, you could pretty easily adapt the pg_autotune code to do proper profiles of a system with different settings. No offense, Justin, but I don't know anyone else who's gotten your pg_autotune script to run other than you. And pg_bench has not been a useful performance measure for any real database server I have worked on so far.
I'd be glad to help improve pg_autotune, with two caveats: 1) We will still need to figure out the profiles above so that we have decent starting values. 2) I suggest that we do pg_autotune in Perl or Python or another higher-level language. This would enable several performance buffs who don't do C to contribute to it, and a performance-tuning script is a higher-level-language sort of function, anyway. -- Josh Berkus Aglio Database Solutions San Francisco
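The profile categories above could be distilled into a small table that such a script works from; every number below is a placeholder to be argued out on the performance list, not a recommendation:

```python
# Hypothetical profile table for the five scenarios described above.
# All values are illustrative starting points only.
PROFILES = {
    "mostly_read":      {"shared_buffers": 4000, "sort_mem": 16384,
                         "max_connections": 32},
    "webserver":        {"shared_buffers": 4000, "sort_mem": 4096,
                         "max_connections": 200},
    "high_transaction": {"shared_buffers": 2000, "sort_mem": 8192,
                         "checkpoint_segments": 16},
    "workstation":      {"shared_buffers": 1000, "sort_mem": 4096},
    "low_impact":       {"shared_buffers": 64,   "sort_mem": 1024},
}

def render_conf(profile: str) -> str:
    """Emit postgresql.conf assignment lines for a named profile."""
    return "\n".join(f"{key} = {val}"
                     for key, val in PROFILES[profile].items())

print(render_conf("workstation"))
```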
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
Justin Clift [EMAIL PROTECTED] writes: Tom Lane wrote: Uh ... do we have a basis for recommending any particular sets of parameters for these different scenarios? This could be a good idea in the abstract, but I'm not sure I know enough to fill in the details. Without too much hacking around, you could pretty easily adapt the pg_autotune code to do proper profiles of a system with different settings. i.e. increment one setting at a time, run pgbench on it with some decent amount of transactions and users, stuff the results into a different database. If I thought that pgbench was representative of anything, or even capable of reliably producing repeatable numbers, then I might subscribe to results derived this way. But I have little or no confidence in pgbench. Certainly I don't see how you'd use it to produce recommendations for a range of application scenarios, when it's only one very narrow scenario itself. regards, tom lane
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
On Tue, 2003-02-11 at 13:01, Tom Lane wrote: Jon Griffin [EMAIL PROTECTED] writes: So it appears that linux at least is way above your 8 meg point, unless I am missing something. Yeah, AFAIK all recent Linuxen are well above the range of parameters that I was suggesting (and even if they weren't, Linux is particularly easy to change the SHMMAX setting on). It's other Unixoid platforms that are likely to have a problem. Particularly the ones where you have to rebuild the kernel to change SHMMAX; people may be afraid to do that. The issue as I see it is: better-performing vs. more-compatible out-of-the-box defaults. Perhaps a compromise (hack?): Set the default to some value that performs well, a value we all agree is not too big (16M? 32M?). On startup, if the OS can't give us what we want, instead of failing, we can try again with a smaller amount, perhaps half the default; if that fails, try again with half of that, until we reach some bottom threshold (1M?). The argument against this might be: "When I set shared_buffers=X, I want X shared buffers. I don't want it to fail silently and give me less than what I need / want." To address this we might want to add a guc option that controls this behavior. So we ship postgresql.conf with 32M of shared memory and auto_shared_mem_reduction = true, with a comment that the administrator might want to turn this off for production. Thoughts? I think this will allow most uninformed users to get decent performing defaults, as most systems will accommodate this larger value.
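The proposed fallback behaviour is essentially a halving loop around the shared-memory allocation. A minimal sketch, with a stand-in `try_allocate` callable where the real server would call shmget() (function name, floor, and structure are all hypothetical):

```python
def allocate_with_fallback(request_bytes: int, try_allocate,
                           floor_bytes: int = 1 << 20) -> int:
    """Sketch of the proposed auto_shared_mem_reduction behaviour:
    ask for the configured amount, halve on failure, and give up
    below a 1 MB floor. try_allocate stands in for shmget()."""
    size = request_bytes
    while size >= floor_bytes:
        if try_allocate(size):
            return size
        size //= 2
    raise RuntimeError("could not allocate shared memory above the floor")

# Simulate a kernel whose SHMMAX is 8 MB while postgresql.conf asks for 32 MB:
shmmax = 8 << 20
got = allocate_with_fallback(32 << 20, lambda n: n <= shmmax)
print(got >> 20)  # megabytes actually obtained after halving twice
```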
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
Peter Eisentraut [EMAIL PROTECTED] writes: Tom Lane writes: We could retarget to try to stay under SHMMAX=4M, which I think is the next boundary that's significant in terms of real-world platforms (isn't that the default SHMMAX on some BSDen?). That would allow us 350 or so shared_buffers, which is better, but still not really a serious choice for production work. What is a serious choice for production work? Well, as I commented later in that mail, I feel that 1000 buffers is a reasonable choice --- but I have to admit that I have no hard data to back up that feeling. Perhaps we should take this to the pgsql-perform list and argue about reasonable choices. A separate line of investigation is what is the lowest common denominator nowadays? I think we've established that SHMMAX=1M is obsolete, but what replaces it as the next LCD? 4M seems to be correct for some BSD flavors, and I can confirm that that's the current default for Mac OS X --- any other comments? regards, tom lane
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
We could retarget to try to stay under SHMMAX=4M, which I think is the next boundary that's significant in terms of real-world platforms (isn't that the default SHMMAX on some BSDen?). That would allow us 350 or so shared_buffers, which is better, but still not really a serious choice for production work. What is a serious choice for production work? Well, as I commented later in that mail, I feel that 1000 buffers is a reasonable choice --- but I have to admit that I have no hard data to back up that feeling. Perhaps we should take this to the pgsql-perform list and argue about reasonable choices. Damn. Another list I have to subscribe to! The results I just posted indicate that 1000 buffers is really quite bad performance compared to 4000, perhaps up to 100 TPS for selects and 30 TPS for TPC-B. Still, that 1000 is in itself vastly better than 64!! Chris
Re: [HACKERS] Changing the default configuration (was Re: [pgsql-advocacy]
A separate line of investigation is what is the lowest common denominator nowadays? I think we've established that SHMMAX=1M is obsolete, but what replaces it as the next LCD? 4M seems to be correct for some BSD flavors, and I can confirm that that's the current default for Mac OS X --- any other comments? It's 1025 * 4k pages on FreeBSD = 4MB Chris