Re: [HACKERS] bgwriter never dies
On Thu, Feb 26, 2004 at 17:09:21 -0500, Tom Lane [EMAIL PROTECTED] wrote:
> Simon Riggs [EMAIL PROTECTED] writes:
>> Should we have a pgmon process that watches the postmaster and restarts it if required?
>
> I doubt it; in practice the postmaster is *very* reliable (because it doesn't do much), and so I'm not sure that adding a watchdog is going to increase the net reliability of the system. The sorts of things that could take out the postmaster are likely to take out a watchdog too.

Running postgres under daemontools is an easy way to accomplish this. The one case it won't handle is if some other process is assigned the process id of the old postgres process before the new one starts up.

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] bgwriter never dies
Tom,

>> ... In the case of a postmaster crash, I think something in the system is so wrong that I'd prefer an immediate shutdown.
>
> Surely some other people have opinions on this? Hello out there?

Well, my opinion is based on the question: can we restart the postmaster if it dies while the other backends are still running? If not, why are we even discussing this?

Otherwise, as someone who supports a number of high-demand databases, I'd like to give you the rules that such applications need to have enforced in order to continue using Postgres:

1) If any transaction is reported to the client as complete, it must be written to disk and not rolled back, even in the event of a sudden power outage.

2) If any transaction is *not* reported as complete, then except in split-second timing cases it should be rolled back. Always.

Basically, in high-demand situations where one expects occasional failures due to load, one depends on the application log, the syslog, and the transaction log being in sync. Otherwise, on restart one doesn't know what has or hasn't been committed.

--
-Josh Berkus
Aglio Database Solutions
San Francisco
Re: [HACKERS] bgwriter never dies
Josh Berkus [EMAIL PROTECTED] writes:
> Well, my opinion is based on the question, can we restart the postmaster if it dies and the other backends are still running?

You can't start a fresh postmaster until the last of the old backends is gone (and yes, there's an interlock to prevent you from making this mistake, though people have been known to override the interlock :-().

This means there's a tradeoff between allowing new sessions to start soon and letting old ones finish their work. It seems to me that either of these goals might have priority in particular situations, and that it would not be wise for us to design in a forced choice. With the current setup, the DBA can manually SIGINT or SIGTERM individual backends when he deems them unworthy of being allowed to finish. If we put in a forced quit as Jan is suggesting, we take away that flexibility.

			regards, tom lane
Re: [HACKERS] bgwriter never dies
Simon Riggs [EMAIL PROTECTED] writes:
> Should we have a pgmon process that watches the postmaster and restarts it if required?

I doubt it; in practice the postmaster is *very* reliable (because it doesn't do much), and so I'm not sure that adding a watchdog is going to increase the net reliability of the system. The sorts of things that could take out the postmaster are likely to take out a watchdog too.

			regards, tom lane
Re: [HACKERS] bgwriter never dies
On Tuesday 24 February 2004 23:47, Neil Conway wrote:
> Jan Wieck [EMAIL PROTECTED] writes:
>> In the case of a postmaster crash, I think something in the system is so wrong that I'd prefer an immediate shutdown.
>
> I agree. Allowing existing backends to commit transactions after the postmaster has died doesn't strike me as being that useful, and is probably more confusing than anything else. That said, if it takes some period of time between the death of the postmaster and the shutdown of any backends, we *need* to ensure that any transactions committed during that period still make it to durable storage.

Yes, roll back any existing/uncommitted transactions and shut down those connections, but make sure that committed transactions are stored on disk before exiting completely.

Robert Treat
--
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL
Re: [HACKERS] bgwriter never dies
At 12:19 AM 26/02/2004, Robert Treat wrote:
> Yes, roll back any existing/uncommited transactions and shutdown

I'm not even sure I'd go with the rollback; whatever killed the PM may make the rest of the system unstable. I'd prefer to see the transactions rolled back (if necessary) as part of the log recovery on PM startup, not by possibly dying PG processes.

Philip Warner
Albatross Consulting Pty. Ltd. (A.B.N. 75 008 659 498)
Tel: (+61) 0500 83 82 81
Fax: (+61) 03 5330 3172
http://www.rhyme.com.au
PGP key available upon request, and from pgp.mit.edu:11371
Re: [HACKERS] bgwriter never dies
Philip Warner [EMAIL PROTECTED] writes:
> I'm not event sure I'd go with the rollback; whatever killed the PM may make the rest of the system unstable. I'd prefer to see the transactions rolled back (if necessary) as part of the log recovery on PM startup, not by possibly dying PG proceses.

Well, in the first place rollback is not an explicit action in Postgres; you're thinking of Oracle or some other old-line technology. There's nothing that has to happen to undo the effects of a failed transaction.

But my real problem with the above line of reasoning is that there is no basis for assuming that a postmaster failure has anything to do with problems at the backend level. We have always gone out of our way to ensure that the postmaster is insulated from backend failure causes --- it doesn't touch any but the simplest shared-memory data structures, for example. This design rule exists mostly to try to ensure that the postmaster will survive backend crashes, but the effects cut both ways: there is no reason that a backend won't survive a postmaster crash.

In practice, the few postmaster crashes I've seen have been due to localized bugs in postmaster-only code, or to a Linux kernel randomly seizing on the postmaster as the victim for an out-of-memory kill. I have never seen the postmaster crash as a result of backend-level problems, and if I did I'd be out to fix it immediately.

So my opinion is that "kill all the backends when the postmaster crashes" is a bad idea that will only result in a net reduction in system reliability. There is no point in building insulated, independent components if you then put in logic that forces the system uptime to be the minimum of the component uptimes.

			regards, tom lane
Re: [HACKERS] bgwriter never dies
At 04:01 PM 26/02/2004, Tom Lane wrote:
> there is no basis for assuming that a postmaster failure has anything to do with problems at the backend level ... So my opinion is that "kill all the backends when the postmaster crashes" is a bad idea

Sounds fine. Then a system that would allow a new PM to start ASAP and serve other connections would be great. I assume that means an orderly shutdown and restart, but I can't see a way to make the restart work.

Philip Warner
Albatross Consulting Pty. Ltd. (A.B.N. 75 008 659 498)
Tel: (+61) 0500 83 82 81
Fax: (+61) 03 5330 3172
http://www.rhyme.com.au
PGP key available upon request, and from pgp.mit.edu:11371
Re: [HACKERS] bgwriter never dies
Tom Lane wrote:
> Jan Wieck [EMAIL PROTECTED] writes:
>> Tom Lane wrote:
>>> Maybe there should be a provision similar to the stats collector's check-for-read-ready-from-a-pipe?
>>
>> the case of the bgwriter is a bit of a twist here. In contrast to the collectors it is connected to the shared memory. So it can keep resources and also even worse, it could write() after the postmaster died.
>
> That's not worse, really. Any backends that are still alive are committing real live transactions --- they're telling their clients they committed, so we'd better commit. I don't mind if performance gets worse or if we lose pg_stats statistics, but we'd better not adopt the attitude that transaction correctness no longer matters after a postmaster crash.
>
> So one thing we ought to think about here is whether system correctness depends on the bgwriter continuing to run until the last backend is gone. AFAICS that is not true now --- the bgwriter just improves performance --- but we'd better decide what our plan for the future is.
>
>> Maybe there is a chance to create a watchdog for free here. Do we currently create our own process group, with all processes under the postmaster belonging to it?
>
> We do not; I'm not sure the notion of a process group is even portable, and I am pretty sure that the API to control process grouping isn't.
>
>> If the bgwriter would at the times it naps check if its parent process is init, (Win32 note, check if the postmaster does not exist any more instead), it could kill the entire process group on behalf of the dead postmaster.
>
> I don't think we want that. IMHO the preferred behavior if the postmaster crashes should be like a smart shutdown --- you don't spawn any more backends (obviously) but existing backends should be allowed to run until their clients exit. That's how things have always worked anyway...
>
> [ thinks ... ] If we do want it we don't need any process-group assumptions. The bgwriter is connected to shmem so it can scan the PGPROC array and issue kill() against each sibling.

Right. Which would change the backend behaviour from a smart shutdown to an immediate shutdown.

In the case of a postmaster crash, I think something in the system is so wrong that I'd prefer an immediate shutdown.

Jan

--
# It's easier to get forgiveness for being wrong than for being right.
# Let's break this rule - forgive me.
[EMAIL PROTECTED]
Re: [HACKERS] bgwriter never dies
Jan Wieck [EMAIL PROTECTED] writes:
> In the case of a postmaster crash, I think something in the system is so wrong that I'd prefer an immediate shutdown.

I agree. Allowing existing backends to commit transactions after the postmaster has died doesn't strike me as being that useful, and is probably more confusing than anything else. That said, if it takes some period of time between the death of the postmaster and the shutdown of any backends, we *need* to ensure that any transactions committed during that period still make it to durable storage.

-Neil
[HACKERS] bgwriter never dies
I noticed while doing some debugging this morning that if the postmaster crashes for some reason (e.g. kill -9), the bgwriter process never goes away. Backends will eventually exit when their clients quit, and the stats collection processes shut down nicely, but the bgwriter process has to be killed by hand. This doesn't seem like a real good thing.

Maybe there should be a provision similar to the stats collector's check-for-read-ready-from-a-pipe?

			regards, tom lane
Re: [HACKERS] bgwriter never dies
Tom Lane wrote:
> I noticed while doing some debugging this morning that if the postmaster crashes for some reason (eg kill -9) the bgwriter process never goes away. Backends will eventually exit when their clients quit, and the stats collection processes shut down nicely, but the bgwriter process has to be killed by hand. This doesn't seem like a real good thing. Maybe there should be a provision similar to the stats collector's check-for-read-ready-from-a-pipe?

Hmm, the case of the bgwriter is a bit of a twist here. In contrast to the collectors, it is connected to the shared memory. So it can keep holding resources and, even worse, it could write() after the postmaster died.

Maybe there is a chance to create a watchdog for free here. Do we currently create our own process group, with all processes under the postmaster belonging to it? If the bgwriter checked at the times it naps whether its parent process is init (Win32 note: check whether the postmaster still exists instead), it could kill the entire process group on behalf of the dead postmaster. This is one more system call at a time when the bgwriter is doing a system call with a timeout to nap anyway.

Jan

--
# It's easier to get forgiveness for being wrong than for being right.
# Let's break this rule - forgive me.
[EMAIL PROTECTED]
Re: [HACKERS] bgwriter never dies
Jan Wieck [EMAIL PROTECTED] writes:
> Tom Lane wrote:
>> Maybe there should be a provision similar to the stats collector's check-for-read-ready-from-a-pipe?
>
> the case of the bgwriter is a bit of a twist here. In contrast to the collectors it is connected to the shared memory. So it can keep resources and also even worse, it could write() after the postmaster died.

That's not worse, really. Any backends that are still alive are committing real live transactions --- they're telling their clients they committed, so we'd better commit. I don't mind if performance gets worse or if we lose pg_stats statistics, but we'd better not adopt the attitude that transaction correctness no longer matters after a postmaster crash.

So one thing we ought to think about here is whether system correctness depends on the bgwriter continuing to run until the last backend is gone. AFAICS that is not true now --- the bgwriter just improves performance --- but we'd better decide what our plan for the future is.

> Maybe there is a chance to create a watchdog for free here. Do we currently create our own process group, with all processes under the postmaster belonging to it?

We do not; I'm not sure the notion of a process group is even portable, and I am pretty sure that the API to control process grouping isn't.

> If the bgwriter would at the times it naps check if its parent process is init, (Win32 note, check if the postmaster does not exist any more instead), it could kill the entire process group on behalf of the dead postmaster.

I don't think we want that. IMHO the preferred behavior if the postmaster crashes should be like a smart shutdown --- you don't spawn any more backends (obviously) but existing backends should be allowed to run until their clients exit. That's how things have always worked anyway...

[ thinks ... ] If we do want it we don't need any process-group assumptions. The bgwriter is connected to shmem so it can scan the PGPROC array and issue kill() against each sibling.

We oughta debate the desired behavior first though.

			regards, tom lane