Re: [HACKERS] Automatically setting work_mem
On Fri, 2006-03-17 at 13:29 +0800, Qingqing Zhou wrote:
> Simon Riggs [EMAIL PROTECTED] wrote:
> Interesting. I understand that shared_work_mem is process-wise,
> allocate-when-use, request-may-or-may-not-get-it (as you have pointed
> out, this may put the planner in a hard situation if we are sensitive
> to work_mem). But I still have something unclear. Let's say we have a
> sort operation that needs 1024 of memory. So the DBA may have the
> following two options:
>
> (1) SET work_mem = 1024; SET shared_work_mem = 0; do sort;
> (2) SET work_mem = 512; SET shared_work_mem = 512; do sort;
>
> So what's the difference between these two strategies?
> (1) Running time: do they use the same amount of memory? Why is
> option 2 better than 1?
> (2) Idle time: after the sort is done, option 1 will return all 1024
> to the OS and 2 will still keep 512?

The differences are:

(1) No performance difference - all memory would be allocated and
deallocated at the same time in either case.

(2) shared_work_mem is SUSET rather than USERSET as work_mem is...

(3) The value is set for the whole server rather than by individual
tuning, so it would not make sense to use it as you have shown, even
though you could if you were the superuser.

The goal is to do this:

	do sort;	/* no work_mem settings at all */

with shared_work_mem set once for the whole server in postgresql.conf.

Best Regards, Simon Riggs

---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
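The intended server-wide usage could be sketched in postgresql.conf roughly like this (a hypothetical fragment: shared_work_mem was only a proposal in this thread and never became a real GUC, so the name and units here are assumptions):

```
# postgresql.conf (sketch of the proposed arrangement)
work_mem = 512            # per-operation private memory, USERSET as today
shared_work_mem = 512     # proposed server-wide pool, SUSET, set once here
```

Individual sessions would then simply run their queries with no per-session work_mem tuning at all.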
[HACKERS] Commit turns into rollback?
What sense is this supposed to make?

= begin;
BEGIN
= blah;
ERROR: ...
= commit;
ROLLBACK

Even if this is justifiable, this behavior is not documented (in any
obvious place).

-- Peter Eisentraut http://developer.postgresql.org/~petere/
Re: [HACKERS] Commit turns into rollback?
On Fri, 2006-03-17 at 10:12, Peter Eisentraut wrote:
> What sense is this supposed to make?
>
> = begin;
> BEGIN
> = blah;
> ERROR: ...
> = commit;
> ROLLBACK

The return status tells you what actually happened.

> Even if this is justifiable, this behavior is not documented (in any
> obvious place).

What would you suggest as an obvious place?

- Hannu
Re: [HACKERS] Commit turns into rollback?
On Friday, 17 March 2006 11:25, Hannu Krosing wrote:
> > = commit;
> > ROLLBACK
>
> The return status tells you what actually happened.

If I send a COMMIT, I want a commit to happen or an error.

> > Even if this is justifiable, this behavior is not documented (in any
> > obvious place).
>
> What would you suggest as an obvious place?

The COMMIT reference page would be a start.

-- Peter Eisentraut http://developer.postgresql.org/~petere/
Re: [HACKERS] Commit turns into rollback?
On Fri, 2006-03-17 at 11:27, Peter Eisentraut wrote:
> If I send a COMMIT, I want a commit to happen or an error.

You already got several errors. Any suggestion for how to get out of
the ERROR state?

= begin;
BEGIN
= blah;
ERROR: ...
= commit;
ERROR: ...
= select 1;
ERROR: current transaction is aborted, commands ignored until end of
transaction block

Or would you like a mode where you need to explicitly send a ROLLBACK
to get out?

> > Even if this is justifiable, this behavior is not documented (in any
> > obvious place).
>
> What would you suggest as an obvious place?
>
> The COMMIT reference page would be a start.

Good point :)

Hannu
Re: [HACKERS] Commit turns into rollback?
On Friday, 17 March 2006 13:03, Hannu Krosing wrote:
> > If I send a COMMIT, I want a commit to happen or an error.
>
> You already got several errors.

Right, but in a defensively programmed application, I want to get all
the errors, all the time, at every possible place.

> Or would you like a mode where you need to explicitly send a ROLLBACK
> to get out?

I faintly recall that this used to be the behavior.

-- Peter Eisentraut http://developer.postgresql.org/~petere/
Re: [HACKERS] Commit turns into rollback?
On Fri, 2006-03-17 at 14:10, Peter Eisentraut wrote:
> > > If I send a COMMIT, I want a commit to happen or an error.
> >
> > You already got several errors.
>
> Right, but in a defensively programmed application, I want to get all
> the errors, all the time, at every possible place.
>
> > Or would you like a mode where you need to explicitly send a
> > ROLLBACK to get out?
>
> I faintly recall that this used to be the behavior.

In 7.4 it was worse:

ahf=# begin;
BEGIN
ahf=# select 1/0;
ERROR: division by zero
ahf=# commit;
COMMIT

It still did a rollback, but reported a COMMIT.

---
Hannu
Re: [HACKERS] Automatically setting work_mem
Qingqing Zhou [EMAIL PROTECTED] writes:
> So what's the difference between these two strategies?
> (1) Running time: do they use the same amount of memory? Why is
> option 2 better than 1?
> (2) Idle time: after the sort is done, option 1 will return all 1024
> to the OS and 2 will still keep 512?

Point 2 is actually a serious flaw in Simon's proposal, because there is
no portable way to make malloc return freed memory to the OS. Some
mallocs will do that ... in some cases ... but many simply don't ever
move the brk address down. It's not an easy thing to do when the arena
gets cluttered with a lot of different alloc chunks and only some of
them get freed. So the semantics we'd have to adopt is that once a
backend claims some shared work mem, it keeps it until process exit.

I don't think that makes the idea worthless, because there's usually a
clear distinction between processes doing expensive stuff and processes
doing cheap stuff. But it's definitely a limitation. Also, if you've
got a process doing expensive stuff, it's certainly possible to expect
the user to just increase work_mem locally. (BTW, given that work_mem
is locally increasable, I'm not sure what's the point of considering
that shared_work_mem has to be SUSET. It's not to prevent users from
blowing out memory.)

My own thoughts about the problems with our work_mem arrangement are
that the real problem is the rule that we can allocate work_mem per sort
or hash operation; this makes the actual total memory use per backend
pretty unpredictable for nontrivial queries. I don't know how to fix
this though. The planner needs to know the work_mem that will be used
for any one of these operations in order to estimate costs, so simply
trying to divide up work_mem among the operations of a completed plan
tree is not going to improve matters.

regards, tom lane
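Tom's point about the program break can be illustrated with a small C sketch (the behavior is allocator-specific, which is exactly the portability problem; the observation in the comment assumes a typical glibc-style malloc):

```c
#include <stdlib.h>
#include <unistd.h>

/* Returns 1 if the program break moved back down after a malloc/free
 * cycle of nbytes, 0 if the allocator kept the memory for reuse (the
 * common case Tom describes), and -1 on allocation failure.  With
 * glibc, a 64 kB chunk lives on the brk heap, and freeing it does not
 * shrink the break because it is below the default trim threshold. */
static int
break_shrinks_after_free(size_t nbytes)
{
    char *p = malloc(nbytes);
    if (p == NULL)
        return -1;
    p[0] = 1;                   /* touch it so the page is really used */
    void *grown = sbrk(0);      /* program break while the chunk is live */
    free(p);
    void *after = sbrk(0);      /* program break after free() */
    return after < grown ? 1 : 0;
}
```

On a typical Linux/glibc system, break_shrinks_after_free(64 * 1024) returns 0: the memory stays claimed by the process even though it was freed, which is why a backend that once grabbed shared work memory would effectively hold it until exit.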
Re: [HACKERS] Commit turns into rollback?
Peter Eisentraut [EMAIL PROTECTED] writes:
> On Friday, 17 March 2006 13:03, Hannu Krosing wrote:
> > Or would you like a mode where you need to explicitly send a
> > ROLLBACK to get out?
>
> I faintly recall that this used to be the behavior.

No, it never was like that, and I don't think we can change it without
breaking an awful lot of stuff --- particularly psql scripts, which
would basically be unable to do anything at all with such a definition.
A script can't change its behavior depending on whether there was an
error earlier.

It would also move us further away from the SQL standard. The spec says
that COMMIT ends the transaction, full stop, not ends it only if you're
not in an error state. Of course the spec hasn't got a notion of a
transaction error state at all, but my point is that making COMMIT leave
you in the broken transaction is not an improvement compliance-wise.

I am surprised that the COMMIT reference page makes no mention of the
point though. That needs to be fixed.

regards, tom lane
Re: [HACKERS] Automatically setting work_mem
> My own thoughts about the problems with our work_mem arrangement are
> that the real problem is the rule that we can allocate work_mem per
> sort or hash operation; this makes the actual total memory use per
> backend pretty unpredictable for nontrivial queries. I don't know how
> to fix this though. The planner needs to know the work_mem that will
> be used for any one of these operations in order to estimate costs, so
> simply trying to divide up work_mem among the operations of a
> completed plan tree is not going to improve matters.

I know this is not exactly the point discussed in this thread, and it
would need some serious work, but how about a mechanism to give plans
some flexibility at run time? What I mean is: don't make all the
decisions at plan time, but include some branches in the plan, and
execute one branch or the other depending on actual parameter values,
current statistics, current memory available, ... (name here other
run-time resources).

This would make it a lot more feasible to cache query plans long-term.
For example, you wouldn't have to worry too much about changing
statistics if at run time you can check them again... and you could put
in decision points based on current memory resources. Of course there
must still be a balance between the number of decision points (which
ultimately means the size of the plan) and robustness against changing
conditions, i.e. branches should only go in for conditions likely to
change.

Is this completely infeasible with the current Postgres architecture?
I have no idea how the planning/runtime works internally.

It's worth a look at how Apache Derby does query planning, where a
planned query is actually a compiled Java class, i.e. the executable
byte code which will run to fetch the results, created and compiled by
the planner... an interesting approach that allows for lots of
flexibility at run time, but it probably won't work with C :-)

Cheers,
Csaba.
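The run-time branching Csaba describes can be pictured with a minimal C sketch (purely illustrative; ChoosePlan, fast_path, and the demo functions are invented names, not PostgreSQL executor API):

```c
#include <stddef.h>

/* A hypothetical "choose" plan node: the planner emits two alternative
 * subplans, and the executor picks one at run time based on a resource
 * check -- here, how much work memory is actually available. */
typedef struct ChoosePlan
{
    size_t  mem_needed_fast;    /* memory the in-memory strategy wants */
    int   (*fast_path)(void);   /* e.g. an in-memory hash join */
    int   (*slow_path)(void);   /* e.g. a sort-merge that spills to disk */
} ChoosePlan;

static int
choose_execute(const ChoosePlan *plan, size_t mem_available)
{
    if (mem_available >= plan->mem_needed_fast)
        return plan->fast_path();
    return plan->slow_path();
}

/* Stand-in subplans for demonstration only. */
static int demo_fast(void) { return 1; }
static int demo_slow(void) { return 2; }

static const ChoosePlan demo_plan = { 1024 * 1024, demo_fast, demo_slow };
```

With 2 MB available, choose_execute(&demo_plan, 2 * 1024 * 1024) takes the fast branch; with only 1 kB it falls back to the slow one. A real implementation would also have to cost both branches at plan time, which is part of what makes this hard.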
Re: [HACKERS] Commit turns into rollback?
On Friday, 17 March 2006 16:07, Tom Lane wrote:
> It would also move us further away from the SQL standard. The spec
> says that COMMIT ends the transaction, full stop, not ends it only if
> you're not in an error state. Of course the spec hasn't got a notion
> of a transaction error state at all, but my point is that making
> COMMIT leave you in the broken transaction is not an improvement
> compliance-wise.

The standard does address the issue of transactions that cannot be
committed because of an error. In 16.6 <commit statement>, General
Rule 6, it basically says that if the transaction cannot be completed
(there: because of a constraint violation), then an exception condition
should be raised. That is, the transaction is over but you get an
error.

I think that behavior would be better. For example, Java programs will
get an exception and know something is wrong. Right now, I don't even
know whether it is possible in all programming interfaces to get at the
command tag and infer failure to commit from there.

-- Peter Eisentraut http://developer.postgresql.org/~petere/
Re: [HACKERS] Commit turns into rollback?
Peter Eisentraut [EMAIL PROTECTED] writes:
> The standard does address the issue of transactions that cannot be
> committed because of an error. In 16.6 commit statement GR 6 it
> basically says that if the transaction cannot be completed (here:
> because of a constraint violation), then an exception condition should
> be raised. That is, the transaction is over but you get an error.
> I think that behavior would be better.

So it's not the fact that it rolls back that bugs you, it's the way that
the action is reported? We could talk about changing that maybe --- it
wouldn't break existing scripts AFAICS. It might break applications
though.

regards, tom lane
Re: [HACKERS] Commit turns into rollback?
On 3/17/06, Tom Lane [EMAIL PROTECTED] wrote:
> So it's not the fact that it rolls back that bugs you, it's the way
> that the action is reported? We could talk about changing that maybe
> --- it wouldn't break existing scripts AFAICS. It might break
> applications though.

An error means the actual command failed. _Doing_ something,
successfully, and still reporting an error seems rather wrong.

IMHO the only behaviour other than the current one that is not broken is
requiring ROLLBACK for failed transactions. And that is no good for
backwards-compatibility reasons. So -1 for changing anything.

-- marko
[HACKERS] Separate command-line histories for separate databases

Someone sent the following request to the Debian bug tracking system:

> It'd be nice if the .psql_history file were per-database. This is
> because the commands executed on different databases are likely to be
> very different, so there is no good reason to lose history for db1
> when working with db2, especially since the new history from db2 is
> probably not useful for db1.
>
> I'd suggest giving psql a directory, and naming history like this:
> .psql/history/hostname/dbname

There is undoubtedly some merit to that, but I still have my doubts.
But at least we can discuss it. Comments?

-- Peter Eisentraut http://developer.postgresql.org/~petere/
Re: [HACKERS] Separate command-line histories for separate databases

Peter Eisentraut wrote:
> Someone sent the following request to the Debian bug tracking system:
>
> > It'd be nice if the .psql_history file were per-database. This is
> > because the commands executed on different databases are likely to
> > be very different, so there is no good reason to lose history for
> > db1 when working with db2, especially since the new history from db2
> > is probably not useful for db1.
> >
> > I'd suggest giving psql a directory, and naming history like this:
> > .psql/history/hostname/dbname
>
> There is undoubtedly some merit to that, but I still have my doubts.
> But at least we can discuss it. Comments?

The psql manual page for 8.1 now has:

	HISTFILE
		The file name that will be used to store the history
		list. The default value is ~/.psql_history. For
		example, putting

			\set HISTFILE ~/.psql_history- :DBNAME

		in ~/.psqlrc will cause psql to maintain a separate
		history for each database.

-- Bruce Momjian http://candle.pha.pa.us
SRA OSS, Inc. http://www.sraoss.com
+ If your life is a hard drive, Christ can be your backup. +
Re: [HACKERS] Separate command-line histories for separate databases

Bruce Momjian wrote:
> The psql manual page for 8.1 now has:
>
>	\set HISTFILE ~/.psql_history- :DBNAME

Excellent!

-- Peter Eisentraut http://developer.postgresql.org/~petere/
Re: [HACKERS] Separate command-line histories for separate databases

Peter Eisentraut wrote:
> Bruce Momjian wrote:
> > The psql manual page for 8.1 now has:
> >
> >	\set HISTFILE ~/.psql_history- :DBNAME
>
> Excellent!

Yea, it is a cool trick someone posted once, and I added it to the
manual page for others to use.

-- Bruce Momjian http://candle.pha.pa.us
SRA OSS, Inc. http://www.sraoss.com
+ If your life is a hard drive, Christ can be your backup. +
Re: [HACKERS] Separate command-line histories for separate databases

2006/3/17, Bruce Momjian pgman@candle.pha.pa.us:
> Peter Eisentraut wrote:
> > Bruce Momjian wrote:
> > > The psql manual page for 8.1 now has:
> > >
> > >	\set HISTFILE ~/.psql_history- :DBNAME

Any reason psql doesn't do this by default? It is clear that the
database name does not unambiguously identify a database, but having a
history for each database name is already an improvement over the
current situation.

Markus Bertheau
Re: [HACKERS] Separate command-line histories for separate databases

Markus Bertheau wrote:
> Any reason psql doesn't do this by default? It is clear that the
> database name does not unambiguously identify a database, but having a
> history for each database name is already an improvement over the
> current situation.

I am not sure everyone would want it. If they want it, we have given
them the capability.

-- Bruce Momjian http://candle.pha.pa.us
SRA OSS, Inc. http://www.sraoss.com
+ If your life is a hard drive, Christ can be your backup. +
Re: [HACKERS] Separate command-line histories for separate databases

[EMAIL PROTECTED] (Markus Bertheau) writes:
> Any reason psql doesn't do this by default? It is clear that the
> database name does not unambiguously identify a database, but having a
> history for each database name is already an improvement over the
> current situation.

I fairly frequently find myself accessing different (but similar)
databases, and it's quite valuable to be able to submit the *same*
queries to them. This change would make it troublesome to do that.
I'd not be all that keen on the change...

--
select 'cbbrowne' || '@' || 'ntlug.org';
http://cbbrowne.com/info/rdbms.html
"On the other hand, O'Reilly's book about running W95 has a toad as the
cover animal. Makes sense; both have lots of warts and croak all the
time." --- Michael Kagalenko
Re: [HACKERS] Separate command-line histories for separate databases

Chris Browne wrote:
> [EMAIL PROTECTED] (Markus Bertheau) writes:
> > Any reason psql doesn't do this by default?
>
> I fairly frequently find myself accessing different (but similar)
> databases, and it's quite valuable to be able to submit the *same*
> queries to them. This change would make it troublesome to do that.
> I'd not be all that keen on the change...

Just what I was thinking.

cheers
andrew
Re: [HACKERS] Automatically setting work_mem
Tom,

> My own thoughts about the problems with our work_mem arrangement are
> that the real problem is the rule that we can allocate work_mem per
> sort or hash operation; this makes the actual total memory use per
> backend pretty unpredictable for nontrivial queries. I don't know how
> to fix this though.

Yes ... the unpredictability is the problem:

(1) We can only allocate the # of connections and a default work_mem
per operation.

(2) There is a variable # of concurrent queries per connection (0..1).

(3) Each query has a variable # of operations requiring work_mem, each
of which will require a variable amount of work_mem. If your malloc is
good, this is limited to concurrent operations, but for some OSes this
is all operations per query. Thus the former uses 0..3x work_mem per
query and the latter 0..7x in general practice.

Overall, this means that based on a specific max_connections and
work_mem, a variable amount of memory between 1 * work_mem and
(connections * 3 * work_mem) may be needed at once. Since the penalty
for overallocating RAM is severe on most OSes, DBAs are forced to allow
for the worst case. This results in around 2/3 underallocation on
systems with unpredictable loads, which means that, even on heavily
loaded DB systems, most of the time you're wasting a big chunk of your
RAM.

Simon and I met about this (and other stuff) at the GreenPlum offices
last summer. The first plan we came up with is query queueing. Query
queueing would eliminate variability (2), which would then make the only
variability how much work_mem would be needed per query, reducing (but
not eliminating) the underallocation.

Additionally, we thought to tie the query queues to ROLES, which would
allow the administrator to better control how much work_mem per type of
query was allowed. It would also allow admins to balance priorities
better on mixed-load machines.

Mind you, I'm also thinking that on enterprise installations with
multi-department use of the database, the fact that work_mem is
inalienably USERSET is also an allocation problem. One user with a SET
command can blow all of your resource planning away.

--
--Josh
Josh Berkus
Aglio Database Solutions
San Francisco
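Josh's worst-case bound can be made concrete with illustrative numbers (a hedged sketch; the 3x multiplier and the sample settings below are just the rough figures from his argument, not measurements):

```c
#include <stddef.h>

/* Worst-case memory a DBA must budget under the rule Josh describes:
 * every connection may run a query with up to ops_per_query
 * work_mem-hungry operations active at once. */
static size_t
worst_case_kb(size_t max_connections, size_t work_mem_kb,
              size_t ops_per_query)
{
    return max_connections * ops_per_query * work_mem_kb;
}
```

For example, with max_connections = 100, work_mem = 1024 kB, and the ~3 concurrent operations per query Josh cites, the worst case is 100 * 3 * 1024 = 307200 kB (300 MB), even though a typical load may touch only a fraction of it.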
Re: [HACKERS] Automatically setting work_mem
Josh Berkus josh@agliodbs.com writes:
> Mind you, I'm also thinking that on enterprise installations with
> multi-department use of the database, the fact that work_mem is
> inalienably USERSET is also an allocation problem. One user with a SET
> command can blow all of your resource planning away.

One user with the ability to enter arbitrary SQL commands can *always*
blow your resource planning away. Blaming such things on work_mem is
seriously misguided.

regards, tom lane
Re: [HACKERS] Separate command-line histories for separate databases

Andrew Dunstan [EMAIL PROTECTED] writes:
> Chris Browne wrote:
> > [EMAIL PROTECTED] (Markus Bertheau) writes:
> > > Any reason psql doesn't do this by default?
> >
> > I fairly frequently find myself accessing different (but similar)
> > databases, and it's quite valuable to be able to submit the *same*
> > queries to them. This change would make it troublesome to do that.
> > I'd not be all that keen on the change...
>
> Just what I was thinking.

+1 on not changing the default behavior.

However, this discussion makes me wonder just how well the hack
recommended by the psql man page actually works. What happens if you
use \c to change databases during a psql session? What *should* happen,
if you're trying to keep per-database histories?

regards, tom lane
Re: [HACKERS] Automatically setting work_mem
On Fri, 2006-03-17 at 09:46, Tom Lane wrote:
> Qingqing Zhou [EMAIL PROTECTED] writes:
> > So what's the difference between these two strategies?
> > (1) Running time: do they use the same amount of memory? Why is
> > option 2 better than 1?
> > (2) Idle time: after the sort is done, option 1 will return all 1024
> > to the OS and 2 will still keep 512?
>
> Point 2 is actually a serious flaw in Simon's proposal, because there
> is no portable way to make malloc return freed memory to the OS.

So perhaps we could keep the shared_work_mem in actual shared memory,
and allocate it to processes from there? We probably can't get it into
a contiguous chunk, but at least we can give it back for other backends
to use when done.

> My own thoughts about the problems with our work_mem arrangement are
> that the real problem is the rule that we can allocate work_mem per
> sort or hash operation; this makes the actual total memory use per
> backend pretty unpredictable for nontrivial queries. I don't know how
> to fix this though. The planner needs to know the work_mem that will
> be used for any one of these operations in order to estimate costs, so
> simply trying to divide up work_mem among the operations of a
> completed plan tree is not going to improve matters.

Why not make the work_mem allocation one of the variable parameters that
is fed to the planner, and try optimising for different sets of
sub-work_mems?

---
Hannu
Re: [HACKERS] Separate command-line histories for separate databases

Tom Lane wrote:
> However, this discussion makes me wonder just how well the hack
> recommended by the psql man page actually works. What happens if you
> use \c to change databases during a psql session? What *should*
> happen, if you're trying to keep per-database histories?

Unless we re-read .psqlrc (which I don't think we do), we will dump into
the original file.

-- Bruce Momjian http://candle.pha.pa.us
SRA OSS, Inc. http://www.sraoss.com
+ If your life is a hard drive, Christ can be your backup. +
Re: [HACKERS] Automatically setting work_mem
Hannu Krosing [EMAIL PROTECTED] writes:
> So perhaps we could keep the shared_work_mem in actual shared memory,
> and allocate it to processes from there?

No, that's utterly not reasonable, both from an allocation point of view
(you'd have to make shared memory enormous, and not all platforms will
like that) and from a locking point of view.

regards, tom lane
Re: [HACKERS] Separate command-line histories for separate databases

On Friday 17 March 2006 10:55, Chris Browne wrote:
> I fairly frequently find myself accessing different (but similar)
> databases, and it's quite valuable to be able to submit the *same*
> queries to them. This change would make it troublesome to do that.
> I'd not be all that keen on the change...

I'm with Chris on this one; I routinely execute the same command on 6 or
7 of our databases, all from the same management console. I would find
a change of this sort somewhat frustrating, and I'd probably be inclined
to symlink all my history files together. The presented way looks
perfectly acceptable to me.

--
Darcy Buskermolen
Wavefire Technologies Corp.
http://www.wavefire.com
ph: 250.717.0200 fx: 250.763.1759
Re: [HACKERS] Automatically setting work_mem
On Fri, Mar 17, 2006 at 04:45:17PM -0500, Tom Lane wrote:
> Hannu Krosing [EMAIL PROTECTED] writes:
> > So perhaps we could keep the shared_work_mem in actual shared
> > memory, and allocate it to processes from there?
>
> No, that's utterly not reasonable, both from an allocation point of
> view (you'd have to make shared memory enormous, and not all platforms
> will like that) and from a locking point of view.

Perhaps we just need to tweak the memory allocation routines to use
mmap() for large allocations rather than malloc(). Then the memory can
easily be returned to the system, unlike memory on the heap. glibc does
this automatically sometimes. You have to be careful, though: continuous
mmap()/munmap() is more expensive than malloc()/free(), because mmap()ed
memory must be zeroed out, which costs cycles...

Have a nice day,
--
Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/
"Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is
a tool for doing 5% of the work and then sitting around waiting for
someone else to do the other 95% so you can sue them."
[HACKERS] PostgreSQL Anniversary Proposals -- Important Update
Folks,

Wanted to update you on a few things regarding speaking at the upcoming
PostgreSQL Anniversary Summit:

-- There are only 13 days left to submit a proposal. Please do so.
We'd rather not be forced into a last-minute rush to evaluate all of the
stuff in April. Remember this is a family event, so you don't have to
have all of your materials together before you send something. Heck, if
you have an idea for a talk you'd really, really, really like to see and
can't give it, send it anyway. We may be able to find a speaker.

-- Due to e-mail issues (not fixed) we lost some proposals without a
trace. So if you submitted a proposal to us and have not already
received a response, please re-submit it! We might not have seen it.

-- Thanks to the tremendous generosity of Afilias, EnterpriseDB,
Greenplum, and Pervasive (as well as SRA and OpenMFG), we will have the
budget to help some speakers with travel funds. So if you were holding
off on a proposal because you weren't sure you could afford to fly to
Toronto, please send one in.

Proposals can be sent through
http://conference.postgresql.org/Proposals/ or directly to
[EMAIL PROTECTED]

--
--Josh
Josh Berkus
Aglio Database Solutions
San Francisco
Re: [HACKERS] PostgreSQL Anniversary Proposals -- Important Update
Josh Berkus josh@agliodbs.com writes:
> -- There are only 13 days left to submit a proposal. Please do so.
> We'd rather not be forced into a last-minute rush to evaluate all of
> the stuff in April. Remember this is a family event so you don't have
> to have all of your materials together before you send something.
> Heck, if you have an idea for a talk you'd really, really, really like
> to see and can't give it, send it anyway. We may be able to find a
> speaker.

Speaking of which, I've been trying to think of a talk proposal and am
not coming up with anything that seems terribly sexy. I've talked a
couple of times about the planner and am afraid people would be bored by
that again. I'd be willing to hold forth on almost any part of the
backend system design (a bold claim, but with three months to prepare I
figure I can back it up...). What would people like to hear about?

regards, tom lane
Re: [HACKERS] PostgreSQL Anniversary Proposals -- Important
Tom, On 3/17/06 7:03 PM, Tom Lane [EMAIL PROTECTED] wrote: Speaking of which, I've been trying to think of a talk proposal and am not coming up with anything that seems terribly sexy. I've talked a couple times about the planner and am afraid people would be bored by that again. I'd be willing to hold forth on almost any part of the backend system design (a bold claim, but with three months to prepare I figure I can back it up...). What would people like to hear about? How about future plans, so to speak? You've made some pretty significant improvements for 8.1 (virtual tuples, the caching algorithm, etc), what's on deck for 8.2 and beyond? - Luke ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [HACKERS] Automatically setting work_mem
Csaba, On 3/17/06 7:07 AM, Csaba Nagy [EMAIL PROTECTED] wrote: It's worth a look at how Apache Derby handles query planning, where a planned query is actually a compiled Java class, i.e. the executable byte code which will run to fetch the results, created and compiled by the planner... interesting approach, allows for lots of flexibility at run-time, but probably won't work with C :-) We've looked at using the open source llvm compiler to create an intermediate representation of the plan, then generate machine code and dispatch it for execution. This would have the advantage of being able to embed runtime constants into the intermediate representation as compile-time constants (like the address of a comparator function or operator), then let the compiler optimize them out, hoist loop invariants, etc. You can't do this at compile time, and there would be no change to the nice abstract code in the executor. It's on our list - anyone else interested? - Luke ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [HACKERS] PostgreSQL Anniversary Proposals -- Important Update
Tom Lane [EMAIL PROTECTED] wrote What would people like to hear about? I am really interested in the concurrency control part of PostgreSQL. I can see the MVCC/lock rules there, and basically I can follow them -- but there are so many if-elses in the rules, so the question for me is always: how can we guarantee that the rules are complete and correct? I guess I may be missing a big picture somewhere. Regards, Qingqing
Re: [HACKERS] Automatically setting work_mem
Luke Lonergan [EMAIL PROTECTED] writes: We've looked at using the open source llvm compiler to create an intermediate representation of the plan, then generate machine code and dispatch for execution. This would buy what exactly? regards, tom lane ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [HACKERS] PostgreSQL Anniversary Proposals -- Important Update
Qingqing Zhou [EMAIL PROTECTED] writes: Tom Lane [EMAIL PROTECTED] wrote What would people like to hear about? I am really interested in the concurrency control part of PostgreSQL. Hm, I already talked about that once: http://www.postgresql.org/files/developer/transactions.pdf but perhaps that's not the level of detail you are after? regards, tom lane
Re: [HACKERS] PostgreSQL Anniversary Proposals -- Important
Tom, Luke Lonergan wrote: I figure I can back it up...). What would people like to hear about? How about future plans, so to speak? You've made some pretty significant improvements for 8.1 (virtual tuples, the caching algorithm, etc), what's on deck for 8.2 and beyond? I'm *really* *really* interested in making PostgreSQL vacuum-less. Can we have a vacuum-less PostgreSQL in the future? How? If you have any ideas, I *really* want to hear about them. It would be the next generation of PostgreSQL. :) Thanks. -- NAGAYASU Satoshi [EMAIL PROTECTED]