Re: [HACKERS] Reducing overhead of frequent table locks
On Fri, May 13, 2011 at 11:05 PM, Noah Misch n...@leadboat.com wrote:
> Incidentally, I used the term "local lock" because I assumed fast-path locks would still go through the lock manager far enough to populate the local lock table. But there may be no reason to do so.

Oh, good point. I think we probably WOULD need to update the local lock hash table.

-- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company

-- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] SSI-related code drift between index_getnext() and heap_hot_search_buffer()
On Fri, May 13, 2011 at 12:10 PM, Kevin Grittner kevin.gritt...@wicourts.gov wrote:
> FWIW, so far what I know is that it will take an example something like the one shown here: http://archives.postgresql.org/pgsql-hackers/2011-02/msg00325.php with the further requirements that the update in T3 must not be a HOT update, T1 would still need to acquire a snapshot before T2 committed while moving its current select down past the commit of T3, and that select would need to be modified so that it would scan the visible tuple and then stop (e.g., because of a LIMIT) before reaching the tuple which represents the next version of the row.

I think I see another problem here. Just before returning each tuple, index_getnext() records in the IndexScanDesc the offset number of the next tuple in the HOT chain, and the XMAX of the tuple being returned. On the next call, it will go on to examine that TID, checking, among other things, whether the XMIN of the tuple at that location matches the previously stored XMAX. But no buffer content lock is held across calls.

So consider a HOT chain A -> B. After returning A, the IndexScanDesc will consider that we should next look at B. Now B rolls back, and a new transaction updates A, so we now have A -> C. (I believe this is possible.) When the next call to index_getnext() occurs, it'll look at B and consider that it's reached the end of the HOT chain - but in reality it has not, because it has never looked at C.

Now, prior to SSI, I believe this did not matter, because the only time we traversed the entire HOT chain rather than stopping at the first visible tuple was when we were using a non-MVCC snapshot. According to Heikki's submission notes for the patch I was trying to rebase, the only time that happens is during CLUSTER, at which point we have an AccessExclusiveLock on the table. But SSI wants to traverse the whole HOT chain even when using an MVCC snapshot, so now we (maybe) have a problem.
I think I have an inkling of how to plug this, but first I have to go buy groceries.

-- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] Reducing overhead of frequent table locks
Robert Haas robertmh...@gmail.com writes:
> On Fri, May 13, 2011 at 11:05 PM, Noah Misch n...@leadboat.com wrote:
>> Incidentally, I used the term "local lock" because I assumed fast-path locks would still go through the lock manager far enough to populate the local lock table. But there may be no reason to do so.
> Oh, good point. I think we probably WOULD need to update the local lock hash table.

I haven't read this thread closely, but the general behavior of the backend assumes that it's very very cheap to re-acquire a lock that's already held by the current transaction. It's probably worth maintaining a local counter just so you can keep that being true, even if there were no other need for it. (Since I've not read the thread, I'll refrain from asking how you're gonna clean up at transaction end if there's no local memory of what locks you hold.)

regards, tom lane
[HACKERS] Exporting closePGconn from libpq
A minor issue has come up in creating low-level bindings to libpq for safe garbage-collected languages, namely that PQfinish is the only (AFAICT) way to close a connection, but it also de-allocates the memory used to represent the database connection. It would be preferable to call PQfinish to free the memory in a finalizer, but applications need a way to disconnect from the database at a predictable and deterministic point in time, whereas leaving a bit of memory around until the GC finally gets to it is relatively harmless.

The low-level binding has a couple of options:

1. Ignore the issue and allow for the possibility of a segfault if the library is used incorrectly, which is not a good situation for safe languages.

2. Create a wrapper that tracks whether or not PQfinish has ever been called, so that attempts to use a connection afterwards can be turned into native exceptions/other forms of error signaling. Solutions of this kind can introduce their own minor issues.

3. Hack libpq to export closePGconn so that libpq can safely signal the low-level bindings of the error when a connection is used after it is disconnected, reserving PQfinish to run in a GC-triggered finalizer. While this is a technically preferable solution, without getting the change into upstream sources it is also a deployment nightmare.

Is there any particular reason why closePGconn should not be exported from libpq?

Best, Leon
Re: [HACKERS] Exporting closePGconn from libpq
Leon Smith leon.p.sm...@gmail.com writes:
> ... The low-level binding has a couple of options:
> 1. Ignore the issue and allow for the possibility of a segfault if the library is used incorrectly, which is not a good situation for safe languages.
> 2. Create a wrapper that tracks whether or not PQfinish has ever been called, so that attempts to use a connection afterwards can be turned into native exceptions/other forms of error signaling. Solutions of this kind can introduce their own minor issues.
> 3. Hack libpq to export closePGconn so that libpq can safely signal the low-level bindings of the error when a connection is used after it is disconnected, reserving PQfinish to run in a GC-triggered finalizer. While this is a technically preferable solution, without getting the change into upstream sources it is also a deployment nightmare.
> Is there any particular reason why closePGconn should not be exported from libpq?

Yes: it'd introduce a new externally-visible state that libpq now has to worry about supporting in all its operations, ie "connection closed but not gone". This state is guaranteed to be poorly tested and hence buggy.

I think you need a wrapper object. Given the context you're describing, I'd be willing to lay a side bet that you'll end up needing a wrapper anyway, even if it seems like you could avoid it right now. Language embeddings of libpq tend to accrete features...

regards, tom lane
Re: [HACKERS] Exporting closePGconn from libpq
On Sat, May 14, 2011 at 11:37 AM, Tom Lane t...@sss.pgh.pa.us wrote:
> Yes: it'd introduce a new externally-visible state that libpq now has to worry about supporting in all its operations, ie connection closed but not gone. This state is guaranteed to be poorly tested and hence buggy.

If you connect to a database over an unreliable network, you can lose the connection without warning at any time. Thus libpq must already support a connection "closed but not gone" state, and I'm fine with making the explicitly disconnected state indistinguishable from the connection-lost state.

> I think you need a wrapper object. Given the context you're describing, I'd be willing to lay a side bet that you'll end up needing a wrapper anyway, even if it seems like you could avoid it right now. Language embeddings of libpq tend to accrete features...

The intention of the low-level bindings I'm working on is to keep features to an absolute minimum; to bind calls to C in a 1-1 fashion and to handle the memory management and error signaling associated with foreign calls. Of course such a library is not intended to be particularly attractive for application development, but rather to serve as a library that can be wrapped up into a higher-level database access library that's free to accrete features. :)

Best, Leon
Re: [HACKERS] Reducing overhead of frequent table locks
On Fri, May 13, 2011 at 5:55 PM, Robert Haas robertmh...@gmail.com wrote:
> On Fri, May 13, 2011 at 4:16 PM, Noah Misch n...@leadboat.com wrote:
>> I wonder if, instead, we could signal all backends at marker 1 to dump the applicable parts of their local (memory) lock tables to files. Or to another shared memory region, if that didn't mean statically allocating the largest possible required amount. If we were willing to wait until all backends reach a CHECK_FOR_INTERRUPTS, they could instead make the global insertions directly. That might yield a decent amount of bug swatting to fill in missing CHECK_FOR_INTERRUPTS, though.
> I've thought about this; I believe it's unworkable. If one backend goes into the tank (think: SIGSTOP, or blocking on I/O to an unreadable disk sector) this could lead to cascading failure.

Would that risk be substantially worse than it currently is? If a backend goes into the tank while holding access shared locks, it will still block access exclusive locks until it recovers. And those queued access exclusive locks will block new access shared locks from other backends. How much is the risk magnified by the new approach, going from "any backend holding the lock is tanked" to "any process at all is tanked"?

What I'd considered playing with in the past is having LockMethodLocalHash hang on to an Access Shared lock even after locallock->nLocks == 0, so that re-granting the lock would be a purely local operation. Anyone wanting an Access Exclusive lock and not immediately getting it would have to send out a plea (via sinval?) for other processes to release their locallock->nLocks == 0 locks. But this would suffer from the same problem of tanked processes.

Cheers, Jeff
[HACKERS] DOMAINs and CASTs
Hi,

If I create a DOMAIN and then want to create a CAST from that domain to another type, it gives an error. Consider this example:

create domain datetime as timestamp with time zone check (value between '1753-01-01 00:00:00' and '-12-31 23:59:59');

create function datetime2int(datetime) returns int language sql stable strict as $$ select $1::date - '1753-01-01'::date; $$;

create cast(datetime as int) with function datetime2int(datetime);

If I try to cast, I get this error:

select now()::datetime::int;
ERROR: cannot cast type datetime to integer

The problem is that in find_coercion_pathway() the very first thing we do is to get the base type of both the source and target types. So the way to make it work is to create the function and the cast on the base types. But what if I create 2 domains on the same base type and want different behaviour on a cast to the same target type? OK, sounds odd... a basic example: the datetime and smalldatetime types in MS SQL Server... when casting to int, the former gives the number of days since 1753-01-01 and the latter the number of days since 1900-01-01... some systems I have seen (especially ERPs) tend to store dates as a number of days, so there is a use case for this.

The fix for this doesn't look complicated (unless I have missed something): just try first with the types as received, and then with the base types if they are domains... I'm not trying mixed situations (the base type of the source with the target as we receive it, and vice versa); I think that's just complicating things for very little benefit, if any... attached (passes all regression tests). Comments?

-- Jaime Casanova www.2ndQuadrant.com Professional PostgreSQL: Soporte y capacitación de PostgreSQL

diff --git a/src/backend/parser/parse_coerce.c b/src/backend/parser/parse_coerce.c
index 0418972..7101499 100644
*** a/src/backend/parser/parse_coerce.c
--- b/src/backend/parser/parse_coerce.c
*** find_coercion_pathway(Oid targetTypeId,
*** 1873,1885 ****
  	*funcid = InvalidOid;

! 	/* Perhaps the types are domains; if so, look at their base types */
! 	if (OidIsValid(sourceTypeId))
! 		sourceTypeId = getBaseType(sourceTypeId);
! 	if (OidIsValid(targetTypeId))
! 		targetTypeId = getBaseType(targetTypeId);
!
! 	/* Domains are always coercible to and from their base type */
  	if (sourceTypeId == targetTypeId)
  		return COERCION_PATH_RELABELTYPE;
--- 1873,1879 ----
  	*funcid = InvalidOid;

! 	/* If they are the same type will always be coercible */
  	if (sourceTypeId == targetTypeId)
  		return COERCION_PATH_RELABELTYPE;
*** find_coercion_pathway(Oid targetTypeId,
*** 1888,1893 ****
--- 1882,1913 ----
  			ObjectIdGetDatum(sourceTypeId),
  			ObjectIdGetDatum(targetTypeId));

+ 	if (!HeapTupleIsValid(tuple))
+ 	{
+ 		Oid		sourceBaseTypeId;
+ 		Oid		targetBaseTypeId;
+
+ 		/* Perhaps the types are domains; if so, look at their base types */
+ 		if (OidIsValid(sourceTypeId))
+ 			sourceBaseTypeId = getBaseType(sourceTypeId);
+ 		if (OidIsValid(targetTypeId))
+ 			targetBaseTypeId = getBaseType(targetTypeId);
+
+ 		if ((sourceBaseTypeId != sourceTypeId) || (targetBaseTypeId != targetTypeId))
+ 		{
+ 			sourceTypeId = sourceBaseTypeId;
+ 			targetTypeId = targetBaseTypeId;
+
+ 			/* Domains are always coercible to and from their base type */
+ 			if (sourceTypeId == targetTypeId)
+ 				return COERCION_PATH_RELABELTYPE;
+
+ 			tuple = SearchSysCache2(CASTSOURCETARGET,
+ 									ObjectIdGetDatum(sourceTypeId),
+ 									ObjectIdGetDatum(targetTypeId));
+ 		}
+ 	}
+
  	if (HeapTupleIsValid(tuple))
  	{
  		Form_pg_cast castForm = (Form_pg_cast) GETSTRUCT(tuple);
Re: [HACKERS] patch for new feature: Buffer Cache Hibernation
Hi,

> We can't accept patches just based on a pointer to a web site. Please e-mail this to the mailing list so that it can be considered a submission under the project's licensing terms.
>
>> I hope this would be committable and the final version.
>
> PostgreSQL has high standards for code submissions. Extremely few submissions are committed without significant revisions to them based on code review. So far you've gotten a first round of high-level design review; there are several additional steps before something is considered for a commit. The whole process is outlined at http://wiki.postgresql.org/wiki/Submitting_a_Patch

OK, I will do so for my next patch.

> From a couple of minutes of reading the patch, the first things that pop out as problems are:
> - All of the ControlFile -> controlFile renaming has added a larger difference to ReadControlFile than I would consider ideal.

I think so too; I will consider this again.

> - Touching StrategyControl is not something this patch should be doing.

Sorry, I could not get this. Could you describe this? I think StrategyControl needs to be adjusted if the shared_buffers setting was changed.

> - I don't think your justification (debugging or portability) for keeping around your original code in here is going to be sufficient to do so.
> - This should not be named enable_buffer_cache_hibernation. That very large diff you ended up with in the regression tests is because all of the settings named enable_* are optimizer control settings. Using the name buffer_cache_hibernation instead would make a better starting point.

OK, how about `buffer_cache_hibernation_level'? The value 0 to disable (default), 1 for saving buffer descriptors only, 2 for saving buffer descriptors and buffer blocks.

> From a bigger picture perspective, this really hasn't addressed any of my comments about shared_buffers only being the beginning of the useful cache state to worry about here. I'd at least like the solution to the buffer cache save/restore to have a plan for how it might address that too one day. This project is also picky about only committing code that fits into the long-term picture for desired features.

My simple motivation on this is: `We don't want to restart our DB server, because the DB buffer cache will be lost and the DB server needs to start its operations with zero cache. Does any DBMS product support holding the contents of the DB cache as-is across a restart, just like the hibernation feature of a PC?' It's very simple, and many DB admins will be happy with this feature soon, I think.

Thanks
Re: [HACKERS] patch for new feature: Buffer Cache Hibernation
Hi,

> I'd suggest doing this as an extension module. All the changes to existing server code seem superficial.

It sounds interesting. I'll try it later. Are there any good examples for extension modules?

Thanks
[HACKERS] Review: psql include file using relative path
I had a chance to give this patch a look. This review is of the second patch posted by Gurjeet, at: http://archives.postgresql.org/message-id/AANLkTi=yjb_a+ggt_pxmrqhbhyid6aswwb8h-lw-k...@mail.gmail.com

== Summary ==

This patch implements the \ir command for psql, with a long alias \include_relative. This new backslash command is similar to the existing \i or \include command, but it allows loading .sql files with paths relative to the currently-executing script's directory, not the CWD psql was called from. This feature would be used when one .sql file needs to load another .sql file in a related directory.

== Utility ==

I would find the \ir command useful for constructing small packages of SQL to be loaded together. For example, I keep the DDL for non-trivial databases in source control, often broken down into one or more files per schema, with a master file loading in all the sub-files; this command would help the master file reference the locations of the other files correctly.

== General ==

The patch applies cleanly to HEAD. No regression tests are included, but I don't think they're needed here.

== Documentation ==

The patch includes the standard psql help output description for the new \ir command. I think ./doc/src/sgml/ref/psql-ref.sgml needs to be patched as well, though.

Tangent: AFAICT we're not documenting the long form of psql commands, such as \print, anywhere. Following that precedent, this patch doesn't document \include_relative. Not sure if we want to document such options anywhere, but in any case that's a separate issue from this patch.

== Code ==

1.) I have some doubts about whether the memory allocated here:

    char *relative_file = pg_malloc(dir_len + 1 + file_len + 1);

is always free()'d, particularly if this condition is hit:

    if (!fd)
    {
        psql_error("%s: %s\n", filename, strerror(errno));
        return EXIT_FAILURE;
    }

2.) This comment should mention \ir:

    * Handler for \i, but can be used for other things as well.

3.) settings.h has this comment about pset.inputfile:

    char *inputfile; /* for error reporting */

But this variable is used for more than just error reporting now (i.e. determining whether psql is executing a file).

4.) I think the changes to process_file() merit another comment or two, e.g. describing what relative_file is supposed to be.

5.) I tried the patch out on Linux and OS X; perhaps someone should give it a quick check on Windows as well -- I'm not sure if pathname manipulations like:

    last_slash = strrchr(pset.inputfile, '/');

work OK on Windows.

6.) The indentation of these lines in tab-complete.c around line 2876 looks off:

    strcmp(prev_wd, "\\i") == 0 ||
    strcmp(prev_wd, "\\include") == 0 ||
    strcmp(prev_wd, "\\ir") == 0 ||
    strcmp(prev_wd, "\\include_relative") == 0 ||

(I think the first of those lines was off before the patch, and the patch followed its example.)

That's it for now. Overall, I think this patch provides a useful feature, and I am hoping it could be ready for commit in 9.2 in the 2011-next commitfest with some polishing.

Josh
Re: [HACKERS] patch for new feature: Buffer Cache Hibernation
Mitsuru IWASAKI wrote:
> Are there any good examples for extension module?

Browse the subdirectories of contrib.

-Kevin
Re: [HACKERS] DOMAINs and CASTs
Jaime Casanova ja...@2ndquadrant.com writes:
> If I create a DOMAIN and then want to create a CAST from that domain to another type, it gives an error.

Yes. See previous discussions about that, e.g. these threads:
http://archives.postgresql.org/pgsql-hackers/2006-05/msg00072.php
http://archives.postgresql.org/pgsql-hackers/2006-09/msg00695.php

It's *not* trivial to fix, at least not in a way that gives desirable behavior for more than the simplest cases.

regards, tom lane
Re: [HACKERS] DOMAINs and CASTs
On Sat, May 14, 2011 at 5:13 PM, Tom Lane t...@sss.pgh.pa.us wrote:
> Jaime Casanova ja...@2ndquadrant.com writes:
>> If I create a DOMAIN and then want to create a CAST from that domain to another type, it gives an error.
> It's *not* trivial to fix, at least not in a way that gives desirable behavior for more than the simplest cases.

Well, I'm just trying to manage the simplest case... do you think we should manage other cases? What else should we do? Is it better to allow the creation of casts that are then ignored?

-- Jaime Casanova www.2ndQuadrant.com Professional PostgreSQL: Soporte y capacitación de PostgreSQL
Re: [HACKERS] DOMAINs and CASTs
Jaime Casanova wrote:
> If I create a DOMAIN and then want to create a CAST from that domain to another type, it gives an error. Consider this example:
> create domain datetime as timestamp with time zone check (value between '1753-01-01 00:00:00' and '-12-31 23:59:59');
> create function datetime2int(datetime) returns int language sql stable strict as $$ select $1::date - '1753-01-01'::date; $$;
> create cast(datetime as int) with function datetime2int(datetime);
> If I try to cast, I get this error:
> select now()::datetime::int;
> ERROR: cannot cast type datetime to integer
> The problem is that in find_coercion_pathway() the very first thing we do is to get the base type of both the source and target types. So the way to make it work is to create the function and the cast on the base types. But what if I create 2 domains on the same base types and want different behaviour on a cast to the same target type?

I think that overloading the same cast syntax to get different behavior for different domains over the same base type is a bad idea.

First of all, what if cast(timestamp as int) was already defined? Which cast then would you expect to be invoked here?

'1800-01-01 00:00:00'::int

... the one for timestamp or the one for datetime? Second of all, what if you had 2 domains defined over timestamp, and they overlapped, and they both defined a cast as you did, with generic syntax? And you were casting a value in both domains to an int?

I think it would be best that the generic cast syntax only be usable for casts defined on the base type, and if you want a domain-specific one you should use function syntax such as your datetime2int(). That way it is easier for users to predict what behavior will occur, and implementation will be easier too.

-- Darren Duncan
Re: [HACKERS] DOMAINs and CASTs
Darren Duncan wrote:
> I think it would be best that the generic cast syntax only be usable for casts defined on the base type, and if you want a domain-specific one you should use function syntax such as your datetime2int(). That way it is easier for users to predict what behavior will occur, and implementation will be easier too.

Replying to myself, I offer another alternative: what you ask for is indeed supported, but if for a given input value more than one cast applies to it, particularly for 2 overlapping domains, then which cast is invoked is undefined, so for example the DBMS may just use the first one it finds. It is then up to the user to ensure that when they define casts over domains, they define ones that either produce the same outputs for the same overlapping inputs (the best answer) or don't overlap in their input domains.

-- Darren Duncan