Re: [HACKERS] RI triggers and schemas
Christopher Kings-Lynne wrote:
> > I've just realized that if we change the RI trigger arguments this way,
> > we will have a really serious problem with accepting pg_dump scripts
> > from prior versions.  The scripts' representation of foreign key
> > constraints will contain commands like
> >
> > CREATE CONSTRAINT TRIGGER "" AFTER UPDATE ON "bar" FROM "baz" NOT
> > DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE
> > "RI_FKey_noaction_upd" ('', 'baz', 'bar', 'UNSPECIFIED', 'f1', 'f1');
> >
> > which will absolutely not work at all if the 7.3 triggers are expecting
> > to find OIDs in those arguments.
>
> Why can't we just hack up the CREATE CONSTRAINT TRIGGER code to look up
> the OIDs, etc. for the arguments and convert them internally to an ALTER
> TABLE/ADD CONSTRAINT or whatever...

And what language hack do you suggest to suppress the complete
referential check of the foreign key table at ALTER TABLE ... time?
Currently it does a sequential scan of the entire table to check every
single row, so adding 3 constraints to a 10M row table might take some
time.

Note that that language hack will again make the dump non-ANSI
compliant, and thus I don't consider the entire change to ALTER TABLE
an improvement at all.

Jan
Re: [HACKERS] RI triggers and schemas
Christopher Kings-Lynne wrote:
> > Yeah, although it'd still be a good idea probably to convert the dump form
> > to ALTER TABLE in any case.  The one downside that was brought up in the
> > past was the time involved in checking dumped (presumably correct) data
> > when the constraint is added to very large tables.  I can probably make
> > that faster, since right now it's just running the check on each row,
> > but it'll still be slow on big tables possibly.  Another option would
> > be to have an argument that would disable the check on an add constraint,
> > except that wouldn't work for unique/primary key.
>
> Maybe it could be a really evil SET CONSTRAINTS command like:
>
> SET CONSTRAINTS UNCHECKED;

Hmmm, I would like this one if restricted to superusers or the database
owner.

Jan
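For illustration only, a restore script using the command sketched above
might look roughly like this.  The SET CONSTRAINTS UNCHECKED syntax is just
the proposal from this message, not an existing command, and the table and
constraint names are made up:

    -- hypothetical dump output; SET CONSTRAINTS UNCHECKED is only proposed here
    SET CONSTRAINTS UNCHECKED;    -- proposed: skip the initial referential check
    ALTER TABLE "bar" ADD CONSTRAINT "bar_f1_fkey"
        FOREIGN KEY (f1) REFERENCES "baz" (f1);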
[HACKERS] Proposed patch for ODBC driver w/ C-a-n-c-e-l
Patch against 7.2 submitted for comment.  It's a little messy; I had some
trouble trying to reconcile the code style of libpq, which I copied from,
with odbc.  Suggestions on what parts look ugly, and/or on where to send
this (is there a separate ODBC place?) are welcome.

This seems to work just fine; now, when our users submit a 2 hour query
with four million row sorts by accident, then cancel it 30 seconds later,
it doesn't bog down the server ...

regards,
-Brad

diff -cr postgresql-7.2/src/interfaces/odbc/connection.c postgresql-7.2-brad/src/interfaces/odbc/connection.c
*** postgresql-7.2/src/interfaces/odbc/connection.c	Sun Dec 30 18:09:42 2001
--- postgresql-7.2-brad/src/interfaces/odbc/connection.c	Wed Mar 27 10:04:45 2002
***************
*** 19,24 ****
--- 19,27 ----
  #include
  #include
  #include
+ #ifndef WIN32
+ #include
+ #endif
  
  #include "environ.h"
  #include "socket.h"
***************
*** 828,835 ****
  			}
  			break;
  		case 'K':		/* Secret key (6.4 protocol) */
! 			(void) SOCK_get_int(sock, 4);	/* pid */
! 			(void) SOCK_get_int(sock, 4);	/* key */
  			break;
  
  		case 'Z':		/* Backend is ready for new query (6.4) */
--- 831,839 ----
  			}
  			break;
  		case 'K':		/* Secret key (6.4 protocol) */
! 			self->be_pid = SOCK_get_int(sock, 4);	/* pid */
! 			self->be_key = SOCK_get_int(sock, 4);	/* key */
! 			qlog("conn=%u, Backend pid=%u\n", self, self->be_pid);
  			break;
  
  		case 'Z':		/* Backend is ready for new query (6.4) */
***************
*** 1837,1839 ****
--- 1841,1903 ----
  	value = BLCKSZ;
  	return value;
  }
+ 
+ int
+ CC_send_cancel_request(const ConnectionClass *conn)
+ {
+ #ifdef WIN32
+ 	int		save_errno = (WSAGetLastError());
+ #else
+ 	int		save_errno = errno;
+ #endif
+ 	int		tmpsock = -1;
+ 	struct
+ 	{
+ 		uint32		packetlen;
+ 		CancelRequestPacket cp;
+ 	}			crp;
+ 
+ 	/* Check we have an open connection */
+ 	if (!conn)
+ 		return FALSE;
+ 
+ 	if (conn->sock == NULL )
+ 	{
+ 		return FALSE;
+ 	}
+ 
+ 	/*
+ 	 * We need to open a temporary connection to the postmaster.  Use the
+ 	 * information saved by connectDB to do this with only kernel calls.
+ 	 */
+ 	if ((tmpsock = socket(AF_INET, SOCK_STREAM, 0)) < 0)
+ 	{
+ 		return FALSE;
+ 	}
+ 	if (connect(tmpsock, (struct sockaddr *) &(conn->sock->sadr),
+ 				sizeof(conn->sock->sadr)) < 0)
+ 	{
+ 		return FALSE;
+ 	}
+ 
+ 	/*
+ 	 * We needn't set nonblocking I/O or NODELAY options here.
+ 	 */
+ 	crp.packetlen = htonl((uint32) sizeof(crp));
+ 	crp.cp.cancelRequestCode = (MsgType) htonl(CANCEL_REQUEST_CODE);
+ 	crp.cp.backendPID = htonl(conn->be_pid);
+ 	crp.cp.cancelAuthCode = htonl(conn->be_key);
+ 
+ 	if (send(tmpsock, (char *) &crp, sizeof(crp), 0) != (int) sizeof(crp))
+ 	{
+ 		return FALSE;
+ 	}
+ 
+ 	/* Sent it, done */
+ 	closesocket(tmpsock);
+ #ifdef WIN32
+ 	WSASetLastError(save_errno);
+ #else
+ 	errno = save_errno;
+ #endif
+ }
diff -cr postgresql-7.2/src/interfaces/odbc/connection.h postgresql-7.2-brad/src/interfaces/odbc/connection.h
*** postgresql-7.2/src/interfaces/odbc/connection.h	Mon Nov  5 12:46:38 2001
--- postgresql-7.2-brad/src/interfaces/odbc/connection.h	Tue Mar 26 14:45:35 2002
***************
*** 125,130 ****
--- 125,146 ----
  	char		tty[PATH_SIZE];
  } StartupPacket6_2;
+ 
+ /* Transferred from pqcomm.h: */
+ 
+ 
+ typedef ProtocolVersion MsgType;
+ 
+ #define PG_PROTOCOL(m,n)	(((m) << 16) | (n))
+ #define CANCEL_REQUEST_CODE PG_PROTOCOL(1234,5678)
+ 
+ typedef struct CancelRequestPacket
+ {
+ 	/* Note that each field is stored in network byte order! */
+ 	MsgType		cancelRequestCode;		/* code to identify a cancel request */
+ 	unsigned int backendPID;			/* PID of client's backend */
+ 	unsigned int cancelAuthCode;		/* secret key to authorize cancel */
+ } CancelRequestPacket;
+ 
  /*	Structure to hold all the connection att
Re: [HACKERS] Data integrity and sanity check
Rod Taylor wrote:
> > 2) re-check any constraint inserted into the database
>
> There should not be any if it was accepted, however if it's a new
> constraint it doesn't get applied to data that already exists.  A dump
> and restore will ignore these as well (with good reason).

Please don't make up any answers.  If you don't know for sure, look at
the code in question or just don't answer.

PostgreSQL does check all existing data when adding a foreign key
constraint.  It skips the check during the restore of a dump, though.

Jan
Re: [HACKERS] RI triggers and schemas
Jan Wieck <[EMAIL PROTECTED]> writes:
> Christopher Kings-Lynne wrote:
>> Why can't we just hack up the CREATE CONSTRAINT TRIGGER code to look up
>> the OIDs, etc. for the arguments and convert them internally to an ALTER
>> TABLE/ADD CONSTRAINT or whatever...

> And what language hack do you suggest to suppress the
> complete referential check of the foreign key table at ALTER
> TABLE ... time?

Actually, I was interpreting his idea to mean that we add intelligence
to CREATE TRIGGER to adjust the specified trigger arguments if it sees
that the referenced trigger procedure is one of the RI triggers.  It'd
be fairly self-contained, really, since the CREATE TRIGGER code could
use its "ON table" and "FROM table" arguments to derive the correct OIDs
to insert.  This could be done always (whether the incoming arguments
look like OIDs or not), which'd also give us a short-term answer for
dumping/reloading 7.3-style RI triggers.  I'd still like to change
pg_dump to output some kind of ALTER command in place of CREATE TRIGGER,
but we'd have some breathing room to debate how.

I'm now inclined to leave the attribute arguments alone (stick to names,
not numbers) just to avoid possible conversion problems there.

			regards, tom lane
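To make the idea concrete, here is a rough sketch of the rewrite being
described.  The dumped command is the one quoted earlier in the thread; the
OID values in the second form are invented purely for illustration:

    -- as dumped by an older pg_dump (table names in the argument list):
    CREATE CONSTRAINT TRIGGER "" AFTER UPDATE ON "bar" FROM "baz"
        NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW
        EXECUTE PROCEDURE "RI_FKey_noaction_upd"
        ('', 'baz', 'bar', 'UNSPECIFIED', 'f1', 'f1');

    -- what CREATE TRIGGER could store instead, after resolving its ON/FROM
    -- tables to OIDs (the OIDs 16395 and 16402 are made up):
    --  ('', '16395', '16402', 'UNSPECIFIED', 'f1', 'f1')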
Re: [HACKERS] timeout implementation issues
> On Sat, 30 Mar 2002, Tom Lane wrote:
>
> > Au contraire, it is not assuming anything.  It is sending off a cancel
> > request and then waiting to see what happens.  Maybe the query will be

Okay, I see now: when processCancelRequest() is called, a return of 127
is sent.  That would indeed work; thanks for walking me through it.

My other question was how to send the timeout value to the backend.
Bruce said at one point:

> Timeout can be part of BEGIN, or a SET value, which would work from
> jdbc.

I'm not sure how this would work.  The timeout value would be sent as
part of a SQL query?

j
Re: [HACKERS] timeout implementation issues
Jessica Perry Hekman wrote:
> On Sat, 30 Mar 2002, Tom Lane wrote:
>
> > > Au contraire, it is not assuming anything.  It is sending off a cancel
> > > request and then waiting to see what happens.  Maybe the query will be
>
> Okay, I see now: when processCancelRequest() is called, a return of 127 is
> sent.  That would indeed work; thanks for walking me through it.
>
> My other question was how to send the timeout value to the backend.  Bruce
> said at one point:
>
> > Timeout can be part of BEGIN, or a SET value, which would work from
> > jdbc.
>
> I'm not sure how this would work.  The timeout value would be sent as part
> of a SQL query?

I think there are two ways of making this capability visible to users.
First, you could do:

	SET query_timeout = 5;

and all queries after that would time out at 5 seconds.  Another option
is:

	BEGIN WORK TIMEOUT 5;
	...
	COMMIT;

which would make the transaction time out after 5 seconds.  We never
decided which one we wanted, or both.

--
  Bruce Momjian
Re: [HACKERS] timeout implementation issues
Jessica Perry Hekman <[EMAIL PROTECTED]> writes:
> My other question was how to send the timeout value to the backend.

I would imagine that the most convenient way to handle it would be as
a SET variable:

	SET query_timeout = n;

Establishes a time limit on subsequent queries (n expressed in
milliseconds, perhaps).

	SET query_timeout = 0;

Disables query time limit.

This assumes that the query timeout should apply to each subsequent
query, individually, until explicitly canceled.  If you want a timeout
that applies to only one query and is then forgotten, then maybe this
wouldn't be the most convenient definition.  What semantics are you
trying to obtain, exactly?

			regards, tom lane
Re: [HACKERS] RI triggers and schemas
On Mon, 1 Apr 2002, Tom Lane wrote:

> Jan Wieck <[EMAIL PROTECTED]> writes:
> > Christopher Kings-Lynne wrote:
> >> Why can't we just hack up the CREATE CONSTRAINT TRIGGER code to look up
> >> the OIDs, etc. for the arguments and convert them internally to an ALTER
> >> TABLE/ADD CONSTRAINT or whatever...
>
> > And what language hack do you suggest to suppress the
> > complete referential check of the foreign key table at ALTER
> > TABLE ... time?
>
> Actually, I was interpreting his idea to mean that we add intelligence
> to CREATE TRIGGER to adjust the specified trigger arguments if it sees
> the referenced trigger procedure is one of the RI triggers.  It'd be
> fairly self-contained, really, since the CREATE TRIGGER code could use
> its "ON table" and "FROM table" arguments to derive the correct OIDs
> to insert.  This could be done always (whether the incoming arguments
> look like OIDs or not), which'd also give us a short-term answer for
> dumping/reloading 7.3-style RI triggers.  I'd still like to change
> pg_dump to output some kind of ALTER command in place of CREATE TRIGGER,
> but we'd have some breathing room to debate about how.
>
> I'm now inclined to leave the attribute arguments alone (stick to names
> not numbers) just to avoid possible conversion problems there.

Well, there is another place where the current name behavior causes
problems, so we'd need to be sticking in the fully qualified name;
otherwise creating a table in your search path earlier than the intended
table would break the constraint.  This currently already happens with
temp tables.
Re: [HACKERS] RI triggers and schemas
I said:
> The table OIDs in pg_trigger would do fine if the trigger function could
> get at them, but it can't; so we need to copy them into the trigger
> arguments.  (Hmm, I suppose another option is to extend the Trigger
> data structure to include tgconstrrelid, and just ignore the table names
> in the trigger argument list.)

After further thought, this is clearly the right approach to take,
because it provides a solution path for other triggers besides the RI
ones.  So we'll fix the problem at the code level.  The trigger
arguments will be unchanged, but the table names therein will become
purely decorative (or documentation, if you prefer ;-)).  Perhaps
someday we could eliminate them ... but not as long as pg_dump dumps
RI constraints in the form of trigger definitions.

			regards, tom lane
Re: [HACKERS] RI triggers and schemas
Stephan Szabo <[EMAIL PROTECTED]> writes:
> Sorry, I must have misunderstood you.  I thought you were backing away
> from changing the arguments that were created for the trigger.  Or did
> you mean using the stored info on the two oids we already have in the
> record (tgrelid and tgconstrrelid)?

No, I still want to put table OIDs, not names, into the trigger
arguments.  The table OIDs in pg_trigger would do fine if the trigger
function could get at them, but it can't; so we need to copy them into
the trigger arguments.  (Hmm, I suppose another option is to extend the
Trigger data structure to include tgconstrrelid, and just ignore the
table names in the trigger argument list.)

I am backing away from changing the attribute name arguments to attnums,
though; I'm thinking that the potential conversion problems outweigh
being able to eliminate some RENAME support code.

			regards, tom lane
Re: [HACKERS] RI triggers and schemas
Stephan Szabo <[EMAIL PROTECTED]> writes:
> Well, there is another place where the current name behavior
> causes problems so we'd need to be sticking in the fully qualified
> name, otherwise creating a table in your search path earlier than
> the intended table would break the constraint.  This currently already
> happens with temp tables.

But the point is that the table name would be resolved to an OID once at
CREATE TRIGGER time (or when the original FK constraint is created).
After that, it's up to the trigger to construct queries using the
fully-qualified table name.  This should eliminate the temp table
gotcha as well as change-of-search-path issues.

			regards, tom lane
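For concreteness, the queries the RI triggers build internally are roughly of
the following shape; the exact text is simplified here, the schema name is
assumed, and the table/column names come from the example quoted earlier in
the thread.  With the table resolved to an OID when the constraint is
created, the trigger can regenerate the fully qualified name itself each
time it fires:

    -- rough shape of an RI check query, qualified with the schema that the
    -- stored OID resolves to (assumed to be "public" here):
    SELECT 1 FROM ONLY "public"."baz" x WHERE "f1" = $1 FOR UPDATE OF x;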
Re: [HACKERS] timeout implementation issues
Bruce Momjian <[EMAIL PROTECTED]> writes:
> ... It will be tricky to manage multiple
> alarms in a single process, but it can be done by creating an alarm
> queue.

I would argue that we should only support *one* kind of timeout, either
transaction-level or statement-level, so as to avoid that complexity.
I don't want to see us gilding the lily in the first implementation of
something that IMHO is of dubious usefulness in the first place.
We can think about extending the facility later, when and if it proves
sufficiently useful to justify more complexity.

I don't have a very strong feeling about whether transaction-level or
statement-level is more useful; am willing to do whichever one the
JDBC spec wants.

			regards, tom lane
Re: [HACKERS] timeout implementation issues
Tom Lane wrote:
> Bruce Momjian <[EMAIL PROTECTED]> writes:
> > ... It will be tricky to manage multiple
> > alarms in a single process, but it can be done by creating an alarm
> > queue.
>
> I would argue that we should only support *one* kind of timeout, either
> transaction-level or statement-level, so as to avoid that complexity.
> I don't want to see us gilding the lily in the first implementation of
> something that IMHO is of dubious usefulness in the first place.
> We can think about extending the facility later, when and if it proves
> sufficiently useful to justify more complexity.
>
> I don't have a very strong feeling about whether transaction-level or
> statement-level is more useful; am willing to do whichever one the
> JDBC spec wants.

Agreed, only one timeout.  I just considered the statement/transaction
level distinction quite interesting.  We could easily do a GUC for the
query level, and allow BEGIN WORK to override that for the transaction
level.  That would give us the best of both worlds, if we want it.

I am not sure what people are going to use this timeout for.  My guess
is that only transaction level is the way to go.

--
  Bruce Momjian
Re: [HACKERS] timeout implementation issues
Peter Eisentraut wrote:
> Bruce Momjian writes:
>
> > I think there are two ways of making this capability visible to users.
> > First, you could do:
> >
> > 	SET query_timeout = 5;
> >
> > and all queries after that would time out at 5 seconds.  Another option
> > is:
> >
> > 	BEGIN WORK TIMEOUT 5;
> > 	...
> > 	COMMIT;
> >
> > which would make the transaction time out after 5 seconds.  We never
> > decided which one we wanted, or both.
>
> Note that the first is a statement-level timeout and the second is a
> transaction-level timeout.  Be sure to clarify which one we want.

Oh, wow, that is an interesting distinction.  If there is a multi-query
transaction, do we time each query separately, or the entire transaction?

I don't know which people want, and maybe this is why we need both GUC
and BEGIN WORK timeouts.  I don't remember this distinction in previous
discussions, but it may be significant.  Of course, the GUC could behave
at a transaction level as well.  It will be tricky to manage multiple
alarms in a single process, but it can be done by creating an alarm
queue.

--
  Bruce Momjian
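To illustrate the distinction being discussed, a multi-statement transaction
would behave differently under the two proposals.  Both syntaxes below are
only the proposals from earlier in the thread, not implemented commands, and
the statements are placeholders:

    -- statement-level: each statement gets its own 5-second budget
    SET query_timeout = 5;            -- proposed syntax
    BEGIN WORK;
    UPDATE t1 SET ...;                -- may run up to 5 seconds
    UPDATE t2 SET ...;                -- gets a fresh 5 seconds of its own
    COMMIT;

    -- transaction-level: the whole transaction shares one 5-second budget
    BEGIN WORK TIMEOUT 5;             -- proposed syntax
    UPDATE t1 SET ...;
    UPDATE t2 SET ...;                -- cancelled once 5 seconds total elapse
    COMMIT;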
Re: [HACKERS] RI triggers and schemas
On Mon, 1 Apr 2002, Tom Lane wrote:

> Stephan Szabo <[EMAIL PROTECTED]> writes:
> > Well, there is another place where the current name behavior
> > causes problems so we'd need to be sticking in the fully qualified
> > name, otherwise creating a table in your search path earlier than
> > the intended table would break the constraint.  This currently already
> > happens with temp tables.
>
> But the point is that the table name would be resolved to OID once at
> CREATE TRIGGER time (or when the original FK constraint is created).
> After that, it's up to the trigger to construct queries using the
> fully-qualified table name.  This should eliminate the temp table
> gotcha as well as change-of-search-path issues.

Sorry, I must have misunderstood you.  I thought you were backing away
from changing the arguments that were created for the trigger.  Or did
you mean using the stored info on the two oids we already have in the
record (tgrelid and tgconstrrelid)?
Re: [HACKERS] timeout implementation issues
On Mon, 1 Apr 2002, Tom Lane wrote:

> This assumes that the query timeout should apply to each subsequent
> query, individually, until explicitly canceled.  If you want a timeout
> that applies to only one query and is then forgotten, then maybe this
> wouldn't be the most convenient definition.  What semantics are you
> trying to obtain, exactly?

The semantics of the JDBC API: "Transaction::setQueryTimeout(): Sets the
number of seconds the driver will wait for a Statement to execute to the
given number of seconds.  If the limit is exceeded, a SQLException is
thrown."

So it should apply to all queries on a given transaction.  I think that
the above implementation suggestion (and Bruce's) would apply to all
queries, regardless of which transaction they were associated with.  If
each transaction has some kind of unique ID, maybe that could be added
to the SET statement?

Does anyone know how someone else did this (mSQL, MySQL, etc.)?  It
seems like there ought to already exist some sort of standard.  I'll
poke around and see if I can find anything.

j
Re: [HACKERS] timeout implementation issues
On Mon, 1 Apr 2002, Bruce Momjian wrote:

> I don't know which people want, and maybe this is why we need both GUC
> and BEGIN WORK timeouts.  I don't remember this distinction in previous
> discussions but it may be significant.  Of course, the GUC could behave
> at a transaction level as well.  It will be tricky to manage multiple
> alarms in a single process, but it can be done by creating an alarm
> queue.

I think we should do just BEGIN WORK (transaction-level) timeouts; that
is all that the JDBC spec asks for.  Does that sound good to people?

So the work that would need to be done is: asking the driver to request
the timeout via "BEGIN WORK TIMEOUT 5"; getting the backend to parse
that request and set the alarm on each query in that transaction; and
getting the backend to send a cancel request if the alarm goes off.  I
am right now in the process of finding the place where BEGIN-level
queries are parsed.  Any pointers to the right files to read would be
appreciated.

j
Re: [HACKERS] timeout implementation issues
Tom Lane wrote:
> Jessica Perry Hekman <[EMAIL PROTECTED]> writes:
> > My other question was how to send the timeout value to the backend.
>
> I would imagine that the most convenient way to handle it would be as
> a SET variable:
>
>	SET query_timeout = n;
>
> Establishes a time limit on subsequent queries (n expressed in
> milliseconds, perhaps).
>
>	SET query_timeout = 0;
>
> Disables query time limit.
>
> This assumes that the query timeout should apply to each subsequent
> query, individually, until explicitly canceled.  If you want a timeout
> that applies to only one query and is then forgotten, then maybe this
> wouldn't be the most convenient definition.  What semantics are you
> trying to obtain, exactly?

Why don't we use two separate GUC variables and leave the BEGIN syntax
as is completely?

	SET transaction_timeout = m;
	SET statement_timeout = n;

The alarm is set to the smaller of (what's left for) the transaction or
the statement.

If you want to go sub-second, I suggest making it microseconds.  That's
what struct timeval (used in struct itimerval) uses.  But I strongly
suggest not doing so at all, because the usage of itimers disables the
ability to profile with gprof completely.  Compute the time spent so far
in a transaction exactly, but round UP to full seconds for the alarm
always.

And before someone asks: no, I don't think that a connection_timeout is
a good thing.

Jan
Re: [HACKERS] timeout implementation issues
Jessica,

My reading of the JDBC spec would indicate that this is a statement-level
property (aka query level), since the method to enable this is on the
Statement object and is named setQueryTimeout().  There is nothing I can
find that would indicate that this would apply to the transaction in my
reading of the JDBC spec.

thanks,
--Barry

Jessica Perry Hekman wrote:
> On Mon, 1 Apr 2002, Bruce Momjian wrote:
>
> > I don't know which people want, and maybe this is why we need both GUC
> > and BEGIN WORK timeouts.  I don't remember this distinction in previous
> > discussions but it may be significant.  Of course, the GUC could behave
> > at a transaction level as well.  It will be tricky to manage multiple
> > alarms in a single process, but it can be done by creating an alarm
> > queue.
>
> I think we should do just BEGIN WORK (transaction-level) timeouts; that is
> all that the JDBC spec asks for.  Does that sound good to people?
>
> So the work that would need to be done is asking the driver to request the
> timeout via "BEGIN WORK TIMEOUT 5"; getting the backend to parse that
> request and set the alarm on each query in that transaction; getting the
> backend to send a cancel request if the alarm goes off.  I am right now in
> the process of finding the place where BEGIN-level queries are parsed.  Any
> pointers to the right files to read would be appreciated.
>
> j
Re: [HACKERS] timeout implementation issues
On Mon, 1 Apr 2002, Jan Wieck wrote:

> Why don't we use two separate GUC variables and leave the
> BEGIN syntax as is completely?
>
> 	SET transaction_timeout = m;
> 	SET statement_timeout = n;

What's a GUC variable?  Would this apply to all subsequent statements?
I think it needs to apply to just the specified statement.

I'm sorry about the confusion earlier when I said that setQueryTimeout()
was transaction-level; Barry Lind correctly pointed out that it is
statement-level.  We mostly seem to feel that we don't want to do both,
so is statement-only okay?  Jan, do you feel strongly that you want to
see both implemented?

> If you want to go sub-second, I suggest making it
> microseconds.  That's what struct timeval (used in struct

I don't think that's necessary.  JDBC only wants it specified in seconds.

j
Re: [HACKERS] timeout implementation issues
Jessica Perry Hekman <[EMAIL PROTECTED]> writes:
> What's a GUC variable?

A parameter that you can set with SET.

> Would this apply to all subsequent statements?  I
> think it needs to apply to just the specified statement.

Yes, if the JDBC spec expects this to be applied to just a single
statement, then a SET variable doesn't fit very nicely with that.
You'd have to have logic on the application side to reset the variable
to "no limit" after the statement --- and this could be rather
difficult.  (For example, if you are inside a transaction block and the
statement errors out, you won't be able to simply issue a new SET; so
you'd have to remember that you needed a SET until after you exit the
transaction block.  Ugh.)

On the other hand, we do not have anything in the backend now that
applies to just one statement and then automatically resets afterwards;
and I'm not eager to add a parameter with that behavior just for JDBC's
convenience.  It seems like it'd be a big wart.

			regards, tom lane
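A short sketch of the failure mode described above; query_timeout is the
hypothetical variable from earlier in the thread, not an existing setting,
and the queries are placeholders:

    SET query_timeout = 5000;     -- hypothetical per-query limit
    BEGIN;
    SELECT ...;                   -- suppose this statement errors out
    SET query_timeout = 0;        -- rejected: the transaction block is already
                                  -- aborted, so the reset cannot be issued
                                  -- until the block is closed
    COMMIT;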
Re: [HACKERS] timeout implementation issues
Barry Lind writes:

> My reading of the JDBC spec would indicate that this is a statement
> level property (aka query level) since the method to enable this is on
> the Statement object and is named setQueryTimeout().  There is nothing I
> can find that would indicate that this would apply to the transaction in
> my reading of the jdbc spec.

Does it time out only queries or any kind of statement?

--
Peter Eisentraut   [EMAIL PROTECTED]
Re: [HACKERS] timeout implementation issues
The spec isn't clear on that point, but my interpretation is that it
would apply to all types of statements, not just queries.

--Barry

Peter Eisentraut wrote:
> Barry Lind writes:
>
> > My reading of the JDBC spec would indicate that this is a statement
> > level property (aka query level) since the method to enable this is on
> > the Statement object and is named setQueryTimeout().  There is nothing I
> > can find that would indicate that this would apply to the transaction in
> > my reading of the jdbc spec.
>
> Does it time out only queries or any kind of statement?
Re: [HACKERS] timeout implementation issues
On Mon, 1 Apr 2002, Peter Eisentraut wrote:

> Does it time out only queries or any kind of statement?

Any kind, I believe.

FWIW, I took a look at the recommended JDBC driver for MySQL, hoping for
ideas; it does not implement query timeouts at all.  I'll take a look at
mSQL next.

j
[HACKERS] please apply patch
----- Original Message -----
From: Nicolas Bazin
To: PostgreSQL-development
Cc: Tom Lane ; Bruce Momjian ; Michael Meskes
Sent: Thursday, March 28, 2002 9:30 AM
Subject: Always the same ecpg bug - please (re)apply patch

Here is the description:

When a macro is replaced by the preprocessor, pgc.l reaches an end of
file which is not the actual end of the file.  One side effect of that
is that if you are in an ifdef block, you get a wrong error telling you
that an endif is missing.

This patch corrects pgc.l and also adds a test of this problem to
test1.pgc.  To convince you, apply the patch to test1.pgc first, then
try to compile the test, then apply the patch to pgc.l.  The patch
moves the test of the scope of an ifdef block to the end of the file
being parsed, including all included files, etc.

For the record, this patch was applied a first time by Bruce, then
overwritten by Michael and reapplied by him.  But the big mystery is
that there is no trace of that in CVS.

Nicolas

ecpg.patch
Description: Binary data
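For readers unfamiliar with the construct, the bug described above involves
embedded-SQL code of roughly this shape in a .pgc file; the names and values
here are invented for illustration and are not taken from test1.pgc:

    EXEC SQL define MYCONST 42;

    EXEC SQL ifdef MYCONST;
        /* expanding the macro below makes the scanner switch input buffers;
           the buggy pgc.l treated the end of the macro text as end of file
           and wrongly reported that the endif below was missing */
        EXEC SQL INSERT INTO t (i) VALUES (MYCONST);
    EXEC SQL endif;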
[HACKERS] inserting user defined types through a rule?
I recently discovered a problem inserting a user-defined type when going
through a rule.  I'm not sure if it's a -hackers or -users question, but
since it involves the interaction of a user-defined type and rules I
thought it inevitable that I would end up here anyway.

The object in question is my X.509 type.  For compelling reasons beyond
the scope of this discussion, I need to define a table as:

  create table certs (
    name            varchar(20),
    cert            x509,

    -- fields used with CRL lookups
    serial_number   hugeint not null
                    constraint c1 check (serial_number = serial_number(cert)),
    issuer          principal not null
                    constraint c2 check (issuer = issuer(cert)),
    subject         principal not null unique
                    constraint c3 check (subject = subject(cert)),
    ...
  );

where the constraints guarantee that the cached attributes accurately
reflect the contents of the cert (but these fields can be indexed and
searched).

In practice it's impossible to get those fields right in a query, so I
also defined:

  create view cert_insert as select name, cert from certs;

  create rule certi as on insert to cert_insert do instead
    insert into certs (name, cert, serial_number, subject, issuer, ...)
    values (new.name, new.cert, serial_number(new.cert),
            subject(new.cert), issuer(new.cert), ...);

The problem is that I can insert literal text:

  create table t ( cert x509 );
  insert into t values (' BEGIN CERTIFICATE ');

but when I try the same with cert_insert it's clear that "new.cert"
isn't getting initialized properly.  (It works fine when the cert is
already in the database.)  Trying to explicitly cast the literal as part
of the query doesn't help - it seems that the rule just rewrites the
query and the cast is getting lost.

Workarounds don't seem to be viable.  I can't use a trigger on a
temporary table since there doesn't seem to be a clean way to trigger a
rule from one.  (I need to get parameters from the trigger to the SQL
function to the rule, and SQL functions don't seem to be able to take
parameters -- or it's undocumented if they can take something like $1,
$2, etc.)  I can't use a rule on the temporary table since it appears a
rule still looks at the original parameters, not the temp table.

Any ideas?  Is this something addressed in 7.2?  (I'm trying to stick
with the oldest usable version to avoid forcing DB upgrades.)  Or is
this a genuine hole in the user type/rules/triggers model?

Bear
Re: [HACKERS] inserting user defined types through a rule?
Bear Giles <[EMAIL PROTECTED]> writes:
> I recently discovered a problem inserting a user-defined type when
> going through a rule. ...
> The problem is that I can insert literal text:
>   create table t ( cert x509 );
>   insert into t values (' BEGIN CERTIFICATE ');
> but when I try the same with cert_insert it's clear that "new.cert"
> isn't getting initialized properly.  (It works fine when the cert is
> already in the database.)  Trying to explicitly cast the literal as part
> of the query doesn't help - it seems that the rule just rewrites the
> query and the cast is getting lost.

This seems like a bug, but I don't have much hope of being able to find
it without a test case to step through.  Could you boil things down to
a reproducible test case?

FWIW, it seems unlikely that the issue is your user-defined type per se;
the rule rewriter mechanisms are quite type-ignorant.  You may be able
to develop a test case that doesn't use your own type at all.

> Any ideas?  Is this something addressed in 7.2?

Can't tell at this point.  What version are you using, anyway?

			regards, tom lane
[HACKERS] serial and namespace
Hi,

I created a schema *inoue* and tried the following.

# create table inoue.t1 (id serial primary key, dt text);
NOTICE:  CREATE TABLE will create implicit sequence 't1_id_seq' for
         SERIAL column 't1.id'
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index 't1_pkey'
         for table 't1'
CREATE
# insert into inoue.t1 (dt) values ('abc');
ERROR:  Relation "t1_id_seq" does not exist

regards,
Hiroshi Inoue
Re: [HACKERS] inserting user defined types through a rule?
I'm using 7.1.3 currently, but am building and installing 7.2.1 tonight
to see if this fixes the problem.

I don't know the standard types and functions well enough to be able to
whip out a test case, but I think I do have an idea of what the problem
is.  If I'm right, the problem is triggered by any rule with a function
that operates on one of the parameters.  If the parameter is already the
right type then the rule succeeds.  If the parameter needs to be cast
(e.g., because it's a literal value) then the rule fails.

E.g., if there is a function like

  function strlen(varchar) returns int4 ...

try

  create table test (s varchar(20), len int4);

  create view test_view as select s from test;

  create rule test_rule as on insert to test_view do instead
    insert into test (s, strlen(s));

then

  insert into test_view values ('crash-n-burn!');

will fail.  Taken even further, you could probably use

  create rule test_rule2 as on insert to test_view do instead
    insert into test2 (strlen(s));

The earlier example is just an updateable view with the tweak that some
of the hidden underlying fields are also updated.  Strictly speaking
this breaks 3NF, but with the consistency checks it's a useful way of
caching derived values while ensuring that they can't get out of sync
with the objects they cache.

Bear

P.S., it just occurred to me that rules can allow multiple statements.
Maybe the workaround is

  create rule ... do instead (
    insert into temporary table;
    insert into final table from temporary table using functions;
    clear temporary table
  );
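Spelled out against the certs example from the earlier message, that P.S.
workaround might look roughly like the sketch below.  It is untested;
cert_scratch is an invented staging table, and whether the rewriter handles
new.* any better along this path is exactly the open question:

  create table cert_scratch (name varchar(20), cert x509);

  create rule certi as on insert to cert_insert do instead (
    insert into cert_scratch (name, cert) values (new.name, new.cert);
    insert into certs (name, cert, serial_number, subject, issuer)
      select name, cert, serial_number(cert), subject(cert), issuer(cert)
        from cert_scratch;
    delete from cert_scratch;
  );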
Re: [HACKERS] inserting user defined types through a rule?
Bear Giles <[EMAIL PROTECTED]> writes:
> I don't know the standard types and functions well enough to be able to
> whip out a test case, but I think I do have an idea of what the problem
> is.  If I'm right, the problem is triggered by any rule with a function
> that operates on one of the parameters.  If the parameter is already the
> right type then the rule succeeds.  If the parameter needs to be cast
> (e.g., because it's a literal value) then the rule fails.

I tried this, but apparently there's more to it than that; AFAICT it
works in the cases where I'd expect it to work (viz, where there is a
suitable cast function available).

test71=# create function strlen(varchar) returns int as
test71-# 'select length($1)::int' language 'sql';
CREATE
test71=# create table test (s varchar(20), len int4);
CREATE
test71=# create view test_view as select s from test;
CREATE
test71=# create rule test_rule as on insert to test_view
test71-# do instead insert into test values (new.s, strlen(new.s));
CREATE
test71=# insert into test_view values ('crash-n-burn!');
INSERT 1610948 1
test71=# insert into test_view values (33::int);
INSERT 1610949 1
test71=# insert into test_view values (33::numeric);
ERROR:  Attribute 's' is of type 'varchar' but expression is of type 'numeric'
	You will need to rewrite or cast the expression
test71=# select * from test;
       s       | len
---------------+-----
 crash-n-burn! |  13
 33            |   2
(2 rows)

Perhaps there's a particular case where it fails, but you'll have to
give us more of a clue...

			regards, tom lane
Re: [HACKERS] serial and namespace
Hiroshi Inoue <[EMAIL PROTECTED]> writes:
> # create table inoue.t1 (id serial primary key, dt text);
> NOTICE:  CREATE TABLE will create implicit sequence 't1_id_seq'
> for SERIAL column 't1.id'
> NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index
> 't1_pkey' for table 't1'
> CREATE
> # insert into inoue.t1 (dt) values ('abc');
> ERROR:  Relation "t1_id_seq" does not exist

Okay, I fixed SERIAL column creation so that you get a default like
this:

regression=# \d t1
                               Table "t1"
 Column |  Type   |                       Modifiers
--------+---------+--------------------------------------------------------
 id     | integer | not null default nextval('"inoue"."t1_id_seq"'::text)
 dt     | text    |
Indexes: t1_pkey primary key btree (id)

I'm not entirely thrilled with this solution, because it forecloses the
possibility of dumping the table definition and then reloading it into
a different schema.  We haven't yet talked much about how pg_dump should
behave with schemas --- but I think it will be important for pg_dump to
be able to choose whether to qualify object names with a schema name or
not in its dump output.  The above approach makes it harder to do so.

			regards, tom lane
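To make the trade-off concrete, the choice is between dumping the column
with a schema-qualified default, as shown above, or an unqualified one.
Both lines below are illustrative reconstructions of what a dump might
emit, not actual pg_dump output:

    -- schema-qualified: always finds inoue.t1_id_seq, but cannot be
    -- reloaded into a different schema without editing the dump
    id integer DEFAULT nextval('"inoue"."t1_id_seq"'::text) NOT NULL

    -- unqualified: reloads into whatever schema is current, but then
    -- depends on the search path each time the default is evaluated
    id integer DEFAULT nextval('"t1_id_seq"'::text) NOT NULL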
Re: [HACKERS] Autoconf upgraded
On Fri, Mar 29, 2002 at 12:39:01PM -0500, Peter Eisentraut wrote:
> * Running 'autoconf' is now very slow.  Too bad.

But rerunning autoconf should be fast, thanks to autom4te.cache.

--
albert chin ([EMAIL PROTECTED])