Re: Gen-ART LC review of draft-dusseault-http-patch-15.txt
Brian,

> As far as I can tell, the proposal places the burden for ensuring atomicity entirely on the server. However, PATCH is explicitly not idempotent. If a client issues a PATCH, and the server executes the PATCH, but the client then fails to receive an indication of success due to an extraneous network glitch (and TCP reset?), then what prevents the client issuing the same PATCH again? In other words, absent a two-phase commit, there appears to be no transactional integrity.

I think Lisa should respond to this as well. One of the issues here is that I believe there is an assumption that there will be some external means to know that the PATCH in fact succeeded, like, for instance, retrieving the impacted web pages and comparing results. In thinking about this again, however, perhaps that may not be sufficient if, for instance, PATCH is used for purposes of a side effect that is not easily verified. At that point you probably want something akin to a transaction ID, where you can check against that ID to determine whether it succeeded.

An alternative view, however, would be that for a specific object type where that is important, one could embed such a transaction ID somewhere. That's not so easy for common objects we retrieve today (you would have to add some sort of meta tag to an HTML doc, and who knows what you would do with images), but since those objects could easily be retrieved and compared, they're not the problem.

To put this another way: I don't think PATCH is a replacement for all the network database protocols and mechanisms that exist today, but simply closer to what it is named: a means to patch one or more objects.

Eliot

___ Ietf mailing list Ietf@ietf.org https://www.ietf.org/mailman/listinfo/ietf
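Eliot's transaction-ID idea can be sketched in a few lines. This is a hypothetical illustration, not anything proposed in the draft: the client tags each PATCH with a unique ID, the server records the IDs it has applied, and both a retried PATCH and a later status query resolve against that record. All class and method names here are invented.

```python
# Hypothetical sketch of the transaction-ID idea: the server applies each
# tagged patch at most once, and the client can later ask whether a given
# ID was applied (covering the lost-response case).
import uuid

class PatchServer:
    def __init__(self):
        self.applied = set()   # IDs of patches already executed
        self.document = []     # the resource being patched

    def apply_patch(self, txn_id, patch_line):
        """Apply a patch once; a replay of the same ID is a no-op."""
        if txn_id in self.applied:
            return "already-applied"
        self.document.append(patch_line)
        self.applied.add(txn_id)
        return "applied"

    def status(self, txn_id):
        """Let the client determine whether an earlier PATCH succeeded."""
        return txn_id in self.applied

server = PatchServer()
txn = str(uuid.uuid4())
first = server.apply_patch(txn, "add line 1")
retry = server.apply_patch(txn, "add line 1")  # resend after a lost response
```

Here the retry is harmless and `status(txn)` answers the "did my PATCH execute?" question even when the response vanished.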
Re: Gen-ART LC review of draft-dusseault-http-patch-15.txt
On 2009-11-13 20:19, Julian Reschke wrote:
> Brian E Carpenter wrote:
>> As far as I can tell, the proposal places the burden for ensuring atomicity entirely on the server. However, PATCH is explicitly not idempotent. If a client issues a PATCH, and the server executes the PATCH, but the client then fails to receive an indication of success due to an extraneous network glitch (and TCP reset?), then what prevents the client issuing the same PATCH again? In other words, absent a two-phase commit, there appears to be no transactional integrity.
> How is this different from PUT or POST? If you need repeatability of the request, just make the request conditional using if-match...

PATCH seems more dangerous than those simply because it is a partial update of a resource, and I don't feel it's sufficient to say that there might be a way of detecting that it has failed to complete, if you're executing a series of patches that build on one another. Talking about transactional integrity in the IETF has always been hard, for some reason. But something described as a patch is exactly where you need it, IMHO.

Brian
Re: IETF Plenary Discussions
At Wed, 11 Nov 2009 18:56:45 -0500, Russ Housley wrote:
> I did not take the comment as disrespectful. A timer might be a very good experiment.

And indeed it used to be common practice.

-Ekr
Re: Logging the source port?
Stéphane,

In GEOPRIV, where we have a need to specify a specific endpoint in order to request its geolocation, we were specifically asked by carriers looking at CGN to add a port field to the identifier space: http://tools.ietf.org/html/draft-ietf-geopriv-held-identity-extensions-01#section-3.3.3

--Richard

On Nov 13, 2009, at 2:49 PM, Stephane Bortzmeyer wrote:
> At the Transport Area meeting, Alain Durand, presenting draft-ford-shared-addressing-issues, mentioned that we may well have now to always log the source port of a TCP request, not only the source IP address (which may well be shared), if we want traceability. Does anyone know if it is possible with the typical TCP servers? For instance, I find no way to do it with Apache: http://httpd.apache.org/docs/2.2/mod/mod_log_config.html
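For context: as far as I can tell, Apache 2.2's mod_log_config indeed has no format directive for the client's source port (the `%p` family covers the server side), which is presumably why Stephane found nothing. A server one controls can log the port itself, since the peer address and port are visible on the accepted socket. A minimal illustration in Python; the extended log-line layout here is invented, not any standard:

```python
# Sketch: a common-log-style line extended with the client's source port.
# The "ip:port" field layout is an invented convention for illustration.
from datetime import datetime, timezone

def log_line(peer_ip, peer_port, request, status, size):
    """Format one access-log entry including the TCP source port."""
    ts = datetime.now(timezone.utc).strftime("%d/%b/%Y:%H:%M:%S %z")
    return f'{peer_ip}:{peer_port} - - [{ts}] "{request}" {status} {size}'

line = log_line("192.0.2.7", 51034, "GET / HTTP/1.1", 200, 1234)
```

In a real server the `peer_ip`/`peer_port` pair would come from the accepted socket (e.g. `getpeername()`).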
Re: publishing some standards immediately at Draft-Standard status?
A related point, but from another direction: my impression currently is that Full Standards never get revised again. Unless I'm wrong, we may have to revise this practice if we go to a model with only one standards category, because sometimes it indeed makes sense to update a document.

Regards, Martin.

On 2009/11/13 3:33, Eliot Lear wrote:
> On 11/12/09 5:23 PM, Donald Eastlake wrote:
>> If you read the definitions and theoretical criteria for Proposed versus Draft, it makes a lot of sense. Proposed is just proposed and non-injurious to the Internet. Draft requires interoperability of independent implementations and is the first level where widespread implementation is recommended. This distinction makes a lot of sense.
> *IN THEORY* it once made a lot of sense, but please show me how it has EVER made sense in practice.
>> The problem is the constantly escalating hurdles in practice to get to Proposed...
> That is A problem but IMHO not The Problem. Another problem is that I know of very few profit-making ventures that really want to devote their employees' time to an activity that gains them not one single additional bit of functionality.
> Eliot

-- 
#-# Martin J. Dürst, Professor, Aoyama Gakuin University
#-# http://www.sw.it.aoyama.ac.jp mailto:due...@it.aoyama.ac.jp
Jabber, ... at IETF Plenary Discussions (Re: IETF Plenary Discussions)
My understanding is that these days, most WG and BOF meetings have somebody watching jabber and bringing up comments from there to the mic. Also, jabber scribing seems to be quite popular, complementing the audio channel (which this time, as reported elsewhere, was excellent).

What about jabber support (both scribing and picking up questions/comments) for the plenary? Was this just never considered, or has it been considered and rejected?

Regards, Martin.

-- 
#-# Martin J. Dürst, Professor, Aoyama Gakuin University
#-# http://www.sw.it.aoyama.ac.jp mailto:due...@it.aoyama.ac.jp
Re: Logging the source port?
A really big NAT serving, say, eighteen million customers, can easily be so dense that if there's a bit of clock skew between a web server and the NAT operator, another customer might have used the same port at the time recorded by the web server. Therefore, I think it's safer to say that it's the NAT operator's responsibility to log enough. Umpteen million web sites will continue to use apache's common log format, so the NAT operator has to log what's needed to work with that format anyway.

Arnt
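The ambiguity Arnt describes can be made concrete: correlating a (public IP, port, time) triple from a web log against NAT binding records becomes an interval query once clock skew is allowed for, and a dense NAT can then return more than one candidate subscriber. A hedged sketch; the binding-record layout is invented for illustration:

```python
# Sketch of NAT-log correlation under clock skew. Each binding record is
# (subscriber, public_ip, public_port, start, end); with skew tolerance,
# a single web-log hit can match several subscribers.
def candidates(bindings, ip, port, t, skew):
    """Return subscribers whose binding could cover time t, give or take skew."""
    return [sub for (sub, bip, bport, start, end) in bindings
            if bip == ip and bport == port
            and start - skew <= t <= end + skew]

bindings = [
    ("subscriber-A", "203.0.113.1", 40000, 100, 200),
    ("subscriber-B", "203.0.113.1", 40000, 205, 300),  # port reused soon after
]
```

With zero skew a hit at t=150 is unambiguous, but with 10 units of skew a hit at t=202 matches both subscribers, which is exactly the problem with a dense NAT.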
Re: Gen-ART LC review of draft-dusseault-http-patch-15.txt
Brian E Carpenter wrote:
> On 2009-11-13 20:19, Julian Reschke wrote:
>> Brian E Carpenter wrote:
>>> As far as I can tell, the proposal places the burden for ensuring atomicity entirely on the server. However, PATCH is explicitly not idempotent. If a client issues a PATCH, and the server executes the PATCH, but the client then fails to receive an indication of success due to an extraneous network glitch (and TCP reset?), then what prevents the client issuing the same PATCH again? In other words, absent a two-phase commit, there appears to be no transactional integrity.
>> How is this different from PUT or POST? If you need repeatability of the request, just make the request conditional using if-match...
> PATCH seems more dangerous than those simply because it is partial update of a resource, and I don't feel it's sufficient to say that there might be a way of detecting that it has failed to complete, if you're executing a series of patches that build on one another.

POST can be a partial update as well, for the simple reason that POST can be *anything*. As a matter of fact, people are using POST right now, as PATCH was removed in RFC 2616.

> Talking about transactional integrity in the IETF has always been hard, for some reason. But something described as patch is exactly where you need it, IMHO.

PATCH does not need to be special, and shouldn't be special. That being said, it wouldn't hurt to clarify that PATCH can be made repeatable (idempotent) by making it conditional.

Best regards, Julian
Re: Jabber, ... at IETF Plenary Discussions (Re: IETF Plenary Discussions)
Martin J. Dürst wrote:
> My understanding is that these days, most WG and BOF meetings have somebody watching jabber and bringing up comments from there to the mic. Also, jabber scribing seems to be quite popular, complementing the audio channel (which this time, as reported elsewhere, was excellent). What about jabber support (both scribing and picking up questions/comments) for the plenary? Was this just never considered, or has it been considered and rejected?

Well, I have once seen (was that San Diego?) a professional (court) scribe on jabber during the plenary, with a special keyboard. I was impressed: speed, precision, efficiency.

Live jabber scribing, if done well, is extremely useful not only for remote attendees but even for people in the physical room: names of speakers can be seen (I heard this IETF has badge readers to display names, but I doubt it's fast enough; I believe it's slow), foreign speakers read and understand better what's being said at the mic, etc. Live jabber scribing also keeps people (who cared to join the jabber room) more focused on the meeting itself, while still watching the laptop screen.

Two or three virtual rooms? Usually there's only one virtual room reserved for a meeting on jabber; however, there are at least two different activities on jabber: (1) scribing and (2) discussing the ongoing meeting. Sometimes there is a third: (3) people specifically asking to relay messages to the mic. For the plenary, I have sometimes seen people setting up an additional chat room, because the jabber room was too crowded and not fast enough for some fast and very humourous remarks.

I think also that, in general, for IETF, we may need a code of practice/conduct/rules for behaving in jabber rooms. There are some established practices I have seen. E.g. recently I've seen a scribe asking jabber participants to prefix their comment with "mic" if they want it relayed to the mic, which I find very useful.
I for myself try to follow a common practice when jabber scribing: mark slide advances and titles, scribe only the comments and not the presentation, use name acronyms like AP for Alexandru Petrescu, etc. Sometimes the practice is different; different people use different practices. When the same practice is used, scribing and mic relaying are easier. For example, last time I scribed, a fellow scribe helped me with the names, following my way of abbreviating them, which I found extremely useful.

Some people don't consider useful the way I scribe, because I tend to scribe every word being said; and some chairs prefer to already have the synthesis of the message exchange done in the log, as it's easier for them when building the official minutes. I.e. I scribe "AP: bit 1 must be 0; BR: no, bit 0 must be 1", whereas the minutes builder prefers to see something like "AP and BR discussed bits and values". Some other people like the detailed jabber logs. I never knew exactly what to do as jabber scribe; I just follow instinct. But I believe these things could be discussed in order to have a more efficient use of jabber chat, not only for the plenary but for all meetings.

I love audio, I love jabber; without it my attendance would be reduced to 0.

Alex
Re: our pals at ICANN, was Circle of Fifths
On Mon, Nov 09, 2009 at 01:16:37PM -0800, David Conrad wrote:
> On Nov 6, 2009, at 9:30 AM, Phillip Hallam-Baker wrote:
>> Clearly the root operators are responsible to and accountable to the Internet community.
> Err, no. First, the root server operators are all independent actors performing a service for the Internet community for their own reasons. They are formally responsible and accountable to different communities, e.g., the folks who run C are responsible to their shareholders and the folks who run A and J do so under a cooperative agreement with the US government.

Well, A is certainly run under agreement with the DoC. J on the other hand...

> Secondly, there are no formal terms of responsibility nor accountability to the Internet community. In the past, specific root servers have been operated abysmally and there was nothing that could be done by the Internet community to force root server operators to change the way they do things. With one arguable exception (that of VeriSign) there are no service level agreements, no penalties for failure to perform, and no formal commitments whatsoever.

There is some intimation that L might be covered under a similar type of instrument. But I have no real time to investigate further.

> How exactly is that being accountable to the Internet community?

I'm pretty sure you have the right direction here, that the operators are accountable to their communities. I have a tough time with a workable definition of "Internet community", though.

>> DNSSEC with a single root of trust would transform it from constitutional monarch to absolute monarch.
> I have no idea what this means. As I'm sure you are aware, DNSSEC merely allows folks to validate that data hasn't been modified between the point at which the data is signed and the validator. If folks don't want to trust the ICANN/IANA KSK and/or VeriSign ZSK, they're free to import the individual trust anchors however they choose. There is no magic here.
> Regards, -drc

-- 
--bill
Opinions expressed may not even be mine by the time you read them, and certainly don't reflect those of any other entity (legal or otherwise).
RFC levels and I-D levels
Hi everyone,

I'd like to support the notion that standards levels for RFCs simply do not matter. In my own domain (IPsec), we try to spend time on keeping the protocol viable (by adding useful stuff and trying to avoid cruft) rather than working on mostly pointless PS-DS-FS projects. Whether we're doing a good job is for others to judge...

On the other hand, I think we're not doing well on the other side of the spectrum, Internet-Drafts. The industry and even other SDOs (3GPP is a case in point) are too happy to implement expired drafts, which as we know may be lower quality, less secure, and/or non-interoperable. In fact in recent years we have made it even easier to access and use long-expired drafts, and this is hurting us.

I'd like to make two related proposals to deal with this issue:

- Make the tools URL for an I-D the mainstream way to access I-Ds. Specifically, include this URL in the I-D announcement mail. This would have the benefit of pointing people to related RFCs and to relevant IPR statements.
- Only non-expired drafts should be directly accessible from the tools area, or in fact have a stable IETF URL. Expired drafts will still be kept around, but will require a (freely available) tools login. So non-IETFers will have to spend 5 minutes' effort to reach them.

This would go a long way towards having I-Ds as temporary documents, like they used to be, while still letting us work with the old documents when necessary.

Thanks, Yaron
Re: publishing some standards immediately at Draft-Standard status?
On Fri Nov 13 09:42:32 2009, Martin J. Dürst wrote:
> A related point, but from another direction: My impression currently is that Full Standards never get revised again. Unless I'm wrong, we may have to revise this practice if we go to a model with only one standards category, because sometimes it indeed makes sense to update a document.

No, they do - consider the recent revising of RFC 821 and RFC 822.

Just to throw in some different experience, though, the XSF (which does the non-core work on standardization of XMPP) has a three-level standards track, documented in XEP-0001 - that's http://www.xmpp.org/extensions/xep-0001.html - which starts off with Experimental, then Draft, then Final. XEPs, unlike RFCs, get edited in place, and there's no real equivalent to Internet-Drafts - protoXEPs do exist, but they're usually not much more than sketches of what is hoped to be achieved. Experimental XEPs are conceptually the equivalent of adopted working-group drafts, therefore - albeit the XSF has no individual submission as such. Experimental XEPs can be changed by their authors without oversight or review.

Draft is roughly centered between the IETF's Proposed Standard and Draft Standard states, with the difference that the specifications are intended to be well-reviewed and stable by Draft, but haven't had to be field-proven yet. However, to hit this state they do need oversight and review, and any changes from this point on need the same.

Final is - roughly speaking - a cheap Full Standard. I've no doubt we're not quite as rigorous as the IETF is, here. Final doesn't mean immutable, either; it can still be edited.

This mechanism works within the XSF for a number of reasons, not all of which are applicable to the IETF. For example, the XMPP community as a whole is highly engaged with the standards process, by comparison with a lot of the IETF's target communities.
Thus making work more visible at an earlier stage within the standards community overflows into the wider XMPP community very easily. In turn, this supports a shorter standards track (the XSF's is, from this perspective, three-stage rather than the IETF's four-stage). We made a serious effort there, too, to ensure that even at the Experimental stage a protocol can be implemented and deployed as safely as possible - this eliminates the cases where, for example, an IMAP I-D has been deployed and subsequently changed in incompatible ways. Again, this supports wider buy-in to the specification's development, by avoiding penalizing implementors. Finally, use of stable identifiers for a particular work item means that implementors who are less well engaged can track items of interest more easily.

There may well be ways in which people feel the XSF is worse than the IETF in terms of the design of its standards process, but as a whole, having observed both cases quite closely over the past few years, the XSF has a tendency to use its process for positive goals - i.e., it helps us more than it hinders. I'm not even sure whether the differences truly matter compared with the difference that a highly engaged developer community makes, but I do think that if there are perceived problems with the IETF's standards process, then it's worth looking at other examples that do seem to function well.

Dave.

-- 
Dave Cridland - mailto:d...@cridland.net - xmpp:d...@dave.cridland.net - acap://acap.dave.cridland.net/byowner/user/dwd/bookmarks/ - http://dave.cridland.net/
Infotrope Polymer - ACAP, IMAP, ESMTP, and Lemonade
Re: Gen-ART LC review of draft-dusseault-http-patch-15.txt
Julian,

On 2009-11-13 23:35, Julian Reschke wrote:
> Brian E Carpenter wrote:
>> On 2009-11-13 20:19, Julian Reschke wrote:
>>> Brian E Carpenter wrote:
>>>> As far as I can tell, the proposal places the burden for ensuring atomicity entirely on the server. However, PATCH is explicitly not idempotent. If a client issues a PATCH, and the server executes the PATCH, but the client then fails to receive an indication of success due to an extraneous network glitch (and TCP reset?), then what prevents the client issuing the same PATCH again? In other words, absent a two-phase commit, there appears to be no transactional integrity.
>>> How is this different from PUT or POST? If you need repeatability of the request, just make the request conditional using if-match...
>> PATCH seems more dangerous than those simply because it is partial update of a resource, and I don't feel it's sufficient to say that there might be a way of detecting that it has failed to complete, if you're executing a series of patches that build on one another.
> POST can be a partial update as well, for the simple reason that POST can be *anything*. As a matter of fact, people are using POST right now, as PATCH was removed in RFC 2616.

I wasn't involved in reviewing RFC 2616 (or in the original design of POST, even though it happened about 20 metres from my office at that time). But yes, POST also lacks transactional integrity. Incidentally, that recently caused me to donate twice as much as I intended to the relief fund for the tsunami in Samoa, although the direct cause was a network glitch.

>> Talking about transactional integrity in the IETF has always been hard, for some reason. But something described as patch is exactly where you need it, IMHO.
> PATCH does not need to be special, and shouldn't be special. That being said, it wouldn't hurt to clarify that PATCH can be made repeatable (idempotent) by making it conditional.

I would make that a SHOULD and, as I said originally, be more precise about how to use strong ETags to achieve it.

Brian
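The conditional-PATCH mechanism Julian and Brian are discussing can be sketched as toy server-side logic: the client sends If-Match with the strong ETag of the version it last saw, and after a lost response the retry fails with 412 rather than applying twice. All names below are invented for illustration; this is not the draft's specification.

```python
# Sketch of If-Match on PATCH. A strong ETag is derived from the current
# representation; a stale If-Match yields 412 Precondition Failed, so a
# blind retry cannot apply the same patch a second time.
import hashlib

class Resource:
    def __init__(self, body):
        self.body = body

    @property
    def etag(self):
        # Strong ETag: changes whenever the representation changes.
        return '"%s"' % hashlib.sha1(self.body.encode()).hexdigest()[:8]

    def patch(self, if_match, append_text):
        if if_match != self.etag:
            return 412          # precondition failed: stale or already applied
        self.body += append_text
        return 200

res = Resource("hello")
tag = res.etag
first = res.patch(tag, " world")   # applied; the ETag changes as a side effect
retry = res.patch(tag, " world")   # replay after a lost response: rejected
```

A 412 on retry tells the client the resource has moved on since its saved ETag, which in the lost-response scenario usually means the first PATCH did succeed.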
Re: RFC levels and I-D levels
Yaron,

> - Only non-expired drafts should be directly accessible from the tools area, or in fact have a stable IETF URL.

Everything at tools.ietf.org is volunteer-maintained and unofficial. That's why the I-D announcements contain the URL that they do, which only allows you to recover the given version until it expires or is updated. If you quote http://tools.ietf.org/id/draft-something or http://tools.ietf.org/html/draft-something *without* the version number, you get the latest version. That's what I usually quote for external (non-IETF) use, so that the recipient will always see the latest.

> Expired drafts will still be kept around, but will require a (freely available) tools login.

This would be highly inconvenient, and the Google cache defeats its purpose anyway. I think we have an ideal setup right now (for I-Ds, that is; I have given up worrying about the standards track, since the IETF is clearly incapable of consensus on this aspect).

Brian
Re: Gen-ART LC review of draft-dusseault-http-patch-15.txt
On Nov 13, 2009, at 2:26 PM, Brian E Carpenter wrote:
> On 2009-11-13 23:35, Julian Reschke wrote:
>> Brian E Carpenter wrote:
>>> On 2009-11-13 20:19, Julian Reschke wrote:
>>>> Brian E Carpenter wrote:
>>>>> As far as I can tell, the proposal places the burden for ensuring atomicity entirely on the server. However, PATCH is explicitly not idempotent. If a client issues a PATCH, and the server executes the PATCH, but the client then fails to receive an indication of success due to an extraneous network glitch (and TCP reset?), then what prevents the client issuing the same PATCH again? In other words, absent a two-phase commit, there appears to be no transactional integrity.
>>>> How is this different from PUT or POST? If you need repeatability of the request, just make the request conditional using if-match...
>>> PATCH seems more dangerous than those simply because it is partial update of a resource, and I don't feel it's sufficient to say that there might be a way of detecting that it has failed to complete, if you're executing a series of patches that build on one another.
>> POST can be a partial update as well, for the simple reason that POST can be *anything*. As a matter of fact, people are using POST right now, as PATCH was removed in RFC 2616.
> I wasn't involved in reviewing RFC 2616 (or in the original design of POST, even though it happened about 20 metres from my office at that time). But yes, POST also lacks transactional integrity. Incidentally, that recently caused me to donate twice as much as I intended to the relief fund for the tsunami in Samoa, although the direct cause was a network glitch.

That doesn't have anything to do with POST lacking transactions. Any server is fully capable of detecting repeated transactions just by looking at the data sent.

>>> Talking about transactional integrity in the IETF has always been hard, for some reason. But something described as patch is exactly where you need it, IMHO.
>> PATCH does not need to be special, and shouldn't be special. That being said, it wouldn't hurt to clarify that PATCH can be made repeatable (idempotent) by making it conditional.
> I would make that a SHOULD and, as I said originally, be more precise about how to use strong ETags to achieve it.

No. The patch format may be designed to be repeatable (append). The patch format may include its own conditions (context diffs, before/after metadata, etc.). HTTP is neither a filesystem nor a database, so don't expect the protocol to interfere with legitimate communication just because some resources might require transactions for tightly-coupled behavior. That is not the protocol's job.

Roy
Re: Gen-ART LC review of draft-dusseault-http-patch-15.txt
Roy,

Just a couple more comments below, and then I will have said all I can usefully say.

On 2009-11-14 12:57, Roy T. Fielding wrote:
> On Nov 13, 2009, at 2:26 PM, Brian E Carpenter wrote:
>> On 2009-11-13 23:35, Julian Reschke wrote:
>>> Brian E Carpenter wrote:
>>>> On 2009-11-13 20:19, Julian Reschke wrote:
>>>>> Brian E Carpenter wrote:
>>>>>> As far as I can tell, the proposal places the burden for ensuring atomicity entirely on the server. However, PATCH is explicitly not idempotent. If a client issues a PATCH, and the server executes the PATCH, but the client then fails to receive an indication of success due to an extraneous network glitch (and TCP reset?), then what prevents the client issuing the same PATCH again? In other words, absent a two-phase commit, there appears to be no transactional integrity.
>>>>> How is this different from PUT or POST? If you need repeatability of the request, just make the request conditional using if-match...
>>>> PATCH seems more dangerous than those simply because it is partial update of a resource, and I don't feel it's sufficient to say that there might be a way of detecting that it has failed to complete, if you're executing a series of patches that build on one another.
>>> POST can be a partial update as well, for the simple reason that POST can be *anything*. As a matter of fact, people are using POST right now, as PATCH was removed in RFC 2616.
>> I wasn't involved in reviewing RFC 2616 (or in the original design of POST, even though it happened about 20 metres from my office at that time). But yes, POST also lacks transactional integrity. Incidentally, that recently caused me to donate twice as much as I intended to the relief fund for the tsunami in Samoa, although the direct cause was a network glitch.
> That doesn't have anything to do with POST lacking transactions. Any server is fully capable of detecting repeated transactions just by looking at the data sent.

Not in this case. The service was instantiated once, received a legitimate series of POSTs, including the final one triggered by a SUBMIT button, and then the final response code to the client (a standard browser) vanished, for whatever reason. [Digression about why this failure was probably caused by a NAT deleted.] The human client simply sees a browser timeout and has no possible way of knowing whether the SUBMIT executed. As it turns out, my credit card bill later showed that it did.

>>>> Talking about transactional integrity in the IETF has always been hard, for some reason. But something described as patch is exactly where you need it, IMHO.
>>> PATCH does not need to be special, and shouldn't be special. That being said, it wouldn't hurt to clarify that PATCH can be made repeatable (idempotent) by making it conditional.
>> I would make that a SHOULD and, as I said originally, be more precise about how to use strong ETags to achieve it.
> No. The patch format may be designed to be repeatable (append). The patch format may include its own conditions (context diffs, before/after metadata, etc.). HTTP is neither a filesystem nor a database, so don't expect the protocol to interfere with legitimate communication just because some resources might require transactions for tightly-coupled behavior. That is not the protocol's job.

Given that HTTP took the stateless approach that it did, I don't expect the protocol to embed transactional integrity. But I do expect the protocol spec to describe in normative terms how to detect a relatively likely type of failure. I agree that whether that is needed depends on the use case for PATCH.

Brian
Re: IETF Plenary Discussions
FWIW, this source is my hacked-up version of the original that commented out a bunch of stuff I don't need when I use this. YMMV. (And thanks to the guys who originally put this together! It's been helpful many times.)

Lars

On 2009-11-11, at 18:46, Scott Brim wrote:
> Tony Hansen allegedly wrote on 11/12/2009 11:11 AM:
>> Didn't Harald put up a timer sometimes during open mike?
> See attached ...

[Attachment: discussion-timer.html]
Re: Gen-ART LC review of draft-dusseault-http-patch-15.txt
On Nov 13, 2009, at 4:16 PM, Brian E Carpenter wrote:
> On 2009-11-14 12:57, Roy T. Fielding wrote:
>> On Nov 13, 2009, at 2:26 PM, Brian E Carpenter wrote:
>>> I wasn't involved in reviewing RFC 2616 (or in the original design of POST, even though it happened about 20 metres from my office at that time). But yes, POST also lacks transactional integrity. Incidentally, that recently caused me to donate twice as much as I intended to the relief fund for the tsunami in Samoa, although the direct cause was a network glitch.
>> That doesn't have anything to do with POST lacking transactions. Any server is fully capable of detecting repeated transactions just by looking at the data sent.
> Not in this case. The service was instantiated once, received a legitimate series of POSTs, including the final one triggered by a SUBMIT button, and then the final response code to the client (a standard browser) vanished, for whatever reason. [Digression about why this failure was probably caused by a NAT deleted.] The human client simply sees a browser timeout and has no possible way of knowing whether the SUBMIT executed. As it turns out, my credit card bill later showed that it did.

Yes, and that is exactly what would happen even if HTTP supported full two-phase commit, with all of its painful consequences, if the response to commit was lost. This is an application problem, not a protocol requirement. It can be fixed (to the degree that any fix is possible in a distributed system) by providing the client with a status resource within the same page as the SUBMIT, such that the client can see the change in server state when it takes place even if the specific 200 response is lost. In other words, provide the client with the information necessary to check the bill before the final transaction is submitted.

> ... Given that HTTP took the stateless approach that it did, I don't expect the protocol to embed transactional integrity. But I do expect the protocol spec to describe in normative terms how to detect a relatively likely type of failure. I agree that whether that is needed depends on the use case for PATCH.

The use case for PATCH is requesting relative state changes to a known resource. The way to detect the resulting state of the resource in case of communication error is to perform a GET (perhaps with Range) on the same resource. That is one of the few benefits of defining PATCH instead of just reusing POST.

Roy
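Roy's recovery recipe (after a communication error, GET the same resource and inspect the resulting state) can be sketched as client-side logic. The callables and state names below are hypothetical stand-ins for real HTTP requests, not anything defined by the draft:

```python
# Sketch of client-side recovery: issue the PATCH; if the response is
# lost, GET the resource and compare against the expected post-patch
# state to learn whether the PATCH actually took effect.
def patch_with_recovery(get, patch, expected_after):
    """get/patch are callables standing in for the HTTP requests."""
    try:
        patch()
        return "confirmed"
    except TimeoutError:
        # Response lost: fall back to a GET on the same resource.
        return "applied" if get() == expected_after else "not-applied"

# Simulated server where the patch executes but the 2xx response is lost.
state = {"body": "v1"}

def do_patch():
    state["body"] = "v2"     # server applied the patch...
    raise TimeoutError       # ...but the response never reached the client

result = patch_with_recovery(lambda: state["body"], do_patch, "v2")
```

Because PATCH targets a known resource, this check is well-defined in a way it is not for an arbitrary POST, which is the benefit Roy points to.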
Re: Gen-ART LC review of draft-dusseault-http-patch-15.txt
On Fri, 13 Nov 2009, Roy T. Fielding wrote:
> Yes, and that is exactly what would happen even if HTTP supported full two-phase commit, with all of its painful consequences, if the response to commit was lost. This is an application problem, not a protocol requirement. It can be fixed (to the degree that any fix is possible in a distributed system) by providing the client with a status resource within the same page as the SUBMIT, such that the client can see the change in server state when it takes place even if the specific 200 response is lost. In other words, provide the client with the information necessary to check the bill before the final transaction is submitted.

It can also be checked on the server if a serial number is included in the POSTed data, which can be used to determine whether the original POST was already processed.

Dave Morris
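Dave's serial-number variant can be sketched in a few lines: the form carries a serial number, and the server processes each serial at most once, so a resubmitted donation form is ignored rather than charged twice. The handler and field names here are invented for illustration:

```python
# Sketch of server-side POST deduplication by serial number: the first
# submission is processed, and any replay carrying the same serial is
# recognized and ignored.
class PaymentEndpoint:
    def __init__(self):
        self.seen = set()
        self.charges = []

    def post(self, form):
        serial = form["serial"]
        if serial in self.seen:
            return "duplicate-ignored"   # the earlier POST already went through
        self.seen.add(serial)
        self.charges.append(form["amount"])
        return "charged"

ep = PaymentEndpoint()
first = ep.post({"serial": "42", "amount": 50})
retry = ep.post({"serial": "42", "amount": 50})  # browser resubmission
```

This addresses exactly the double-donation failure Brian described: the replayed form is detected by its serial, independently of any HTTP-level mechanism.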