Re: [Catalyst] More detailed proposal for changes related to content negotiation and REST
On 13 August 2013 16:37, John Napiorkowski jjn1...@yahoo.com wrote:

[...] The main issue that I see is that we have too many ways to do exactly the same thing (return JSON for AJAX endpoints) and no clear reason why any of them is better for a given purpose. Additionally, some of them are a bit verbose, and Catalyst already has a reputation for being the long way to do simple things. However, it does seem to me, after we've all talked about it quite a bit, that at this point there really doesn't seem to be a really exciting approach that would be useful (talking about a common way to handle the serialize / format stuff, like allowing res->body to take a ref and convert it to JSON or XML, etc.), at least in terms of something that belongs in Catalyst core. Why not pull that part out of the spec and make it a separate research project, and for now continue to let the community play with various approaches. I think there appears to be less controversy on the request side, in terms of building in alternative content parsing, and possibly a first go at subroutine attribute content type negotiation. Any thoughts on the request side of the proposal?

Ultimately, from my own experience of doing XML / JSON / whatever-encoded requests and responses: I needed to be able, *once I'd dispatched the request*, to

- deserialise the request
- set a flag in the request or response to indicate that the latter needed serialising

Being able to put an attribute on an action, e.g.:

  DeserializeUsing(method_name)
  SerializeResponseUsing(other_method_name)

would be kinda cool, although I settled for pushing callbacks onto a list in $c->stash for the latter, and deserialising in part of a chain that was shared by all the requests that needed it. Sometimes you want a handful of AJAXy actions in amongst a mostly-normal templated HTML page, though, so being able to label just one or two actions in a controller that way would be neat.
All quite unnecessary, can do it by hand where needed, etc., but handy. ___ List: Catalyst@lists.scsys.co.uk Listinfo: http://lists.scsys.co.uk/cgi-bin/mailman/listinfo/catalyst Searchable archive: http://www.mail-archive.com/catalyst@lists.scsys.co.uk/ Dev site: http://dev.catalyst.perl.org/
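The stash-callback pattern Will describes can be reduced to a few lines of plain Perl. This is a hand-rolled sketch, not Catalyst code: the stash is mocked as a hashref, and the serialize_cbs key is a hypothetical name, not an actual Catalyst or plugin API.

```perl
use strict;
use warnings;
use JSON::PP ();   # core module; canonical() gives stable key order

my $json  = JSON::PP->new->canonical;
my $stash = {};

# In an action: record *how* to serialize, instead of serializing now.
my $data = { id => 1, name => 'widget' };
push @{ $stash->{serialize_cbs} ||= [] }, sub { $json->encode($data) };

# In a shared end action: if any callback was registered, run the last
# one pushed to produce the response body.
my $body;
if (my $cbs = $stash->{serialize_cbs}) {
    $body = $cbs->[-1]->();
}
print $body, "\n";   # {"id":1,"name":"widget"}
```

The point of deferring through a callback is that the ordinary templated actions in the same controller never touch serialize_cbs, so the shared end action leaves them alone.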
Re: [Catalyst] More detailed proposal for changes related to content negotiation and REST
On 14 August 2013 10:53, Will Crawford billcrawford1...@gmail.com wrote:

[...] Also, making it easier to restore a session id after deserialising a request. I was asked to implement an API that passes session IDs with an XML request body, rather than in the URL or a cookie. It turned out to be quite hard to restore the session, because even if I didn't write to the session at any point before that, anything that called ->sessionid (to see if there was one) caused this:

  sub _load_sessionid {
      my $c = shift;
      return if $c->_tried_loading_session_id;
      $c->_tried_loading_session_id(1);

i.e. once you've checked for a session id, it won't try to load it later, even if you try to set a session id. We worked around this, at mst's suggestion, by creating a separate app and mounting it in myapp.psgi; would have preferred to just be able to call a manually_set_this_session_id_and_restore_it method, but couldn't see one that was part of the public Session API.
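The guard Will quotes boils down to a "tried once, never again" memoization, which is why setting the id later is a no-op. Here is a runnable mock of that behavior; the class and fields are illustrative, not the real Catalyst::Plugin::Session internals.

```perl
use strict;
use warnings;

package MockSession;
sub new { bless { _tried => 0, sessionid => undef, from_cookie => undef }, shift }

sub _load_sessionid {
    my $self = shift;
    return if $self->{_tried};   # second and later calls are no-ops...
    $self->{_tried} = 1;
    $self->{sessionid} = $self->{from_cookie};  # ...so this never re-runs
}

package main;
my $c = MockSession->new;
$c->_load_sessionid;            # early check: no id available yet, finds nothing
$c->{from_cookie} = 'abc123';   # later: id recovered from the XML request body
$c->_load_sessionid;            # guard short-circuits; sessionid stays undef
print defined $c->{sessionid} ? $c->{sessionid} : 'undef', "\n";   # undef
```

This is the shape of the problem rather than a fix: the workaround Will used (a separate mounted app) sidesteps the guard entirely rather than resetting it.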
Re: [Catalyst] More detailed proposal for changes related to content negotiation and REST
On Wednesday, August 14, 2013 5:53 AM, Will Crawford billcrawford1...@gmail.com wrote:

On 13 August 2013 16:37, John Napiorkowski jjn1...@yahoo.com wrote: [...]

Ultimately, from my own experience of doing XML / JSON / whatever-encoded requests and responses: I needed to be able, *once I'd dispatched the request*, to

- deserialise the request
- set a flag in the request or response to indicate that the latter needed serialising

I think the first half of the proposed spec would offer support for this, although at the application, not controller or action, level. I'd like to see it 'down the stack', but I think it's also a fair iteration and adds a lot of value to have it globally in the request object. Eventually we'll expose at the action level a req->body_fh for when you need to have more custom handling.
My main issue is that it feels not worth the trouble, since the most popular PSGI handlers typically buffer request input anyway. So for now, making this a request object function seems like the correct approach.

Being able to put an attribute on an action, e.g.:

  DeserializeUsing(method_name)
  SerializeResponseUsing(other_method_name)

would be kinda cool, although I settled for pushing callbacks onto a list in $c->stash for the latter, and deserialising in part of a chain that was shared by all the requests that needed it. Sometimes you want a handful of AJAXy actions in amongst a mostly-normal templated HTML page, though, so being able to label just one or two actions in a controller that way would be neat. All quite unnecessary, can do it by hand where needed, etc., but handy.

Yes, and this is the part where, for these types of services (RESTful services, basic JSON RPC, AJAX endpoints), the View part of MVC can often feel not particularly useful. Or maybe it should be, but we are not doing it right :) This is why the initial proposal laid out some ideas about just serializing directly in the response object (the flip side of deserializing globally in the request object). However, unlike with the request object, there's no underlying architectural reason driving this. I did a mock-up similar to the subroutine attribute approach you outlined; FWIW, there's a working prototype on github, which I've shared earlier, http://github.com/jjn1056/CatalystX-Example-Todo which is just an action role.

John
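As a rough illustration, a SerializeResponseUsing(method_name)-style attribute ultimately has to resolve a name to a serializer and apply it to the action's data. This is a hand-rolled plain-Perl sketch of that lookup step, not code from the CatalystX-Example-Todo prototype; the registry and helper names are hypothetical.

```perl
use strict;
use warnings;
use JSON::PP ();

# Hypothetical registry an attribute handler might consult at dispatch time.
my %serializers = (
    to_json => sub { JSON::PP->new->canonical->encode($_[0]) },
    to_csv  => sub { join ',', map { $_[0]->{$_} } sort keys %{ $_[0] } },
);

# What the action wrapper would do with the attribute's argument.
sub apply_serializer {
    my ($name, $data) = @_;
    my $cb = $serializers{$name}
        or die "no serializer registered for '$name'";
    return $cb->($data);
}

print apply_serializer(to_json => { a => 1, b => 2 }), "\n";   # {"a":1,"b":2}
print apply_serializer(to_csv  => { a => 1, b => 2 }), "\n";   # 1,2
```

In a real action role the name would come from the parsed attribute and the data from the stash or the action's return value; the lookup-and-apply core stays this small.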
Re: [Catalyst] More detailed proposal for changes related to content negotiation and REST
On Wednesday, August 14, 2013 6:15 AM, Will Crawford billcrawford1...@gmail.com wrote:

On 14 August 2013 10:53, Will Crawford billcrawford1...@gmail.com wrote: [...] Also, making it easier to restore a session id after deserialising a request. [...] We worked around this, at mst's suggestion, by creating a separate app and mounting it in myapp.psgi; would have preferred to just be able to call a manually_set_this_session_id_and_restore_it method, but couldn't see one that was part of the public Session API.

Ultimately I'd like to refactor the Catalyst main request building flow to be layered using PSGI middleware or similar (or at least that is my thinking now), and part of that would allow us to use middleware for some of the stuff we use plugins for (like authentication, etc). I think over the long term the plugin approach has not really scaled well, and often we find it's hard to fix things in Catalyst core because some popular plugins would break. I think we could make this more robust, but that is probably not anytime soon, although one step might be to try and refactor the Catalyst dispatcher into PSGI-style middleware (to make it easier to plug in a new one, for example).

John
RE: [Catalyst] More detailed proposal for changes related to content negotiation and REST
Hi John / Alex,

Am I missing something here? In my model I create and return the JSON...

  use JSON qw(encode_json);
  my %json_hash;
  # ... add some hash stuff to be made into JSON ...
  # return JSON
  return encode_json \%json_hash;

In my controller I output it to the view / browser:

  $c->response->body( $c->model('My::Model')->my_method_JSON );

With an AJAX call I retrieve the JSON:

  // GET SOME JSON VIA AJAX
  var myJSON;
  function myAJAXFunction() {
      // call AJAX
      makeRequest('/my_controller/my_method_JSON', '', setJSON);
  }
  function setJSON(json) {
      // store JSON returned from AJAX call
      myJSON = $.parseJSON(json);
  }

This is working fine. What shift in paradigm are you trying to create, and how would it affect / alter what I currently do, or indeed make it any easier / simpler? I can't see it getting much simpler than the few lines of code I already have, can it? Or am I doing something wrong?

Cheers,
Craig Chant.

-Original Message-
From: John Napiorkowski [mailto:jjn1...@yahoo.com]
Sent: 12 August 2013 21:27
To: The elegant MVC web framework
Subject: Re: [Catalyst] More detailed proposal for changes related to content negotiation and REST

On Monday, August 12, 2013 2:33 PM, Alexander Hartmaier alexander.hartma...@t-systems.at wrote: On 2013-08-12 16:58, John Napiorkowski wrote: Hey Bill (and the rest of you lurkers!) I just updated the spec over at https://github.com/perl-catalyst/CatalystX-Proposals-REStandContentNegotiation/blob/master/README.pod I decided to remove regular expression matching from the body parser stuff, which I think resolves the merging questions (since now there will not be a possibility that more than one can match the request). I think we can add this back eventually using some standard request content negotiation, using mime type patterns and quality settings, so that we can have some rules that dictate what is the best match, rather than try to invent our own.
For example: https://metacpan.org/module/HTTP::Headers::ActionPack::ContentNegotiation The idea would be to not reinvent. I think that instead of doing an equality match here we could just use something like this to figure out what the best match is; it works pretty well. Thoughts? jnap

Hi John, I thought about it for the last few days and wonder why the, let's call it rendering, of the data isn't left to the view as defined by the MVC pattern? I'd expect that a different view is used depending on the negotiated content-type. How do other MVC frameworks choose the view to use? Should a single action be able to limit the output format, or is controller level granular enough? Best regards, Alex

Alex, I think you put your finger on one of the major uneasinesses I (and others) have around the idea of having all these registered formatters in the global application model. Yes, in theory it feels like we are cheating on the View side of MVC. I have personally always thought that Catalyst doesn't exactly get it right the way it is (I think C and V are actually a little too detached, for one thing) and that leads to odd code sometimes. The commonly used Catalyst::Action::RenderView is a bit too detached for my taste. And what we call a View tends to mostly just be a View handler (or factory, I guess). On the other hand, the basic idea of separation of concerns is sound. I think the main thing is that this is not intended to replace the View, but is for the simple cases where people just want to quickly serialize data (like for all those AJAX endpoints you need to do nowadays; not full-on RESTful APIs but quick and dirty communication between the back and front end). Maybe that's not a great thing for Catalyst (and honestly I put this out there in the hope of provoking some strong reactions).
Personally I prefer to use templates even for creating JSON output; I think you get a cleaner separation that is easier to maintain over time (I really don't like it when I see something like ->body(encode_json $sql->run->get_all_rows); that feels fragile to me). On the other hand, I see the attraction of some of the more lightweight web frameworks where they make it easy to just spew out JSON. This is partly why I sketched out an action/controller level alternative, with the proposed response->body_format thing and the proposed new action subroutine attributes. Just to recap:

  sub myaction :Local {
      my ($self, $c) = @_;
      # ...
      $c->response->format(
          'application/json' => sub { encode_json $stuff },
          # ...
      );
  }

I think this approach feels similar to how some other frameworks operate. Some offer more sugary syntax for the common stuff, perhaps:

  $c->response
      ->json( sub { ... } )
      ->html( sub { ... } );

- ...
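One way to picture the proposed response->format(...) is as a dispatch table keyed by content type, with the request's Accept header picking the winner. The following is a plain-Perl reduction under simplified assumptions (first acceptable type listed wins; q-values are ignored here); the function name and semantics are hypothetical, not the proposed Catalyst implementation.

```perl
use strict;
use warnings;

# Pick the callback whose content type matches the request's Accept
# preference; the callback is only run for the chosen representation.
sub format_response {
    my ($accept, %formats) = @_;
    for my $type (split /\s*,\s*/, $accept) {
        $type =~ s/;.*$//;   # drop q-value params for this sketch
        return ($type, $formats{$type}->()) if $formats{$type};
    }
    die "no acceptable representation";
}

my ($type, $body) = format_response(
    'application/json, text/html;q=0.8',
    'application/json' => sub { '{"ok":true}' },
    'text/html'        => sub { '<p>ok</p>' },
);
print "$type => $body\n";   # application/json => {"ok":true}
```

The laziness matters: only the serializer for the negotiated type runs, which is what keeps this cheaper than building every representation up front.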
Re: [Catalyst] More detailed proposal for changes related to content negotiation and REST
Craig,

Great points, and thanks for the code example (I wish more stuff like this was floating around in Catalyst-oriented blogs, etc.)

Just stepping back a bit, the purpose of the research spike was to see what, if any, features could be incorporated into Catalyst core to improve and streamline how we work with some of the contemporary use cases around AJAX and REST. It's very possible that the answer to that is no, there's nothing that belongs in Catalyst core around this. I think the reason for the research is that a lot of modern web frameworks do provide such support out of the box, and the fact that Catalyst doesn't have it at first might seem like a deficiency in the framework. Other reasons for baking this into core might come to mind (and have various levels of validity):

-- having a common approach could result in the benefits of DRY (better, more secure code, less stuff for newbies to figure out)
-- for the common cases it would help people new to a project (instead of having to figure out how this project handles AJAX, or the five different ways it's handled, for example)
-- a common, core approach might help focus the community a bit by removing a common use case that has no clearly agreed upon approach

I'm personally not in love with the 'gigantic core' approach, but since Catalyst already incorporates support for classic form parameters, it does I think make sense (for at least the 5.9 series) for us to support common AJAX use cases. And I do think some of the things people would like to do with REST, such as match actions based on content negotiation, are valid. The main issue that I see is that we have too many ways to do exactly the same thing (return JSON for AJAX endpoints) and no clear reason why any of them is better for a given purpose. Additionally, some of them are a bit verbose, and Catalyst already has a reputation for being the long way to do simple things.
However, it does seem to me, after we've all talked about it quite a bit, that at this point there really doesn't seem to be a really exciting approach that would be useful (talking about a common way to handle the serialize / format stuff, like allowing res->body to take a ref and convert it to JSON or XML, etc.), at least in terms of something that belongs in Catalyst core. Why not pull that part out of the spec and make it a separate research project, and for now continue to let the community play with various approaches. I think there appears to be less controversy on the request side, in terms of building in alternative content parsing, and possibly a first go at subroutine attribute content type negotiation. Any thoughts on the request side of the proposal?

Thanks everyone for joining in!

John

On Tuesday, August 13, 2013 4:16 AM, Craig Chant cr...@homeloanpartnership.com wrote: [...]
Re: [Catalyst] More detailed proposal for changes related to content negotiation and REST
On 2013-08-13 17:37, John Napiorkowski wrote:

[...] The main issue that I see is that we have too many ways to do exactly the same thing (return JSON for AJAX endpoints) and no clear reason why any of them is better for a given purpose. Additionally, some of them are a bit verbose, and Catalyst already has a reputation for being the long way to do simple things.

I fully agree with you on those points!
However, it does seem to me, after we've all talked about it quite a bit, that at this point there really doesn't seem to be a really exciting approach that would be useful (talking about a common way to handle the serialize / format stuff, like allowing res->body to take a ref and convert it to JSON or XML, etc.), at least in terms of something that belongs in Catalyst core. Why not pull that part out of the spec and make it a separate research project, and for now continue to let the community play with various approaches. I think there appears to be less controversy on the request side, in terms of building in alternative content parsing, and possibly a first go at subroutine attribute content type negotiation.

Until now res->body was populated by the View, not the controller directly, besides some exception cases like streaming files from disk.

Any thoughts on the request side of the proposal?

The request param parsing seems to be completely decoupled from the response at the moment, so I see no problem in starting with adding support for different request content-types first, then looking at the response side and how those two can be coupled. In RFC 2616 [1], content negotiation includes not only content-type but also language and encoding. We already cored the encoding handling, which is a step in the right direction. If, for example, the API of an application is only capable of providing data in English but the website should be I18N-aware, that is something the dispatcher has to be able to handle. I've never seen agent-driven negotiation as described in section 12.2 of the RFC used on the web, so I'd leave that out of scope for now. Transparent negotiation (section 12.3) sounds like something a Plack middleware might be best suited for.

What do you think about test-driven development for these changes? Let's write test cases for how we want Catalyst to act before starting to think about how to implement it.

Thanks everyone for joining in!
John

--
Alex (abraxxa)

[1] http://tools.ietf.org/html/rfc2616#page-71

On Tuesday, August 13, 2013 4:16 AM, Craig Chant cr...@homeloanpartnership.com wrote: [...]
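A test-first start in the spirit of Alex's suggestion might state the desired request-side behavior as assertions before any implementation exists. Here parse_language is a hypothetical helper standing in for whatever the core API ends up being, stubbed just enough (deliberately naively, with no q-value handling) that the file runs; q-value ordering is exactly the kind of behavior the real tests would pin down.

```perl
use strict;
use warnings;
use Test::More tests => 2;   # core module

# Stub standing in for the future language-negotiation API; the tests,
# not this stub, are the specification.
sub parse_language {
    my ($header) = @_;
    my ($first) = split /\s*,\s*/, $header;
    $first =~ s/;.*$//;
    return $first;
}

is parse_language('en-GB, de;q=0.7'), 'en-GB',
    'first-listed language wins in this naive stub';
is parse_language('de'), 'de',
    'single language passes through';
```

Writing the cases first also forces the API question (does this live on the request object, the dispatcher, or middleware?) to be answered before any code is committed.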
Re: [Catalyst] More detailed proposal for changes related to content negotiation and REST
On Tuesday, August 13, 2013 12:42 PM, Alexander Hartmaier alexander.hartma...@t-systems.at wrote:

On 2013-08-13 17:37, John Napiorkowski wrote: [...] The main issue that I see is that we have too many ways to do exactly the same thing (return JSON for AJAX endpoints) and no clear reason why any of them is better for a given purpose.
Additionally, some of them are a bit verbose, and Catalyst already has a reputation for being the long way to do simple things.

I fully agree with you on those points!

[...]

Until now res->body was populated by the View, not the controller directly, besides some exception cases like streaming files from disk.

I've gone both ways on this a lot, and there is some acceptance, when the view is going to be very simple (such as JSON encoding of a Perl data reference), that letting the controller take on some view aspects is allowed. It is how the prior art (Catalyst::Action::REST, and the DBIC API controller) works. Finding the sweet spot between easy and useful is what I'd aim at, and I haven't found it yet :)

Any thoughts on the request side of the proposal?

The request param parsing seems to be completely decoupled from the response at the moment, so I see no problem in starting with adding support for different request content-types first, then looking at the response side and how those two can be coupled.

Yeah, that side of things is pretty straightforward, and even hacking into HTTP::Body is easy, if we wanted to go that way.

In RFC 2616 [1], content negotiation includes not only content-type but also language and encoding.
We already cored the encoding handling, which is a step in the right direction.

Right, although most of the services I work with tend to focus on the content type a lot, and generally we use the language stuff for translations inside the templates, rather than to run different actions, although that would certainly be valid.

If, for example, the API of an application is only capable of providing data in English but the website should be I18N-aware, that is something the dispatcher has to be able to handle.

Yup, I've given this a lot of thought and figure it's something that should be part of one of the Catalyst I18N systems, at least for now. I can see some use for dispatching on language accept, just not as much as for encoding and content types.

I've never seen agent-driven negotiation as described in section 12.2 of the RFC used on the web, so I'd leave that out of scope for now.

Yeah, and actually I think Catalyst could do this, just it would be a manual
Re: [Catalyst] More detailed proposal for changes related to content negotiation and REST
Hey Bill (and the rest of you lurkers!)

I just updated the spec over at https://github.com/perl-catalyst/CatalystX-Proposals-REStandContentNegotiation/blob/master/README.pod I decided to remove regular expression matching from the body parser stuff, which I think resolves the merging questions (since now there will not be a possibility that more than one can match the request). I think we can add this back eventually using some standard request content negotiation, using mime type patterns and quality settings, so that we can have some rules that dictate what is the best match, rather than try to invent our own. For example: https://metacpan.org/module/HTTP::Headers::ActionPack::ContentNegotiation The idea would be to not reinvent. I think that instead of doing an equality match here we could just use something like this to figure out what the best match is; it works pretty well. Thoughts?

jnap

On Friday, August 9, 2013 5:38 PM, John Napiorkowski jjn1...@yahoo.com wrote: On Friday, August 9, 2013 4:52 PM, Bill Moseley mose...@hank.org wrote: On Fri, Aug 9, 2013 at 12:11 PM, John Napiorkowski jjn1...@yahoo.com wrote: What's the use case you have in mind? Something like first check for something like 'application/vnd.mycompany.user+json' and then fall back to 'application/(?:vnd.*\+)?json' if you don't find it? Is that an actual case you've come across?

Ya, that's kind of what I was thinking. Or also having a final fallback parser that tries to figure out the type by other means than just looking at the Content-Type provided in the request. Or even a '.*' final match-anything that does some special logging. It would be easy enough to find out if application/json was in the array more than once by mistake.

Seems like a reasonable use case then, although I would encourage future development to aim at putting more specificity in the controller, rather than relying on the global application. The primary reason to have anything here at all is to have a base people can build on.
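The quality-settings best match mentioned above can be sketched in a few lines of core Perl, in the spirit of HTTP::Headers::ActionPack::ContentNegotiation but without the module. This is a toy, not that library's algorithm: it scores each available type by the client's q-value (falling back to */* and then 0) and ignores wildcard subtype matching and other RFC details.

```perl
use strict;
use warnings;

# Given an Accept header and the types we can produce, return the
# available type the client rates highest, or undef if none score > 0.
sub best_match {
    my ($accept_header, @available) = @_;
    my %q;
    for my $clause (split /\s*,\s*/, $accept_header) {
        my ($type, @params) = split /\s*;\s*/, $clause;
        my ($quality) = map { /^q=([\d.]+)$/ ? $1 : () } @params;
        $q{$type} = defined $quality ? $quality : 1;   # q defaults to 1
    }
    my ($best) = sort {
        ($q{$b} // $q{'*/*'} // 0) <=> ($q{$a} // $q{'*/*'} // 0)
    } @available;
    my $score = $q{$best} // $q{'*/*'} // 0;
    return $score > 0 ? $best : undef;
}

print best_match('text/html;q=0.8, application/json',
                 'text/html', 'application/json'), "\n";   # application/json
```

The useful property for the body-parser question is the undef case: if nothing the server offers scores above zero, there is no match at all, rather than a wrong equality-match fallback.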
I do fear the globalness of it, but it seems not an unreasonable compromise based on how Catalyst actually works today.

We've spoken before about parsing larger incoming and chunked data. I would love to address this, but right now it seems like something we need better agreement on at the PSGI level. For example, since Starman already buffers incoming input, it feels silly to me to have Catalyst then try to re-stream that. You've already paid the full price of buffering in terms of memory and performance, right? Or am I not understanding?

I added a Plack middleware to handle chunked encoded requests -- I needed it for the Catalyst dev server and for Apache/mod_perl. Yes, Starman already de-chunks and buffers and works perfectly. Apache actually de-chunks the request, but doesn't update the Content-Length header and leaves on the Transfer-Encoding: chunked header. So, sadly, I do flush this to a temporary file only to get the content-length to make Catalyst happy.

Right, so I think in the end we all agreed it was PSGI that should be responsible for dealing with chunks or whatever (based on the HTTP-level support of the server). The only thing would be: could there be some sane approach that exposed the input stream as a non-blocking file handle that has not already been read into a buffer (memory or otherwise)? I do see the possible advantage there for efficiently processing large POSTs or PUTs. However, again, this needs to be done at the PSGI level, something like input.stream or similar. That would smooth over chunked versus non-chunked and expose a readable stream of the input that has not yet been buffered. I'd really like to have something at the Catalyst level that sanely achieves this end, but I think part of the price we paid when going to PSGI at the core is that most of the popular Plack handlers are pre-loading and buffering input, even large request input.
This seems to be an area where it might behoove us to work with the PSGI group to find something stable. Even the optional psgix.io isn't always going to work out, since some people don't want to support that in the handler (it's a somewhat vague definition, I guess, and makes people uncomfortable). Until then, or until someone helps me understand that my thinking is totally wrong on this score, it seems the best thing to do is to put this out of scope for now. That way we can move on to supporting a goodly number of real use cases. Agreed. I intended to say that $_ equals a string that is the buffered request body. This way we can reserve other args for handling the future streaming case. I was actually pondering something where the sub ref returns a sub ref that gets called over and over to do the parse. I just
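The "sub ref returns a sub ref" idea John is pondering could look something like the following. This is a hypothetical sketch of the proposal, not any shipped Catalyst API; the registration hash and closure protocol are invented for illustration:

    # Hypothetical body-parser registry: the registered subref returns
    # a closure. Feed the closure data chunks as they arrive (buffered
    # or streamed); call it with no arguments to finalize the parse.
    use JSON::PP ();

    my %body_parsers = (
        'application/json' => sub {
            my @chunks;
            return sub {
                my ($chunk) = @_;
                if ( defined $chunk ) { push @chunks, $chunk; return }
                return JSON::PP::decode_json( join '', @chunks );
            };
        },
    );

    my $parser = $body_parsers{'application/json'}->();
    $parser->($_) for '{"user":', '"jnap"}';   # works chunk by chunk
    my $data = $parser->();                    # { user => 'jnap' }

The same registration would cover today's fully buffered case (one call with the whole body) and a future streaming case, which is presumably the appeal.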
Re: [Catalyst] More detailed proposal for changes related to content negotiation and REST
On 2013-08-12 16:58, John Napiorkowski wrote: [...]

Hi John, I thought about it for the last few days and wonder why the, let's call it rendering, of the data isn't left to the view, as defined by the MVC pattern? I'd expect that a different view is used depending on the negotiated content type. How do other MVC frameworks choose the view to use? Should a single action be able to limit the output format, or is controller level granular enough? Best regards, Alex
Re: [Catalyst] More detailed proposal for changes related to content negotiation and REST
On Monday, August 12, 2013 2:33 PM, Alexander Hartmaier alexander.hartma...@t-systems.at wrote: [...]

Alex, I think you put your finger on one of the major uneasinesses I (and others) have around the idea of having all these registered formatters in the global application. Yes, in theory it feels like we are cheating on the View side of MVC. I have personally always thought that Catalyst doesn't exactly get it right the way it is (I think C and V are actually a little too detached, for one thing) and that leads to odd code sometimes. The commonly used Catalyst::Action::RenderView is a bit too detached for my taste.
And what we call a View tends to mostly just be a View handler (or factory, I guess). On the other hand, the basic idea of separation of concerns is sound. The main thing is that this is not intended to replace the View, but to cover the simple cases where people just want to quickly serialize data (like all those AJAX endpoints you need nowadays -- not full-on RESTful APIs, but quick and dirty communication between the back and front end). Maybe that's not a great thing for Catalyst (and honestly I put this out there in the hope of provoking some strong reactions). Personally I prefer to use templates even for creating JSON output; I think you get cleaner separation that is easier to maintain over time. (I really don't like it when I see something like ->body( json_encode $sql->run->get_all_rows ). That feels fragile to me.) On the other hand, I see the attraction of some of the more lightweight web frameworks where they make it easy to just spew out JSON.

This is partly why I sketched out an action/controller level alternative, with the proposed response->body_format thing and the proposed new action subroutine attributes (just to recap):

    sub myaction :Local {
        my ($self, $c) = @_;
        # ...
        $c->response->format(
            'application/json' => sub { json_encode $stuff },
            # ...
        );
    }

I think this approach feels similar to how some other frameworks operate. Some offer more sugary syntax for the common stuff, perhaps:

    $c->response
      ->json( sub { ... } )
      ->html( sub { ... } )
      -> ...
      -> ... ;

and I guess we could say there's a shortcut to forward to a View instead:

    $c->response
      ->json('JSON')
      ->html('TTHTML')
      -> ...
      -> ... ;

But that can all be worked out after the basic thought is in place. And again, some other frameworks (some Java systems) use annotations similar to our action-level subroutine attributes. I think we also try to hit that with the proposed Provides/Consumes attributes.
The main thing is I can't see a way to properly do content negotiation with subroutine attributes given the existing Catalyst dispatcher (basically the system is mostly first-match-wins). Perhaps that is all we need, and we can skip the idea of needing default global body formatters? Or maybe we'd prefer to think about leveraging more of Web::Dispatch; mst has this great notion of setting response filters, which we could get for free if we use Web::Dispatch. Instead of setting a global point for the encoding, we could control it more granularly that way. I guess ultimately it comes down to the question of whether we need a full-on View for handling REST and straight-up data encoding. Personally I do think there is a use case here that Catalyst isn't hitting right, and I am pretty sure some of the ideas in the stand-alone Catalyst-Action-REST do apply, but I'd like to see that more native and probably scoped more tightly (I don't
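The "response filters" notion mentioned above could be as simple as a stack of callbacks, each getting a chance to transform the body before it is written out. This is a rough, self-contained illustration of the general mechanism, not Web::Dispatch's actual API:

    # Sketch: response filters as a stack of body-transforming closures.
    use JSON::PP qw( encode_json );

    my @response_filters;
    sub add_response_filter { push @response_filters, shift }

    sub finalize_body {
        my ($body) = @_;
        # Each filter receives the current body and returns the new one.
        $body = $_->($body) for @response_filters;
        return $body;
    }

    # Granular, per-request control: only this request gets JSON encoding.
    add_response_filter( sub { encode_json( $_[0] ) } );
    my $out = finalize_body( { ok => 1 } );   # '{"ok":1}'

The attraction is that encoding stops being a single global decision point; any action can push its own filter.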
Re: [Catalyst] More detailed proposal for changes related to content negotiation and REST
I'm not sure if it fits this discussion, but I think automatic JSON serialization is problematic, since the serializer often has to guess whether a value needs to be serialized as a string or a number. This is not a problem if you consume it with Perl or JavaScript, but it might be problematic for some other languages (I have not seen it, though). Regards, Marius

On 12 August 2013 22:26, John Napiorkowski jjn1...@yahoo.com wrote: [...]
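Marius's point can be seen with core JSON::PP: Perl scalars carry no explicit type, so the serializer infers one from how the value was created (and, on some Perl/JSON versions, how it was last used, which makes the output even less predictable):

    # The same logical value serializes differently depending on
    # whether the scalar looks like a number or a string to Perl.
    use JSON::PP qw( encode_json );

    print encode_json( { id => 42 } ),   "\n";   # {"id":42}
    print encode_json( { id => "42" } ), "\n";   # {"id":"42"}

A strictly typed consumer that expects `id` to always be a number can break when a code path happens to hand the encoder a stringified copy.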
Re: [Catalyst] More detailed proposal for changes related to content negotiation and REST
On Monday, August 12, 2013 4:56 PM, Marius Olsthoorn olst...@gmail.com wrote: [...]

Any more votes in favor of dropping the automatic global serialization stuff? If so I will edit the spec and resubmit. Let's not be afraid to kill bad ideas! John
Re: [Catalyst] More detailed proposal for changes related to content negotiation and REST
On 2013-08-12 22:26, John Napiorkowski wrote: [...]

First, thanks for the extensive and quick answer!
[...] What I have done in my largest Catalyst app, which feeds data to ExtJS using Catalyst::Controller::DBIC::API, is a custom DBIC result class that transforms the values in the HRI-like data structure before it gets serialized to JSON. That extra step of transforming data before serializing it belongs, IMHO, in the View, not the controller, because different output formats might need different transformations, e.g. datetime format. If content negotiation is the feature to add, I'd define which View is responsible for which mime type. The View class would then become more than mere glue for e.g. JSON encoding. That should be definable per app, overridden by the controller config, which can be overridden by an action too, by e.g. setting a stash or other config var.
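The custom result class Alex describes might look roughly like this. Everything here is hypothetical (class name, column, transformation); it just illustrates the pattern of post-processing HashRefInflator-style rows before serialization:

    # Hypothetical sketch: a result class that massages the plain-hash
    # rows before they reach the JSON encoder.
    package My::Schema::ResultClass::ForExtJS;

    use strict;
    use warnings;
    use DBIx::Class::ResultClass::HashRefInflator;

    sub inflate_result {
        my ( $class, @args ) = @_;
        my $row = DBIx::Class::ResultClass::HashRefInflator->inflate_result(@args);
        # Normalize per-format concerns here, e.g. datetime layout
        # (column name 'created_at' is invented for this example):
        $row->{created_at} =~ s/ /T/ if defined $row->{created_at};
        return $row;
    }

    1;

A resultset would then opt in with `$rs->result_class('My::Schema::ResultClass::ForExtJS')`, keeping the transformation out of the controller.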
[...] That isn't introspect-able, which might be handy for the debug screen to see which action takes what content type. [...]
Re: [Catalyst] More detailed proposal for changes related to content negotiation and REST
On Monday, August 12, 2013 5:56 PM, Alexander Hartmaier alexander.hartma...@t-systems.at wrote: [...]

I do think that I'd like the controller side to be responsible for deciding what view to hook up, since the controller's primary responsibility is to do all the glue-up. However, yeah, there are overlapping concerns for the HTTP header stuff (content type and length, etc.); it's rational for the view to have some dominion over that (although I am against HTTP status in the view). I do like your thinking here: the idea is that the application would introspect each view and find out what content type that view wants to handle. The only missing bit is that currently the definition of a View is pretty lightweight. We could probably create a Catalyst::ViewRole::HTTP to handle the content type side of content negotiation; then Catalyst could review the Views and, if that role or a duck type based on that role exists, use that. Sounds like something that could work
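A minimal sketch of the Catalyst::ViewRole::HTTP idea: a role that gives each view a declared content type the application can introspect during negotiation. The role name, attribute, and view class are all hypothetical, chosen only to make the shape concrete:

    # Hypothetical role: a view declares the mime type it renders, so
    # the dispatcher can match views against the negotiated type.
    package Catalyst::ViewRole::HTTP;
    use Moose::Role;

    has content_type => ( is => 'ro', isa => 'Str', required => 1 );

    package MyApp::View::JSON;
    use Moose;
    with 'Catalyst::ViewRole::HTTP';

    # A real view would also implement the rendering method; omitted here.
    sub process { ... }

    1;

At setup time Catalyst could scan registered views, collect `content_type` from those that consume the role (or duck-type it), and build the provided-types list for negotiation automatically.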
Re: [Catalyst] More detailed proposal for changes related to content negotiation and REST
On Thu, Aug 8, 2013 at 9:27 PM, John Napiorkowski jjn1...@yahoo.com wrote: https://github.com/perl-catalyst/CatalystX-Proposals-REStandContentNegotiation

I currently extend HTTP::Body in a similar way to what you describe, but I have them as separate classes, so I just do: use My::Multipart; then in that I hack in my type for HTTP::Body:

    package My::MultiPart;
    use HTTP::Body;
    $HTTP::Body::TYPES->{'multipart/form-data'} = 'My::MultiPart';

As you propose, mapping a mime type to a sub seems pretty flexible. I assume the sub could return a filehandle. File uploads still need to stream the uploads to disk while making the parameters available, as HTTP::Body does now. I like the regex mimetype keys, but should they be an array instead of a hash so we can set the order in which they are checked? I think we must consider large inputs/streams. You say $_ is the request content. Is that the full request with headers? Is the request already de-chunked, for example? Or am I thinking too low-level? In some apps I'm using Catalyst::Action::REST, and in others I have some custom code where I use HTTP::Body to parse JSON. I'm mixed about having the request data end up in $c->req->parameters vs $c->req->data. I don't really see why form-data/urlencoded should be treated differently than other encodings like JSON. I'm not quite sure about $c->res->body( \%data ); I think body should be the raw body. What does $c->res->body return? The serialized JSON? The original hashref? If a parser dies, what kind of exception is thrown? You say they would not set any response status, but wouldn't we want to catch the error and then set a 400? (I use exception objects that carry HTTP status, a message to return in the body, and a message used for logging at a given level.) I'm not sure what to think of the Provides and Consumes controller attributes. It's one thing to realize early that the client cannot handle any response I'm willing to provide (json, yaml, whatever), but I'm worried this would blur separation of concerns.
That is, by the time the controller runs we would have already decoded the request and want to just work with the data provided. And would we want different data returned if the client is asking for JSON vs YAML? Maybe I'm missing the point of that.

BTW - in practice I find it pretty handy to be able to specify/override the response encoding via a query param.

-- Bill Moseley mose...@hank.org

___ List: Catalyst@lists.scsys.co.uk Listinfo: http://lists.scsys.co.uk/cgi-bin/mailman/listinfo/catalyst Searchable archive: http://www.mail-archive.com/catalyst@lists.scsys.co.uk/ Dev site: http://dev.catalyst.perl.org/
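As an editorial aside, the $HTTP::Body::TYPES hack Bill describes can be sketched as below. My::JSONBody is a hypothetical stand-in for his multipart class, and %TYPES is HTTP::Body's internal type map rather than documented API, so this may break across versions. In real use the class lives in its own .pm file (as Bill says, "separate classes"); the %INC line only exists to make this single-file sketch satisfy HTTP::Body's internal require.

```perl
use strict;
use warnings;

# Hypothetical parser class; inherits HTTP::Body's buffering behaviour.
package My::JSONBody;
use parent 'HTTP::Body::OctetStream';

package main;
use HTTP::Body;

# Single-file trick only: mark the package as already loaded so
# HTTP::Body's internal "require" of the class name succeeds.
BEGIN { $INC{'My/JSONBody.pm'} = __FILE__ }

# Poke the internal Content-Type => parser-class map.
$HTTP::Body::TYPES->{'application/json'} = 'My::JSONBody';

# HTTP::Body->new now dispatches application/json bodies to our class;
# all other types keep the stock behaviour.
my $body = HTTP::Body->new('application/json', 17);
print ref($body), "\n";
```

Because the map is global, this affects every HTTP::Body instance in the process, which is exactly the "icky" globalness discussed later in the thread.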
Re: [Catalyst] More detailed proposal for changes related to content negotiation and REST
On Friday, August 9, 2013 12:00 PM, Bill Moseley mose...@hank.org wrote: On Thu, Aug 8, 2013 at 9:27 PM, John Napiorkowski jjn1...@yahoo.com wrote: https://github.com/perl-catalyst/CatalystX-Proposals-REStandContentNegotiation

I currently extend HTTP::Body in a similar way to what you describe, but I have them as separate classes, so I just do: use My::Multipart; and in that I hack my type into HTTP::Body: package My::MultiPart; use HTTP::Body; $HTTP::Body::TYPES->{'multipart/form-data'} = 'My::MultiPart';

Yes, I do this as well. It feels a bit icky, but this is a possible approach for us as well.

As you propose, mapping a MIME type to a sub seems pretty flexible. I assume the sub could return a filehandle. File uploads still need to stream the uploads to disk while making the parameters available as HTTP::Body does now.

Yes, my original thinking was that the subref would return something that would populate body_data. I suppose that could be an object or a filehandle. This is one good reason to distinguish body_data from body parameters (which are expected to be a hashref). I'm not really sure of the best thing to do here with multipart, and particularly with the canonical file upload type 'multipart/form-data'. My thinking is that if someone wants to roll their own, they can, and everything goes to body_data; otherwise one can simply use the existing built-in support for this, which really isn't broken. Now, I am not sure what to do when the multipart contains JSON, for example (as in a file upload where, instead of a binary file plus urlencoded params, the params are application/json). I'm very tempted to say this is out of scope for the first version. Really, this does point to some underlying inflexibility in the existing design.

I like the regex MIME-type keys, but should they be an array instead of a hash so the order they are checked in can be set?

I guess I was thinking to prevent people from listing 'application/json' more than once by accident.
And there are some issues with the best way to merge (for example, you might parse differently in dev than in production...). What's the use case you have in mind? Something like first checking for 'application/vnd.mycompany.user+json' and then falling back to 'application/(?:vnd\..+\+)?json' if you don't find it? Is that an actual case you've come across? I guess I was also thinking that we want to keep the global functionality a bit limited, so that in the next iteration we can support something more fine-grained, at the controller level perhaps. I'd hate to introduce such terrible globalness to Catalyst, which in general has been decent at avoiding that.

I think we must consider large inputs/streams. You say $_ is the request content. Is that the full request with headers? Is the request already de-chunked, for example? Or am I thinking too low level?

We've spoken before about parsing large incoming and chunked data. I would love to address this, but right now it seems like something we need better agreement on at the PSGI level. For example, since Starman already buffers incoming input, it feels silly to me to have Catalyst then try to re-stream that. You've already paid the full price of buffering in terms of memory and performance, right? Or am I not understanding? I'd really like to have something at the Catalyst level that sanely achieves this end, but I think part of the price we paid when going to PSGI at the core is that most of the popular Plack handlers are preloading and buffering input, even large request input. This seems to be an area where it might behoove us to work with the PSGI group to find something stable. Even the optional psgix.io isn't always going to work out, since some people don't want to support that in the handler (it's a somewhat vague definition, I guess, and makes people uncomfortable).
Until then, or until someone helps me understand that my thinking is totally wrong on this score, it seems the best thing to do is to put this out of scope for now. That way we can move on to supporting a goodly number of real use cases. I intended to say that $_ equals a string that is the buffered request body. This way we can reserve other args for handling the future streaming case. I was actually pondering something where the subref returns a subref that gets called over and over to do the parse.

In some apps I'm using Catalyst::Action::REST and in others I have some custom code where I use HTTP::Body to parse JSON. I'm mixed about having the request data end up in $c->req->parameters vs $c->req->data. I don't really see why form-data/urlencoded should be treated differently than other encodings like JSON.

I think my idea with using a new request attribute 'body_data' was that we'd cleanly separate it from classic-style params. Also, there is no certainty or expectation that body_data contains a hashref; it could be
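To make the body_data idea from the exchange above concrete, here is a rough single-file sketch. The names (%data_handlers, body_data) follow the proposal document, but the dispatch and laziness here are illustration, not the eventual core code. As John describes, the parser subref sees the buffered request body in $_:

```perl
use strict;
use warnings;
use JSON::PP qw(decode_json);

# Content-Type => parser subref; $_ holds the buffered request body.
my %data_handlers = (
    'application/json' => sub { decode_json $_ },
);

# Lazy accessor: nothing is parsed until body_data is actually asked for.
sub body_data {
    my ($content_type, $raw_body) = @_;
    my $handler = $data_handlers{$content_type}
        or die "No data handler registered for '$content_type'\n";
    local $_ = $raw_body;   # expose the body to the parser as $_
    return $handler->();    # could return a hashref, object, filehandle...
}

my $data = body_data('application/json', '{"name":"john"}');
print $data->{name}, "\n";   # prints: john
```

Note there is deliberately no assumption that the handler returns a hashref, matching John's point that body_data is distinct from body parameters.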
Re: [Catalyst] More detailed proposal for changes related to content negotiation and REST
On Fri, Aug 9, 2013 at 12:11 PM, John Napiorkowski jjn1...@yahoo.com wrote:

What's the use case you have in mind? Something like first checking for 'application/vnd.mycompany.user+json' and then falling back to 'application/(?:vnd\..+\+)?json' if you don't find it? Is that an actual case you've come across?

Ya, that's kind of what I was thinking. Or also having a final fallback parser that tries to figure out the type by means other than just looking at the Content-Type provided in the request. Or even a '.' final match-anything that does some special logging. It would be easy enough to find out if application/json was in the array more than once by mistake.

We've spoken before about parsing large incoming and chunked data. I would love to address this, but right now it seems like something we need better agreement on at the PSGI level. For example, since Starman already buffers incoming input, it feels silly to me to have Catalyst then try to re-stream that. You've already paid the full price of buffering in terms of memory and performance, right? Or am I not understanding?

I added a Plack middleware to handle chunked-encoded requests -- I needed it for the Catalyst dev server and for Apache/mod_perl. Yes, Starman already de-chunks and buffers and works perfectly. Apache actually de-chunks the request, but doesn't update the Content-Length header and leaves the Transfer-Encoding: chunked header on. So, sadly, I do flush this to a temporary file only to get the content length to make Catalyst happy.

I'd really like to have something at the Catalyst level that sanely achieves this end, but I think part of the price we paid when going to PSGI at the core is that most of the popular Plack handlers are preloading and buffering input, even large request input. This seems to be an area where it might behoove us to work with the PSGI group to find something stable.
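The sort of middleware Bill describes can be sketched in plain PSGI (no Plack::Middleware subclass needed). This is an illustration of the idea, not his actual middleware, and for brevity it buffers in memory rather than flushing to a temporary file: when the request still claims Transfer-Encoding: chunked but the body was already de-chunked upstream (the Apache/mod_perl case), read it all, set Content-Length, and hand the app a rewound handle.

```perl
use strict;
use warnings;
use IO::Handle;   # read() method on filehandles

# Wrap a PSGI app so Catalyst sees a consistent Content-Length.
my $dechunk = sub {
    my $app = shift;
    return sub {
        my $env = shift;
        if ( ($env->{HTTP_TRANSFER_ENCODING} // '') =~ /chunked/i ) {
            # Body is already de-chunked upstream; just buffer and measure.
            my $buf = '';
            my $fh  = $env->{'psgi.input'};
            while ( $fh->read(my $chunk, 65536) ) {
                $buf .= $chunk;
            }
            open my $new_fh, '<', \$buf or die $!;
            $env->{'psgi.input'}   = $new_fh;          # rewound replacement
            $env->{CONTENT_LENGTH} = length $buf;      # now accurate
            delete $env->{HTTP_TRANSFER_ENCODING};     # headers consistent
        }
        return $app->($env);
    };
};
```

Wiring it up is just `my $wrapped = $dechunk->($psgi_app);`. A production version would spill large bodies to a temp file instead of a scalar, as Bill does.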
Even the optional psgix.io isn't always going to work out, since some people don't want to support that in the handler (it's a somewhat vague definition, I guess, and makes people uncomfortable). Until then, or until someone helps me understand that my thinking is totally wrong on this score, it seems the best thing to do is to put this out of scope for now. That way we can move on to supporting a goodly number of real use cases.

Agreed.

I intended to say that $_ equals a string that is the buffered request body. This way we can reserve other args for handling the future streaming case. I was actually pondering something where the subref returns a subref that gets called over and over to do the parse.

I just don't want file uploads in memory. (Oh, I have another post coming on that -- thanks for the reminder.)

I'm not quite sure about $c->res->body( \%data ); I think body should be the raw body. What does $c->res->body return? The serialized JSON? The original hashref?

I'm not sure I like it either. I would say body returns whatever you set it to, until the point where encoding happens. It does feel a bit flaky, but I can't actually put my finger on a real code smell here. Any other suggestions? This is certainly a part of the proposal that is going to raise doubt, but I can't think of something better, or assemble problematic use cases in my head over it either. I don't really mind adding to $c->stash->{rest}. It's kind of a staging area to put data until it's ready to be encoded into the body.

I might get it partially loaded with data and then never use it and return some other body.

Nothing precludes that, of course. Ya, tough one.

If a parser dies, what kind of exception is thrown? You say they would not set any response status, but wouldn't we want to catch the error and then set a 400? (I use exception objects that carry an HTTP status, a message to return in the body, and a message used for logging at a given level.)
How people do exceptions in Perl tends to be nearly religious, and I didn't want to hold this up based on figuring that stuff out :) I was thinking to just raise an exception and let the existing Catalyst stuff do its thing. I'm just thinking not to add anything special for this type of error, but to keep the existing behavior, for better or worse.

Agreed. If I were to write everything from scratch again I'd be doing $c->throw_not_found or $c->throw_forbidden with exception objects, as the code ends up much cleaner and saner. But everyone has their own approaches.

Since request->body_data is intended to be lazy, we won't run that parse code until you ask for the data. We don't need to parse the data to do the basic match here; this is just based on the HTTP metadata, not the actual content. I think for common cases this is fine (I realize that yet again this might not be the best approach for multipart uploads...)

Another tough one. Just seems like PUT /user should accept the same data
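The exception objects Bill mentions (carrying an HTTP status, a client-facing message, and a log message) can be as small as the sketch below. My::Exception::HTTP is a hypothetical name; a real app might build on CPAN modules such as Throwable or HTTP::Throwable instead.

```perl
package My::Exception::HTTP;
use strict;
use warnings;

# A tiny exception object: HTTP status, body message, log message.
sub new {
    my ($class, %args) = @_;
    return bless {
        status      => $args{status}      // 500,
        message     => $args{message}     // 'Internal Server Error',
        log_message => $args{log_message} // $args{message},
    }, $class;
}

sub status      { $_[0]->{status} }
sub message     { $_[0]->{message} }
sub log_message { $_[0]->{log_message} }

sub throw { my $class = shift; die $class->new(@_) }

package main;

# A failed body parse becomes a 400 with a clean client message,
# while the log keeps the gory detail.
eval {
    My::Exception::HTTP->throw(
        status      => 400,
        message     => 'Could not parse request body',
        log_message => 'JSON decode failed at byte 12',
    );
};
my $e = $@;
print $e->status, ' ', $e->message, "\n";   # prints: 400 Could not parse request body
```

An end handler can then check ref($@) and copy status/message onto the response, which is the "much cleaner and saner" flow described above.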
Re: [Catalyst] More detailed proposal for changes related to content negotiation and REST
On Friday, August 9, 2013 4:52 PM, Bill Moseley mose...@hank.org wrote: On Fri, Aug 9, 2013 at 12:11 PM, John Napiorkowski jjn1...@yahoo.com wrote:

What's the use case you have in mind? Something like first checking for 'application/vnd.mycompany.user+json' and then falling back to 'application/(?:vnd\..+\+)?json' if you don't find it? Is that an actual case you've come across?

Ya, that's kind of what I was thinking. Or also having a final fallback parser that tries to figure out the type by means other than just looking at the Content-Type provided in the request. Or even a '.' final match-anything that does some special logging. It would be easy enough to find out if application/json was in the array more than once by mistake.

Seems like a reasonable use case then, although I would encourage future development to aim at putting more specificity in the controller, rather than relying on the global application. The primary reason to have anything here at all is to give people a base to build on. I do fear the globalness of it, but it seems not an unreasonable compromise given how Catalyst actually works today.

We've spoken before about parsing large incoming and chunked data. I would love to address this, but right now it seems like something we need better agreement on at the PSGI level. For example, since Starman already buffers incoming input, it feels silly to me to have Catalyst then try to re-stream that. You've already paid the full price of buffering in terms of memory and performance, right? Or am I not understanding?

I added a Plack middleware to handle chunked-encoded requests -- I needed it for the Catalyst dev server and for Apache/mod_perl. Yes, Starman already de-chunks and buffers and works perfectly. Apache actually de-chunks the request, but doesn't update the Content-Length header and leaves the Transfer-Encoding: chunked header on.
So, sadly, I do flush this to a temporary file only to get the content length to make Catalyst happy.

Right, so I think in the end we all agreed it was PSGI that should be responsible for dealing with chunks or whatever (based on the HTTP-level support of the server). The only thing would be: could there be some sane approach that exposed the input stream as a non-blocking filehandle that has not already been read into a buffer (memory or otherwise)? I do see the possible advantage there for efficiently processing large POSTs or PUTs. However, again, this needs to be done at the PSGI level, something like input.stream or similar. That would smooth over chunked versus non-chunked and expose a readable stream of the input that has not yet been buffered.

I'd really like to have something at the Catalyst level that sanely achieves this end, but I think part of the price we paid when going to PSGI at the core is that most of the popular Plack handlers are preloading and buffering input, even large request input. This seems to be an area where it might behoove us to work with the PSGI group to find something stable. Even the optional psgix.io isn't always going to work out, since some people don't want to support that in the handler (it's a somewhat vague definition, I guess, and makes people uncomfortable). Until then, or until someone helps me understand that my thinking is totally wrong on this score, it seems the best thing to do is to put this out of scope for now. That way we can move on to supporting a goodly number of real use cases.

Agreed.

I intended to say that $_ equals a string that is the buffered request body. This way we can reserve other args for handling the future streaming case. I was actually pondering something where the subref returns a subref that gets called over and over to do the parse.

I just don't want file uploads in memory. (Oh, I have another post coming on that -- thanks for the reminder.)
Well, Catalyst doesn't, but I think Starman might, depending on the size of the incoming request. However, I think you can override that with a monkey patch.

I'm not quite sure about $c->res->body( \%data ); I think body should be the raw body. What does $c->res->body return? The serialized JSON? The original hashref?

I'm not sure I like it either. I would say body returns whatever you set it to, until the point where encoding happens. It does feel a bit flaky, but I can't actually put my finger on a real code smell here. Any other suggestions? This is certainly a part of the proposal that is going to raise doubt, but I can't think of something better, or assemble problematic use cases in my head over it either. I don't really mind adding to $c->stash->{rest}. It's kind of a staging area to put data until it's ready to be encoded into the body.

I might get it partially loaded with data and then never use it and return some other body.

Nothing precludes that, of course. Ya, tough one.
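The "body returns whatever you set it to, until the point where encoding happens" behaviour debated above can be sketched with a toy response object. My::Response and finalized_body are hypothetical names; in Catalyst the serialisation would happen during response finalisation:

```perl
package My::Response;
use strict;
use warnings;
use JSON::PP ();

sub new { bless { body => undef }, shift }

# body() is a plain accessor: a hashref goes in, the same hashref comes out.
sub body {
    my $self = shift;
    $self->{body} = shift if @_;
    return $self->{body};
}

# Serialisation is deferred until the response is finalised; plain
# strings pass through untouched, references get encoded as JSON.
sub finalized_body {
    my $self = shift;
    return ref $self->{body}
        ? JSON::PP->new->canonical->encode($self->{body})
        : $self->{body};
}

package main;

my $res = My::Response->new;
$res->body({ name => 'john' });
print ref $res->body, "\n";         # prints: HASH (the original reference)
print $res->finalized_body, "\n";   # prints: {"name":"john"}
```

This keeps body inspectable and mutable right up to finalisation, which is the flaky-but-hard-to-fault property the thread circles around.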
Out of office 10/4-10/5 (was: [Catalyst] More detailed proposal for changes related to content negotiation and REST)
Thank you for your email. I will be out of the office Thursday and Friday, returning Monday the 8th. I will respond to your email at that time. Please call our office for any immediate concerns: 616-459-0782. -- Steve Schafer Matsch Systems Phone: 616-477-9629 Mobile: 616-304-9440 Email: st...@matsch.com Web: http://www.matsch.com/
[Catalyst] More detailed proposal for changes related to content negotiation and REST
Hey All, I had a bit of time on a few recent plane rides to write down the various thoughts and ideas I've heard around what could be a minimally useful expansion of how Catalyst handles content types beyond classic form data, as well as some features to make it easier to use Catalyst for building web services. These are new core features not intended to compete with existing tools like Catalyst::Action::REST, but instead to provide some minimal and hopefully correct support out of the box, to help beginners find enough in Catalyst to stick with it. And hopefully better core support will make it easier to improve specific tools (like CAR).

I wrote it up in POD format and put it out on the github organization for any and all to comment on, add tickets to, fork, and send me corrections. I don't specifically see this as targeted for the Hamburg release, but most of it I think is straightforward, so I'd be happy to help mentor anyone that wanted to run with it.

https://github.com/perl-catalyst/CatalystX-Proposals-REStandContentNegotiation

John