Gary Godfrey wrote:
> So, the general idea is that everything is wsgi until it can't be. So
> a TurboGears controller is wsgi middleware which (by default) takes one
> item off the url and uses it to wsgi call self methods. It may also
> have other uses, like capturing a Form Validation Exception and calling
> appropriate error methods. This also implies that the @expose decorator
> is really a wsgi adapter.
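For concreteness, the segment-popping dispatch Gary describes can be sketched in a few lines of plain WSGI using `wsgiref.util.shift_path_info` from the standard library. The controller class, the method names, and the `exposed` flag below are hypothetical illustrations, not TurboGears' actual machinery:

```python
from wsgiref.util import shift_path_info

class RootController:
    """Hypothetical sketch: a controller that pops one segment off
    PATH_INFO and calls a matching exposed method as a WSGI app."""

    def __call__(self, environ, start_response):
        # shift_path_info moves one segment from PATH_INFO to SCRIPT_NAME
        segment = shift_path_info(environ) or 'index'
        method = getattr(self, segment, None)
        # refuse anything not explicitly marked as exposed, so an
        # "evil URL" can't reach arbitrary attributes
        if method is None or not getattr(method, 'exposed', False):
            start_response('404 Not Found', [('Content-Type', 'text/plain')])
            return [b'Not Found']
        return method(environ, start_response)

    def index(self, environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'index page']
    index.exposed = True
```

Because each exposed method is itself called with `(environ, start_response)`, any piece of WSGI middleware can wrap either the whole controller or a single method.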
This definitely works better with object-path based dispatch. It won't work for Routes-based dispatch under most conditions, because Routes matches and acts on the entire URL, not just the front section, so it's not quite clean to pop segments off the front. A careful developer can still use Routes-based dispatch, as Ian has talked about, by making a route specifically for the purpose, with a fixed section at the beginning and a url_info remainder that goes to PATH_INFO.

> The fun part about this is that a (for instance) Authorization check
> just needs to be simple wsgi middleware and it works on whole
> controllers as well as with individual methods with a single routine.

Though I think it'd be a little tricky to set up Authorization that worked solely from environ, rather than letting you ask for specific permissions for controller access, as a library method would. The thing with middleware is that for it to be truly portable (and useful middleware), it really should be able to function based solely on environ and/or the content passing up and down the middleware chain. Several things lose features if moved purely into middleware. Auth functions are one example: you might want to write Auth.user_can(permission='edit_people'), even though environ might not contain anything having to do with users or permissions. So while WSGI sounds great, and it is for a lot of things, it definitely has its place, and library functions work great for a lot of this as well.

> What also will begin to happen (or at least should happen) is that
> we'll start using the environ dict in a more expanded fashion. I'm
> thinking of something akin to Zope3's Interfaces, but not so formal.
> So, "Identity" middleware will set a "wsgi.Identity" which will contain
> UserName, Groups, Roles, Permissions, etc. Then, an Authorization
> check is completely separate from the Identity mechanism.
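Rather than a "wsgi.Identity" key, a thread-local identity library can give the same information to any code in the request without touching environ. A hypothetical sketch, assuming a made-up `Auth` library and `IdentityMiddleware` (not an existing package), of what the Auth.user_can style might look like:

```python
import threading

# Per-thread identity storage; any code in the request can import Auth
# instead of digging through environ.
_ident = threading.local()

class Auth:
    """Hypothetical library facade over the thread-local identity."""

    @staticmethod
    def set_identity(user, permissions):
        _ident.user, _ident.permissions = user, set(permissions)

    @staticmethod
    def user_can(permission):
        # works even if no identity middleware ran (empty permissions)
        return permission in getattr(_ident, 'permissions', set())

    @staticmethod
    def clear():
        _ident.user, _ident.permissions = None, set()

class IdentityMiddleware:
    """Sets the thread-local identity once per request, so controllers
    never need to know how it was looked up."""

    def __init__(self, app, lookup):
        self.app, self.lookup = app, lookup

    def __call__(self, environ, start_response):
        user, perms = self.lookup(environ)  # e.g. from a session cookie
        Auth.set_identity(user, perms)
        try:
            return self.app(environ, start_response)
        finally:
            Auth.clear()
```

Downstream code simply calls `Auth.user_can('edit_people')`, and logout could be one more library function that asks the middleware to clean up, with no magic environ keys required.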
> I suspect we'll need something like PEPs just for this (it will have to
> move far faster than PEPs do - especially initially). Vision: a TurboGears
> application should run under a Zope that has a wsgi interface, and may
> even be able to handle simple permissions.

But why? Why cloud up environ needlessly? A thread-local global, with a library that any app up and down the chain can use, accomplishes the same thing without clouding up environ.

> 1) How to communicate upstream? For instance, if I want to logout, I
> need to let the Identity middleware know that. Can I set something
> magic in environ? Do I have to throw an exception?

Not an issue with a thread-local global and a handy library package. Alternatively, the Identity middleware could put a logout function in environ that you could call, which would trigger the appropriate cleanup. Note, however, that the Identity middleware would need to know which session system is in use so that it could clean the session as needed when logging out. That isn't necessarily difficult: with session middleware, the session would be present in environ somewhere, and the Identity middleware would just need to know where it is. environ can be loaded up with objects and functions, so rather than sending a message upstream, we can keep some of the upstream functions around downstream as needed.

> 2) What if I need to "fork" multiple wsgi requests? This could easily
> happen on a "three column" web page where the right hand side contains
> the summary and the main area contains the real thing. I suspect it
> just means intercepting the start_response() call, but I'm not sure.

It's a bit more complicated than that, I think. This is also what Ian Bicking proposed using HTML overlays for, since that would make it more feasible to assemble different sections of a page with different WSGI apps.

> 3) The wsgi standard says that no inspection of wsgi applications is
> allowed. This is unfortunate because we currently rely on things like
> an "expose" property on controller methods which are callable. At the
> very least, I'd like to start an informal standard that a 'wsgi'
> property be on all wsgi callables. That should be enough to prevent
> uncallables in Controllers from getting called by Evil URLs.

Alternatively, you could enforce a policy, namely Python's own, and declare that private methods get a leading underscore.

In Pylons, after looking at more than a few different controller styles, I went with the callable style used by Aquarium. It's incredibly 'Pythonic', and super-flexible. That is, the controller is required to be a callable, and is called with the method name as the first argument and the method's normal arguments as the remainder. It's sort of like CherryPy's default method, except that since it's called every time, it provides a very nice spot for controller-wide setup, for altering which method gets called, for doing authentication checks for multiple methods in one go, and so on.

Consider the current TG solution, where you might have to drop the same decorator on a dozen different methods that all require the user to be logged in. Talk about repeating yourself. With a __call__ style, you could say that being logged in is required for every method except 'login' in a mere two lines of code, by checking the method name against a list of what requires it.

It also provides a handy point for modifying method calls. With another few lines of code in a __call__ function, you could have it check first for method_HTTP_METHOD, then for method. That would make it a snap to split functions on the request method: if the request method is POST, it looks for method_POST first, then falls back to method.

Anyway, I obviously rather like this style, especially as it provides incredible amounts of flexibility and customizability, all at the developer's fingertips.
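A minimal sketch of that __call__ style, with the controller and action names invented for illustration and a plain dict standing in for a real request object:

```python
class PeopleController:
    """Hypothetical Aquarium-style callable controller: __call__ receives
    the action name, does controller-wide auth in one place, and prefers
    an HTTP-method-specific handler (e.g. edit_POST) when one exists."""

    # the two-line controller-wide auth check: everything listed here
    # requires a logged-in user
    requires_login = ['edit', 'delete']

    def __call__(self, action, request):
        if action in self.requires_login and not request.get('user'):
            return 'redirect: /login'
        # look for edit_POST first when the request method is POST,
        # then fall back to plain edit
        handler = getattr(self, '%s_%s' % (action, request['method']), None) \
            or getattr(self, action)
        return handler(request)

    def edit(self, request):
        return 'edit form'

    def edit_POST(self, request):
        return 'saved'
```

A GET for 'edit' renders the form, a POST to the same action hits `edit_POST`, and an anonymous user gets bounced to login, all decided in one spot instead of a decorator per method.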
It also changes where the callable sits in the model: you call the controller, and let it figure out how to proceed. And it would solve the problem you mention, by letting the controller ensure an evil URL doesn't call a method it shouldn't.

Finally, if each controller is a WSGI app, does that mean it has to set up the thread-local globals again for the current application? I think my main issue with making each controller a WSGI app is that it really isn't one. If you consider a WSGI app to be a stand-alone application (which I think it should be), then it doesn't make sense. Controllers depend on other controllers being where they are, because controllers are all part of a single application; WSGI apps don't care about anything outside themselves. Does each controller have to parse environ and set up a request object again? There's a lot of setup a framework does when it starts handling a WSGI call; is all of that going to be pushed into each controller?

It's for those reasons that using WSGI for controller calls just doesn't make a lot of sense to me. However, I can see cases where it would be very useful for controllers to be called through the WSGI interface, which looks identical to the controllers being WSGI apps, except that thread-local globals and the like are set up by a primary WSGI app (RT, or whatever). So they look like WSGI apps, because they're called with the WSGI interface, but each controller is aware it's part of a greater whole, the application.

Cheers,
Ben

You received this message because you are subscribed to the Google Groups "TurboGears" group. For more options, visit this group at http://groups.google.com/group/turbogears

