Re: [pylons-discuss] Beaker successor
On Dec 12, 2013, at 11:59 PM, Mike Orr sluggos...@gmail.com wrote:

It's certainly worth reviewing how much of Beaker's structure makes sense in 2013. It's based on the Myghty container API; how important is that? I've been respecting it because Mike B thought it was good, but it is odd to retain something that ultimately came from Perl, a side effect of a template engine that went mutant and swelled into a quasi-framework.

Well, that was before the Catalyst thing, and I did at least three major "de-embarrassment" refactorings through Beaker, but yeah. In particular, the way the Session was bolted onto Beaker's backends was never really that great.

So, what we need is something that conforms to Pyramid's Session API and has switchable backends. Dogpile provides some of the backends, and perhaps other backends can be hooked in through it.

There's definitely been interest in putting a Session front-end on dogpile's backends. Someone may even have said they were working on it, but I haven't heard anything on that front in a while.

It doesn't necessarily need to be multi-framework or Myghty container API compatible. Middleware has been almost completely replaced by Pyramid tweens, and nobody has objected; non-Pyramid people haven't been writing parallel middleware. So the same thing can happen with a session library.

It would be super-nice if some kind of "session" thing existed that could be used in any HTTP-like context; that is, don't hardcode it to tweens necessarily. Losing that would be like a step backwards, and forcing people into Redis if they want sessions seems odd too. I can see people just not choosing Pyramid if its only out-of-the-box session choice were Redis. It also feels like a lot of Pyramid developers decided to deemphasize sessions and Beaker without telling me, so that bothers me a bit.
Not sure about others, but I haven't used backend-based Sessions in a long time. I put a few key tokens in a cookie-based Session (for which I am using Beaker; that part of the implementation was written by Ben and has almost no connection to the Myghty part of things), and then any state that is more significant is modeled in the database explicitly, some of it keyed to the session id. This might be why it's hard to get traction on a backend-based Session system within the Pyramid community: we tend to be more formalist about storing significant structures.
Re: dogpile.cache 0.5.0 released
Yes, there is the memory backend for now, but it is a straight Python dictionary; there's no management of size or anything like that. You can of course build your own similar backend with whatever bells and whistles you like. If there is some existing Python lib out there for a smart in-memory cache like memcached, I'd love to provide a backend for it.

On Jun 25, 2013, at 6:39 AM, Arndt Droullier ar...@dvelectric.de wrote:

Hi, short dogpile question: is there a backend that supports in-process storage? I mean caching Python objects without pickling or serialization? Thanks, Arndt.

2013/6/22 Michael Bayer mike...@zzzcomputing.com

Hey all - dogpile.cache 0.5.0 is now available. dogpile.cache is a caching API built around the concept of a dogpile lock, which allows continued access to an expiring data value while a single thread generates a new value. It is intended as the next generation of caching API to replace Beaker as a caching solution. Version 0.5.0 has an emphasis on support for multi-key APIs featured in Redis and some memcached clients.

Changelog is available at: http://dogpilecache.readthedocs.org/en/latest/changelog.html#change-0.5.0
Download dogpile.cache at: https://pypi.python.org/pypi/dogpile.cache

-- You received this message because you are subscribed to the Google Groups pylons-discuss group. To unsubscribe from this group and stop receiving emails from it, send an email to pylons-discuss+unsubscr...@googlegroups.com. To post to this group, send email to pylons-discuss@googlegroups.com. Visit this group at http://groups.google.com/group/pylons-discuss. For more options, visit https://groups.google.com/groups/opt_out.

-- DV Electric / Arndt Droullier / Nive cms cms.nive.co
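The "smart in-memory cache" asked about above can be sketched with nothing but the stdlib: a size-bounded LRU dictionary exposing the get/set/delete trio that dogpile backends are built around, keeping objects unpickled. This is a hypothetical illustration, not a shipped dogpile backend; NO_VALUE is a stand-in for dogpile.cache.api.NO_VALUE.

```python
from collections import OrderedDict

NO_VALUE = object()  # stand-in for dogpile.cache.api.NO_VALUE

class LRUMemoryBackend:
    """Size-bounded in-memory backend sketch: a plain dict plus
    LRU eviction, storing live Python objects without serialization."""

    def __init__(self, max_size=1024):
        self.max_size = max_size
        self._cache = OrderedDict()

    def get(self, key):
        try:
            self._cache.move_to_end(key)  # mark as recently used
            return self._cache[key]
        except KeyError:
            return NO_VALUE

    def set(self, key, value):
        self._cache[key] = value
        self._cache.move_to_end(key)
        while len(self._cache) > self.max_size:
            self._cache.popitem(last=False)  # evict least recently used

    def delete(self, key):
        self._cache.pop(key, None)
```

Wrapping this in a real dogpile backend would additionally mean subclassing the backend base class and registering it, per dogpile's plugin docs.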
Re: dogpile.cache 0.5.0 released
I'm not really sure; I don't actually get to use Pyramid very often :)

On Jun 25, 2013, at 2:57 PM, Jonathan Vanasco jvana...@gmail.com wrote:

Mike - I'm thinking about forking pyramid_beaker into pyramid_dogpile (so I can just drop beaker). Would there be any feature requests if I proceed?
Re: dogpile.cache 0.5.0 released
OK, I guess you're talking about the DBM backend. You can send rw_lockfile=False and dogpile_lockfile=False to it, as the docs mention, and it will not use a file-based lock. There are also ways to get your own lock implementation in there with a backend subclass, though sending in a custom lock should be made easier. The backend here could be made to use lockfile (https://pypi.python.org/pypi/lockfile/); however, that implementation lacks the ability to do a read/write lock where multiple readers can acquire it simultaneously.

On Jun 27, 2013, at 9:05 AM, artee artur@gmail.com wrote:

Mike, are there any chances to provide support for the Windows platform? Maybe some duck typing for the fcntl dependency ;) regards, Artur

On Saturday, June 22, 2013 5:46:39 AM UTC+2, mike bayer wrote:

Hey all - dogpile.cache 0.5.0 is now available. dogpile.cache is a caching API built around the concept of a dogpile lock, which allows continued access to an expiring data value while a single thread generates a new value. It is intended as the next generation of caching API to replace Beaker as a caching solution. Version 0.5.0 has an emphasis on support for multi-key APIs featured in Redis and some memcached clients.

Changelog is available at: http://dogpilecache.readthedocs.org/en/latest/changelog.html#change-0.5.0
Download dogpile.cache at: https://pypi.python.org/pypi/dogpile.cache
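The read/write lock capability noted above as missing from the lockfile package — many readers holding the lock at once, a writer requiring exclusive access — can be sketched in-process with a condition variable (a simplified, thread-local illustration only; dogpile's DBM backend needs this across processes, via fcntl):

```python
import threading

class ReadWriteLock:
    """Minimal reader-writer lock sketch: multiple readers may hold
    the lock simultaneously, while a writer waits for exclusivity.
    Writer preference and fairness are ignored for brevity."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0

    def acquire_read(self):
        with self._cond:
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

    def acquire_write(self):
        # hold the underlying lock for the whole write, blocking
        # new readers; wait until current readers drain out
        self._cond.acquire()
        while self._readers > 0:
            self._cond.wait()

    def release_write(self):
        self._cond.release()
```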
dogpile.cache 0.5.0 released
Hey all - dogpile.cache 0.5.0 is now available.

dogpile.cache is a caching API built around the concept of a dogpile lock, which allows continued access to an expiring data value while a single thread generates a new value. It is intended as the next generation of caching API to replace Beaker as a caching solution. Version 0.5.0 has an emphasis on support for multi-key APIs featured in Redis and some memcached clients.

Changelog is available at: http://dogpilecache.readthedocs.org/en/latest/changelog.html#change-0.5.0
Download dogpile.cache at: https://pypi.python.org/pypi/dogpile.cache
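The "dogpile lock" behavior described above can be sketched in plain Python (a simplified illustration, not dogpile.cache's actual implementation): while one thread holds the lock and regenerates an expired value, other threads keep getting the previous value instead of blocking.

```python
import threading
import time

class DogpileCell:
    """Sketch of a dogpile lock: stale reads continue while exactly
    one thread regenerates the expired value."""

    def __init__(self, creator, ttl):
        self._creator = creator
        self._ttl = ttl
        self._lock = threading.Lock()
        self._value = None
        self._created_at = None

    def get(self):
        now = time.time()
        if self._created_at is not None and now - self._created_at < self._ttl:
            return self._value  # value is fresh; no lock involved
        if self._lock.acquire(blocking=False):
            # this thread won the right to regenerate
            try:
                self._value = self._creator()
                self._created_at = time.time()
            finally:
                self._lock.release()
            return self._value
        if self._created_at is not None:
            # someone else is regenerating; serve the stale value
            return self._value
        # no value exists yet at all: wait for the first creator
        with self._lock:
            return self._value
```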
Mako 0.8.0 Released
Hey gang - Mako 0.8.0 is now released. The biggest change is that the codebase now runs in place for all Python versions from 2.4 (yes, 2.4 still) through 3.3 without any 2to3 step. This feature has actually been sitting in the source repo for many months, but it's time to put it out there and see how far it flies :). Other changes include a performance enhancement to XML and/or markupsafe-absent escaping, a couple of bug fixes, and support for using the __future__ namespace within a template. Thanks everyone for making Mako one of the routinely used libraries in Python!

Download is at: http://www.makotemplates.org/download.html

0.8.0
- [feature] Performance improvement to the legacy HTML escape feature, used for XML escaping and when markupsafe isn't present, courtesy George Xie.
- [bug] Fixed bug whereby an exception in Python 3 against a module compiled to the filesystem would fail trying to produce a RichTraceback due to the content being in bytes. [ticket:209]
- [bug] Change default for compile()'s reserved_names from tuple to frozenset, as this is expected to be a set by default. [ticket:208]
- [feature] Code has been reworked to support Python 2.4 through Python 3.x in place. 2to3 no longer needed.
- [feature] Added lexer_cls argument to Template and TemplateLookup; allows alternate Lexer classes to be used.
- [feature] Added future_imports parameter to Template and TemplateLookup; renders the __future__ header with desired capabilities at the top of the generated template module. Courtesy Ben Trofatter.
Mako 0.7.3 Released
hey lists - Mako 0.7.3 is now available. This is a bugfix release which includes the fixes listed below.

Download Mako 0.7.3 at: http://www.makotemplates.org/download.html

0.7.3
- [bug] The legacy_html_escape function, used when markupsafe isn't installed, was using an inline-compiled regexp which caused major slowdowns on Python 3.3; it is now precompiled.
- [bug] AST support now handles tuple-packed function arguments inside pure-Python def or lambda expressions. [ticket:201]
- [bug] Fixed Py3K bug in the Babel extension.
- [bug] Fixed the filter attribute of the %text tag so that it pulls locally specified identifiers from the context the same way as that of %block and %filter.
- [bug] Fixed bug in plugin loader to correctly raise an exception when a non-existent plugin is specified.
Re: dogpile cache_on_arguments default key generation
On Sep 18, 2012, at 2:25 PM, Jonathan Vanasco wrote:

I'm not sure what I looked at either, but that's the answer I was hoping for. I think I saw the line in region.cache_on_arguments (key = key_generator(*arg, **kw)) and missed the ValueError in util. I also have a much older version of dogpile, as my line numbers are way off.

Ah, well, the whole thing is barely out of alpha, so I'd definitely track the latest code.
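The default key generation being discussed — a key derived from the decorated function plus its positional arguments, with a ValueError for keyword arguments — can be sketched like this (a schematic modeled on the behavior described in the thread; the exact key format in dogpile's own util module differs):

```python
def function_key_generator(namespace, fn):
    """Build a key generator for one function, combining the
    function's module/name (and optional namespace) with the
    stringified positional arguments of each call."""
    prefix = fn.__module__ + ":" + fn.__name__
    if namespace is not None:
        prefix += "|" + namespace

    def generate_key(*args, **kw):
        if kw:
            # keyword arguments have no reliable ordering for key
            # purposes, so refuse them -- this is the ValueError
            # mentioned above
            raise ValueError("keyword arguments not supported")
        return prefix + "|" + " ".join(str(a) for a in args)

    return generate_key
```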
Re: happy weekend, and welcome to my RTFM question about view_config and permissions
On Jun 20, 2012, at 9:44 PM, Chris McDonough wrote:

In other words, it's Pyramid's job to figure out whether the current user can execute the view based on the ACL. If the ACL is computed based on who is logged in, things get weird pretty fast... it's sort of a divide-by-zero error, and it'd be a better idea to implement a custom authorization policy if you want to think of things that way.

Here's what's not clear about Pyramid, and had to be illustrated for me by an expert. In order to compute the principals themselves, based on the current user as well as database data associated with *the user*, you either need to write your own AuthenticationPolicy and do it in effective_principals, or use the callback argument accepted by the provided AuthenticationPolicy. The Pyramid docs make it very easy to see the example of the context with __acl__ and the dynamic logic inside of it, but then they're not at all clear about adding dynamic rules to authentication policies. It gives the impression, as I also saw in some of the examples I was shown on IRC, that the __acl__ hook is the place where we do any kind of database manipulation of security info for some request. When really, even though that works perfectly fine, there are *two* hooks that the user has to be aware of:

1. the __acl__ is where it is appropriate to do database lookups based on the *resource being requested*
2. the AuthenticationPolicy custom class or callback is where it is appropriate to do database lookups based on the *user logged in*

This dichotomy should be presented any time either of the two sides of the coin is discussed. It leads to confusion (at least for me) that dynamic __acl__'s are stressed as a means to apply dynamic rules to resources, but there's not really much mention of the correct means to apply dynamic rules to principals.
Keep in mind that when mere mortals read documentation, you can't depend on correct terminology alone implanting the right idea; a newbie doesn't have strong neural pathways and automatic recognition of new terms. You need to break out the hand puppets, even for me. Additionally, other concepts that I've had trouble with, since I am not using traversal/zodb, are:

3. Pyramid is organized around the idea that a web request is looking at *one thing at a time*. That's why there's a context. You aren't allowed to make the context into whatever you want (such as when I just made it into a security token). It is meant to correspond to the *thing you are looking at*. Pyramid is more opinionated than I initially expected here.

4. If the thing you are looking at is not actually a *single thing*, and is actually *many things*, such as any kind of search result, listing of information, list of articles, word cloud, or especially a composed page with lots of heterogeneous elements (see http://www.mlb.com for an example of this), Pyramid gives you two choices:

a. invent *one thing* that represents that collection. When using traversal/zodb, this is perfectly natural. When using relational databases and routes, it's often completely awkward and artificial, since you're essentially making yourself a fake model object just so you have a place to stick __acl__.
b. invent some other system.

I know the context is discussed quite a bit in url dispatch and security. But I think the docs have a hard job here, as the context is already a little bit of a bolted-on concept when you're dealing with routes. When we use routes, we often tend to think of the *view* as the thing that is protected, not the *model*.
We want to put security stuff on our views in day-to-day, simplistic cases, such as: only users with a certain permission can access the administration pages. That I need to come up with model-like objects that have __acl__ but really have nothing to do with my actual model is something I've found quite awkward to get my head around. I hit a lot of conceptual dissonance on the IRC channel, as Pyramid veterans seem to really see model-level security as something quite natural (and then they say, "because I use traversal and zodb, where there's always a single model object anyway"). Model-level security isn't something I get involved in unless I'm building some very fine-grained and elaborate system - usually those end up being hierarchical management types of things, which brings us back to traversal/zodb as a strong force here.
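The two hooks enumerated above can be sketched side by side. This is a schematic only: Allow and Authenticated stand in for pyramid.security's constants, the request and the database are faked with dicts, and the class/function names are invented for illustration.

```python
# Stand-ins for pyramid.security constants, so the sketch runs alone.
Allow = "Allow"
Authenticated = "system.Authenticated"

# Fake "database" tables.
RESOURCE_OWNERS = {"/doc/1": "alice"}
USER_GROUPS = {"alice": ["group:editors"]}

# Hook 1: the context factory's __acl__ does lookups based on the
# *resource being requested*.
class DocumentACL:
    def __init__(self, request):
        self.owner = RESOURCE_OWNERS.get(request["path"])

    @property
    def __acl__(self):
        acl = [(Allow, Authenticated, "view")]
        if self.owner is not None:
            acl.append((Allow, "user:" + self.owner, "edit"))
        return acl

# Hook 2: the authentication policy's callback does lookups based on
# the *user logged in* -- the shape of the callback accepted by e.g.
# AuthTktAuthenticationPolicy, returning additional principals.
def groupfinder(userid, request):
    return USER_GROUPS.get(userid, [])
```

The authorization machinery then intersects the principals produced by hook 2 against the ACL produced by hook 1 to decide whether the requested permission is granted.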
Re: How to get request object inside decorator?
hey Max - The Pyramid devs agree that it should accept a tuple argument. I think it's a good idea too. They're just looking for someone to provide a patch at this point. - mike

On Jun 21, 2012, at 10:18 AM, Max Avanov wrote:

What are you talking about? This is not just my stupid ideological behavior. My current project has 30+ different subpackages, each of these has view modules. For now, I have to make 30+ extra imports just because... hmm, I still don't get why I have to.

On Thursday, June 21, 2012 6:10:07 PM UTC+4, Chris Rossi wrote:

Don't let the perfect be the enemy of the good. Chris

On Thu, Jun 21, 2012 at 9:59 AM, Max Avanov maxim.ava...@gmail.com wrote:

But I want to. I really do. And view_config doesn't allow me to do so. You should understand me. I don't want to have extra imports in my project. I want transparent support from the framework. This example makes sense to me:

from pyramid.view import view_config

@view_config(decorator=(decorator1, decorator2, ...))

But this does not:

from pyramid.view import view_config
# Why should I do this for each of my view modules?
from somewhere import chain_decorators

@view_config(decorator=chain_decorators(decorator1, decorator2, ...))

On Thursday, June 21, 2012 4:21:24 PM UTC+4, Chris McDonough wrote:

On 06/21/2012 07:29 AM, Max Avanov wrote:

"No! View callable functions must accept at least a request argument. There will never be something like this that will work as a view callable:"

This is my typo. I was talking about a regular generic view callable.
I still don't get how to rewrite these @authenticate_form and @https (as an example) - https://github.com/Pylons/pylons/blob/master/pylons/decorators/secure.py - to be able to do the common:

@view_config()
@https()
@authenticate_form
def view(request)
- or -
def view(context, request)
- or -
def view(self)

without passing it to view_config.

Why you don't want to pass the decorator to view_config via decorator= I have no idea, given that dealing with the differences is the entire purpose of that machinery, and the code to support a chain of decorators is entirely boilerplate. But assuming you didn't, and assuming this isn't an entirely theoretical exercise which we're beating to death, you could write a decorator that assumed *one* signature and which also set __module__ and __doc__ on the function returned from the decorator:

from functools import wraps

def adecorator(wrapped):
    def inner(request):
        print request.url
        return wrapped(request)
    return wraps(wrapped, ('__module__', '__doc__'))(inner)

@view_config()
@adecorator
def view(request):
    ...

- C

On Thursday, June 21, 2012 2:39:57 AM UTC+4, Chris McDonough wrote:

On 06/20/2012 06:13 PM, Max Avanov wrote:

So I'm lost as to what you mean by "no other way to get access to the request object". Because I must either follow the official approach provided by Michael (a consistent signature no matter whether the actual view is a method, or a function that accepts either (context, request) or just (request)...) with the consequent @view_config(decorator=...) and the chained code snippet, or use the classic way:

@decorator1
@decorator2
@decoratorN
@view_config
def func()

For the classic way I use the decorator package - http://micheles.googlecode.com/hg/decorator/documentation.html - but the classic way allows me only one generic approach to get the request object: via get_current_request, right?

No! View callable functions must accept at least a request argument.
There will never be something like this that will work as a view callable:

def func(): ...

It just won't work. A view callable must be:

def func(request): ...

An alternate view callable signature optionally accepts (context, request), but if your code doesn't use that signature for any of your view callables, you won't care. Pyramid view callables can also be methods of classes, but if your code doesn't use view classes, you won't care about that either. If you *do* care about reusing a decorator across all of these view callable conventions, however, you can use the decorator= argument to view_config. The point of the decorator= argument to view_config is to provide genericness by accepting a decorator that can use a single common calling convention.
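The chain_decorators helper imagined in the thread is indeed pure boilerplate; something along these lines would do (a sketch — the name comes from Max's hypothetical import, not from any shipped module):

```python
def chain_decorators(*decorators):
    """Compose several decorators into a single one, suitable for
    passing as one decorator= argument. The first decorator listed
    becomes the outermost wrapper, matching stacked @-syntax order."""
    def decorate(view):
        # apply right-to-left, so decorators[0] ends up outermost
        for dec in reversed(decorators):
            view = dec(view)
        return view
    return decorate
```

Used as @view_config(decorator=chain_decorators(https, authenticate_form)), it collapses a decorator stack into the single callable that view_config's decorator= argument accepted at the time.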
Re: Need help on caching, background jobs manual ORM cache refresh (Pyramid project)
On Jun 7, 2012, at 1:46 PM, Mike Orr wrote:

On Thu, Jun 7, 2012 at 10:11 AM, Jason ja...@deadtreepages.com wrote:

Do you know if Pyramid's default session cache will be changing from beaker to dogpile in the near future?

Pyramid has no default session backend. 'pyramid_beaker' is an add-on. I suppose 'pyramid_dogpile' will appear when somebody gets around to writing it. But it won't be recommended until Dogpile has been stable for a while. Last I heard Dogpile was in alpha, but it may be further along now.

Well, also, Dogpile doesn't do HTTP sessions, just data caching. We're still stuck with Beaker for that, until someone wants to change things (I only use it for client-side sessions).
Re: Updating a session mid-request
On Jun 8, 2012, at 5:56 PM, Jonathan Vanasco wrote:

I think you're dealing with either a locking issue or a bug in beaker. Check out the comments here: https://bitbucket.org/bbangert/beaker/issue/101/naive-dog-pile-effect-implementation. If there's a race issue with locks:

I think you misunderstand the action of the dogpile lock. The next client which arrives sees that the lock is acquired, then returns the previous value. There is no waiting of any kind. The dogpile lock is not used with Beaker's session implementation, so this issue is not relevant to anything to do with Pyramid's request.session.

That said, there is a crap-ton of bugs reported against Beaker's server-side session implementation and I don't think anyone is fixing them. I don't recommend it, and I'd stick with client-side sessions that store just some identification tokens.
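"Client-side sessions that store just some identification tokens" generally implies signing the cookie value so it can't be forged. A minimal stdlib sketch of that idea (the secret and payload here are placeholders; real session libraries like Beaker's cookie session also handle encoding, expiry, and key rotation):

```python
import base64
import hashlib
import hmac

SECRET = b"server-side-secret"  # placeholder; keep out of source in practice

def sign_token(payload: bytes) -> bytes:
    """Return payload.signature, both urlsafe-base64 encoded."""
    sig = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(payload) + b"." + base64.urlsafe_b64encode(sig)

def verify_token(token: bytes) -> bytes:
    """Recover the payload, raising ValueError if the signature is bad."""
    payload_b64, sig_b64 = token.split(b".")
    payload = base64.urlsafe_b64decode(payload_b64)
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.urlsafe_b64decode(sig_b64)):
        raise ValueError("bad signature")
    return payload
```

Any state more significant than the identifying token then lives server-side, keyed to the verified id, as described earlier in the thread.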
Re: Need help on caching, background jobs manual ORM cache refresh (Pyramid project)
For caching, I'd use dogpile.cache: https://bitbucket.org/zzzeek/dogpile.cache/ which is specifically the replacement for Beaker caching. It is much simpler and more performant. SQLAlchemy 0.8 will convert the Beaker caching examples to use dogpile instead. Attached is a script from a recent tutorial I gave which illustrates a typical dogpile/SQLAlchemy caching configuration, in the spirit of the Beaker caching example.

On Jun 5, 2012, at 3:36 PM, Learner wrote:

Thanks Jason. I will take your suggestion. But your hint about beaker caching is helpful. cheers -Bkumar

On Jun 5, 4:22 pm, Jason ja...@deadtreepages.com wrote:

On Tuesday, June 5, 2012 7:49:10 AM UTC-4, Learner wrote:

Hello Pyramid gurus, I have been searching for quick tutorials on caching, background jobs, and ORM-related topics. I found quite a few resources which seem to be very informative. Since I am new to both Python and Pyramid, I thought I would seek experienced people's opinions before I go ahead and use anything I found on the web. Any help is very much appreciated.

1. Caching: The simple use case is: I want to show the top 10 or 20 articles on my wiki application. Before I render the data I would like to cache the db result upon first query execution. The cache should refresh automatically every hour or so.

As far as caching is concerned, you will be better off caching the result of your view. Beaker cache has decorators for caching individual functions/methods for a specified period of time (look for the cache_region decorator); this way not only will the database results be cached, but also the processing required to turn them into the template values. I don't know if there is a way to also cache the rendered template with Pyramid.

2. Background Jobs: I am using SQLAlchemy in my application. All the data needed for the application comes from XML/CSV files. Is there any way in Pyramid I can create a background job and schedule it to run every 30 minutes or so?
The job will look at one particular folder every time it is run, and if there are any xml/csv files the job will pick them up and process them. Since this is a simple ETL job, SQLAlchemy is not aware of the DB changes. So does this confuse any of the ORM caching mechanisms and show dirty data? If so, how would I be able to notify the ORM to rebuild its cache? Thanks for your time.

Are the XML files parsed and then the data inserted into a database that Pyramid uses? Perhaps a cron job would be better suited to that. If you are using caching then the data will not be refreshed in Pyramid until the cache refreshes. If you are using beaker you can force the cache to refresh on the next hit. Are you sure you need all this caching, though? It seems unnecessarily complicated. Pyramid is very fast, SQLAlchemy is very fast, and your database will probably be caching the query plans as well, so it's going to be very fast. I would recommend building your application with no caching, and then adding it later if it is needed. That way you can worry about getting the loaded data displaying correctly (especially since your data setup is a little more complex) before having to figure out a caching system. -- Jason

### slide:: Transparent Caching
# Illustrate using MapperOption objects to send caching directives
# to a custom Query subclass.

### slide::
# dogpile.cache is a new caching system which replaces Beaker.
#
# https://bitbucket.org/zzzeek/dogpile.cache
#
# Create a dogpile cache region
from dogpile.cache.region import make_region

regions = {
    "default": make_region().configure("dogpile.cache.memory")
}

### slide:: -*- no_clear -*-
regions["default"].set("some key", "some value")
regions["default"].get("some key")

### slide:: -*- no_clear -*-
regions["default"].backend._cache

### slide::
# the dogpile (and Beaker) model allows you to pass
# a callable that generates a value
def generate_a_value():
    print "generating !"
    return "some value"

regions["default"].get_or_create("some other key", generate_a_value)

### slide:: -*- no_clear -*-
regions["default"].get_or_create("some other key", generate_a_value)

### slide:: -*- no_clear -*-
regions["default"].delete("some other key")
regions["default"].get_or_create("some other key", generate_a_value)

### slide::
# A Query which accesses a Dogpile cache.
# The parameters of the cache are derived partially from the
# structure of the query.
from sqlalchemy.orm.query import Query

class CachingQuery(Query):
    def __iter__(self):
        """override __iter__ to change where data
happy weekend, and welcome to my RTFM question about view_config and permissions
I've made a 40% effort to figure this one out, but at least I've figured many other things out without bugging the list (the irc channel is another story ;) ).

Here's a route in application.py:

config.add_route('some_admin_thing', '/admin_something', factory=AdminUserACL)

Here's the general idea of AdminUserACL:

class AdminUserACL(object):
    @property
    def __acl__(self):
        # this is programmatic based on who is logged in,
        # but the end result might be:
        return [
            (Allow, Authenticated, 'access'),
            (Allow, Authenticated, 'useradmin'),
        ]

    def __init__(self, request):
        # pull out the admin user from request, do things

So a view that wants to require the useradmin permission looks like:

@view_config(route_name='some_admin_thing', renderer='json',
             request_method='GET', permission='useradmin')
def some_admin_thing(request):
    # ...

But the thing is, all views of this route should require the useradmin permission. I don't like that I have to split the declaration of authorization across two places (factory on add_route(), permission on view_config()). If I try to put permission or view_permission on the add_route(), it wants to know the view at that point, implying I wouldn't be able to use view_config() in the first place. Plus it appears view_permission on add_route() is deprecated. Since what I want to do seems natural here, yet it's all explicitly disallowed/discouraged, it suggests my understanding of things is incorrect? The goal here is to declare all authorization in one place. To me, factory and permission both deal with authorization, and it isn't clear why add_route() can't have some default notion of permission, agnostic of individual views, which is applied to those views.
For more options, visit this group at http://groups.google.com/group/pylons-discuss?hl=en.
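The ACL-factory pattern described above can be sketched in plain Python. This is a hedged sketch, not the poster's actual code: Allow and Authenticated are stand-ins for the constants normally imported from pyramid.security (so the snippet runs without Pyramid installed), and the user_is_admin attribute is an invented placeholder for however the app identifies admins.

```python
# Stand-ins for pyramid.security.Allow / Authenticated; in a real app,
# import these from pyramid.security instead.
Allow = "Allow"
Authenticated = "system.Authenticated"

class AdminUserACL(object):
    """Route factory: Pyramid instantiates this once per request and
    consults its __acl__ when checking a view's permission."""

    def __init__(self, request):
        # pull whatever user state is needed off the request
        self.request = request

    @property
    def __acl__(self):
        # programmatic ACL based on who is logged in
        acl = [(Allow, Authenticated, "access")]
        if getattr(self.request, "user_is_admin", False):
            # "user_is_admin" is a hypothetical request attribute
            acl.append((Allow, Authenticated, "useradmin"))
        return acl
```

As to declaring the permission once: in Pyramid 1.3+ the `pyramid.view.view_defaults` class decorator lets a class of views share `permission='useradmin'` in a single place, which partially addresses the split the poster describes.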
alembic 0.3.2 released
Hey lists - I've just put out Alembic 0.3.2. This version features initial support for Oracle, and some bug fixes. There's plenty more to do with Alembic, so keep those pull requests coming in! Thanks all for the help on this project. Alembic 0.3.2: http://pypi.python.org/pypi/alembic/

- [feature] Basic support for Oracle added, courtesy shgoh. #40
- [feature] Added support for UniqueConstraint in autogenerate, courtesy Atsushi Odagiri
- [bug] Fixed support of schema-qualified ForeignKey target in column alter operations, courtesy Alexander Kolov.
- [bug] Fixed bug whereby create_unique_constraint() would include in the constraint columns that are added to all Table objects using events, externally to the generation of the constraint.
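For reference, the UniqueConstraint autogenerate support produces migration operations like the following. This is a migration-script fragment (not runnable standalone), and the constraint, table, and column names are purely illustrative:

```python
# fragment of an Alembic migration script (illustrative names)
from alembic import op

def upgrade():
    # emitted by autogenerate when a UniqueConstraint appears in the model
    op.create_unique_constraint("uq_user_email", "user", ["email"])

def downgrade():
    op.drop_constraint("uq_user_email", "user")
```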
Re: need a namespace package guru....
that seems odd, do you have that same issue even if you use pip? Namespace packages are supported without setuptools, as it falls back onto pkgutil.extend_path. Also, --no-site-packages is the default now, and not using it is a little crazy... On Apr 15, 2012, at 3:01 PM, Mike Orr wrote: I don't know much about namespace packages, but I'd avoid them as they lead to occasional problems on some systems. For instance, I can't install Pyramid without --no-site-packages on some versions of Ubuntu, because the zope. namespace is split between the system site-packages and the virtualenv's, and Python apparently can't handle this. Py2exe is not Setuptools aware yet, so it may not be compatible with namespace packages. (So far I haven't had a problem with Pylons and Py2exe -- including SQLAlchemy -- as long as all imports are listed explicitly rather than using Routes autoload, but it may be an issue when I switch an application to Pyramid.) On Sat, Apr 14, 2012 at 2:40 PM, Michael Bayer mike...@zzzcomputing.com wrote: it seems like my entire concept of having root and root.subpackage, the way sqlalchemy and sqlalchemy.orm do, is just entirely wrong from a namespace package point of view. While I think what I'm doing does actually work, this is not what anyone had in mind with namespace packages. I'm tempted to just merge these two things together. But I think I'll just move dogpile into dogpile.core and just get in line with everyone else. On Apr 14, 2012, at 4:08 PM, Michael Bayer wrote: I think I released the dogpile.cache stuff incorrectly; actually installing the packages, it seems like I got the namespace package stuff wrong. I have the pkg_resources.declare_namespace directive in the dogpile/__init__.py of dogpile.cache but not of dogpile, and it appears that this needs to be exactly the opposite. Can someone help me set up dogpile and dogpile.cache correctly? Here's a paste of what *seems* to work: http://paste.pocoo.org/show/581532/ the questions I have are: 1.
Is it OK if I have __version__ and a few other things in dogpile/__init__.py of the root project? This seems to work, as dogpile is always imported first, but the docs at http://packages.python.org/distribute/setuptools.html#namespace-packages might suggest otherwise: "You must NOT include any other code and data in a namespace package's __init__.py. Even though it may appear to work during development, or when projects are installed as .egg files, it will not work when the projects are installed using 'system' packaging tools -- in such cases the __init__.py files will not be installed, let alone executed." 2. Since dogpile is always imported first, it seems like I don't need anything in the dogpile/__init__.py of dogpile.cache? 3. Or is dogpile not always imported first in this scenario? How would I see that? -- Mike Orr sluggos...@gmail.com
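The pkgutil fallback mentioned above can be demonstrated end-to-end. This is a self-contained sketch: "nsdemo", "core", and "cache" are hypothetical stand-ins for dogpile/dogpile.core/dogpile.cache, and the two temp directories play the role of two separately installed distributions.

```python
import os
import sys
import tempfile
import textwrap

# Each distribution ships an identical pkgutil-style __init__.py for the
# shared namespace; whichever copy is imported first extends __path__
# with every other "nsdemo" directory found on sys.path.
INIT = textwrap.dedent("""
    from pkgutil import extend_path
    __path__ = extend_path(__path__, __name__)
""")

root = tempfile.mkdtemp()
for dist, sub in [("dist_a", "core"), ("dist_b", "cache")]:
    pkg = os.path.join(root, dist, "nsdemo")
    os.makedirs(os.path.join(pkg, sub))
    with open(os.path.join(pkg, "__init__.py"), "w") as f:
        f.write(INIT)
    with open(os.path.join(pkg, sub, "__init__.py"), "w") as f:
        f.write("marker = %r\n" % sub)
    sys.path.insert(0, os.path.join(root, dist))

# both subpackages import, even though they live in different directories
import nsdemo.core
import nsdemo.cache
```

This mirrors the distribute docs' warning quoted above: the namespace `__init__.py` contains only the namespace declaration, nothing else.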
need a namespace package guru....
I think I released the dogpile.cache stuff incorrectly; actually installing the packages, it seems like I got the namespace package stuff wrong. I have the pkg_resources.declare_namespace directive in the dogpile/__init__.py of dogpile.cache but not of dogpile, and it appears that this needs to be exactly the opposite. Can someone help me set up dogpile and dogpile.cache correctly? Here's a paste of what *seems* to work: http://paste.pocoo.org/show/581532/ the questions I have are: 1. Is it OK if I have __version__ and a few other things in dogpile/__init__.py of the root project? This seems to work, as dogpile is always imported first, but the docs at http://packages.python.org/distribute/setuptools.html#namespace-packages might suggest otherwise: "You must NOT include any other code and data in a namespace package's __init__.py. Even though it may appear to work during development, or when projects are installed as .egg files, it will not work when the projects are installed using 'system' packaging tools -- in such cases the __init__.py files will not be installed, let alone executed." 2. Since dogpile is always imported first, it seems like I don't need anything in the dogpile/__init__.py of dogpile.cache? 3. Or is dogpile not always imported first in this scenario? How would I see that?
Re: need a namespace package guru....
it seems like my entire concept of having root and root.subpackage, the way sqlalchemy and sqlalchemy.orm do, is just entirely wrong from a namespace package point of view. While I think what I'm doing does actually work, this is not what anyone had in mind with namespace packages. I'm tempted to just merge these two things together. But I think I'll just move dogpile into dogpile.core and just get in line with everyone else. On Apr 14, 2012, at 4:08 PM, Michael Bayer wrote: I think I released the dogpile.cache stuff incorrectly; actually installing the packages, it seems like I got the namespace package stuff wrong. I have the pkg_resources.declare_namespace directive in the dogpile/__init__.py of dogpile.cache but not of dogpile, and it appears that this needs to be exactly the opposite. Can someone help me set up dogpile and dogpile.cache correctly? Here's a paste of what *seems* to work: http://paste.pocoo.org/show/581532/ the questions I have are: 1. Is it OK if I have __version__ and a few other things in dogpile/__init__.py of the root project? This seems to work, as dogpile is always imported first, but the docs at http://packages.python.org/distribute/setuptools.html#namespace-packages might suggest otherwise: "You must NOT include any other code and data in a namespace package's __init__.py. Even though it may appear to work during development, or when projects are installed as .egg files, it will not work when the projects are installed using 'system' packaging tools -- in such cases the __init__.py files will not be installed, let alone executed." 2. Since dogpile is always imported first, it seems like I don't need anything in the dogpile/__init__.py of dogpile.cache? 3. Or is dogpile not always imported first in this scenario? How would I see that?
Re: What about a pyramid collective ?
On Apr 12, 2012, at 9:58 AM, Michael Merickel wrote: On Thu, Apr 12, 2012 at 7:41 AM, Domen Kožar do...@dev.si wrote: Having something like djangopackages.com + pypi classifier would achieve the same goal. Pull requests are also easy to make; I would propose rather to have a good read about the preferred way of contributing to package maintainers. I'm much more +1 on maintaining and improving http://pyramid.opencomparison.org/ (djangopackages). I've also requested on catalog-sig a Framework :: Pyramid trove classifier. I think in the era of DVCS it doesn't make much sense to attempt to manage organizations and commit access rights. You and your maintainers own the project repo; other people can submit pull requests. What is more important is that the source repositories are easy to find, for which I think the opencomparison page does a good job. I think the opencomparison page needs much more visibility within the community. my initial impression is leaning this way as well. My concern with a central repo that everyone publishes towards is that you get a lot of lemons, and the system doesn't provide any way for newcomers to distinguish between the first-class, recommended approaches and the half-baked ideas that will get people into trouble. You'd then say, OK, well, someone needs to curate the collection of things; easier said than done. There are a lot of old recipes on the SQLAlchemy wiki I'd love to blow away, but, well, one of my users went through all the trouble to write it and I don't want to upset him, and, well, OK, maybe it's somewhat useful if not out of date, so it just stays up there as a sort-of-not-quite-useful thing. Plus you need someone curating all these things in the first place. The repo starts looking like a stale graveyard for discarded ideas.
For a project that is desperate to provide a consistent, simple story for newcomers (something I think Pyramid and SQLAlchemy share), these open-ended repositories just add to the confusion, pushing a large list of highly varied approaches into a flattened presentation.
Re: pyramid async setup
On Apr 12, 2012, at 3:27 PM, binadam wrote: On Thursday, April 12, 2012 12:24:42 PM UTC-7, binadam wrote: Hello all, I used the following cookbook for an asynchronous setup: http://michael.merickel.org/2011/6/21/tictactoe-and-long-polling-with-pyramid/ but ran into problems. First I'd be interested to know if anyone has successfully done this (in a production environment) using the following packages (or something similar): pyramid, gunicorn + gevent, postgresql (with psycopg2 made green). I should also add sqlalchemy. I'm playing with this right now. It's working for me; here are the two things I'm observing so far. 1. It might be better to use NullPool with create_engine(), not sure yet. This eliminates all connection pooling. I'm not sure if there's some kind of twinge with using a psycopg2 connection in a greenlet it wasn't created in; the statement at http://initd.org/psycopg/docs/advanced.html#support-to-coroutine-libraries doesn't seem to say this, but I am seeing it hang more often if I don't use NullPool. 2. Then it runs great, but watching this go, I can see that there might be a greater chance of old-fashioned deadlocks occurring; it's not clear yet. Try running "ps -ef | grep post" or "select * from pg_stat_activity" to see if anything is just locking. Script is attached. With gevent I can run through about 55K rows of work in 53 seconds; with threads it takes 66 seconds.
```python
from sqlalchemy import Column, Integer, create_engine, ForeignKey, \
    String, Numeric
from sqlalchemy.orm import Session, relationship
from sqlalchemy.ext.declarative import declarative_base
import random
from decimal import Decimal

Base = declarative_base()

class Employee(Base):
    __tablename__ = 'employee'

    id = Column(Integer, primary_key=True)
    name = Column(String(100), nullable=False)
    type = Column(String(50), nullable=False)

    __mapper_args__ = {'polymorphic_on': type}

class Boss(Employee):
    __tablename__ = 'boss'

    id = Column(Integer, ForeignKey('employee.id'), primary_key=True)
    golf_average = Column(Numeric)

    __mapper_args__ = {'polymorphic_identity': 'boss'}

class Grunt(Employee):
    __tablename__ = 'grunt'

    id = Column(Integer, ForeignKey('employee.id'), primary_key=True)
    savings = Column(Numeric)
    employer_id = Column(Integer, ForeignKey('boss.id'))
    employer = relationship(Boss, backref="employees",
                            primaryjoin=Boss.id == employer_id)

    __mapper_args__ = {'polymorphic_identity': 'grunt'}

def runit(engine):
    sess = Session(engine)

    # create 1000 Boss objects.
    bosses = [
        Boss(
            name="Boss %d" % i,
            golf_average=Decimal(random.randint(40, 150))
        )
        for i in xrange(1000)
    ]
    sess.add_all(bosses)

    # create 1 Grunt objects.
    grunts = [
        Grunt(
            name="Grunt %d" % i,
            savings=Decimal(random.randint(500, 1500) / 100)
        )
        for i in xrange(1)
    ]

    # Assign each Grunt a Boss.  Look them up in the DB
    # to simulate a little bit of two-way activity with the
    # DB while we populate.  Autoflush occurs on each query.
    while grunts:
        boss = sess.query(Boss).\
            filter_by(name="Boss %d" % (101 - len(grunts) / 100)).\
            first()
        for grunt in grunts[0:100]:
            grunt.employer = boss
        grunts = grunts[100:]

    sess.flush()

    report = []
    # load all the Grunts, print a report with their name, stats,
    # and their bosses' stats.
    for grunt in sess.query(Grunt):
        report.append((
            grunt.name,
```
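The NullPool setup from point 1 above boils down to a one-line change to create_engine(). A minimal sketch: the stack described in the thread would use a postgresql+psycopg2 URL (with psycopg2 patched for gevent first); sqlite is used here only so the snippet is self-contained.

```python
from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool

# NullPool turns off pooling entirely: every checkout opens a brand-new
# DBAPI connection, so a connection is never handed back to a greenlet
# other than the one that created it.  In the thread's stack this URL
# would be "postgresql+psycopg2://..." instead of sqlite.
engine = create_engine("sqlite://", poolclass=NullPool)
```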
dogpile.cache 0.1.0 released
Hello lists - As some of you know, I've been promising an update to the Beaker caching system for some months now, building on a pair of libraries, dogpile and dogpile.cache (see the original blog post at http://techspot.zzzeek.org/2011/10/01/thoughts-on-beaker/ for background). Following the effort that started at about that time, I'm pleased to announce the initial alpha release of dogpile.cache, now available from PyPI. dogpile.cache builds on the dogpile locking system, which implements the idea of "allow one creator to write while others read" in the abstract. Overall, dogpile.cache is intended as a replacement for the Beaker caching system, the internals of which were written by the same author. All the ideas of Beaker that work are re-implemented in dogpile.cache in a more efficient and succinct manner, and all the cruft (Beaker's internals were first written in 2005) is relegated to the trash heap. Key, nifty features of dogpile.cache include: a really straightforward API; significant performance improvements over Beaker; a pluggable, key-distributed locking system, including a memcached-based lock out of the box; and registration of new and/or modified cache backends as a daily matter of routine, directly or via setuptools entry points. There's a decent README up now at http://pypi.python.org/pypi/dogpile.cache and you can read all the docs at http://dogpilecache.readthedocs.org/. I'm hoping to get some testers and initial feedback.
Re: [sqlalchemy] dogpile.cache 0.1.0 released
On Apr 9, 2012, at 11:49 AM, Wichert Akkerman wrote: On 2012-4-9 17:28, Michael Bayer wrote: There's a decent README up now at http://pypi.python.org/pypi/dogpile.cache and you can read all the docs at http://dogpilecache.readthedocs.org/. I'm hoping to get some testers and initial feedback. Can you shed some light on how this differs from retools (http://readthedocs.org/docs/retools/en/latest/), other than that dogpile does not support redis and retools only supports redis? Basically, the caching and distributed locking features of retools should be a dogpile backend, and in fact they can be if retools wants to publish a dogpile.cache backend. Though looking at the source, he'd have to rip some of the more rudimentary functions out of CacheRegion.load(), which seems to be totally inlined right now. The redis lock itself could be used but needs a wait flag. It seems like we'd almost be better off taking the stats logic wired into load and just making that an optional feature of dogpile, so that all the backends could get at hit/miss stats equally. The get/set/lock things I see in retools would only give us like a dozen lines of code that can actually be reused, and putting keys into redis and locking are obviously almost the same as a memcached backend in any case. It's just a question of which project wants to maintain the redis backend. He appears to have a region invalidate feature taking advantage of being able to query across a range in redis; that's not something we can generalize across backends, so I try not to rely on things like that, but dogpile can expose this feature via the backend directly. The function decorators and the dogpile integration in retools are of course derived from Beaker the same way dogpile's are, and work similarly. Dogpile's schemes are generalized and pluggable.
It seems like Ben stuck with the @decorate('short-term', 'namespace') model we first did in Beaker, whereas with dogpile.cache the API looks more like Flask-Cache (http://packages.python.org/Flask-Cache/), where you have a cache object that provides the decorator. Queue/job/etc. appears to be something else entirely; I'd ask how that compares to celery-redis and other redis-queue solutions.
Re: [sqlalchemy] dogpile.cache 0.1.0 released
On Apr 9, 2012, at 12:57 PM, Wichert Akkerman wrote: Would you also be willing to accept a pull request (assuming you use git, otherwise a patch?) that adds a redis backend to dogpile directly? sure It seems like we'd almost be better off taking the stats logic wired into load and just making that an optional feature of dogpile, so that all the backends could get at hit/miss stats equally. The get/set/lock things I see in retools would only give us like a dozen lines of code that can actually be reused, and putting keys into redis and locking are obviously almost the same as a memcached backend in any case. It's just a question of which project wants to maintain the redis backend. The statistics are certainly very useful. He appears to have a region invalidate feature taking advantage of being able to query across a range in redis; that's not something we can generalize across backends, so I try not to rely on things like that, but dogpile can expose this feature via the backend directly. Region invalidate is very, very useful in my experience: for us it allows a management system to invalidate caches for a running website without having to make it aware of all the implementation details of the site. We can now simply say "invalidate everything related to magazines" instead of invalidating 15 different functions separately (which will get out of sync as well). great, just not something that's possible with memcached, for example. The function decorators and the dogpile integration in retools are of course derived from Beaker the same way dogpile's are, and work similarly. Dogpile's schemes are generalized and pluggable. A problem I have with the Beaker and retools decorators is that they make it very hard to include context from a view in a cache key.
For example, for complex views it is very common that you want to cache a helper method on the view class, but you want the context and things like request.application_url to be part of the cache key, and those are never passed to the method. That leads to code like this:

```python
class MyView:
    @some_cache_decorator
    def _slow_task(self, context_id):
        # Do something
        ...

    def slow_task(self):
        return self._slow_task(self.context.id)
```

one approach I used to take was a decorator which could take a function parameter that returned extra cache keys. You could use that like this:

```python
class MyView:
    def _cachekey(self, *a, **kw):
        return (self.request.application_url, self.context.id)

    @some_cache_decorator(extra_keys=_cachekey)
    def slow_task(self):
        # Do things here
        ...
```

Well, I've got the function that comes up with the cache key as pluggable. So you can make your own that interprets the namespace parameter as a tuple or otherwise, if not a plain string as is its usual function, then snatches whatever you'd like from each function call. It seems like Ben stuck with the @decorate('short-term', 'namespace') model we first did in Beaker, whereas with dogpile.cache the API looks more like Flask-Cache (http://packages.python.org/Flask-Cache/), where you have a cache object that provides the decorator. That sounds like the dogpile approach does not support environments where you have multiple copies of the same application in the same process space but using different configurations? That's a rare situation, but to some people it appears to be too important. assuming you're talking about beaker's middleware, all it did was stick a CacheManager object in the wsgi environ. You can certainly do that with dogpile cache regions too.
After considering some approaches, I think you could just subclass CacheRegion and override backend and configure to pull from the wsgi environment; some system of making it available would need to be devised (Pylons made this easy with the thread-local registries, but I understand we don't do that with Pyramid).
Re: [sqlalchemy] dogpile.cache 0.1.0 released
On Apr 9, 2012, at 3:33 PM, Michael Bayer wrote: assuming you're talking about beaker's middleware, all it did was stick a CacheManager object in the wsgi environ. You can certainly do that with dogpile cache regions too. After considering some approaches, I think you could just subclass CacheRegion and override backend and configure to pull from the wsgi environment; some system of making it available would need to be devised (Pylons made this easy with the thread-local registries, but I understand we don't do that with Pyramid). Here's what that could look like:

```python
class ConfigInjectedRegion(CacheRegion):
    """A :class:`.CacheRegion` which accepts a runtime method of
    determining backend configuration.

    Supports ad-hoc backends per call, allowing storage of backend
    implementations inside of application-specific registries, such
    as wsgi environments.

    """

    def region_registry(self):
        """Return a dictionary where :class:`.ConfigInjectedRegion`
        will store backend implementations.

        This is typically stored in a place like the wsgi environment
        or other application configuration.

        """
        raise NotImplementedError()

    def configure(self, backend, **kw):
        self.region_registry()[self.name] = super(
            ConfigInjectedRegion, self).configure(backend, **kw)

    @property
    def backend(self):
        return self.region_registry()[self.name]
```

Suppose we were using Pylons' config stacked proxy. Usage is then like:

```python
from pylons import config

class MyRegions(ConfigInjectedRegion):
    def region_registry(self):
        return config['dogpile_registries']

def setup_my_app(config):
    config['dogpile_registries'] = {}
```

I haven't used Pyramid much yet, but whatever system it has of setting up these config/request things for the lifespan of a request, you'd inject onto the MyRegions object. I could commit ConfigInjectedRegion as it is, but I'd want to check that this use case works out.
Re: TypeError: render_unicode() keywords must be strings
On Jan 7, 2012, at 2:03 PM, Raoul Snyman wrote: Hi, I'm running Pyramid 1.0 with the Mako templating engine via flup on a host that only provides FastCGI. I've installed everything into a virtual environment. I'm getting the following error when accessing the site:

```
URL: redacted
File '/home/redacted/venv/lib/python2.6/site-packages/weberror/errormiddleware.py', line 162 in __call__
  app_iter = self.application(environ, sr_checker)
File '/home/redacted/venv/lib/python2.6/site-packages/pyramid-1.0-py2.6.egg/pyramid/router.py', line 158 in __call__
  response = view_callable(context, request)
File '/home/redacted/venv/lib/python2.6/site-packages/pyramid-1.0-py2.6.egg/pyramid/config.py', line 2839 in _rendered_view
  context)
File '/home/redacted/venv/lib/python2.6/site-packages/pyramid-1.0-py2.6.egg/pyramid/renderers.py', line 294 in render_view
  request=request)
File '/home/redacted/venv/lib/python2.6/site-packages/pyramid-1.0-py2.6.egg/pyramid/renderers.py', line 322 in render_to_response
  result = self.render(value, system_values, request=request)
File '/home/redacted/venv/lib/python2.6/site-packages/pyramid-1.0-py2.6.egg/pyramid/renderers.py', line 318 in render
  result = renderer(value, system_values)
File '/home/redacted/venv/lib/python2.6/site-packages/pyramid-1.0-py2.6.egg/pyramid/mako_templating.py', line 131 in __call__
  result = template.render_unicode(**system)
TypeError: render_unicode() keywords must be strings
```

any chance you can use pdb.post_mortem() to get in there and just see what's actually in **system? I know what the error says, but I've gone through my code and I've stripped out every single last unicode string I could find, and I still get this error. I don't get it when running locally on paster, and I don't get it on my development server running Apache/mod_wsgi. I really don't know where else to look, as the exception occurs before my code starts running. Any help would be appreciated, and if you need more information, please let me know.
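Once pdb.post_mortem() lands you in the failing frame, a small helper like this (a sketch, not part of Pyramid or Mako) pinpoints the offending entries in the system dict. Under Python 2, the culprits would show up as unicode keys, which is exactly what makes **-expansion raise "keywords must be strings":

```python
def non_str_keys(d):
    """Return the keys of d that are not plain str; under Python 2
    these are the unicode keys that break **d expansion."""
    return [k for k in d if not isinstance(k, str)]
```

At the pdb prompt this would be used as, e.g., `p non_str_keys(system)`.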
-- Raoul Snyman B.Tech Information Technology (Software Engineering) E-Mail: raoul.sny...@gmail.com Web: http://www.saturnlaboratories.co.za/ Blog: http://blog.saturnlaboratories.co.za/ Mobile: 082 550 3754 Registered Linux User #333298 (http://counter.li.org)
Re: SQLAHelper 1.0 released, and a proposal
On Jan 3, 2012, at 2:32 PM, Mike Orr wrote: Reflection is more difficult because you can't map classes to tables until they've been reflected, and I'm not sure how it affects declarative syntax. I've recently worked out a way to do this: http://www.sqlalchemy.org/trac/wiki/UsageRecipes/DeclarativeReflectedBase and in 0.7.5 or #2356, it will be slightly easier still: http://www.sqlalchemy.org/trac/ticket/2356 So some of the global code you have to put in an init function: at minimum the table definitions and mapper calls, and at maximum the entire declarative classes. Again, the Akhet manual discusses this. but yes, the reflected approach still needs some point at which you say "reflect and map!", even if declarative is used.
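The "reflect and map!" step can be sketched like this. Note this uses the modern (SQLAlchemy 1.4+) imperative-mapping spelling against an in-memory sqlite database, not the 0.7-era recipe linked above; the table and class names are illustrative:

```python
from sqlalchemy import MetaData, create_engine, text
from sqlalchemy.orm import registry

engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"))

# the "reflect and map!" moment: tables must be reflected into the
# MetaData before any class can be mapped against them
metadata = MetaData()
metadata.reflect(bind=engine)

mapper_registry = registry()

class User(object):
    pass

# map the class against the reflected table
mapper_registry.map_imperatively(User, metadata.tables["users"])
```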
Re: SQLAHelper 1.0 released, and a proposal
On Dec 29, 2011, at 12:20 AM, Ahmed wrote: Without it I would expect the library to dynamically define the User class based on a Base and Session that I supply to it; it would then return me the new User class using that Base.metadata, and I could track that User class within my app. Before I heard about sqlahelper, I was starting to code using this approach. However, the way sqlalchemy functions now makes it impossible to go further with this approach if you will be using relationships. (Mixins were a great addition to sqlalchemy 0.7, which gives us more flexibility in using sqlalchemy as a third party lib. However, there are still missing bits.) I wouldn't have resorted to sqlahelper if not for two things (might actually be one): groups = relationship(Group, secondary=user_groups_table) The secondary argument only accepts passing a Table object. If it accepted a string (a table name to look up, for example, that is evaluated at mapping time) I wouldn't have resorted to passing my base around. The obstacle is that to build a table you have to have your base at hand. It's documented that you can pass a lambda: relationship(Group, secondary=lambda: Base.metadata.tables['user_groups_table']) The string is accepted there as well, just not documented (this is fixed in r58937c3f4abe and is building now): relationship(Group, secondary='user_groups_table') So with that as a given, do we still have a strong need for sqlahelper?
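A runnable version of the lambda form being discussed (the User/Group schema here is made up to mirror the snippet, and the imports assume a modern SQLAlchemy rather than 0.7):

```python
from sqlalchemy import Column, ForeignKey, Integer, Table, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

user_groups_table = Table(
    "user_groups_table", Base.metadata,
    Column("user_id", ForeignKey("users.id"), primary_key=True),
    Column("group_id", ForeignKey("groups.id"), primary_key=True),
)

class Group(Base):
    __tablename__ = "groups"
    id = Column(Integer, primary_key=True)

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    # the lambda defers the metadata lookup until mappers are configured,
    # so the association Table need not exist at class-definition time
    groups = relationship(
        Group, secondary=lambda: Base.metadata.tables["user_groups_table"]
    )

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add(User(id=1, groups=[Group(id=1)]))
session.commit()
group_ids = [g.id for g in session.get(User, 1).groups]
```

Because resolution is deferred, a plugin could declare the relationship before the host application has finished assembling its metadata.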
Re: SQLAHelper 1.0 released, and a proposal
On Dec 28, 2011, at 2:19 AM, Ahmed wrote: I guess being able to refer to the bases from third party libraries is important.

user_groups = Table('user_groups', base.metadata,
    Column('user_id', Integer, ForeignKey('User.id')),
    Column('group_id', Integer, ForeignKey('Group.id'))
)

class UserMixin(object):
    @declared_attr
    def groups(self):
        return relationship(Group, secondary=user_groups)

So you see, without the ability to refer to my application base, I cannot add this part to the boilerplate code in the separate That's fine, I'm talking more about class hierarchies that need to share the same base. Your system is based on mixins, which means they have less of an assumption about schema.
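The mixin approach quoted above, fleshed out into a self-contained sketch (table and class names are illustrative, and the foreign keys point at lowercase table names rather than the class names in the quoted snippet):

```python
from sqlalchemy import Column, ForeignKey, Integer, Table
from sqlalchemy.orm import declarative_base, declared_attr, relationship

Base = declarative_base()

user_groups = Table(
    "user_groups", Base.metadata,
    Column("user_id", ForeignKey("users.id")),
    Column("group_id", ForeignKey("groups.id")),
)

class Group(Base):
    __tablename__ = "groups"
    id = Column(Integer, primary_key=True)

class UserMixin:
    # declared_attr defers evaluation until the mixin is applied to a
    # mapped class, so each application gets its own relationship()
    @declared_attr
    def groups(cls):
        return relationship(Group, secondary=user_groups)

class User(UserMixin, Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
```

The mixin itself carries no Base, which is the "less of an assumption about schema" point made in the reply.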
Re: SQLAHelper 1.0 released, and a proposal
On Dec 28, 2011, at 1:36 AM, Mike Orr wrote: On Tue, Dec 27, 2011 at 8:05 AM, Michael Bayer mike...@zzzcomputing.com wrote: What's the use case for a Base being shared to a third party module that doesn't know about your application? This sounds like a bad idea. A Base might have any number of behaviors on it, like certain columns, naming conventions, __mapper_args__(), attribute names, that can't be anticipated by a third party library. The problem is that applications and libraries are all creating their own Bases and Sessions, and then it becomes complicated to make them work together. Regarding Session, if someone builds a third party library that works on some tables of its own, and the author made the system use its own Session with no way to plug into the main transaction of the calling application, that's just bad design. If it calls session.commit() on its own yet is designed to work inline with the same database and within a series of events inside of a view, also bad design. It shouldn't assume transactional scope. I would think if the plugin is designed for Pyramid, it would be based around ZopeSQLAlchemy, which provides a master transaction for everyone to integrate towards. Do the third party plugins at least integrate with that? I would say that's just something library authors would need to know how to do. They need to understand that transactions are defined by the calling application, and how a Session relates to that. It's just one click more complicated than having to know nothing at all about how transactions work. A convention here, a how-to document of best practices for 3rd party stuff, would make it clear how these should be done. Using the transaction manager recommended with Pyramid would be best, assuming the transaction manager is capable of this. Also, just as a note, I've never seen a 3rd party plugin that uses SQLAlchemy before, which is using a Session, defining tables, etc. Can I see one? Are there a lot, or like half a dozen?
Regarding Base, Base corresponds to a class hierarchy, meaning it's how your class structure is designed. SQLAHelper doesn't need to change here; it of course can expose the default Base to everyone, and that's great. A shop that has multiple apps of its own that are designed to work together can certainly have them all call into SQLAHelper's Base, and that is fine. As far as a 3rd party thing, like download this library and now you have a standalone auth model/schema (is there at least *one* 3rd party thing that is *not* about auth?), that probably shouldn't use the same Base, as it implies the app can't assume any kinds of conventions or behaviors on the Base class, or if it does it means my own app now can't do X, Y, or Z because it will break the 3rd party library. So I'd rather the standard practice be to share configuration and namespaces, but not class structure except for mixins. But if someone shows me an example here of why they really want to share out the Base, that of course can make me more aware. The sharing of configuration can integrate with the schema of the target system by sharing MetaData(). It can integrate at the relationship() level using real class objects, or by sharing _decl_class_registry, which has always been something that could be monkeypatched, but not public... so r76d872dc77b9 now contains this feature:

from sqlalchemy.ext.declarative import declarative_base

reg = {}
Base1 = declarative_base(class_registry=reg)
Base2 = declarative_base(class_registry=reg)

class A(Base1):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)

class B(Base2):
    __tablename__ = 'b'
    id = Column(Integer, primary_key=True)
    aid = Column(Integer, ForeignKey(A.id))
    as_ = relationship(A)

assert B.as_.property.mapper.class_ is A

If you additionally share a MetaData() between the two bases, those two will entirely share the same class name and table name registry.
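A runnable variant of the snippet above against modern imports, using a string target in relationship() to show the shared class registry actually doing the cross-base lookup (the original used the class object directly; everything else follows the quoted code):

```python
from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.orm import declarative_base, relationship

reg = {}
Base1 = declarative_base(class_registry=reg)
Base2 = declarative_base(class_registry=reg)

class A(Base1):
    __tablename__ = "a"
    id = Column(Integer, primary_key=True)

class B(Base2):
    __tablename__ = "b"
    id = Column(Integer, primary_key=True)
    aid = Column(Integer, ForeignKey(A.id))
    # the string "A" resolves even though A lives on a different Base,
    # because both bases were constructed with the same class_registry dict
    as_ = relationship("A")
```

The two bases still have separate MetaData here; only the class-name registry is shared, which is enough for string resolution in relationship().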
Also note ticket 2338: http://www.sqlalchemy.org/trac/ticket/2338 , which I'm leaning towards, would add the full module path of things to the registry, so you could say: group = relationship("myauth.classes.Group") Well, I'm inclined to do whatever MikeB suggests. But what would the module contain in this case; i.e., what would its globals be? You'd still need a module with globals in order to be a well-known rendezvous point. 'helper = SQLAHelper()' as a variable in the application doesn't do that. Or would 'helper' be the global in the package? Never mind that part here; if it's a module global already, that's fine. One thing SQLAHelper does is that if you call ``add_engine(engine)`` without an engine name, it becomes the default engine, and the Session and Base.metadata are automatically bound to it. This is so that applications with a single database only need to make one function call to set everything up. It looks like the only way
Re: SQLAHelper 1.0 released, and a proposal
On Dec 28, 2011, at 3:28 PM, Mike Orr wrote: I would think if the plugin is designed for Pyramid, it would be based around ZopeSQLAlchemy, which provides a master transaction for everyone to integrate towards. Do the third party plugins at least integrate with that? Question: if multiple scoped sessions are created, each using ZopeTransactionExtension, would they all automatically fit into the global commit/rollback? Can we use the same ZopeTransactionExtension *instance* for all the scoped sessions, or would they each need a separate instance? Yeah, I'd like to get guidance from Chris McD or Lawrence Rowe on what a multiple-session integration path might be (or if all apps should really use one Session with zope.sqlalchemy). Also, my apologies for screwing up its name. Also, just as a note, I've never seen a 3rd party plugin that uses SQLAlchemy before, which is using a Session, defining tables, etc. Can I see one? Are there a lot, or like half a dozen? I can't remember everywhere I've seen things. There are few libraries using SQLAlchemy yet. I mainly wanted to avoid a future mess if everyone did things in different ways, and then had trouble interoperating. The API you've suggested sounds the most flexible; it won't get in anyone's way, but it's there if you want it, and it can scale to multiple databases, Bases, and Sessions. I only ask because, you know, it's really hard to design a system that we're going to recommend for everyone without a good set of concrete examples of what they need. You know we've been around this block a bunch.
Re: SQLAHelper 1.0 released, and a proposal
On Dec 28, 2011, at 5:19 PM, Mike Orr wrote: On Wed, Dec 28, 2011 at 2:10 PM, Michael Merickel mmeri...@gmail.com wrote: Question: if multiple scoped sessions are created, each using ZopeTransactionExtension, would they all automatically fit into the global commit/rollback? Can we use the same ZopeTransactionExtension *instance* for all the scoped sessions, or would they each need a separate instance? Yeah, I'd like to get guidance from Chris McD or Lawrence Rowe on what a multiple-session integration path might be (or if all apps should really use one Session with zope.sqlalchemy). Also, my apologies for screwing up its name. Sorry, I'm not Chris or Lawrence, but I can tell you that the transaction package, which the ZTE and many other transaction-aware packages support, utilizes a threadlocal manager to which each ZopeTransactionExtension (ZTE) instance joins. The short answer is that if you have multiple ScopedSession objects that are using the ZTE, they will all be controlled by the same global transaction, and when pyramid_tm does transaction.commit(), all of the sessions that are marked dirty will be committed. The ZTE supports two-phase transactions, but only if the ScopedSession is initialized with twophase=True; thus ideally all sessions are done this way: DBSession = scoped_session(sessionmaker(extension=ZopeTransactionExtension(), twophase=True)) Is there any downside to setting twophase=True by default? Would it make life more complex for simple applications? Also, again, I need to know whether it's safe to share a _zte instance between two sessionmakers. Because if it's not, the docs will have to warn people to use it only with the default session. Twophase should not be on by default. On a DB like PG, it significantly alters the way transactions are done and can complicate management, as when things fail it can easily leave around prepared transactions that can lock up whole sets of tables until someone goes in and rolls them back manually.
The flag shouldn't be on unless someone knows what they're doing and really wants that behavior.
Re: SQLAHelper 1.0 released, and a proposal
On Dec 26, 2011, at 11:58 PM, Mike Orr wrote: The purpose of SQLAHelper is to solve the most common problem, which is sharing a single Base and Session across multiple modules, including those written by third parties who don't know about your application. What's the use case for a Base being shared to a third party module that doesn't know about your application? This sounds like a bad idea. A Base might have any number of behaviors on it, like certain columns, naming conventions, __mapper_args__(), attribute names, that can't be anticipated by a third party library. The only reason I can think of is the "well, you can't link via ForeignKey or relationship() with a separate base" argument. But this is not really true, for a variety of reasons. That said, I have a feeling some libraries are already doing this, but I wish we could work out a better way than using inheritance as a third party plugin point - one of Ben's key rationales for dumping Pylons altogether is that it was built on this model for extensibility. But when you start getting multiple databases in the application, it may get beyond what the shared Base can provide. In that case, you can decide whether one database is general-purpose, used by several modules and third-party libraries, while another database is single-purpose, used only by one module. Then there's a clear answer: use SQLAHelper's Base for the shared database, and your own Base for the single-purpose database. For instance, the second database may be specifically for site statistics, searching, an external database you're consulting, etc. These would all be single-purpose databases, which wouldn't have to be shared. Why not standardize SQLAHelper on a "one by default, many if desired" model? Also, as an alternative to the getter/setter style, why not just:

helper = SQLAHelper()

# default objects: base, session, engine:
default_base = helper.base
default_session = helper.session
helper.engine = create_engine(...)
# namespaces for each; 'default' points to the default:
helper.bases.default is helper.base
helper.sessions.default is helper.session
helper.engines.default is helper.engine

# alternates to the default:
helper.bases.generic_base = Base()
helper.sessions.alternate_session = scoped_session(...)
helper.engines.alt1 = create_engine(...)

I think multiple sessions should be supported. My current app uses two simultaneously, as many classes are represented in two different databases at the same time - one is the live database, the other is historical. An application that switches between master and slave databases at the session level needs to do this also.
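The "one by default, many if desired" shape proposed above could be sketched in plain Python like this. This is not SQLAHelper's actual API; all names are illustrative, and stand-in objects are used where real code would call declarative_base(), scoped_session(), and create_engine():

```python
class Namespace:
    """Attribute bag whose 'default' slot aliases the helper's default object."""
    pass

class SQLAHelperSketch:
    def __init__(self):
        self.bases = Namespace()
        self.sessions = Namespace()
        self.engines = Namespace()
        # default objects; stand-ins for a real Base and scoped Session
        self.base = object()
        self.session = object()
        self.engine = None
        self.bases.default = self.base
        self.sessions.default = self.session

    def add_engine(self, engine, name="default"):
        # an unnamed engine becomes the default; a real implementation
        # would also bind the default Session and Base.metadata to it
        setattr(self.engines, name, engine)
        if name == "default":
            self.engine = engine

helper = SQLAHelperSketch()
helper.add_engine("sqlite://")          # stand-in for a real Engine object
helper.add_engine("postgresql://...", name="alt1")
```

The single-database case stays one call (add_engine with no name), while alternates hang off the namespaces without disturbing the defaults.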
Re: SQLAHelper 1.0 released, and a proposal
On Dec 27, 2011, at 12:18 PM, Chris Withers wrote: # alternates to the default: helper.bases.generic_base = Base() but this doesn't solve my use case of having a base shared across multiple packages, only some of which may be installed, and some of which may be third party. Why not? The helper here is just a namespace. Are you referring to being able to get at the namespace from 3rd parties without any interaction? OK, so you do some kind of magic global lookup thing, maybe with entrypoints, but that's not really what I'm talking about here. Yep: http://packages.python.org/mortar_rdb/api.html#mortar_rdb.getSession ...just register each session with a different name. What's with the camelCase + getter/setter thing?
Re: how to use several Bases with config.include and a shared DBSession
On Dec 17, 2011, at 4:40 AM, Chris Withers wrote: On 16/12/2011 23:48, Michael Bayer wrote: I was just looking to express (and top post, it's just easier) that right now the "sharing the base" patterns aren't nailed down, but that it can be whatever. We can make it work whatever way people think should become a best practice. Though usually things go better when I come up with the best practice myself after getting a really clear view of the use cases. More of an SA comment than anything else, but I wish there was one source of metadata, the MetaData object, rather than having some in the MetaData object and some in the Base... Right, but like I said, you might have a hierarchy of classes sharing a base, but then several groups of tables that are same-named, spread out, using different metadatas. The current app I work on is like this - one Base but actually four separate MetaData objects. I think the better solution would just be a better documented/unified system of string lookup patterns. All these registries are just about looking up strings, so that you don't have to cross-import things. Nothing else. There is always a way to do things without using them at all, just less convenient. Here's the enterprisey way: INameRegistry -> ClassNameRegistry / TableNameRegistry -> CompositeNameRegistry -> Base
Re: Is there any case where chameleon is more preferred?
On Dec 17, 2011, at 8:28 AM, Joshua Partogi wrote: Hi there, I am left undecided whether to use mako or chameleon. From what I have observed, it seems that chameleon is the default template language in pyramid (CMIIW). Is there any case where chameleon is preferred when using pyramid? I actually like mako, but I am afraid there is some mako functionality that is not supported in pyramid. There's certainly nothing in Mako that doesn't work in Pyramid. Mako is quite simple and self-contained. The caching is the only part that has even some degree of potential for framework integration, but this is also very simple. Now, does Pyramid have some features tailored towards Chameleon? I'm not totally sure, but again I bet not really.
Re: Model validation
On Dec 13, 2011, at 6:09 AM, Chris McDonough wrote: On Tue, 2011-12-13 at 01:59 -0800, rihad wrote: You are presuming that there is a one true form library that does everything well and is completely satisfactory in all cases. I can tell you from pretty hard experience that this is not true. To the extent that I recommend a single form library, I tend to recommend Deform because a) I wrote it and b) it's a Pylons project. But I always qualify my recommendation of it with "it's great for autogenerating forms, but it's not so great if you want pixel-control over the layout." For that, I tend to recommend pyramid_simpleform (also a Pylons project). Plus, I despise autogeneration of form HTML - it's not compatible with the organizations I work in, where layout is hand-designed by client-side developers. What works for some kinds of organizations doesn't work for others.
Re: Model validation
I definitely use validation libs; currently we're still on FormEncode (colander or flatland will be next), but the rendering I do using Mako defs, which can be customized and laid out at the template level - a lot more WYSIWYG than using code generators. A full example of my approach is at: http://techspot.zzzeek.org/2008/07/01/better-form-generation-with-mako-and-pylons/ On Dec 13, 2011, at 12:19 PM, Benjamin Sims wrote: Michael, I wonder if you might give an idea of how you handle that? Do you manually put forms into templates and validate yourself, or do you still use a validation library but with custom templates? Just wondering, as I have tried both before. Ben
Re: Model validation
On Dec 12, 2011, at 2:40 PM, Chris McDonough wrote: I've seen two people add code to the same package that does the same thing because nobody really knows how it works anymore. This reads like the argument people make for off-the-shelf/out-of-the-box, rather than against it. Unless you mean they had to write a bunch of spaghetti to work around the limitations of the package.
Re: Production.ini
On Nov 30, 2011, at 11:49 AM, Jonathan Vanasco wrote: You may be over-thinking this. The *easiest* ways I've found to overcome this are: 1. stuff the login/passwords into a file that is not in source control 2. stuff the login/passwords into environment variables Careful with environment variables, though - regular shell variables can be viewed in a ps listing (the -E flag on some platforms). If OTOH you mean something like AWS instance variables that are read in from the EC2 API, that's more private.
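Option 1 might look like this in practice - a sketch only; the path, the section name, and the [secrets] layout are all made up for illustration:

```python
import configparser
import os

def load_secrets(path):
    """Read a [secrets] section from an ini file kept outside source control."""
    cp = configparser.ConfigParser()
    if not cp.read(path):                     # missing file -> no secrets
        return {}
    return dict(cp["secrets"]) if cp.has_section("secrets") else {}

def secret_from_env(name, default=None):
    """Option 2: pull a credential from the environment at startup."""
    # note the caveat above: the process's original environment may still
    # be visible to system tools, so this only narrows the exposure window
    return os.environ.pop(name, default)
```

At startup the result of either function would be merged into the settings dict, so the checked-in production.ini never holds the real password.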
Re: What happens to the Pylons documentation ?
It also says "pylons 0.9.7 documentation" at the top. This is the 1.0 "Official Pylons Documentation" link straight off of http://docs.pylonsproject.org/en/latest/docs/pylons.html . Seems like more than one build is generating to the same place, perhaps. Curious what the rationale for readthedocs is versus plain old packages.python.org? I've found the latter a lot more straightforward to work with, as I can place files straight up and use whatever Sphinx themes I want without dealing with RTD's own build process. Not sure how you got custom themes to work on RTD either... On Nov 25, 2011, at 2:46 AM, Michael Merickel wrote: http://docs.pylonsproject.org/projects/pylons-webframework/en/latest/ On Fri, Nov 25, 2011 at 1:28 AM, Chris McDonough chr...@plope.com wrote: On Fri, 2011-11-25 at 10:15 +0300, Denis Denis wrote: That's how I and my colleagues see the documentation (see Appendix). Please correct it, because we often have to work with it. Somebody provide a URL, please. 2011/11/24 Michael Merickel mmeri...@gmail.com All of the documentation was recently moved to readthedocs.org and this is likely an artifact of that move. On Thu, Nov 24, 2011 at 2:51 AM, denny dope evilempi...@gmail.com wrote: For about three days the documentation has been hanging there half in Ukrainian and half in English; what is going on? What's happening with the project? Will Ukrainians develop it further?
Re: What is wrong with Pyramid? Open Letter to Community
On Nov 22, 2011, at 11:13 AM, Cem Ikta wrote: Hi Thomas, You have not understood what I meant! If you look at Java, Ruby on Rails, or PHP frameworks, you will find lots of tutorials with AJAX, CRUD, database, ORM, session management, etc. Why are there so few tutorials (the quick tutorial and wiki tutorial) for Pyramid? Is that enough? It is always difficult for beginners to learn. I'd note there are tutorial applications such as the one here, which is currently undergoing some much-needed enhancement work (will be up soon): http://docs.pylonsproject.org/projects/pyramid/en/latest/tutorials/wiki2/index.html
Re: Mobile Browser Detection
On Nov 22, 2011, at 4:46 PM, Jonathan Vanasco wrote: I've been trying to figure out the best way to handle mobile browser detection. I was hoping to find something in the Pyramid stack, but didn't. After searching online, I found a handful of various projects - with the bulk of them aimed at Django. I would aim patches at those Django projects, patches that decouple the Django-specific bits from the browser-string portion, so that us lowly non-Django users might be treated to just a taste of the fruits of Django's enormous community. The Django community can be pretty thick-headed about "hey, this doesn't have to be just for Django!" and it's time they had some encouragement.
Re: cleaning up after a request, always?
On Nov 21, 2011, at 2:09 PM, Michael Merickel wrote: The biggest argument I have for transaction management is a product of how views interact with the Pyramid rendering system. Templates are rendered *after* you have returned from your view unless you explicitly call render() or render_to_response(). If you call commit() in your view, that data is persisted whether or not the template renders. This is typically a bad thing. This is a common thing I talk to folks about, and I'm on the other side of that, at least in terms of the content of the template being ready to be sent to the client. SQLAlchemy's default behavior is to expire all data after a commit - the transaction is gone, so if you access your objects subsequent to that, a new transaction is started and the data is reloaded. ORM objects in SQLAlchemy (as well as in most other ORMs) are *proxy* objects - they load new data from the database as needed; if there's no transaction, a new one is begun. Assuming your template does anything regarding model objects, it will hit their attributes and begin a *new* transaction, and all the data will load all over again. You can disable SQLAlchemy's expire-on-commit behavior, but even then, if your template hits mymodel.some_new_attribute that wasn't already loaded, you're still starting up a new transaction. So it's very likely that rendering the template after the commit in a traditional proxy-model approach will mean you have *two* transactions. In the traditional model of the template accessing objects that are essentially lazy proxies, the rendering step itself IMHO should be within the single transaction. Then you commit, then if successful you actually return the content. Of course, if you're doing some other kind of template model, such as converting the full span of data into a JSON type of structure first, so the template renders from a fixed structure (either server side or client side), then you've changed the boundaries; that's fine.
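The two-transaction scenario described here can be observed directly by counting SELECT statements around a commit. This is a sketch with a throwaway model and modern SQLAlchemy imports; the table and event-listener names are illustrative:

```python
from sqlalchemy import Column, Integer, String, create_engine, event
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Item(Base):
    __tablename__ = "items"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

selects = []

@event.listens_for(engine, "before_cursor_execute")
def count_selects(conn, cursor, statement, parameters, context, executemany):
    if statement.lstrip().upper().startswith("SELECT"):
        selects.append(statement)

session = Session(engine)          # expire_on_commit=True is the default
item = Item(id=1, name="widget")
session.add(item)
session.commit()                   # all attributes on `item` are now expired

before = len(selects)
_ = item.name                      # template-style access: reload in a new txn
assert len(selects) == before + 1  # a fresh SELECT (and transaction) happened
```

Rendering the template before the commit, as advocated above, keeps that post-commit reload (and second transaction) from happening at all.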
But I still find great utility in the model of templates rendering directly from model proxy objects, and for that I prefer only a single transaction be used. -- You received this message because you are subscribed to the Google Groups pylons-discuss group. To post to this group, send email to pylons-discuss@googlegroups.com. To unsubscribe from this group, send email to pylons-discuss+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/pylons-discuss?hl=en.
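The two-transaction effect described above can be sketched as follows. This is a minimal illustration, assuming modern SQLAlchemy and an in-memory SQLite database; the `User` model and the begin-counting event hook are not from the thread, just scaffolding to make the transaction boundaries visible.

```python
from sqlalchemy import create_engine, event, Column, Integer, String
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class User(Base):
    __tablename__ = "user"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

begins = []

@event.listens_for(engine, "begin")
def count_begin(conn):
    # fires each time a new database transaction begins
    begins.append(conn)

session = Session(engine)  # expire_on_commit=True is the default
user = User(name="ed")
session.add(user)
session.commit()                 # transaction #1 ends; `user` is now expired
txns_after_commit = len(begins)

# a template touching user.name after the commit reloads the row,
# implicitly beginning transaction #2
name = user.name
txns_after_render = len(begins)
```

Rendering before the commit, as advocated above, would keep the attribute access inside transaction #1.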
Re: cleaning up after a request, always?
On Nov 20, 2011, at 10:59 PM, Wyatt Baldwin wrote: On Sunday, November 20, 2011 3:17:54 PM UTC-8, Iain Duncan wrote: On Sun, Nov 20, 2011 at 2:48 PM, Iain Duncan iaindun...@gmail.com wrote: Hey folks, I'm using the pattern of making SQLAlchemy sessions in a request factory, but I've mucked up and the sessions aren't always getting closed. I think it's happening when some methods short-circuit and raise HTTPExceptions. Wondering what's the "right way" to make sure that the request.session object always gets closed at the end of the request lifecycle? Here's how I did it, and it seems to be working fine. If anyone can tell me if this is wrong, that would be great. If it's a nice way of doing it (I like the way it's contained nicely in my request factory) it might be an example worth adding to the request factory examples. My request factory:

    class Request(Request):
        """Build the request object to be used by Pyramid."""

        def cleanup(self):
            """End-of-lifecycle cleanup for the request."""
            # ask model to close the session
            self.model.cleanup()

        @reify
        def model(self):
            model = self.registry.getAdapter(self, IAbstractModel)
            # make sure cleanup gets called at end of lifecycle
            self.add_finished_callback(Request.cleanup)
            return model

I use a similar approach in my request factory:

    @reify
    def db_session(self):
        db_session = self.make_db_session()
        self.add_finished_callback(lambda request: db_session.close())
        return db_session

    def make_db_session(self):
        return self.registry.settings['db.session_factory']()

How come you guys aren't using ZopeTransactionExtension for this, i.e. http://docs.pylonsproject.org/projects/pyramid/en/1.2-branch/tutorials/wiki2/basiclayout.html#content-models-with-models-py ? My understanding is that session scope should be entirely handled in that case.
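The finished-callback pattern above can be sketched framework-independently. The `Request` class here is a hypothetical stand-in for Pyramid's request object, not Pyramid's API; only the callback mechanics are shown, including the guarantee that cleanup runs even when a view short-circuits.

```python
class Request:
    """Hypothetical stand-in for a framework request object."""

    def __init__(self, session_factory):
        self._session_factory = session_factory
        self._finished_callbacks = []
        self._db_session = None

    def add_finished_callback(self, cb):
        self._finished_callbacks.append(cb)

    @property
    def db_session(self):
        # lazily create the session; register cleanup on first access
        if self._db_session is None:
            self._db_session = self._session_factory()
            self.add_finished_callback(lambda req: req._db_session.close())
        return self._db_session

    def finish(self):
        # the framework calls this at the end of the request lifecycle,
        # even when the view raised an HTTPException
        for cb in self._finished_callbacks:
            cb(self)

class FakeSession:
    """Minimal session double for the sketch."""
    closed = False
    def close(self):
        self.closed = True

req = Request(FakeSession)
session = req.db_session   # created on demand, cleanup registered
req.finish()               # session.closed is now True
```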
Re: Multiple transactions within request
On Nov 15, 2011, at 7:09 AM, Vlad K. wrote: Why didn't I think of this earlier? Transaction complains if you use session.commit() or session.begin_nested() directly, wants you to use transaction.commit() and transaction.savepoint() instead, and it just didn't occur to me to try session.rollback() nevertheless (and in my mind transaction.abort() == session.rollback(), which now I see is NOT the same), and trying savepoint.rollback() fails. I assumed session.rollback() was called by Transaction, since the SQL debug output clearly shows savepoint rollback being emitted, so I went to search for another solution. Aside from me being silly for not trying this before (and it is even suggested by the InvalidRequestError!), it is a bit illogical to have to use transaction.savepoint() and then use session.rollback() instead of savepoint.rollback(). glad you figured this out. Now we need to adjust zope.sqlalchemy's API and/or documentation so that the SAVEPOINT use case is made clear. I would think that since SAVEPOINTs can be per-connection, perhaps zope.sqlalchemy would support begin_nested() on individual sessions... or maybe not.
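For reference, the plain-SQLAlchemy SAVEPOINT behavior that zope.sqlalchemy wraps looks like this. A sketch assuming modern SQLAlchemy and an in-memory SQLite database; the `Item` model is illustrative.

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Item(Base):
    __tablename__ = "item"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)

session.add(Item(name="kept"))
session.flush()                      # INSERT emitted; outer transaction begun
savepoint = session.begin_nested()   # SAVEPOINT
session.add(Item(name="discarded"))
savepoint.rollback()                 # rolls back to the SAVEPOINT only
session.commit()                     # the outer transaction still commits "kept"

names = [item.name for item in session.query(Item)]
```

Rolling back the savepoint undoes only the work done since `begin_nested()`; the outer transaction, and `"kept"`, survive the commit.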
Re: Multiple transactions within request
On Nov 14, 2011, at 2:44 PM, Chris McDonough wrote: Out of curiosity, why are you committing in the middle of view logic? It's none of my business really, but session.flush() would seem to get you what you want and would work fully within the one-request-one-commit policy. hear, hear!
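The flush-versus-commit distinction can be sketched like this (modern SQLAlchemy, in-memory SQLite; the `Order` model is illustrative): flush() emits the SQL and makes results such as generated primary keys visible within the ongoing transaction, while leaving the commit to the end of the request.

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Order(Base):
    __tablename__ = "orders"
    id = Column(Integer, primary_key=True)
    status = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)

order = Order(status="new")
session.add(order)
session.flush()            # SQL is emitted; the primary key is assigned...
flushed_id = order.id      # ...and visible within the ongoing transaction

session.rollback()         # ...but nothing was permanently committed
count = session.query(Order).count()
```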
Re: Multiple transactions within request
On Nov 14, 2011, at 10:14 PM, Chris McDonough wrote: On Mon, 2011-11-14 at 20:30 +0100, Vlad K. wrote: For worse, I'd say, because using SQLAlchemy directly works just fine and as expected, without the need to reload the data after a failed session. By the way, are you sure about this? I've heard that when you commit or abort a raw SQLA session, the outcome is the same. Objects loaded from a finished SQLA session become invalidated without the expire_on_commit=False (not the default) argument to the sessionmaker. Am I wrong about that? The SQLA session invalidates everything after a commit or a rollback. If expire_on_commit=False, then you're OK after the commit, but still, if the rollback happens, everything is expired. There's a way to turn that off too, but then you're really in the "you're doing it wrong" area. Overall, the use case here I thought was to use begin_nested(), i.e. SAVEPOINT. Nothing gets expired when you commit on a SAVEPOINT, since you're still within the original transaction.
Re: How to change session options in a user scope?
looking at the source it seems like you could call _set_cookie_values() and _update_cookie_out() manually to re-send the cookie. also I'd advise using CookieSession overall, it's way more efficient and scalable. I'd love to rip the plain Session out of Beaker altogether. On Apr 19, 12:26 pm, Max Avanov maxim.ava...@gmail.com wrote: The Beaker session object accepts a cookie_expires parameter from the system-wide config. What should I do if I want to implement a "remember me" option in per-user scope? I mean the following behaviour:

    # In development.ini the cookie_expires option is set to True
    def authenticate(various_credentials, remember=False):
        ...
        if remember:
            session.cookie_expires = expiration_date
            session.invalidate()
        session[SESSION_KEY] = user_identity
        session.save()

So, I have to call invalidate() first in order to properly set an expiration date for the current user session. Otherwise (i.e. without the invalidate() call) the session will use the cookie_expires=True mode. It acts like a shared object, and I don't even know whether this is a thread-safe way to change cookie_expires. How do I get this done properly?
Re: [sqlalchemy] sqlalchemy 0.5.8 or 0.6.0?
Please see the list of enhancements in SQLAlchemy 0.6 at http://www.sqlalchemy.org/trac/wiki/06Migration . Pylons itself does not make use of any deprecated features in SQLAlchemy. On May 30, 2010, at 12:02 PM, Krishnakant Mane wrote: Hello, I am using Pylons for my web application development. Currently Pylons is at version 1.0, and 0.9.7 is also going stable. I want to know which is the correct version of SQLAlchemy for both versions of Pylons. I know it might not make that much of a difference, but there are some changes in syntax, so I wonder if any of those changes affect the way SQLAlchemy is used in Pylons. Besides, I would like to know if the last release of version 0.6 possesses any performance benefits over 0.5.8, or is it just a release for cleaner syntax? Happy hacking. Krishnakant. -- You received this message because you are subscribed to the Google Groups sqlalchemy group. To post to this group, send email to sqlalch...@googlegroups.com. To unsubscribe from this group, send email to sqlalchemy+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
Re: Best practice/schemes for database locking using Pylons and SQLAlchemy?
On May 6, 2009, at 6:43 PM, Jeremy Burton wrote: Pylons works superbly out-of-the-box with SQLAlchemy if your web application solely responds to HTTP requests. However, it seems to me that most non-trivial web applications (including mine) will inevitably need to have additional threads performing other tasks, e.g. mail send/receive, that also need to access the database. yes. As soon as this happens, you run into the database locking issue. why is that? the paster application already runs many threads to serve many requests simultaneously - the transactional capabilities of the database handle that concurrency. Additional worker threads need not be any different in their transactional behavior, and can still be good neighbors (i.e. no long-running transactions, commit well-defined units of work). Explicit locking is not implied by this use case.
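The "good neighbor" worker-thread pattern can be sketched as follows. This assumes modern SQLAlchemy; the `OutboundMail` model is illustrative, and a shared in-memory SQLite connection (via `StaticPool`) stands in for a real multi-connection database.

```python
import threading

from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, Session
from sqlalchemy.pool import StaticPool

Base = declarative_base()

class OutboundMail(Base):
    __tablename__ = "outbound_mail"
    id = Column(Integer, primary_key=True)
    body = Column(String)

# StaticPool + check_same_thread=False lets both threads share the
# single in-memory SQLite database for this demo only
engine = create_engine(
    "sqlite://",
    poolclass=StaticPool,
    connect_args={"check_same_thread": False},
)
Base.metadata.create_all(engine)

def mail_worker():
    # each thread uses its own Session and keeps transactions short
    session = Session(engine)
    try:
        session.add(OutboundMail(body="hello"))
        session.commit()   # a well-defined unit of work; no locks held after
    finally:
        session.close()

t = threading.Thread(target=mail_worker)
t.start()
t.join()

with Session(engine) as session:
    sent = session.query(OutboundMail).count()
```

The worker relies on the same transactional guarantees as the request-serving threads; no explicit locking appears anywhere.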
Re: Pylons Book - SimpleSite 2 (Chapter 14) create error
eoc wrote: InterfaceError: (InterfaceError) Error binding parameter 1 - probably unsupported type. u'SELECT nav.id AS nav_id, nav.name AS nav_name, nav.path AS nav_path, nav.section AS nav_section, nav.before AS nav_before, nav.type AS nav_type \nFROM nav \nWHERE nav.path = ? AND nav.section = ? AND nav.type = ? \n LIMIT 1 OFFSET 0' [u'john', ['section'], 'page'] I can't tell you what's wrong with the code, but the general thing going wrong is the wrong kind of data being sent to the statement. the binds there should all be strings, and in the log would look like: ['john', 'section', 'page'] so if there's something like query.filter(Foo.x == [some array]), you'd get an erroneous result like that.
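The failure can be reproduced directly against the sqlite3 DBAPI (a sketch; the table and values are reconstructed from the traceback, not from the book's code): passing a list where a string bind is expected produces exactly this kind of "Error binding parameter" failure.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE nav (path TEXT, section TEXT, type TEXT)")

caught = None
try:
    # the second bind is a list, not a string -- mirroring the
    # [u'john', ['section'], 'page'] parameters in the traceback
    conn.execute(
        "SELECT * FROM nav WHERE path = ? AND section = ? AND type = ?",
        ("john", ["section"], "page"),
    )
except (sqlite3.InterfaceError, sqlite3.ProgrammingError) as exc:
    # older Pythons raise InterfaceError here, newer ones ProgrammingError
    caught = exc
```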
Re: Beaker error
these seem like filesystem failures of some kind. what is special about the filesystem where the lockfiles are getting created? "special" includes how it's mounted, no NFS or similar in use, etc. Mike Orr wrote: On Tue, Mar 3, 2009 at 4:52 PM, Philip Jenvey pjen...@underboss.org wrote: On Mar 2, 2009, at 11:24 AM, Mike Orr wrote: I've started getting an intermittent Beaker error. It happens in the base controller when I pass ``session.id`` to a generic logging routine. However, I have three sites with the same logging code, and it's only happening on one of the sites. It occurs on a variety of URLs. Here's the exception: x = self.do_acquire_write_lock(wait) Module beaker.synchronization:260 in do_acquire_write_lock return False else: fcntl.flock(filedescriptor, fcntl.LOCK_EX) return True fcntl.flock(filedescriptor, fcntl.LOCK_EX) TypeError: argument must be an int, or have a fileno() method. It'd be helpful to know what the value of filedescriptor is when this happens. I'd like to know that too, but the local variables aren't included in the email traceback. :) -- Mike Orr sluggos...@gmail.com
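The TypeError itself is easy to reproduce in isolation (Unix only, since fcntl is POSIX-specific): flock() needs an integer file descriptor or an object with a working fileno() method, and the traceback implies Beaker is handing it something else, such as None from a failed open. A sketch:

```python
import fcntl
import tempfile

# a real file object (it has fileno()) locks fine
f = tempfile.TemporaryFile()
fcntl.flock(f, fcntl.LOCK_EX)
fcntl.flock(f, fcntl.LOCK_UN)
f.close()

# something without fileno() -- e.g. None, as might result from a
# failed open on a broken filesystem -- raises the TypeError seen above
caught = None
try:
    fcntl.flock(None, fcntl.LOCK_EX)
except TypeError as exc:
    caught = exc
```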
Re: Beaker error
what else is different about this app versus the other two ? do all three get similar load ? Mike Orr wrote: All the disk partitions are local ext3 filesystems; there are no network drives. It's a Dell blade server running Ubuntu 7.04. I have never gotten a disk error on it before. On Thu, Mar 5, 2009 at 9:35 AM, Michael Bayer mike...@zzzcomputing.com wrote: these seem like filesystem failures of some kind. what is special about the filesystem where the lockfiles are getting created ? special includes how its mounted, no NFS or similar in use, etc. Mike Orr wrote: On Tue, Mar 3, 2009 at 4:52 PM, Philip Jenvey pjen...@underboss.org wrote: On Mar 2, 2009, at 11:24 AM, Mike Orr wrote: I've started getting an intermittent Beaker error. It happens in the base controller when I pass ``session.id`` to a generic logging routine. However, I have three sites with the same logging code, and it's only happening on one of the sites. It occurs on a variety of URLs. Here's the exception: x = self.do_acquire_write_lock(wait) Module beaker.synchronization:260 in do_acquire_write_lock return False else: fcntl.flock(filedescriptor, fcntl.LOCK_EX) return True fcntl.flock(filedescriptor, fcntl.LOCK_EX) TypeError: argument must be an int, or have a fileno() method. It'd be helpful to know what the value of filedescriptor is when this happens. I'd like to know that too, but the local variables aren't included in the email traceback. :) -- Mike Orr sluggos...@gmail.com -- Mike Orr sluggos...@gmail.com
Re: Pylons vs Tomcat+GWT
On Jan 31, 2009, at 11:34 PM, Colin Flanagan wrote: I don't follow this logic. If the enterprise model, software development or otherwise (and does GWT really fit into that?), brought on the current economic disaster, what safeguards would the alternative (I guess in this instance, Pylons) have provided? The notion of "enterprise java" is an increasingly difficult word to define, almost as hard as the term "art". Nevertheless, the very specific practices of financial institutions and their relation to regulatory bodies seems like a difficult simile to stylistic approaches to software development. yeah I don't make a great analogy pre-coffee. I was mostly thinking of indifference to wrongness cemented by institutions. It was widely suspected that Madoff was running a Ponzi scheme. But everyone looked the other way, since people were making money - it would go against the institution to say something. Similarly, GWT produces really bloated and complex applications which all look really boring. But the framework was produced by the highest echelons of the institution, and that alone is the only answer needed to the question of what to use. GWT is not even a great example; better examples would be Interwoven TeamSite, VBScript and ColdFusion, selected due to their corporate roots - the notion that corporate-driven products are the better selection strictly due to their corporate roots.
Re: Pylons vs Tomcat+GWT
On Feb 1, 2009, at 12:19 AM, Tycon wrote: I'm not talking about facebook/youtube type sites, I'm talking about a real web application where users access information, enter information, search and analyze information, and visualize information. which one of those is not supplied by facebook?
Re: Pylons vs Tomcat+GWT
On Jan 31, 2009, at 4:28 PM, Tycon wrote: I'm planning on using GWT only for client side code and doing all server calls using JSON, and not using GWT's RPC mechanism. So I guess that would avoid the problem you are talking about? or you could just use jQuery... I've no idea how you'd use only the client side portion of GWT. from what I could tell, it seemed like the entire server-to-client app is spit out from a single monolithic compilation, and there was certainly no easy way to use just the client. Correct me if I'm wrong, but neither Perl/CGI nor Pylons/Rails etc. can be used to create a gmail-like application, unless you resort to hand-writing the entire UI (which runs wholly on the client) in javascript (good luck with that!). i think there are alternatives which would result in easier-to-read code. jQuery can go a very long way. were written using GWT-like technology, and IMO google apps are the best example of smart efficient next generation web apps. they're tremendously complex and reliant upon special build tools. facebook AFAIK is just PHP and is a more compelling client side experience than anything I've seen google do.
Re: Is Django more popular than Pylons?
what a strange post. There are no unicode issues in WSGI, and the usage of WSGI in the generic sense doesn't complicate things to any degree - the spec is just a single function call. If there are Py3K issues in Paste, let's first make it clear that *every* application that deals explicitly with character encodings needs code changes to work with Py3K. I can assure you any issues Paste has in this area will be resolved deftly and correctly by Ian Bicking. The only price Pylons is paying is that it assumes the developer would like to consider how his application should be architected, instead of those decisions being made implicitly and invisibly. This is a cultural situation created by the dominance of PHP, a decidedly "don't make me think / I didn't even know there was anything to think about" platform, in the LAMP world. If and when other cultures, such as those of the Java and .NET/C# communities (the theme of which would be, "we know how to code, let's do this exactly the way we think it should be"), decide to embrace Python more fully, projects like Pylons will establish a more prominent userbase. The most popular web frameworks in the Java community, such as Struts2 (nothing like Struts1) and Spring MVC, translate conceptually to a WSGI stack very directly. On Jan 23, 8:16 am, Mario Ruggier ma...@ruggier.org wrote: On Jan 19, 2009, at 8:05 PM, Mike Orr wrote: On Sun, Jan 18, 2009 at 4:05 PM, walterbyrd walterb...@iname.com wrote: And if so, why? Everybody who uses Pylons knows that other frameworks exist and has maybe tried one or two others, but has made a conscious choice that they like Pylons' style better. Hi Mike, I think I understand perfectly the intention of what you are saying here, but the last almost off-handish reference to "style" made me do a double-take on what you mean...
What I do not understand is that given all the noisy promises of an ideal world where all python web applications are built following wsgi and installed with setuptools, the difference we are talking about cannot be simply written off as a matter of style, but is more architectural and philosophical. Pylons has, with the best of intentions, tried to embrace the new open-architecture as fully as possible. And, it pays and will continue to pay a fairly high price for that choice... Example of past price paid: just look at the number of what-should-be-a-non-issue installation problems in the mail archive. Example of price to pay: iiuc, apparently wsgi/paste/whatever has some unicode issues, so pylons has to wait for those to be fixed and third-party released to be able to even consider 3.0? Excuse me? I fully respect the choices that pylons makes, and almost always I am fine with them. There is anyway always a judgement call between wide-open genericity and narrower-scoped simplicity, and there is no right balance. Pylons probably errs towards the first, and django towards the second. But simplicity is very slippery, and very easily lost. The promise of generic inter-operational components more often than not exacts a higher price than what it gives back. How have the wsgi promises of inter-changeable web app building blocks measured up against the overhead from added complexities and issues? If you take for example qp, one of the few non-wsgi frameworks around, it strikes an amazing balance between simplicity and genericity, and it is not hindered by possibly-interfering impositions of a generic api such as wsgi. It can be used with or without the Durus object database that accompanies it, but it can (probably) just as easily also be used with sqlalchemy or any other ORM. QP also adopts the more robust single-thread multi-process approach to building apps, a choice that wsgi deems (pls correct and excuse me if I am saying something silly here!)
to not particularly cater for. But, deployment of a qp app cannot be easier... SCGI works like a charm e.g. over apache, and is even more charming over lighttpd, which has builtin support for it. Its framework api is grokkable in minutes... plus, a small additional fact: qp + durus (and the associated templating utility, qpy) have been available for python 3.0 since --day-1--, that is, since the official first release date of python 3.0. All I am saying is that buying into a new way of doing things is fine, but one has to be able to look back and sans-emotions admit what has actually worked and what has not. And, if at the beginning the motivation was philosophical, playing it down in hindsight to a matter of style indicates to me that it has not all worked as well as hoped. A lot of Django fans have done the same of course, but a lot of other Django fans have not really looked into any other frameworks, they just came to Django from Rails or PHP because they heard about it first and didn't look any further. But this is a
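For reference, the "single function call" that the WSGI spec amounts to: a complete application is just a callable taking environ and start_response. A sketch, using the stdlib's wsgiref helpers to invoke it in-process without a server:

```python
from wsgiref.util import setup_testing_defaults

def application(environ, start_response):
    # the whole WSGI contract from the application's point of view:
    # receive the environ dict, call start_response, return body chunks
    start_response("200 OK", [("Content-Type", "text/plain; charset=utf-8")])
    return [b"Hello from plain WSGI"]

# minimal in-process invocation
environ = {}
setup_testing_defaults(environ)
collected = {}

def start_response(status, headers):
    collected["status"] = status
    collected["headers"] = headers

body = b"".join(application(environ, start_response))
```

Middleware, servers, and frameworks are all just layers around this one callable signature.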
Re: SQLAlchemy models isn't abstract layer / data model
On Jan 21, 2:00 am, Jan Koprowski jan.koprow...@gmail.com wrote: In MVC, the *M*odel should be an abstract layer over the data, hiding the data representation and the real access methods, giving a universal, well-described, readable and simple interface to manage this data: add, remove, get all, get one, etc. methods. Commonly I use Pylons. In the last lesson at my University of Technology our lecturer showed us Django, and IMHO Django's Models are much more coherent than a SQLAlchemy-based model. Why? In Django I see only class methods like save or get, etc., where I don't know what happens under the class method. This could be a database, an object database, a simple text file or even a binary file. All I need is to use the save method, and when I want to move my database from, The difference between myobject.save() and somepersistencething.add(object); somepersistencething.commit() is that the latter allows abstraction of the concept that your objects are even saved at all - they are not part of any particular hierarchy tied to their persistence implementation. As a bonus, the latter API allows an atomic demarcation of many persistence operations whereas the former does not. Neither example indicates the slightest thing to do with what happened under the method, and I don't see what you're referring to. So I say SQLAlchemy's model is more coherent.
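The atomic-demarcation point can be sketched as follows (modern SQLAlchemy, in-memory SQLite; the `Account` model is illustrative): several pending operations either all persist at commit() or all vanish at rollback(), something a per-object save() can't express.

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Account(Base):
    __tablename__ = "account"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)

# two inserts demarcated as one atomic unit of work
session.add(Account(name="alice"))
session.add(Account(name="bob"))
session.commit()
committed = session.query(Account).count()

# an abandoned unit of work rolls back *all* of its operations
session.add(Account(name="carol"))
session.add(Account(name="dave"))
session.rollback()
after_rollback = session.query(Account).count()
```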
Re: SQLAlchemy models isn't abstract layer / data model
On Jan 21, 11:33 am, Jan Koprowski jan.koprow...@gmail.com wrote: @Paweł Stradomski University of Technology in Gdańsk Ok, I understood this. But I must write these methods myself. Simple CRUD isn't something bad. SQLAlchemy could support something like this:

    class User(SQLAlchemy.CRUD):
        pass

and optionally give something like in Django. That would be nice. Now I do, for example, something like this:

    def __new__(cls, *args, **kwargs):
        if 'username' in kwargs.keys():
            uid = getpwnam(kwargs.get('username')).pw_uid
        if 'uid' in kwargs.keys():
            uid = kwargs.get('uid')
        # @todo - improve readability
        if uid:
            if meta.Session.query(Informations).get(uid) == None:
                return object.__new__(cls, *args, **kwargs)
            else:
                return meta.Session.query(Informations).get(uid)
        else:
            return object.__new__(cls, *args, **kwargs)

IMHO this could be the standard for all classes (why not?). I understand that SQLAlchemy forces me to write my own class methods wrapping meta.Session - but this isn't cool :P because I waste my time :P there is absolutely no reason in the world you are forced to write class methods, except for the fact that you want that particular pattern. However, if you want that pattern, it's utterly absurd to believe you have to create those methods individually for every class. They should be implemented on a base class of your choosing:

    Session = scoped_session(sessionmaker())

    class MyActiveRecordBase(object):
        session = Session

        def __new__(cls, *args, **kwargs):
            if args and self.Session.query(cls).get(args) == None:
                return object.__new__(cls, *args, **kwargs)
            else:
                return object.__new__(cls, *args, **kwargs)

        def save(self):
            Session.add(self)
            Session.flush()

        def delete(self):
            Session.delete(self)
            Session.flush()

also please be aware of the declarative extension at http://www.sqlalchemy.org/docs/05/reference/ext/declarative.html in case you find SQLA's Table construct similarly unappealing.
Re: SQLAlchemy models isn't abstract layer / data model
On Jan 21, 11:48 am, Paweł Stradomski pstradom...@gmail.com wrote: What exactly are you trying to achieve? one of the tenets of ActiveRecord is that constructing an object with a primary key returns the existing object implicitly. my example was meant to read:

    def __new__(cls, *args, **kwargs):
        if args:
            obj = cls.session.query(cls).get(args)
            if obj:
                return obj
        return object.__new__(cls, *args, **kwargs)
Re: Pylons, SQLAlchemy and deleting
this is in the documentation for Query.delete(). One complexity of this operation, as opposed to issuing a delete() on the mapped Table directly, is that Query must keep the state of the parent Session synchronized with the SQL which is emitted. To accomplish this, the delete() (and update()) operations provide two schemes of figuring out what was actually affected so that the Session may be updated. One is to select the rows matching the criterion first, so that the results can be matched against what's currently in the session. Another is to evaluate the current criterion in Python only, against the Session's current contents. A caveat with this is that more complex criteria aren't supported, and the database's collation behavior (how to compare different character sets, casing conventions) is not honored. The third option is to disable the synchronization entirely. In the case of a delete(), despite the warning in the current documentation, this option is pretty safe. So try query()...delete(synchronize_session=False)
pylons 0.9.7, sqlalchemy 0.5 This is a copy-paste from the console: Pylons Interactive Shell Python 2.5 (release25-maint, Jul 20 2008, 20:47:25) [GCC 4.1.2 20061115 (prerelease) (Debian 4.1.1-21)] All objects from pyupo.lib.base are available Additional Objects: mapper - Routes mapper object wsgiapp - This project's WSGI App instance app - paste.fixture wrapped around wsgiapp from pyupo.model.meta import Session from pyupo.model.emailbag import EmailBag as e s = Session.query(e).filter(e.order_id == 1).filter(e.active == True).filter(e.dispatched == False) s.delete() 07:10:35,545 INFO [sqlalchemy.engine.base.Engine.0x...6acL] BEGIN 07:10:35,546 INFO [sqlalchemy.engine.base.Engine.0x...6acL] SELECT emailbag.id AS emailbag_id FROM emailbag WHERE emailbag.order_id = %s AND emailbag.active = %s AND emailbag.dispatched = %s 07:10:35,546 INFO [sqlalchemy.engine.base.Engine.0x...6acL] [1, 1, 0] 07:10:35,547 DEBUG [sqlalchemy.engine.base.Engine.0x...6acL] Col ('emailbag_id',) 07:10:35,547 INFO [sqlalchemy.engine.base.Engine.0x...6acL] DELETE FROM emailbag WHERE emailbag.order_id = %s AND emailbag.active = %s AND emailbag.dispatched = %s 07:10:35,547 INFO [sqlalchemy.engine.base.Engine.0x...6acL] [1, 1, 0] 0L s.update({'active': False}) 07:12:15,965 INFO [sqlalchemy.engine.base.Engine.0x...6acL] SELECT emailbag.id AS emailbag_id FROM emailbag WHERE emailbag.order_id = %s AND emailbag.active = %s AND emailbag.dispatched = %s 07:12:15,965 INFO [sqlalchemy.engine.base.Engine.0x...6acL] [1, 1, 0] 07:12:15,965 DEBUG [sqlalchemy.engine.base.Engine.0x...6acL] Col ('emailbag_id',) 07:12:15,966 INFO [sqlalchemy.engine.base.Engine.0x...6acL] UPDATE emailbag SET active=%s, updated_at=CURRENT_DATE WHERE emailbag.order_id = %s AND emailbag.active = %s AND emailbag.dispatched = %s 07:12:15,966 INFO [sqlalchemy.engine.base.Engine.0x...6acL] [0, 1, 1, 0] 0L This example says something else. What is wrong with this query?
Best Regards, Tomek

You received this message because you are subscribed to the Google Groups pylons-discuss group. To post to this group, send email to pylons-discuss@googlegroups.com. To unsubscribe from this group, send email to pylons-discuss+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/pylons-discuss?hl=en
Re: Do beaker session store supports using memcached ?
On Dec 31 2008, 5:01 pm, Tycon adie...@gmail.com wrote: if it's not a system service then why does it have its own /etc/init.d file (the way it's packaged for most major distributions)? oh, I always just build it from source :). That there's an /etc/init.d entry indicates it's a decision of those distros to have a single memcached daemon running; it's not a decision of memcached itself. I've read through the memcached website very thoroughly regarding this question and I don't see any guidelines on this issue in either direction. I wouldn't think memcached itself has much of an opinion on how it's run. I would almost guarantee that high-volume websites which use memcached do not multipurpose a single memcached instance across multiple unrelated applications, though... apart from the difficulty of multiple performance profiles affecting the single memcached process in different ways, why take the risk that two different applications might use the same key names for different purposes? While it is possible to have a memcached server dedicated to one application, you can't assume this is the default. with that said, of course I agree on this. I made it pretty clear in the CHANGELOG that the remove() method with the memcached backend now does flush_all(), which is not a method that's called by the library except by the erroneous usage of it in Session, and I've sent Ben a patch which changes Session to store a single dict on one key in the namespace regardless of backend, so namespace.remove() would not be used. I've also advised that namespace.remove(), just like namespace.keys(), raise a NotImplementedError() when using the memcached backend.
Re: Do beaker session store supports using memcached ?
On Dec 31, 3:19 pm, Tycon adie...@gmail.com wrote: whatever you end up doing NEVER EVER DO a flush_all on memcached. Memcached is a global system service, it is not your private scratch pad. yessir ! though I've never considered a single memcached process as a global service, like say apache, since it's not bound to a public port or anything like that.
Re: deleting/abusing beaker caches
On Dec 2, 10:10 am, Damian [EMAIL PROTECTED] wrote: The above may be used in a multi-page registration where I do not want users being able to tamper with things in the session. I'm guessing another option would be to encrypt the bits of the session I don't want the user messing with, and achieve a similar result. I'm not understanding why the cache should be involved at all here. If you are storing persistent (meaning, can't randomly disappear), per-user data, that's what the HTTP session is used for. In Beaker, the session uses the same backends as the cache system, except it's stored in a way that is appropriate for individual user data. The cache system is specifically for website content that isn't keyed to specific identities.
Re: deleting/abusing beaker caches
On Dec 2, 4:14 pm, Damian [EMAIL PROTECTED] wrote: To answer your question Michael, I wanted something to store data that would not get sent back to the user (and hence could not be tampered with), but still be associated with their session, and that would expire automatically after some time. session data is stored on the server, or in the case of a cookie-based session, in an encrypted and signed string. the end user does not have access to either view or modify the contents of the session, so it's safe to do whatever your app needs with it. if sessions didn't work this way, they'd be pretty much useless, as you couldn't even check them for a login token - the end user could have placed it there !
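The "signed string" idea above can be sketched in plain Python. To be clear, this is not Beaker's actual implementation, just the general mechanism: the server signs the cookie payload with a secret it never sends to the client, and refuses to trust any cookie whose signature does not verify.

```python
# Hedged sketch of tamper-proof session cookies via HMAC signing.
# SECRET, the payload format, and the '|' separator are all illustrative.
import hashlib
import hmac

SECRET = b'server-side-secret'   # never leaves the server

def sign(payload: bytes) -> bytes:
    # append an HMAC of the payload, keyed by the server secret
    mac = hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()
    return payload + b'|' + mac

def verify(cookie: bytes) -> bytes:
    payload, _, mac = cookie.rpartition(b'|')
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(mac, expected):
        raise ValueError('tampered cookie')
    return payload

cookie = sign(b'user_id=42')
assert verify(cookie) == b'user_id=42'

# any client-side modification breaks the signature check
forged = cookie.replace(b'user_id=42', b'user_id=1')
try:
    verify(forged)
except ValueError:
    print('forgery detected')
```

This is exactly why a login token stored in a signed session can be trusted: the client can read it (unless it is also encrypted), but cannot forge or alter it.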
Re: Pylons 0.9.7 RC4 Release
Ben Bangert wrote: - Beaker got several important bug fixes, one for leaving file handles open, and to ensure that the new pickled format is gracefully upgraded from Beaker 1.1.2, *this* is the real deal. Don't believe what all those other Beakers told you. After years of neglect, I've finally had the opportunity to run Beaker myself in a production environment, and we hit and fixed every mysterious issue there was with file and memcached caching. After many improvements, 1.1 was put in production, and the one last nasty bug left was a lingering file issue that became apparent in a high-concurrency scenario. That is now fixed. Beaker is doing so great, we may even decide to document it. Wouldn't that be neat !
Re: Beaker 1.1 Release
wow I love the tar thing. It's fine by me ! Philip Jenvey wrote: On Nov 19, 2008, at 1:47 PM, Michael Bayer wrote: also your usage is more succinct via: response = cache.get_value(request, expiretime=x, createfunc=lambda: func(self, *arg, **kw)) Mike, I've attempted a quick and dirty auto upgrade patch for the Beaker 1.0.x format to 1.1, take a look: http://pylonshq.com/pasties/1003 -- Philip Jenvey
Re: Beaker 1.1 Release
also your usage is more succinct via: response = cache.get_value(request, expiretime=x, createfunc=lambda: func(self, *arg, **kw))
Re: Pylons: pros and cons
On Oct 15, 7:40 am, Tomasz Nazar [EMAIL PROTECTED] wrote: Also code would be a lot simpler: confs = dbsession().query(Conference).filter(XXX).all() %for conf in confs: ${conf} ${conf.author} ${conf.author.phone} etc... It's that simple with cache being used. Without that... code is more complicated. Well I think the gains with SQL level caching become problematic down the road in lots of cases, but try out that example I gave you. It'll give you exactly that behavior. It can be expanded in many ways, such as to produce regioning behavior, i.e. sess.query(Conference).cache(somekey, region=short_term).filter(XXX).all() where short_term points to some regioning object with timeout params, etc.
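The "regioning" idea mentioned above can be sketched without any ORM at all: cached results are keyed by (key, region), and each region carries its own timeout. The names `short_term` / `long_term` and the `cached()` helper are illustrative, not SQLAlchemy or Beaker API:

```python
# Hedged sketch of region-based caching with per-region timeouts.
import time

REGIONS = {'short_term': 5.0, 'long_term': 3600.0}   # timeouts in seconds
_store = {}

def cached(key, region, createfunc):
    """Return the cached value for (key, region), creating it if missing
    or expired according to the region's timeout."""
    timeout = REGIONS[region]
    entry = _store.get((key, region))
    now = time.time()
    if entry is not None and now - entry[0] < timeout:
        return entry[1]
    value = createfunc()
    _store[(key, region)] = (now, value)
    return value

calls = []
def expensive():
    # stands in for a real query like sess.query(Conference).all()
    calls.append(1)
    return ['conf1', 'conf2']

assert cached('confs', 'short_term', expensive) == ['conf1', 'conf2']
assert cached('confs', 'short_term', expensive) == ['conf1', 'conf2']
assert len(calls) == 1   # second call was served from the cache
```

A real implementation would hang a wrapper like this off the Query, as in the `.cache(somekey, region=short_term)` spelling above, and delegate storage to a Beaker backend instead of a dict.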
Re: Beaker info
beaker will cache something forever if you use a persistent system like file-based caching and don't set any timeout on the cache. It should remain persistent across server restarts. as far as per-request, I stick things on c to accomplish this. On Oct 10, 11:40 am, Wichert Akkerman [EMAIL PROTECTED] wrote: Beaker seems to have very little documentation online, so hopefully someone can help me out here. I am looking at caching the results of things like expensive function calls and database queries, and beaker seems to be usable as a caching system. What I can't seem to find is how to conveniently handle per-request and forever-lasting caching. In Zope I can do this: from plone.memoize import forever from plone.memoize import view @forever.memoize def expensive_stuff(): The result of this function is cached forever, with the function and its argument as cache keys. @view.memoize def expensive_stuff(): The result of this function is cached during this request only, with the function and its argument as cache keys. as far as I can see there is no direct alternative for beaker, is that correct? Wichert. -- Wichert Akkerman [EMAIL PROTECTED] http://www.wiggy.net/ It is simple to make things. It is hard to make things simple.
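The two memoize scopes asked about can be sketched in plain Python rather than Beaker's API: a process-lifetime cache stands in for "forever", and a dict cleared at the end of each request stands in for "per-request" (which is what sticking things on `c` accomplishes in Pylons). All names here are illustrative:

```python
# Hedged sketch: "forever" vs "per-request" memoization.
import functools

def forever_memoize(fn):
    # cached for the life of the process, keyed by the arguments
    return functools.lru_cache(maxsize=None)(fn)

_request_cache = {}

def request_memoize(fn):
    # cached only until end_request() is called
    @functools.wraps(fn)
    def wrapper(*args):
        key = (fn.__name__, args)
        if key not in _request_cache:
            _request_cache[key] = fn(*args)
        return _request_cache[key]
    return wrapper

def end_request():
    # call when the request finishes, e.g. in middleware or a finally block
    _request_cache.clear()

calls = []

@request_memoize
def expensive(x):
    calls.append(x)
    return x * 2

expensive(3)
expensive(3)
assert len(calls) == 1   # second call within the request hit the cache
end_request()
expensive(3)
assert len(calls) == 2   # recomputed in the next request
```

With Beaker, the "forever" case corresponds to a persistent backend with no timeout, as described at the top of this message.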
Re: Pylons: pros and cons
On Oct 13, 12:14 pm, Tomasz Nazar [EMAIL PROTECTED] wrote: 1) all model classes are defined in 1 file together with database mapping like everyone is saying, you can roll this however you want and however is appropriate to the type of app you're building. 2) SQLAlchemy 2nd level cache That is most frustrating for me coming from Hibernate, where it's built in. You may point to memcached or whatever, but the truth is it improves performance and coding style hugely! I would love to optimize later and be a bit lazy. Maybe the authors do not have resources -- it's a great soft anyway -- but that should be one of the 1st features on the roadmap. Hibernate makes a big deal out of second level cache because it makes too small a deal out of eager loading. Hibernate's eager loading is fundamentally broken since you can't use it on a result set that has LIMIT/OFFSET applied to it - the outer join gets wrapped inside the LIMIT. Their own docs treat eager loading as a fairly rare use case, because they're coming from that old EJB mindset of entity beans stored in a giant in-memory hashtable against their primary keys, which assumes that most relations are many-to-ones which can just pull from the giant vat of identifiers when needed. This was one of the original issues SQLA sought to solve, and it makes it a whole lot easier to load several levels of objects, with or without a LIMIT/OFFSET applied to the overall results, with only one round trip. But eager loading isn't as good as no loading at all - so on to caching for SQLA. There are some reasons caches are inconvenient. ORM mapped objects usually need to load additional data (i.e. lazy loading), which implies an association with an active database connection. Caching the ORM objects means you have to dance around all the lazy loaders to ensure none fire off, since there is no active database connection - furthermore, even if you re-associated them with one, now you have a concurrency issue.
Secondly, it's incredibly common to get some ORM mapped objects from somewhere and start using them in a transaction. If you get them from a cache, again you lose, because they're global objects - you can't map them back to your transaction. So ORM caching, which seems very simple and automatic, quickly leads to some very broken use cases for those who don't understand what they're doing. To fix the above two issues, SQLA offers the merge(dont_load=True) method, which allows you to take your cached objects and associate a copy of them with the current Session. Though the cost of copying objects from the cache into the Session might not be that much cheaper than the typical SQL query (and certainly doesn't save you anywhere near the overhead that view layer caching does). Here's a real simple 2nd level ORM query cache for SQLAlchemy which does the merge thing and provides an easy Query interface: http://www.sqlalchemy.org/trac/browser/sqlalchemy/trunk/examples/query_caching/query_caching.py . As of yet, nobody has sought to embark upon a SQLAlchemy 2nd Level Caching project, but if there were, this might be where they'd start. If you swap out the dictionary there for a Beaker cache, there's your 2nd level Hibernate-style cache sans XML pushups. But concerning that method, I've built SQL statement caches in the past (caching objects which represent result sets mapped to SQL strings, without the ORM issues mentioned above) and not had good results, due to the huge tree of SQL statements and result sets which get generated. Caches at this layer are complicated to manage, zone, and expire, and ultimately don't address enough of the performance overhead of the full request-to-render process.
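The merge-from-cache technique described above can be sketched concretely. Note this uses the modern spelling `Session.merge(obj, load=False)` for what the thread calls merge(dont_load=True), and the model and data are hypothetical; a detached instance stands in for the cache entry:

```python
# Hedged sketch: associating a "cached" detached object with the current
# Session via merge(load=False), i.e. without emitting a SELECT.
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Conference(Base):
    __tablename__ = 'conference'
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

# populate, then detach an instance - this stands in for the cache entry
s1 = Session()
s1.add(Conference(id=1, name='PyCon'))
s1.commit()
cached = s1.get(Conference, 1)
s1.expunge(cached)
s1.close()

# a later "request": copy the cached object into the current Session
# without loading it from the database
s2 = Session()
local = s2.merge(cached, load=False)
assert local.name == 'PyCon'
assert local in s2     # now a transaction-local, session-owned copy
```

The point of the copy is exactly the one made above: the cached object stays global and read-only, while each transaction works with its own merged instance.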
So the biggest reason that ORM caches aren't so critical is that caching for a web application is extremely effective at the view layer. If you've spent a lot of time with Java/JSP, this might not be apparent, since view layer caching support is abysmal in servlets/jsp/struts/taglibs/tiles/etc. (more EJB fallout). Pylons and Mako offer some of the most flexible view level caching around - you can cache controller methods, functions, template renders, and individual components (defs) of templates. I recently wrote a small %def which sits in the globally inherited layout template and wraps the middle of the page in a conditional cache which is enabled and configured by a method called in the controller - so that context-sensitive info in the header (like login info) stays context sensitive, the content-heavy middle is optionally cached, all the parameters and zoning of the cache are configured at the controller level, the ORM fetches data purely from the DB (but less often), and none of the templates have any idea whether or not they're cached or for how long. This is likely the subject of my next Pylons/Mako blog entry. View level caching usually reduces the size of data which is cached, since you are caching just the HTML, not fully pickled class instances, their
Re: Pylons: pros and cons
On Oct 13, 9:54 pm, Wayne Witzel [EMAIL PROTECTED] wrote: OK. Back to your code? What is this MemCachedMapper.. google 2 hits only. Is it your own solution, does it work, can you share? Not my solution, was a solution presented a while back on the SA mailing list. See http://groups.google.com/group/sqlalchemy/msg/5d505529ee157162 I've used it in, albeit non-critical, production sites. This cache has some drawbacks - it seems to be reinventing session.merge() somehow, and also only takes effect for straight get()s. A cache needs to cache the results of any SELECT statement to really be useful. The cache I've added to SQLA's examples does this but doesn't have the on-update expiration feature this one has - it's much more involved to implement for a general SELECT cache and I would usually leave expiration as a manual affair.
Re: Exposing an SQLAlchemy class model to an RIA?
I'm pretty pessimistic on the "generically expose any model through a GUI" idea since it's as old as toast, and I've never seen an actual real-world application using it successfully. At best, you'll see it in some arcane builder types of systems which are designed to be used by developers in the first place (and are generally just a PITA versus textual coding). Other than that, humans want to interact with an interface designed for humans. This is why we have the view. On Sep 30, 9:52 am, Dean Landolt [EMAIL PROTECTED] wrote: On Mon, Sep 29, 2008 at 11:47 AM, mario ruggier [EMAIL PROTECTED] wrote: On Mon, 29 Sep 2008 07:57:49 -0700 (PDT), Jonathan Vanasco [EMAIL PROTECTED] wrote: would you want a desktop client to have 'all' operation and access to the objects methods and fields? In principle, no; only an explicitly exposed subset. But, in practice, and as I am only just mentally exploring this idea, I do not really care if the exposing mechanism forces that all attributes and methods be exposed -- some way to limit that can always be layered on top later. Similarly, for the sake of toying with the idea, the many security issues may just be ignored. Note that the client may be any that supports RPC, so it could be desktop just as well as web. But what makes you think that this difference is important? I was looking into this kind of architecture a little while back and while I ultimately never found the time to pursue it, one very promising option is to wire up a dojo.data adapter [1] to SQLAlchemy -- there may even be an effort under way to do so through rum [2]. [1] http://dojotoolkit.org/book/dojo-book-0-9/part-3-programmatic-dijit-a... [2] http://toscawidgets.org/documentation/rum/
Re: Pylons + WSGI not pooling connections?
On Sep 30, 10:31 am, Wayne Witzel [EMAIL PROTECTED] wrote: Is connection pooling with SQLAlchemy under Pylons supported when running under mod_wsgi? Is there some special magic to get it working? I have the following INI settings. sqlalchemy.default.pool_size = 1 sqlalchemy.diary.pool_size = 1 sqlalchemy.default.max_overflow = 0 sqlalchemy.diary.max_overflow = 0 This works fine when running under paster. I only ever get 2 open connections to the Oracle database, one for each schema. When I use this exact same INI and run my Pylons app under Apache and mod_wsgi, it seems to ignore these settings and opens a new connection every time I refresh the page. It is always pairs and matches the number of refreshes I do. Refresh 7 times, end up with 14, etc. this is most likely due to the presence of individual subprocesses within your apache process. Settings like MinSpareServers and MaxClients in your httpd.conf will affect this behavior. Take a look at your process listing and you'll see individual entries for each httpd process.
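The behavior described can be reduced to simple arithmetic: each Apache subprocess builds its own engines and pools, so the connection total scales with the process count, not with the INI pool settings alone. The process count below is an assumption chosen to match the report in this thread:

```python
# Hedged sketch of per-process pool arithmetic under Apache/mod_wsgi.
pool_size = 1          # from the INI: pool_size = 1, max_overflow = 0
engines_per_app = 2    # the 'default' and 'diary' schemas
processes = 7          # assumed: Apache spawned one process per refresh

# every process holds its own pools, so totals multiply
total_connections = processes * engines_per_app * pool_size
print(total_connections)   # 14, matching "refresh 7 times, end up with 14"
```

Under paster there is a single process, so the same arithmetic gives 1 × 2 × 1 = 2 connections, also matching the report.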
Re: Pylons + WSGI not pooling connections?
Makes sense, and now I clearly see the pattern of 2 connections per thread. Thanks for clearing that up. per process. big difference :).
Re: ***CRITICAL*** error in the Pylons / SQLA tutorial
On Sep 15, 4:21 pm, Mike Orr [EMAIL PROTECTED] wrote: It sounds like both you and MikeB are recommending this. Is the SQLAlchemy manual also going to recommend it? We set up the Pylons 0.9.6 / SA 0.4 configuration to the lowest level possible because we'd had too many problems defaulting to extensions that were too magical and/or later deprecated. declarative was introduced in 0.4 and remains pretty much the same in 0.5, though it works better in 0.5. Ben's concern with declarative is the use case of the 800-table model where you don't want everything in one big file, though I don't think there's much difference in how you deal with that whether or not declarative is in use (in both cases, some single module needs to import the full extent of mappers and tables, whether or not the mappers/tables/classes are broken up, or stated in single units). I've been waiting for SA 0.5 to be released before changing the tutorial or my own apps, to avoid having to change things multiple times as SA evolves. I guess now 0.5 is in RC status it's close enough. the point of RC status was to get people to stop worrying about API changes, but to leave room for any lurking issues that would only become apparent with widespread usage. There have been several changes in SA 0.5 including autocommit, new ways to create the session, Declarative, etc. These have made me unsure what to put into the new model. There's really no change to how to create the session. sessionmaker() / scoped_session() are still used in the same way. We just changed the name of the transactional keyword argument to autocommit. transactional is still accepted with a warning. So it's not really different in any significant way. Another issue we haven't resolved is sharing the Session/engine with middleware. What if anything should we do about that? Currently the middleware is on the hook for clearing the session when it finishes, if the app has already been called.
And of course, if the middleware writes something before calling the app, the app will commit or roll it back. I suppose sharing engines doesn't matter. Should the middleware just make its own session? Is it OK to have two sessions open simultaneously in the same thread? Middleware can share an engine without issue. As far as a Session, I think it's better that middleware have its own session so that there is no implicit interaction in the transactional space between middleware and application; nobody is going to want to be surprised by that. However, this may depend heavily on what kind of middleware we're talking about. I would think it's up to middleware authors to decide which approach is more appropriate. Note that upgrading the default model will mean it's no longer compatible with SA 0.4. How much is this a concern? the compatibility changes would be very slight, I'd imagine that there's an option between SQLA 0.4/0.5 for the time being. Pylons has StackedObjectProxies for config, app_globals, request, response, and tmpl_context. These are supposedly better than threadlocals in case multiple instances of the same Pylons application (or different Pylons applications?) are running in the same process. For two different applications running in one process, each has a distinct Session class configured in their model package, so no stacked proxy is needed. For two instances of the same app running in a process, you'd need a stacked object proxy only because I'd assume both instances talk to different engines, and we generally like to have one Session class associated with an engine. Both use cases are in my experience totally nonexistent - I can't imagine the point of running two applications in one Python interpreter, taking on the burden of keeping both namespaces away from each other, as well as any other weird side effects, when there are so many easy ways to run multiple Python interpreters.
for reference my current base.py looks like:

class BaseController(WSGIController):
    def __call__(self, environ, start_response):
        try:
            return WSGIController.__call__(self, environ, start_response)
        except:
            meta.Session.rollback()
            raise
        finally:
            meta.Session.remove()

init_model:

def init_model(engine):
    """Call me before using any of the tables or classes in the model"""
    if not meta.Session:
        sm = orm.sessionmaker(autoflush=True, autocommit=False,
                              expire_on_commit=False, bind=engine,
                              extension=...)  # some extensions I'm using
        meta.engine = engine
        meta.Session = orm.scoped_session(sm)

I also *very* occasionally use a without_autoflush decorator, when I'm populating an object to be flushed with data that comes from queries:

@decorator
def without_autoflush(fn, self, *args, **kwargs):
    """Disable the Session autoflush feature for the duration of the
    decorated method."""
    meta.Session().autoflush = False
    try:
        return
PROPOSAL: session.id should be guaranteed, set-cookie after session access is canceled only by session.invalidate()
Hi list - A Beaker issue exists which seems to be the result of some changes made at some point due to a request to minimize unnecessary Set-Cookie headers. My proposal would be to restore the old behavior, or a compatible variant of it, to both file-based and cookie-based sessions in Beaker. The issue is this. The browser requests a page, has no cookie. The controller does some logic like this: key = session.id, do something with the key, but does not session.save(), and that's it! Particularly with cookie-based sessions, the above operation is significant since you might be (as I am) storing that session id in the database, but not otherwise doing anything with the contents of the session (storing things *in* the session is very 1998 anyway). But what's wrong? The browser now requests again, and the controller says: key = session.id, do something with the key. Above, *the session.id is now different*! The contract of "please give me the unique id for this browser session" has been broken. Why is this? Because the session did not honor the session.is_new() flag and send out a Set-Cookie. Apparently some user was offended by this behavior, for reasons unclear (if those users want to chime in on their rationale, that would be most valuable). Their argument was: since I didn't session.save(), no cookie should be sent. Well, that behavior is just wrong. session.save() is used to *update* the contents of the session, not establish that a session exists. If you do some stuff with the session, save or not, you've asked it for the id - and that id is now linked, hence a Set-Cookie header is necessary if the id is newly generated. If you change your mind midway through the request and want to throw away that session, by far the less common use case, you say session.invalidate(). There is no implicit behavior here. The proposal therefore is:

1. when a session is accessed in any way, and the new flag is set, a Set-Cookie header is emitted on the response
2. if session.invalidate() is called and the new flag is set, no header is emitted
3. session.save() only refers to the contents of the session, not its existence

Commentary is greatly appreciated here. I do have commit access to Beaker (since I originally wrote it), so if no one objects I'll be going forward with this proposal.
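The proposed contract can be sketched as a tiny state machine. This is not Beaker code, just an illustration of the three rules: access to a new session schedules a Set-Cookie, invalidate() cancels it, and save() plays no part in cookie emission. All names here are hypothetical:

```python
# Hedged sketch of the proposed Set-Cookie rules for a new session.
class ProposedSession:
    def __init__(self, incoming_id=None):
        # no incoming cookie means this is a brand-new session
        self.id = incoming_id or 'generated-id'
        self.is_new = incoming_id is None
        self.accessed = False
        self.invalidated = False

    def get_id(self):
        # any access to the id links it to this browser session
        self.accessed = True
        return self.id

    def invalidate(self):
        self.invalidated = True

    def response_headers(self):
        # rule 1: accessed + new -> emit; rule 2: invalidated -> don't;
        # rule 3: save() deliberately does not appear here at all
        if self.accessed and self.is_new and not self.invalidated:
            return [('Set-Cookie', 'beaker.session.id=%s' % self.id)]
        return []

# access without save(): the cookie is still emitted
s = ProposedSession()
key = s.get_id()
assert s.response_headers() != []

# invalidate() cancels it
s2 = ProposedSession()
s2.get_id()
s2.invalidate()
assert s2.response_headers() == []
```

Under these rules, `session.id` stays stable across requests, which is exactly the broken contract the proposal restores.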
Re: beaker 1.0+ broken on GAE
Hi Ben - Here's my suggestion for a patch on this, such that the variance in the function's availability is centralized and documented (thus preventing someone else from removing it like I did!):

diff -r eb6cff7e8a17 beaker/session.py
--- a/beaker/session.py Tue Aug 19 16:42:19 2008 -0700
+++ b/beaker/session.py Sun Aug 24 14:57:22 2008 -0400
@@ -18,7 +18,7 @@
 from beaker.cache import clsmap
 from beaker.exceptions import BeakerException
-from beaker.util import b64decode, b64encode, Set
+from beaker.util import b64decode, b64encode, Set, getpid

 __all__ = ['SignedCookie', 'Session']
@@ -100,7 +100,7 @@
     def _create_id(self):
         self.id = md5.new(
-            md5.new("%f%s%f%s" % (time.time(), id({}), random.random(), os.getpid())).hexdigest(),
+            md5.new("%f%s%f%s" % (time.time(), id({}), random.random(), getpid())).hexdigest(),
         ).hexdigest()
         self.is_new = True
         if self.use_cookies:
@@ -353,7 +353,7 @@
     def _make_id(self):
         return md5.new(md5.new(
-            "%f%s%f%d" % (time.time(), id({}), random.random(), os.getpid())
+            "%f%s%f%d" % (time.time(), id({}), random.random(), getpid())
         ).hexdigest()).hexdigest()

diff -r eb6cff7e8a17 beaker/util.py
--- a/beaker/util.py Tue Aug 19 16:42:19 2008 -0700
+++ b/beaker/util.py Sun Aug 24 14:57:22 2008 -0400
@@ -13,6 +13,13 @@
 import string
 import types
 import weakref
+
+if hasattr(os, 'getpid'):
+    getpid = os.getpid
+else:
+    # os.getpid not supported on GAE
+    def getpid():
+        return ''

 try:
     Set = set
Re: Pylons on Google App Engine
On Jul 21, 4:56 pm, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

def render(table, _context=None, target_language=None):
    global generation
    (_out, _write) = generation.initialize_stream()
    (_attributes, repeat) = generation.initialize_tal()
    (_domain, _negotiate, _translate) = generation.initialize_i18n()
    (_escape, _marker) = generation.initialize_helpers()
    _path = generation.initialize_traversal()
    _target_language = _negotiate(_context, target_language)
    _write('<table>\n')
    for row in table:
        _write('<tr>\n')
        for column in row.values():
            _write('<td>')
            _tmp1 = column
            _urf = _tmp1
            if isinstance(_urf, unicode):
                _write(_urf)
            elif _urf is not None:
                _write(_escape(_urf))
            _write('</td>')
        _write('</tr>')
    _write('</table>')
    return _out.getvalue()

Yeah, so here is the mako render() method for that template:

def render_body(context, **pageargs):
    context.caller_stack._push_frame()
    try:
        __M_locals = __M_dict_builtin(pageargs=pageargs)
        table = context.get('table', UNDEFINED)
        __M_writer = context.writer()
        # SOURCE LINE 1
        __M_writer(u'\n<table>\n')
        # SOURCE LINE 3
        for row in table:
            # SOURCE LINE 4
            __M_writer(u' <tr>\n')
            # SOURCE LINE 5
            for col in row.values():
                # SOURCE LINE 6
                __M_writer(u' <td>')
                __M_writer(filters.html_escape(unicode(col)))
                __M_writer(u'</td>\n')
            # SOURCE LINE 8
            __M_writer(u' </tr>\n')
        # SOURCE LINE 10
        __M_writer(u'</table>\n')
        return ''
    finally:
        context.caller_stack._pop_frame()

I think this source code is extremely comparable to the z3c.pt code; the Mako version has less initialization code at the top. You might want to test against a wider range of template designs to get a better picture of the speed differences. Is it literally just the html_escape(unicode()) that makes z3c 2x the speed? Cheetah as well? It seems like Mako could apply the exact same optimizations with no trouble at all.
spitfire's main trick is to first generate a Python abstract syntax tree out of the template and then run multiple passes of various optimizations over that tree, so it can optimize away even more.

well, if you've generated Python code as we've done here, you can get an AST from that. I think it's the "various optimizations" part here that is somewhat mysterious :). Are you planning on applying Spitfire's techniques to z3c.pt? If so, I might as well do whatever you're doing too, since the code generation is extremely similar.
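For concreteness, here is a minimal sketch of the "get an AST from generated code" idea using the stdlib `ast` module. The generated template source below is an illustration, not actual spitfire or z3c.pt output; an optimizer pass would walk the tree and rewrite nodes.

```python
import ast

# Hypothetical generated template module, standing in for the code a
# template compiler would emit (illustrative only).
source = """
def render(table):
    out = []
    for row in table:
        out.append('<tr>%s</tr>' % row)
    return ''.join(out)
"""

tree = ast.parse(source)

# An optimizer pass would rewrite nodes in this tree; here we just
# enumerate the loops it could target.
loops = [type(node).__name__ for node in ast.walk(tree)
         if isinstance(node, (ast.For, ast.While))]
print(loops)  # ['For']
```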
Re: Pylons on Google App Engine
Mako uses Beaker for caching, so it should support any of those backends. There's a little bit of hardcoding to particular Beaker backends in 0.2.2 which is removed in the current trunk (also in prep for the new release of Beaker), but even with 0.2.2 that can be worked around by adding the desired Container class to mako.cache.clsmap.

On Jul 21, 4:46 pm, Mike Orr [EMAIL PROTECTED] wrote: On Mon, Jul 21, 2008 at 12:52 PM, Michael Bayer [EMAIL PROTECTED] wrote: is that test from the spitfire suite? I haven't looked at it, but their Mako numbers look a whole lot like Myghty, not Mako (Mako is roughly the same speed as Cheetah in reality, a tad slower usually). I haven't had the time to deal with spitfire, which will involve verifying that they are testing against Mako and not Myghty, and then spending the time to plug Psyco into Mako (should be a three-liner) to see if that closes the gap (since they certainly aren't running pure Python to get that kind of result). Can you perhaps tell me if that suite is in fact using Mako and not Myghty? Mako is impressive because it approaches Cheetah's speed even though Cheetah has a C extension and Mako doesn't. MikeB, would it be feasible to have an optional caching backend in Mako that stored cached templates in Datastore rather than as files? Beaker did something similar to save sessions in Datastore, since you can't write files in App Engine. -- Mike Orr [EMAIL PROTECTED]
Re: Mako + Dictionaries = unexpected EOF while parsing
sorry, i suppose i should try to improve the condition of this bug at some point.
Re: SQLAlchemy group_by with count
also note that Aaron's SQL statement is against the Table object, not the mapped class. since you just want a count and a single column, it's appropriate that you'd use a SQL expression and not an ORM Query (assignmapper's behavior with Class.select() is to use a Query).
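A minimal sketch of the "SQL expression against the Table object" approach. This uses modern SQLAlchemy (1.4+) syntax rather than the 0.3-era API the thread is about, and the table and data are made up for illustration:

```python
import sqlalchemy as sa

engine = sa.create_engine("sqlite://")
meta = sa.MetaData()
pages = sa.Table(
    "pages", meta,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("category", sa.String),
)
meta.create_all(engine)

with engine.connect() as conn:
    conn.execute(pages.insert(), [
        {"category": "news"}, {"category": "news"}, {"category": "blog"},
    ])
    # a plain select with count + group_by; no ORM Query involved
    stmt = (
        sa.select(pages.c.category, sa.func.count(pages.c.id))
        .group_by(pages.c.category)
    )
    counts = dict(conn.execute(stmt).all())

print(counts)
```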
Re: Too many database connections.
this issue is resolved in the latest release of SQLAlchemy 0.4 (beta5).
Re: SQLAlchemy queries getting logged twice
I just released beta4, which unfortunately still has this issue. However, if you check out rev 3411, it's fixed.
Re: reddit.com moving to pylons.
On Aug 21, 6:58 pm, Tim Riley [EMAIL PROTECTED] wrote: I was checking out reddit.com's new site and in the comment section there were some discussions about reddit moving away from web.py. Well, I threw a guess out there about reddit switching to Pylons, and Chris Slowe confirmed it [1]. This isn't really anything too exciting, but reddit would be a nice addition to the Sites Using Pylons wiki page. [1] http://reddit.com/info/2h8kd/comments/c2hccv

I'm excited. I'm going to put a note on the Mako site right now.
Re: New SQLAlchemy tutorial; SAContext is dead
On Aug 17, 7:17 am, Christoph Haas [EMAIL PROTECTED] wrote: Does that mean I still have to run Session.commit() manually after I make changes? I'm glad about the autoflush option so I don't need to flush any more. But instead it appears I need to commit() after every change. Shouldn't Pylons flush the Session automatically when a request is done? Would it be wise to add a model.Session.commit() to the lib/base.py - BaseController - __after__()?

this could be done, yes. however, for the purposes of the tutorial, I think it's better that the commit() is manual to start with, because a commit() in all cases might be unexpected; particularly if the controller wishes to display an error on the page and not actually persist changes, in which case you might not want the commit(). basically, the basic controller idea here does not address any sort of generic way of handling success/fail conditions, like an exception throw or similar. so I think commit()-in-all-cases might assume too much, and if users want to reduce the number of explicit commit() calls, they should choose how they want to do that. my own preference is a @transact decorator that inspects the .errors attribute of the controller before committing (otherwise it rolls back).
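The @transact decorator described above might look like the following sketch. `Session`, `Controller`, and the `.errors` attribute are stand-ins based on the message, not actual Pylons API; a stub session is used so the example is self-contained:

```python
from functools import wraps

class StubSession:
    """Stand-in for model.Session, just to make the sketch runnable."""
    def __init__(self):
        self.state = None
    def commit(self):
        self.state = 'committed'
    def rollback(self):
        self.state = 'rolled back'

Session = StubSession()

def transact(func):
    """Commit after the action unless the controller recorded errors."""
    @wraps(func)
    def wrapper(self, *args, **kwargs):
        result = func(self, *args, **kwargs)
        if getattr(self, 'errors', None):
            Session.rollback()
        else:
            Session.commit()
        return result
    return wrapper

class Controller:
    errors = []   # no validation errors recorded

    @transact
    def save(self):
        return 'ok'

Controller().save()
print(Session.state)  # committed
```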
Re: SQLAlchemy tutorial
I'm -1 on the model. prefix too; proper imports should be used so that this isn't needed. also, I don't see any comment about a beta4 in there, and beta3 was only released yesterday; not much has changed since then.
Re: SQLAlchemy tutorial
On Aug 17, 12:10 pm, Neil Blakey-Milner [EMAIL PROTECTED] wrote: By the way, the capital Session still irks me. Sure, it's a (constructed) class, but it's being treated as a (non-class) object, and I don't see how using Session is any semantically different to how we use a StackedObjectProxy or any other sort of proxy/facade. Implementation details don't seem as important as how it feels to use it.

OK. well, it's just a variable name. You can name it Daisy if you like (at least then, no more name collisions ;) ). this is why I hate suggesting names (like the Mako .mak extension debate). Pylons is the framework; you guys should decide on all the names you want to use.
Re: SQLAlchemy tutorial
On Aug 17, 4:36 pm, Mike Orr [EMAIL PROTECTED] wrote: However, this classmethod thing has hit a collision with model.Session.configure(bind=) in the Multiple Engines chapter. MikeB says the example does not do what it says: it affects only future sessions, not the current one. We're discussing how to handle this on the pylons-devel list. It's easy to work around, but it's harder to explain to newbies without introducing additional syntax. We may have to do model.Session().configure(bind=), which is inconsistent with the other class methods. Mike has suggested a lambda but I'm not sure it will work. I'm wondering if SQLAlchemy needs to rethink its .configure method or add another method. So this is up in the air.

like I said, just do model.Session(bind=foo) for now.
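The "affects only future sessions" behavior and the suggested workaround can be sketched as follows (modern SQLAlchemy 1.4+ syntax; engine names are illustrative):

```python
import sqlalchemy as sa
from sqlalchemy.orm import sessionmaker

engine_a = sa.create_engine("sqlite://")
engine_b = sa.create_engine("sqlite://")

Session = sessionmaker(bind=engine_a)
current = Session()

# configure() only changes the factory, so it affects sessions
# created *after* this call, not `current`:
Session.configure(bind=engine_b)
later = Session()

print(current.bind is engine_a)  # True
print(later.bind is engine_b)    # True

# binding at construction time, as suggested, applies immediately:
explicit = Session(bind=engine_b)
print(explicit.bind is engine_b)  # True
```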
Re: New SQLAlchemy tutorial; SAContext is dead
On Aug 16, 5:26 am, Christoph Haas [EMAIL PROTECTED] wrote: Worse is that I can't query for objects through paster shell. I get this exception:

    /home/chaas/projekte/dnsdhcp/dnsdhcp/model/__init__.py in pylons_scope()
         16     import thread
         17     from pylons import config
    ---> 18     return "Pylons|%s|%s" % (thread.id, id(config))
         19
         20 # Global session manager. Session() returns the session object appropriate for the current web request.
    AttributeError: 'module' object has no attribute 'id'

I'm no threading expert, so I don't know where to look. In ipython the thread module doesn't have an id.

the proper call is thread.get_ident(), changed in r92
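The fix above was `thread.get_ident()` under Python 2; in Python 3 the same identifier lives in `threading`. A minimal sketch of the scoping key from the traceback (the "Pylons|..." format is copied from it, and `config` is a stand-in for the pylons config object):

```python
import threading

config = {}  # stand-in for the pylons config object
key = "Pylons|%s|%s" % (threading.get_ident(), id(config))
print(key.startswith("Pylons|"))  # True
```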
Re: New SQLAlchemy tutorial; SAContext is dead
On Aug 16, 5:34 am, Vegard Svanberg [EMAIL PROTECTED] wrote: * Christoph Haas [EMAIL PROTECTED] [2007-08-16 11:26]: Oh, well, the whole project seems to be an increasingly moving target. First pylons.database is deprecated and replaced by SAContext. Then SAContext is deprecated. Takes some getting used to. Without intending to rant, I'm also a little concerned about the constant change of more or less fundamental parts. It seems this would mean that an application would have to be rewritten every so often, and this could be quite tedious with a large and complex application, not to mention that everything would have to be tested and re-tested all over again. It seems to me the world is moving too fast :-)

honestly, we've tried a few things and we are watching how the userbase responds. There are two things at play here:

1. SQLAlchemy 0.4 offers better configurational options than SA 0.3, and is in the process of being released. SAContext was designed around 0.3 and doesn't have as strong a place with 0.4.

2. SAContext was pretty good, but at the same time people put all their faith into it as the solver-of-all-problems, and we saw a fair amount of confusion remain. While I liked the idea of a single configurational object to do everything, I think the setup works out better when the two or three individual pieces of the configuration go where they really should go.

this latest approach is also better because:

3. SA 0.4 improves the user experience here a little bit by providing more succinct calls, like Session.save() instead of sacontext.ctx.save() or whatever it was.

4. this configuration supports transactions quite nicely, and embeds raw SQL in the same transaction smoothly. previous patterns didn't include any of this.

so we apologize for changing the story a few times, but I'm pretty sure we'll be reaching cruising altitude very soon.
Re: New SQLAlchemy tutorial; SAContext is dead
So it had the attribute registry right before the remove() call, but somewhere deep inside the attribute vanished.

the remove() call is actually not correct in release beta2, so I've updated the article to reference beta3 and/or the current trunk. also, I changed the configure call above it, which was incorrect.

Where's beta3?

later today :) also, I added some WARNING: BLEEDING EDGE caveats, since this tutorial went up *really* fast.
Re: New SQLAlchemy tutorial; SAContext is dead
I've released beta3, which fixes the remove() issue.
Re: New SQLAlchemy tutorial; SAContext is dead
+1 on a pylons-sqlalchemy template, since if you're *really* in a hurry, that's the best.
Re: Non-unicode data from CSV into MySQL can't render in Myghty
first of all, definitely don't use sys.setdefaultencoding; that is a hack of the most brittle kind. but secondly, upgrade your MySQLdb (i.e., the mysql-python DBAPI). it most certainly had a double-encoding bug some time ago; you can find it in their bug tracker and ours (SQLAlchemy's).
Re: Template engines
i just click around my site and watch the little green Firebug checkbox in the corner. /amateur
Re: SAContext 0.2.0
I'm going to see if I can put property magic into query so it will be like myclass.query.filter_by(foo).all() (i.e. you won't need to call it as a function). SA 0.4 is going to be slightly more opinionated in several areas, and I'm going to try to have a lot of those opinions expressed in a forwards-compatible way in 0.3.9 (such as: use all(), one(), first() on query... assignmapper will have 'query' with the () optional, hopefully).

re: echo/echo_uow, I think both should be removed from Pylons and people should learn to configure logging directly.
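The "property magic" idea above can be sketched with a plain descriptor: attribute access on the class returns a fresh query object, so no call is needed. The Query class here is a toy stub standing in for SQLAlchemy's real Query:

```python
class Query:
    """Toy stand-in for an ORM query that records filter criteria."""
    def __init__(self, cls):
        self.cls = cls
        self.criteria = []

    def filter_by(self, **kw):
        self.criteria.append(kw)
        return self

    def all(self):
        return (self.cls.__name__, self.criteria)


class QueryProperty:
    """Descriptor: MyClass.query returns a Query without a function call."""
    def __get__(self, obj, cls):
        return Query(cls)


class MyClass:
    query = QueryProperty()


result = MyClass.query.filter_by(foo="bar").all()
print(result)  # ('MyClass', [{'foo': 'bar'}])
```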
Re: Turbogears now based on Pylons!
On Jun 28, 1:40 am, Uwe C. Schroeder [EMAIL PROTECTED] wrote: On Wednesday 27 June 2007, Michael Bayer wrote: this issue can be worked around by using explicit transactions. actually no, it can't. Maybe I don't get it right, but the only way for me to get a commit was actually to modify Connection._autocommit in sqlalchemy.engine.base.

either TG is getting in the way, or you're not getting it right. if anyone ever needs to modify the internals of SA to get something to work, I would *highly* prefer that they email the ML or post a trac ticket with their issue so that it can be handled properly.

Obviously SA thinks there is no transaction in TG, so it just wraps one around it.

if TG actually has a transaction going on, they'd certainly have to configure SA to be aware of it (most likely via SessionTransaction). if not, then yes, things aren't going to work at all (though still, an explicit SA transaction should work all by itself).

I agree, and that is certainly DB dependent. Personally I can't imagine that an automatically issued rollback for every select transaction is in any way more overhead than issuing a commit. Not wrapping a select in a transaction will definitely be the least overhead.

we don't issue a rollback for every select transaction; we issue a rollback when a connection is returned to the pool. you can check out a connection explicitly and perform any number of selects on it without any rollbacks or commits. because the rollback is at the connection-pool checkin level, it should be more apparent how inappropriate it would be to issue a *commit* every time a connection is returned to the pool, an operation that knows nothing about what just happened with that connection. the rollback is there to release database locks. I'm thinking that it might be time to allow an option in SA that just turns the DBAPI's autocommit flag on; that way you can just blame the DBAPI for whatever issues arise. it's not always possible to not wrap a select in a transaction.
oracle, for example, *always* has a transaction going on, so everything is in a transaction in all cases.

that a stored-procedure-oriented application is far more efficient is *extremely* debatable and database-dependent as well. I doubt it's *extremely* debatable.

it's extremely debatable: http://www.google.com/search?q=stored+procedures+vs

Just issue 100 inserts from inside a stored procedure (or 100 updates) and do the same discretely with any kind of db interface. In case of the interface, every statement has to be parsed by the db, whereas in a stored procedure the statement is already compiled, of sorts (at least Oracle and PostgreSQL do that).

the debate over SPs is about a larger issue than "is an SP faster than 5 separate INSERT statements". SPs are of course much better for micro-benchmarks like that. it's their impact on application development and architecture where the debate comes in (read some of the googled articles). I am certainly not anti-SP; I've done pure SP applications before (on projects where the DBAs controlled the SPs)... I just don't want to start hardwiring SQLAlchemy to expect that sort of application.

I think 80/20 as applied to SELECT is that 80% of SELECTs are for read operations and a COMMIT is inappropriate. if you really want COMMIT for every SELECT, I'd favor it being enabled via an option passed to create_engine(). Not every select - every transaction that didn't roll back. I just think the default of rollback on every transaction is wrong; a rollback should occur when there is a problem, not when the transaction was fine. But that may just be me.

this is the use case:

    c1 = pool.connect()
    row = c1.cursor().execute("select * from sometable").fetchone()
    pool.return_connection(c1)
    c2 = pool.connect()  # returns a connection that is not c1
    c2.cursor().execute("drop sometable")  # <-- deadlock

if DBAPI supported a release_locks() method, we'd be using that.
Probably because a lot of people can't figure out how to use stored procedures and triggers, since the lightweight/open-source programming is often done on a database that has very limited support for both :-)

keep in mind you're including the vast Hibernate community, including its creators, etc. I'm not sure the "I'm too dumb to use stored procedures" argument can fully explain why the SP architecture remains a minority use case. I think the overall inconvenience of it, the clunky old languages you have to use on DBs like Oracle and SQL Server, as well as the harsh resistance it puts up to so-called agile development methods, are better reasons.

Personally I'm not a big fan of handling database integrity outside the database. continue SP arguments.

that's great, you can have your preferences... the google link above should reveal that quite a few people have established their preferences in this matter. If you are truly writing an SP-only application which prevents direct SQL access
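The "explicit transactions" workaround mentioned at the top of this exchange can be sketched as follows (modern SQLAlchemy 1.4+ syntax rather than the 0.3-era API discussed; the table is illustrative):

```python
import sqlalchemy as sa

engine = sa.create_engine("sqlite://")

# engine.begin() opens an explicit transaction and commits on success
# (or rolls back on exception), instead of relying on any autocommit
# behavior:
with engine.begin() as conn:
    conn.execute(sa.text("create table t (x integer)"))
    conn.execute(sa.text("insert into t values (1)"))

# the committed row is visible from a later connection:
with engine.connect() as conn:
    count = conn.execute(sa.text("select count(*) from t")).scalar()

print(count)  # 1
```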
Re: Turbogears now based on Pylons!
On Jun 28, 2:09 am, Mike Orr [EMAIL PROTECTED] wrote: I would probably want this optional URI in a subclass rather than in SAContext itself. The reason 'uri' is a required argument is to guarantee that the default engine is initialized at all times. When we designed SAContext, we (Mike and I) agreed that the bound metadata strategy was the most straightforward and adequate for most user apps: the 80/20 rule. Hiding the engine (connectable) as much as possible in the metadata, while still making it accessible when you really need it. Now we're adding one unbound metadata strategy after another. That's fine as long as it doesn't detract from SAContext's primary commitment to its primary userbase. I'd say a mandatory URI and bound metadata is part of its primary commitment for an easy-to-use front end: a normal Python TypeError for "You forgot the 'uri' argument" is about as straightforward as one can get. Or is that changing too? Are you having second thoughts about using bound metadata by default?

OK, just for the record, here's why SQLAlchemy is not a framework, and why, in order to do things without the advent of a framework, you work with all these little highly granular components; it's also why I'm still antsy about including SAContext in SA. just trying to establish the simplest "here's your connection" object spawns all this debate over how it should look/work/act/etc. I would encourage those here to look at SAContext, see that it's a dead-simple piece of code, and just work up various preferred versions of it to taste. My goal with it was more to illustrate *how* you can build these things.
Re: Turbogears now based on Pylons!
On Jun 27, 5:56 pm, Ian Bicking [EMAIL PROTECTED] wrote:

* Way for a library to get the transaction manager.
* Interface for that transaction manager, maybe copied from Zope.
* Single convention for how to specify a database connection (ideally string-based).
* Probably a way to get the configuration, so you can lazily get a connection based on the configured connection.

just as a point of reference, here is zalchemy's integration with the Zope datamanager: http://svn.zope.org/z3c.zalchemy/trunk/src/z3c/zalchemy/datamanager.py?rev=77165&view=auto

I would favor an interface that supports two-phase semantics like Zope's. SQLAlchemy's flush() model already fits in pretty well with the two-phase model, and we will eventually build explicit two-phase hooks into SQLAlchemy's engine and session, with real implementations for postgres to start with.
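A minimal sketch of what a Zope-style two-phase transaction participant looks like, loosely modeled on the datamanager linked above. The method names follow Zope's data manager convention; the session and logging here are stubs, not the actual z3c.zalchemy implementation:

```python
class SessionDataManager:
    """Toy two-phase participant: flush during commit(), then vote/finish."""
    def __init__(self, session, log):
        self.session = session  # would be an SA session; stubbed here
        self.log = log

    def tpc_begin(self, txn):
        self.log.append('tpc_begin')

    def commit(self, txn):
        # flush pending changes; the real commit is deferred to tpc_finish
        self.log.append('flush')

    def tpc_vote(self, txn):
        # e.g. PREPARE TRANSACTION on postgres
        self.log.append('prepare')

    def tpc_finish(self, txn):
        self.log.append('commit')

    def tpc_abort(self, txn):
        self.log.append('rollback')


log = []
dm = SessionDataManager(session=None, log=log)
# successful two-phase sequence driven by a transaction manager:
for step in ('tpc_begin', 'commit', 'tpc_vote', 'tpc_finish'):
    getattr(dm, step)(txn=None)

print(log)  # ['tpc_begin', 'flush', 'prepare', 'commit']
```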