> On Nov 23, 2014, at 7:55 PM, Mike Bayer <mba...@redhat.com> wrote:
>
>> On Nov 23, 2014, at 7:30 PM, Donald Stufft <don...@stufft.io> wrote:
>>
>>> On Nov 23, 2014, at 7:21 PM, Mike Bayer <mba...@redhat.com> wrote:
>>>
>>> Given that, I’ve yet to understand why a system that implicitly defers
>>> CPU use when a routine encounters IO, deferring to other routines, is
>>> relegated to the realm of “magic”. Is Python reference counting and
>>> garbage collection “magic”? How can I be sure that my program is only
>>> declaring memory, only as much as I expect, and then freeing it only
>>> when I absolutely say so, the way async advocates seem to be about IO?
>>> Why would a high-level scripting language enforce this level of
>>> low-level bookkeeping of IO calls as explicit, when it is 100%
>>> predictable and automatable?
>>
>> The difference is that in the many years of Python programming I’ve had
>> to think about garbage collection all of once. I’ve yet to write a
>> non-trivial implicit-IO application where the implicit context switch
>> didn’t break something and I had to think about adding explicit locks
>> around things.
>
> That’s your personal experience; how is that an argument? I deal with
> the Python garbage collector, memory management, etc. *all the time*. I
> have a whole test suite dedicated to ensuring that SQLAlchemy constructs
> tear themselves down appropriately in the face of gc and such:
> https://github.com/zzzeek/sqlalchemy/blob/master/test/aaa_profiling/test_memusage.py
> This is the product of tons of different observed and reported issues
> about this operation or that operation forming constructs that would
> take up too much memory, wouldn’t be garbage collected when expected,
> etc.
>
> Yet somehow I still value very much the work that implicit GC does for
> me, and I understand well when it is going to happen. I don’t decide
> that the whole world should be forced to never have GC again. I’m sure
> you wouldn’t be happy if I got Guido to drop garbage collection from
> Python because I showed how sometimes it makes my life more difficult,
> therefore we should all be managing memory explicitly.
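(As an aside for readers following along: the kind of teardown check Mike
is describing can be sketched in a few lines. This is not code from the
SQLAlchemy suite linked above, just a minimal illustration of observing
whether the cyclic collector actually reclaims an object, using a
weakref as the sentinel; the `Node` class and function name are mine.)

```python
import gc
import weakref

class Node:
    """A toy object that participates in a reference cycle."""
    def __init__(self):
        self.ref = self  # self-cycle: refcounting alone cannot reclaim this

def check_collected():
    obj = Node()
    sentinel = weakref.ref(obj)   # observe the object without keeping it alive
    del obj                       # drop the only strong reference we hold
    gc.collect()                  # the cyclic collector must run to break the cycle
    return sentinel() is None     # True once the object has been reclaimed

print(check_collected())  # → True
```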
Eh, maybe you need to do that; that’s fine, I suppose. Though the option
isn’t between something with a very clear failure condition and
something with a “weird things start happening” error condition. It’s
between “weird things start happening” and “weird things start
happening, just less often”. Implicit context switches introduce a new,
harder-to-debug failure mode over blocking code that explicit context
switches do not.

> I’m sure my agenda here is pretty transparent. If explicit async
> becomes the only way to go, SQLAlchemy basically closes down. I’d have
> to rewrite it completely (after waiting for all the DBAPIs that don’t
> exist to be written; why doesn’t anyone ever seem to be concerned about
> that?), and it would run much less efficiently due to the massive
> amount of additional function call overhead incurred by the explicit
> coroutines. It’s a pointless amount of verbosity within a scripting
> language.

I don’t really take performance issues that seriously for CPython. If
you care about performance you should be using PyPy. I like that
argument, though, because the same argument is used against the GCs
which you like to use as an example too.

The verbosity isn’t really pointless: you have to be verbose in either
situation, either explicit locks or explicit context switches. If you
don’t have explicit locks you just have buggy software instead.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

_______________________________________________
OpenStack-dev mailing list
OpenStackemail@example.com
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
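P.S. A minimal sketch of the “explicit locks” point, for the archive.
Here OS threads stand in for an implicit scheduler (the names, amounts,
and the check-then-act shape are mine, not from the thread): any context
switch between the balance check and the assignment can let two
withdrawals through, and only an explicit lock rules that out.

```python
import threading

balance = 100
lock = threading.Lock()

def withdraw_unsafe(amount):
    # Check-then-act on shared state: a context switch between the `if`
    # and the assignment can let a second withdrawal slip past the check.
    global balance
    if balance >= amount:
        balance -= amount

def withdraw_safe(amount):
    # The explicit lock makes the whole check-then-act atomic, no matter
    # where the scheduler decides to switch.
    global balance
    with lock:
        if balance >= amount:
            balance -= amount

threads = [threading.Thread(target=withdraw_safe, args=(60,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)  # → 40: exactly one of the two 60-unit withdrawals succeeded
```

With the lock, the result is deterministic; with `withdraw_unsafe`, an
ill-timed switch could drive the balance to -20, which is exactly the
“weird things start happening” failure mode above.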