[Python-Dev] PEP 492: async/await in Python; version 4
Hi, still watching progress here. I have read all the posts and changes. Everything has improved, and I know it is a lot of work, thanks for doing this. But I still think this PEP goes too far.

1. Having "native coroutines" with await and __await__ is very good and useful. It also helps beginners to see that coroutines are a different concept than generators, and it is easy to learn. Easier than explaining why a generator is, in this case, a coroutine and so on.

2. I still don't like sprinkling async everywhere. We don't need it for the first step at all. We can handle coroutines similarly to generators: when there is an await in it, it is one, the same as with yield. Or, to be more explicit, marking it with @coroutine is enough. But then it also makes sense to do the same for generators with @generator. We should be consistent here.

3. async with is not needed; there are rare use cases for it, and everything can be written with try/except/finally. Every async framework has lived without it for years. No problem, because for the seldom need, try: ... was enough.

4. async for can be implemented with a while loop. For me this is even more explicit and clear. Every time I see async for, I try to find out what is done in an async manner: for what do I get an implicit await? Also, in most of my use cases it was enough to produce Futures in a normal loop and yield (await) them, even for database interfaces. Most of the time they do prefetching under the hood and I don't have to care at all.

5. For async with/for, a lot of new special methods are needed, all prefixed only with "a". That is easy to confuse with "a" for abstract, which is often used to prefix abstract classes. I still think __async_iter__ and __async_enter__ are better for this. Yes, more to write, but not so easy to confuse with the synchronous __iter__ or __enter__, and it matches better with the fact that I must use async to call them.

I am not new to the async land; I have done a lot of stuff with Twisted and have even looked at Tornado.
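The rewriting claimed in points 3 and 4 can be sketched roughly like this (a minimal sketch with modern asyncio; the Resource class and its acquire/release/fetch methods are invented for illustration, not part of any real API):

```python
import asyncio

class Resource:
    """Hypothetical async resource, standing in for e.g. a DB cursor."""
    async def acquire(self):
        self._items = [1, 2, 3]   # pretend setup work
        return self

    async def release(self):
        self._items = None        # pretend teardown work

    async def fetch(self):
        # Returns the next item, or None when exhausted.
        return self._items.pop(0) if self._items else None

async def consume():
    res = Resource()
    handle = await res.acquire()  # instead of: async with res as handle:
    results = []
    try:
        while True:               # instead of: async for item in handle:
            item = await handle.fetch()
            if item is None:
                break
            results.append(item)
    finally:
        await handle.release()
    return results
```

Running `asyncio.run(consume())` yields all three items; the try/finally plays the role of __aexit__, and the explicit `await handle.fetch()` in the while loop makes the implicit await of async for visible.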
I have also tried to teach some people the async world. This is not easy, and you learn most of it only by doing and using it.

My final conclusion: we should not rush all this into the language. Do it step by step, and also help other Python implementations to support it. For now, only a very low percentage of users are on Python 3.4, and if you look at the statistics on PyPI, you see most are still on Python 2.7 and using async frameworks like Twisted/Tornado. Do we make such a fundamental language change only because a few people need it for a niche in the whole Python universe, and confuse the rest with new stuff they never need? Even the discussion on python-dev suggests some time is needed to finalize all this.

We forget to address the major problems here: How can someone in a "sync" script use this async stuff easily? How can async and sync code cooperate, so we don't need to rewrite the world for async? How can a normal user access the power of async without rewriting all his code, so he can use a simple async request library in his code? How can a normal user learn and use all this in an easy way?

And for all this we still can't tell them "oh, the async stuff solves the multiprocessing problem of Python, learn it and switch to version 3.5". It does not, and it is mostly useful for networking stuff, nothing more. Don't get me wrong, I like async programming and I think it is very useful. But I had to learn that not everyone thinks so, and most people only want to solve their problems in an easy way, not get a new one called "async". Now I shut up, go back to my normal mode, and be quiet and read. ;-)

Regards, Wolfgang

___ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
[Python-Dev] Windows x86-64 embeddable zip file, Feedback
Hi, I had some time to test the newly distributed stuff for Windows, especially the embeddable zip. Thanks for this special distribution; it is very useful for generating standalone Python distributions and software based on Python. Very good work.

I detected two minor issues, only affecting size, and opened tickets for them:

http://bugs.python.org/issue25085 "Windows x86-64 embeddable zip file contains test directorys"
http://bugs.python.org/issue25086 "Windows x86-64 embeddable zip file, lot of big EXE files in distuils"

I think fixing these can improve the size, targeting Python 3.5.1.

Regards, Wolfgang
Re: [Python-Dev] Python 3.4, marshal dumps slower (version 3 protocol)
Hi, yes, I know the main usage is to generate pyc files. But marshal is also used for other stuff and is the fastest built-in serialization method. For some use cases it makes sense to use it instead of pickle or others, and people use it not only to generate pyc files.

I only found one case with a performance regression in the newer protocol versions for 3.4. We should take care of it and improve it. Now it is possible to handle this in the beta phase and fix it for the upcoming release, or at least document all this. I think it is also useful for others to know about the new versions, their usage, and the behavior. I also noticed the new versions can be faster in some use cases. I like the work done for this, and I think it was also useful to reduce the size of the resulting serialization. I'm not against it, nor do I want to criticize it; I only want to improve all this further.

Regards, Wolfgang

On 28.01.2014 06:14, Kristján Valur Jónsson wrote:
Hi there. I think you should modify your program to marshal (and load) a compiled module. This is where the optimizations in versions 3 and 4 become important. K

-----Original Message----- From: Python-Dev [mailto:python-dev-bounces+kristjan=ccpgames@python.org] On Behalf Of Victor Stinner Sent: Monday, January 27, 2014 23:35 To: Wolfgang Cc: Python-Dev Subject: Re: [Python-Dev] Python 3.4, marshal dumps slower (version 3 protocol)

Hi, I'm surprised: marshal.dumps() doesn't raise an error if you pass an invalid version. In fact, Python 3.3 only supports versions 0, 1 and 2. If you pass 3, it will use version 2. (The same applies to version 99.) Python 3.4 has two new versions: 3 and 4. Version 3 "shares common object references"; version 4 adds short tuples and short strings (producing smaller files). It would be nice to document the differences between marshal versions. And what do you think of raising an error if the version is unknown in marshal.dumps()? I modified your benchmark to also test loads() and ran the benchmark 10 times.
Results:
---
Python 3.3.3+ (3.3:50aa9e3ab9a4, Jan 27 2014, 16:11:26) [GCC 4.8.2 20131212 (Red Hat 4.8.2-7)] on linux
dumps v0: 391.9 ms  data size v0: 45582.9 kB  loads v0: 616.2 ms
dumps v1: 384.3 ms  data size v1: 45582.9 kB  loads v1: 594.0 ms
dumps v2: 153.1 ms  data size v2: 41395.4 kB  loads v2: 549.6 ms
dumps v3: 152.1 ms  data size v3: 41395.4 kB  loads v3: 535.9 ms
dumps v4: 152.3 ms  data size v4: 41395.4 kB  loads v4: 549.7 ms
---
And:
---
Python 3.4.0b3+ (default:dbad4564cd12, Jan 27 2014, 16:09:40) [GCC 4.8.2 20131212 (Red Hat 4.8.2-7)] on linux
dumps v0: 389.4 ms  data size v0: 45582.9 kB  loads v0: 564.8 ms
dumps v1: 390.2 ms  data size v1: 45582.9 kB  loads v1: 545.6 ms
dumps v2: 165.5 ms  data size v2: 41395.4 kB  loads v2: 470.9 ms
dumps v3: 425.6 ms  data size v3: 41395.4 kB  loads v3: 528.2 ms
dumps v4: 369.2 ms  data size v4: 37000.9 kB  loads v4: 550.2 ms
---
Version 2 is the fastest in Python 3.3 and 3.4, but version 4 with Python 3.4 produces the smallest file. Victor

2014-01-27 Wolfgang :
Hi, I tested the latest beta of 3.4 (b3) and noticed there is a new marshal protocol version 3. The documentation is a little silent about the new features, not going into detail. I ran a performance test with the new protocol version and noticed the new version is two times slower in serialization than version 2. I tested it with a simple value tuple in a list (50 elements). Nothing special. (It happens only if the tuple also contains a tuple.) Copy of the test code:

from time import time
from marshal import dumps

def genData(amount=50):
    for i in range(amount):
        yield (i, i+2, i*2, (i+1, i+4, i, 4), "my string template %s" % i, 1.01*i, True)

data = list(genData())
print(len(data))
t0 = time()
result = dumps(data, 2)
t1 = time()
print("duration p2: %f" % (t1-t0))
t0 = time()
result = dumps(data, 3)
t1 = time()
print("duration p3: %f" % (t1-t0))

Is the overhead of the recursion detection so high? Note this happens only if there is a tuple in the tuple of the data list.
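The size difference Victor describes can be checked directly (a minimal sketch; the payload mirrors the shape of the benchmark data above but the exact values are illustrative):

```python
import marshal

# Roughly the same shape of data as the benchmark above.
data = [(i, i + 2, i * 2, (i + 1, i + 4, i, 4),
         "my string template %s" % i, 1.01 * i, True)
        for i in range(50)]

# Serialize with the oldest and newest formats and compare blob sizes.
sizes = {v: len(marshal.dumps(data, v)) for v in (0, 2, 4)}

# Version 4 (short tuple/string opcodes plus shared object references)
# should never produce a larger blob than version 0, and the data must
# survive a round trip.
assert marshal.loads(marshal.dumps(data, 4)) == data
assert sizes[4] <= sizes[0]
```

On Python 3.4+ this prints nothing when the expectations hold; comparing `sizes` interactively shows where the version 4 format saves its bytes.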
Regards, Wolfgang
Re: [Python-Dev] Python 3.4, marshal dumps slower (version 3 protocol)
On 28.01.2014 10:23, Barry Warsaw wrote:
On Jan 28, 2014, at 09:17 AM, tds...@gmail.com wrote:
yes I know the main usage is to generate pyc files. But marshal is also used for other stuff and is the fastest built in serialization method. For some use cases it makes sense to use it instead of pickle or others. And people use it not only to generate pyc files.

marshal is not guaranteed to be backward compatible between Python versions, so it's generally not a good idea to use it for serialization.

Yes, I know. And because of that I use it only if nothing persists and the exchange is between the same Python version (even the same architecture and interpreter type). But there are use cases for inter-process communication with no persistence and no need to serialize custom classes and so on. And if speed matters and security is not a problem, you use the marshal module to serialize data. Assume something like multiprocessing on Windows (no fork available) with only a pipe to exchange a lot of simple data, where pickle is too slow (sometimes distributed to other computers). Another use case can be a persistent cache with ultra-fast serialization (dump/load) needs, but without the critical data normally stored in a database; it can easily be regenerated from the main data if the Python version changes. (I think pyc files are such a use case.)

I have tested a lot of modules for some needs (JSON, Thrift, MessagePack, Pickle, Protocol Buffers, ...); all are very useful and have their usage scenarios. The same applies to marshal if all the limitations are no problem for you. (I've read the manual and have some knowledge about the limitations.) But none of these serialization modules are as fast as marshal, for my use case.

I hear you and have registered the warning about this, and I will not complain if something becomes incompatible. :-) If someone knows something faster for serializing basic Python types, I'm glad to use it.
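The pipe-based exchange described above can be sketched like this (a minimal single-process sketch; the payload shape and the 4-byte length prefix are illustrative choices, not anything prescribed by the marshal module, and both ends must run the same Python version):

```python
import marshal
import os

# Simple built-in types only: marshal cannot serialize custom classes.
payload = [(i, i * 2, "row %d" % i, 1.01 * i, True) for i in range(100)]

r, w = os.pipe()

# Writer side: length-prefix the marshal blob so the reader knows
# how many bytes to expect.
blob = marshal.dumps(payload)
os.write(w, len(blob).to_bytes(4, "little"))
os.write(w, blob)
os.close(w)

# Reader side: a pipe read may return fewer bytes than requested,
# so loop until the whole blob has arrived.
size = int.from_bytes(os.read(r, 4), "little")
buf = b""
while len(buf) < size:
    buf += os.read(r, size - len(buf))
os.close(r)

received = marshal.loads(buf)
assert received == payload
```

In a real setup the two halves would live in separate processes (e.g. spawned via multiprocessing on Windows); the framing stays the same.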
Regards, Wolfgang