Using a background thread with asyncio/futures with flask
Hello,

I have a simple (and not working) example of what I'm trying to do. This is a simplified version of what I'm trying to achieve (obviously the background workers and finalizer functions will do more later):

`app.py`:

```
import asyncio
import threading
import time
from queue import Queue

from flask import Flask

in_queue = Queue()
out_queue = Queue()

def worker():
    print("worker started running")
    while True:
        future = in_queue.get()
        print(f"worker got future: {future}")
        time.sleep(5)
        print("worker sleeped")
        out_queue.put(future)

def finalizer():
    print("finalizer started running")
    while True:
        future = out_queue.get()
        print(f"finalizer got future: {future}")
        future.set_result("completed")
        print("finalizer set result")

threading.Thread(target=worker, daemon=True).start()
threading.Thread(target=finalizer, daemon=True).start()

app = Flask(__name__)

@app.route("/")
async def root():
    future = asyncio.get_event_loop().create_future()
    in_queue.put(future)
    print(f"root put future: {future}")
    result = await future
    return result

if __name__ == "__main__":
    app.run()
```

If I start up that server and execute `curl http://localhost:5000`, it prints out the following in the server before hanging:

```
$ python3 app.py
worker started running
finalizer started running
 * Serving Flask app 'app'
 * Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on http://127.0.0.1:5000
Press CTRL+C to quit
root put future:
worker got future:
worker sleeped
finalizer got future:
finalizer set result
```

Judging by what's printing out, the final `result = await future` doesn't seem to be happy here. Maybe someone sees something obvious I'm doing wrong? I presume I'm mixing threads and asyncio in a way I shouldn't be.
Here's some system information (freshly installed with `pip3 install flask[async]` in a virtual environment for Python version 3.11.2):

```
$ uname -a
Linux x1carbon 6.1.0-18-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.76-1 (2024-02-01) x86_64 GNU/Linux
$ python3 -V
Python 3.11.2
$ pip3 freeze
asgiref==3.7.2
blinker==1.7.0
click==8.1.7
Flask==3.0.2
itsdangerous==2.1.2
Jinja2==3.1.3
MarkupSafe==2.1.5
Werkzeug==3.0.1
```

Thanks for any help!

Cheers,
Thomas
--
https://mail.python.org/mailman/listinfo/python-list
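For reference, the threads-and-asyncio part of the question can be reduced to a minimal sketch with Flask left out (the `worker` function and names below are illustrative, not from the original post). An asyncio future is not thread-safe: calling `future.set_result()` from another thread does not wake the event loop the future belongs to, which is one plausible reason a request like this would hang. The documented thread-safe pattern is to hand the worker the loop and have it schedule the call via `loop.call_soon_threadsafe`:

```python
import asyncio
import threading

def worker(loop, future):
    # set_result must run on the future's own event loop; calling it
    # directly from this thread would not reliably wake the awaiter.
    loop.call_soon_threadsafe(future.set_result, "completed")

async def main():
    loop = asyncio.get_running_loop()
    future = loop.create_future()
    threading.Thread(target=worker, args=(loop, future), daemon=True).start()
    return await future  # resolved from the worker thread, thread-safely

print(asyncio.run(main()))  # completed
```

Note this sketch assumes the loop that created the future is still running when the worker resolves it; under Flask's WSGI dev server each async view runs on a short-lived loop, so that assumption needs checking there.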
Re: Configuring an object via a dictionary
On 2024-03-20 at 09:49:54 +0100, Roel Schroeven via Python-list wrote:

> You haven't only checked for None! You have rejected *every* falsish value,
> even though they may very well be acceptable values.

OTOH, only you can answer these questions about your situation. Every application, every item of configuration data, is going to be a little bit different.

What, exactly, does "missing" mean? That there's no entry in a config file? That there's some sort of degenerate entry with "missing" semantics (e.g. a line in a text file that contains the name of the value and an equals sign, but no value)? An empty string or list? Are you making your program easier for users to use, easier for testers to test, easier for authors to write and to maintain, or something else? What is your program allowed and not allowed to do in the face of "missing" configuration data?

Once you've nailed down the semantics of the configuration data, the code usually falls out pretty quickly. But arguing about corner cases and failure modes without specifications is a losing battle. Every piece of code is suspect unless you know what the inputs mean, and what the application "should" do if they don't look like that.

Python's flexibility and expressiveness are double-edged swords. Use them wisely. :-)

Sorry for the rant. Carry on.
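The distinctions above can be made concrete with a small sketch (the config dict, key names, and the `describe` helper here are made up for illustration). A dict-based config can represent at least four distinct states, and a sentinel object is one common way to tell "key absent" apart from "key present but falsish":

```python
_MISSING = object()  # sentinel: distinguishes "absent" from any real value

def describe(config, key):
    value = config.get(key, _MISSING)
    if value is _MISSING:
        return "absent"           # no entry at all
    if value is None:
        return "explicitly unset" # entry exists, value is None
    if value == "":
        return "present but empty"
    return f"set to {value!r}"

config = {"name": "Fred", "nickname": None, "title": ""}
print(describe(config, "name"))      # set to 'Fred'
print(describe(config, "nickname"))  # explicitly unset
print(describe(config, "title"))     # present but empty
print(describe(config, "age"))       # absent
```

Which of these states are equivalent is exactly the specification question the post is raising; the code only makes the states visible.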
Re: Configuring an object via a dictionary
On 19/03/2024 at 0:44, Gilmeh Serda via Python-list wrote:
> On Mon, 18 Mar 2024 10:09:27 +1300, dn wrote:
>
>> YMMV!
>> NB your corporate Style Guide may prefer 'the happy path'...
>
> If you only want to check for None, this works too:
>
> >>> name = None
> >>> default_value = "default"
> >>> name or default_value
> 'default'
> >>> name = 'Fred Flintstone'
> >>> name or default_value
> 'Fred Flintstone'
> >>> name = ''
> >>> name or default_value
> 'default'
> >>> name = False
> >>> name or default_value
> 'default'
> >>> name = []
> >>> name or default_value
> 'default'
> >>> name = 0
> >>> name or default_value
> 'default'

You haven't only checked for None! You have rejected *every* falsish value, even though they may very well be acceptable values.

--
"Most of us, when all is said and done, like what we like and make up reasons for it afterwards." -- Soren F. Petersen
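The difference can be shown side by side (the helper names below are made up for illustration): `or` substitutes the default for every falsish value, while an explicit `is None` test keeps falsish-but-valid values like `''` and `0`:

```python
default_value = "default"

def with_or(name):
    # replaces *any* falsish value: None, '', 0, False, [], ...
    return name or default_value

def with_is_none(name):
    # replaces only None; '' and 0 pass through unchanged
    return default_value if name is None else name

print(with_or(""))         # default
print(with_is_none(""))    # (empty string is kept)
print(with_or(0))          # default
print(with_is_none(0))     # 0
print(with_is_none(None))  # default
```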
Re: GIL-Removal Project Takes Another Step (Posting On Python-List Prohibited)
On Wed, 20 Mar 2024 at 18:31, Greg Ewing via Python-list wrote:
>
> On 20/03/24 4:14 pm, Lawrence D'Oliveiro wrote:
> > not to mention the latency when there isn’t quite enough memory for an
> > allocation and you have to wait until the next GC run to proceed. Run
> > the GC a thousand times a second, and the latency is still 1 millisecond.
>
> That's not the way it usually works. If you run out of memory, you
> run a GC there and then. You don't have to wait for GCs to occur on
> a time schedule.
>
> Also, as a previous poster pointed out, GCs are typically scheduled
> by number of allocations, not by time.

FYI you're violating someone's request by responding to them in a way that results in it getting onto python-list, so it's probably safest to just ignore cranks and trolls and let them stew in their own juices.

But normally the GC doesn't need to be scheduled at all. In CPython, the only reason to "run garbage collection" is to detect cycles, so you would have to be generating inordinate amounts of cyclic garbage for this to matter at all.

ChrisA
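That division of labor in CPython is easy to demonstrate: reference counting frees acyclic objects the moment their count hits zero, and the `gc` module's collector exists only to reclaim reference cycles (the `Node` class below is a made-up example of such a cycle):

```python
import gc

class Node:
    def __init__(self):
        self.ref = self  # self-reference: refcount can never drop to zero

gc.disable()          # switch off the automatic cycle detector
n = Node()
del n                 # object is now unreachable but still alive (cyclic)
found = gc.collect()  # run the cycle detector by hand
gc.enable()
print(found >= 1)     # True: the collector, not refcounting, reclaimed it
```

A non-cyclic object in the same situation would have been freed at `del n` with no collector run at all.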
Re: GIL-Removal Project Takes Another Step (Posting On Python-List Prohibited)
On 20/03/24 4:14 pm, Lawrence D'Oliveiro wrote:
> not to mention the latency when there isn’t quite enough memory for an
> allocation and you have to wait until the next GC run to proceed. Run the
> GC a thousand times a second, and the latency is still 1 millisecond.

That's not the way it usually works. If you run out of memory, you run a GC there and then. You don't have to wait for GCs to occur on a time schedule.

Also, as a previous poster pointed out, GCs are typically scheduled by number of allocations, not by time.

--
Greg
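For CPython at least, the allocation-based scheduling is directly inspectable: the collector's per-generation thresholds are counts of allocations minus deallocations, not time intervals (the exact numbers vary by Python version, so no specific values are assumed here):

```python
import gc

# Three generation thresholds, all expressed in object allocation counts.
thresholds = gc.get_threshold()
print(thresholds)  # e.g. (700, 10, 10) on many CPython versions
```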