What is the difference between your code and the OP's code?
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived a
Despite my concerns about the implementation code in my previous e-mail,
it turns out that simply iterating in an `async for` loop won't yield
to the asyncio loop. An explicit `await` inside the async generator
is needed for that.
That makes factoring out the code presented in the first e-mail
i
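That behaviour can be checked with a small script (all names here are illustrative): iterating an async generator that never awaits gives the event loop no chance to run other tasks, while a single `await asyncio.sleep(0)` inside the generator does.

``` python
import asyncio

async def gen_no_await(n):
    for i in range(n):
        yield i  # no await here: iterating never gives the loop a chance

async def gen_with_await(n):
    for i in range(n):
        await asyncio.sleep(0)  # explicit await: the loop can run other tasks
        yield i

async def main():
    ran = []

    async def other():
        ran.append("other")

    asyncio.ensure_future(other())

    async for _ in gen_no_await(1000):
        pass
    before = list(ran)  # still []: `other` never got to run

    async for _ in gen_with_await(1):
        pass
    after = list(ran)  # ["other"]: sleep(0) yielded to the loop

    print(before, after)  # [] ['other']
    return before, after

asyncio.run(main())
```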
> On 15 Jun 2019, at 10:55, Gustavo Carneiro wrote:
>
>
> Perhaps. But using threads is more complicated. You have to worry about the
> integrity of your data in the face of concurrent threads. And if inside your
> task you sometimes need to call async coroutine code, again you need to b
On Sat, 15 Jun 2019 at 00:26, Greg Ewing
wrote:
> Gustavo Carneiro wrote:
> > 1. If you don't yield in the for loop body, then you are blocking the
> > main loop for 1 second;
> >
> > 2. If you yield in every iteration, you solved the task switch latency
> > problem, but you make the entire progr
Gustavo Carneiro wrote:
1. If you don't yield in the for loop body, then you are blocking the
main loop for 1 second;
2. If you yield in every iteration, you solved the task switch latency
problem, but you make the entire program run much slower.
It sounds to me like asyncio is the wrong tool
Oh, I see. Thank you for the clarification. In that case such a wrapper is useless,
unfortunately.
Because we have 1000 tasks scheduled for execution on the next loop iteration.
The first consumes 10 ms and pauses (switches context).
The next task is executed *in the same loop iteration*; it consumes its
own 10 ms and switches.
The same is repeated for all 1000 tasks in *the same loop iteration*.
I want t
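A rough sketch of that arithmetic (illustrative: 10 tasks instead of 1000, busy-waiting to fake CPU work): `sleep(0)` interleaves the tasks, but the loop stays busy with CPU-bound work for the whole combined duration, so nothing else gets a prompt turn.

``` python
import asyncio
import time

def burn(seconds):
    # busy-wait to fake CPU-bound work
    end = time.perf_counter() + seconds
    while time.perf_counter() < end:
        pass

async def worker():
    burn(0.01)              # 10 ms of blocking work
    await asyncio.sleep(0)  # switch to the next ready task
    burn(0.01)              # another 10 ms after resuming

async def main():
    start = time.perf_counter()
    await asyncio.gather(*(worker() for _ in range(10)))
    elapsed = time.perf_counter() - start
    # ~0.2 s total: the switches interleave the tasks, but the loop
    # is busy with CPU work the whole time
    print(f"{elapsed:.2f}s")
    return elapsed

asyncio.run(main())
```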
Are you sure about your calculations? If we have 1000 task switches at the "same
time", then "after" one task starts to do its job, after 10 ms it will
`sleep(0)` and the loop will have time to choose the next task. Why will the loop
be paused in this case?
The real problem is: you have a long-running synchronous loop (or a
CPU-bound task in general).
The real solution is: run it inside a thread pool.
By explicitly inserting context switches you don't eliminate the
problem, you only hide it.
The asyncio loop is still busy handling your CPU-bound task; it
decrea
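A minimal sketch of the thread-pool approach, using `loop.run_in_executor` with the loop's default executor (the function name and workload here are illustrative):

``` python
import asyncio

def cpu_bound(data):
    # long-running pure-Python work, executed in a worker thread
    # so the event loop stays free to serve other tasks
    return sum(x * x for x in data)

async def main():
    loop = asyncio.get_running_loop()
    # None selects the loop's default ThreadPoolExecutor
    result = await loop.run_in_executor(None, cpu_bound, range(10**5))
    print(result)
    return result

asyncio.run(main())
```

Note that for pure-Python work the GIL still serializes execution across threads, so this keeps the loop responsive rather than making the work faster; a `ProcessPoolExecutor` avoids the GIL at the cost of pickling the arguments.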
I'm not sure this is a good approach. For me `async for` is just the best way,
since it is explicit. When you see `async for`, you think «alright, the context
will switch somewhere inside, I am aware of this». If I get you right, though.
Exactly our case!
My position is the same as what njsmith (AFAIR) said somewhere about running file
I/O in threads: yes, it is faster to write a chunk directly from the coroutine
than to write it from an executor, but with the executor you guarantee that there
will be no «freeze».
I think the main point here is that, yes, it is known that `await
asyncio.sleep(0)` yields control, allowing other tasks to run.
But imagine you have a for loop with 1000 iterations which normally
completes in 1 second. That means each iteration takes 1 ms to
complete (maybe it does some
On Fri, 14 Jun 2019 at 11:20, Andrew Svetlov
wrote:
> We need either both `asyncio.switch()` and `time.switch()`
> (`threading.switch()` maybe) or none of them.
>
> https://docs.python.org/3/library/asyncio-task.html#sleeping has the
> explicit sentence:
>
> sleep() always suspends the current task, allowing other tasks to run.
So, now thinking on the problem as a whole:
I think a good way to address this is to put the logic of
"counting N iterations or X time and then allowing a switch" (the logic you
had to explicitly mingle into your code in the first example) into
a function that could wrap the iterator of the `for` loop.
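A sketch of such a wrapper (the name `aware_iter` and the time budget are assumptions, not an existing asyncio API): it iterates normally and only yields to the loop when more than `max_block` seconds have passed since the last switch.

``` python
import asyncio
import time

async def aware_iter(iterable, max_block=0.1):
    # hypothetical wrapper: yields control to the event loop whenever
    # more than `max_block` seconds have passed since the last switch
    last_switch = time.monotonic()
    for item in iterable:
        if time.monotonic() - last_switch > max_block:
            await asyncio.sleep(0)
            last_switch = time.monotonic()
        yield item

async def main():
    total = 0
    async for x in aware_iter(range(10**4), max_block=0.01):
        total += x
    print(total)  # 49995000
    return total

asyncio.run(main())
```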
We need either both `asyncio.switch()` and `time.switch()`
(`threading.switch()` maybe) or none of them.
https://docs.python.org/3/library/asyncio-task.html#sleeping has the
explicit sentence:
sleep() always suspends the current task, allowing other tasks to run.
On Fri, Jun 14, 2019 at 5:06 PM
> it is very well known feature.
Or is it? Just because you know it does not mean it is universal.
It is not documented on time.sleep, threading.Thread, or asyncio.sleep
anyway.
I've never worked much on explicitly multi-threaded code, but in 15+ years
this is a pattern I had not seen u
time.sleep(0) is used for a thread context switch; it is a very well
known feature.
await asyncio.sleep(0) does the same for async tasks.
Why do we need another API?
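A quick sketch of what `await asyncio.sleep(0)` gives async tasks (names are illustrative): two tasks that call it every iteration interleave their steps, much like threads around `time.sleep(0)`.

``` python
import asyncio

async def ticker(name, log):
    for i in range(3):
        log.append(f"{name}{i}")
        # cooperative "context switch": the other ready task runs next
        await asyncio.sleep(0)

async def main():
    log = []
    await asyncio.gather(ticker("a", log), ticker("b", log))
    print(log)  # ['a0', 'b0', 'a1', 'b1', 'a2', 'b2']
    return log

asyncio.run(main())
```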
On Fri, Jun 14, 2019 at 4:43 PM Joao S. O. Bueno wrote:
>
> Regardless of a mechanism to counting time, and etc...
>
> Maybe a plain
Regardless of a mechanism for counting time, etc...
Maybe a plain and simple addition to asyncio would be a
context-switching call that does what `asyncio.sleep(0)` does today?
It would feel better to write something like
`await asyncio.switch()` than an arbitrary `sleep`.
On Fri, 14 Jun 201
Fortunately, asyncio provides this good universal default: 100 ms, at which the
WARNING appears. Nested loops can be solved with a context manager which will
share `last_context_switch_time` between loops. But the main thing here is that
this is strictly optional, and when someone uses this thing they will kn
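One possible shape for that context manager (every name here is hypothetical, not an existing asyncio API): the inner and outer loops call the same `checkpoint()`, so they share a single `last_context_switch_time`.

``` python
import asyncio
import time
from contextlib import contextmanager

class _Switcher:
    def __init__(self, max_block):
        self.max_block = max_block
        self.last = time.monotonic()

    async def checkpoint(self):
        # yield to the loop only if we have hogged it for too long
        now = time.monotonic()
        if now - self.last >= self.max_block:
            await asyncio.sleep(0)
            self.last = time.monotonic()

@contextmanager
def switch_every(max_block=0.1):
    # hypothetical helper: nested loops share one last-switch timestamp
    yield _Switcher(max_block)

async def main():
    count = 0
    with switch_every(0.01) as s:
        for i in range(100):
            for j in range(100):
                await s.checkpoint()  # both loop levels share the budget
                count += 1
    print(count)  # 10000
    return count

asyncio.run(main())
```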
That is exactly the point of coroutines, but as I described above, there are
cases where the blocking code is too long and moving it to a thread makes it
harder to use.
On Fri, 14 Jun 2019 at 12:00, Nikita Melentev
wrote:
> > The problem is that the snippet itself is not very helpful.
>
> Explain please.
>
> The good thing is that, if this snippet will be somewhere (asyncio or
> docs), then user will not decide by its own about "what is a long running
> task", b
On Fri, Jun 14, 2019 at 10:44 PM Nikita Melentev
wrote:
>
> The problem here is that even if I have a coroutine all code between «awaits»
> is blocking.
Isn't that kinda the point of coroutines? If you want more yield
points, you either insert more awaits, or you use threads instead.
ChrisA
The problem here is that even if I have a coroutine, all code between «awaits»
is blocking.
``` python
async def foo():
    data = await connection.get()  # this is OK: the loop handles the request while we wait
    # from here on, nothing yields to the loop:
    for item in data:  # this is 10 ** 6 items long
        do_sync_job(item)  # this blocks the event loop
```
I may say something stupid, but aren't coroutines exactly what you are
looking for?
On Fri, 14 Jun 2019 at 13:07, Paul Moore wrote:
> On Fri, 14 Jun 2019 at 11:38, Nikita Melentev
> wrote:
> >
> > **Sorry, did not know markdown is supported**
>
> Oh cool! It's not "supported" in the sense th
On Fri, 14 Jun 2019 at 11:38, Nikita Melentev wrote:
>
> **Sorry, did not know markdown is supported**
Oh cool! It's not "supported" in the sense that this is a mailing
list, and whether your client renders markdown is client-dependent
(mine doesn't for example). But it looks like Mailman3 does,
> The problem is that the snippet itself is not very helpful.
Explain, please.
The good thing is that, if this snippet is somewhere (asyncio or the docs),
then the user will not have to decide on their own "what is a long-running
task", because a good default value will be there. This also reduces the time to
Not sure how asyncio can help in this case.
It has a warning in debug mode already.
Adding `await asyncio.sleep(0)` is the correct fix for your case.
I don't think this code should be part of asyncio.
A recipe is maybe a good idea, not sure. The problem is that the
snippet itself is not very helpful.
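The debug-mode warning can be provoked deliberately. The threshold is `loop.slow_callback_duration` (100 ms by default), and the exact log text may vary between Python versions:

``` python
import asyncio
import logging
import time

async def blocker():
    time.sleep(0.2)  # blocks the event loop well past the 0.1 s threshold

async def main():
    await blocker()

logging.basicConfig()  # let the "asyncio" logger's warning reach stderr
asyncio.run(main(), debug=True)
# debug mode logs something like:
#   Executing <Task ... coro=<main() ...>> took 0.200 seconds
```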
**Sorry, did not know markdown is supported**
At work we faced a problem of long-running Python code. Our case was a short
task, but a huge number of iterations. Something like:
``` python
for x in data_list:
    # do a 1 ms non-IO pure-Python task
```
So we block the loop for more than 100 ms, or even 10
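The manual workaround discussed later in the thread can be sketched like this: yield with `asyncio.sleep(0)` every N iterations (`do_task` and `every=50` are illustrative, not part of any API):

``` python
import asyncio

def do_task(x):
    # stand-in for the post's ~1 ms non-IO pure-Python task
    return x * x

async def process(data_list, every=50):
    results = []
    for i, x in enumerate(data_list):
        results.append(do_task(x))
        if i % every == every - 1:  # after every ~50 ms of work,
            await asyncio.sleep(0)  # let other scheduled tasks run
    return results

print(len(asyncio.run(process(range(1000)))))  # 1000
```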