On Fri, Jun 14, 2019 at 10:45:17AM -0700, Brett Cannon wrote:
> I think the logic breaks down with multiple inheritance. If you make C(A,
> B), then you can say C > A and C > B, but then you can't say A > B or A < B
> which breaks sorting.
Just like sets, or floats, or any other type that
Gustavo Carneiro wrote:
1. If you don't yield in the for loop body, then you are blocking the
main loop for 1 second;
2. If you yield in every iteration, you solved the task switch latency
problem, but you make the entire program run much slower.
It sounds to me like asyncio is the wrong
Most of the time I end up using isinstance(A, list) or type(A) == list
if I can. Therefore it feels inconsistent to me. And isinstance itself already has
10 characters in its name. I would love to follow the ==-like syntax we
already allow for comparing types.
I think I have to check how
Bad idea.
issubclass() works well.
Adding operator support should be considered very carefully.
I prefer a good portion of conservatism for such changes.
How often is the proposed change needed?
It adds new complexity (documenting, teaching, etc.) without real
benefit for daily jobs.
On Fri, Jun
And yet:
This is trivial to implement in a custom metaclass -
and maybe it would make default type too "noisy".
I am -1 on this going into normal classes,
and +0 for having a collaborative
metaclass with __lt__, __eq__ and such implementing
these on the stdlib. Maybe living in "types".
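A sketch of what such a collaborative metaclass might look like (the name `OrderedMeta` is invented; nothing like this exists in `types` today). Comparisons between unrelated objects return NotImplemented so misuse fails loudly:

``` python
class OrderedMeta(type):
    # Spell subclass checks as comparison operators on classes.
    def __le__(cls, other):
        if not isinstance(other, type):
            return NotImplemented
        return issubclass(cls, other)

    def __lt__(cls, other):
        if not isinstance(other, type):
            return NotImplemented
        return cls is not other and issubclass(cls, other)

    def __ge__(cls, other):
        if not isinstance(other, type):
            return NotImplemented
        return issubclass(other, cls)

    def __gt__(cls, other):
        if not isinstance(other, type):
            return NotImplemented
        return cls is not other and issubclass(other, cls)


class Base(metaclass=OrderedMeta):
    pass

class A(Base):
    pass

print(A < Base, Base > A)
```

Note that siblings still compare False in both directions, which is exactly the "breaks sorting" objection raised above.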
On
I think the logic breaks down with multiple inheritance. If you make C(A,
B), then you can say C > A and C > B, but then you can't say A > B or A < B
which breaks sorting.
If you want to know if a B inherits from Base, then I think `Base in
B.mro()` will cover that just as succinctly. And if you
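A quick illustration of the `mro()` spelling, using the class names from the example earlier in the thread:

``` python
class Base:
    pass

class A(Base):
    pass

class B(Base):
    pass

# Both spell "B inherits from (or is) Base":
print(Base in B.mro())      # True
print(issubclass(B, Base))  # True

# Siblings are simply unrelated, which is why sorting breaks:
print(A in B.mro())         # False
```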
Oh, I see. Thank you for the clarification. In this case such a wrapper is useless,
unfortunately.
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
Because we have 1000 tasks scheduled for execution on the next loop iteration.
The first consumes 10ms and pauses (switches context).
The next task is executed *in the same loop iteration*; it consumes its
own 10ms and switches.
The same is repeated for all 1000 tasks in *the same loop iteration*,
I want
Are you sure about your calculations? If we have 1000 task switches at the "same
time", then after one task starts to do its job, after 10ms it will
`sleep(0)` and the loop will have time to choose the next task. Why would the loop
be paused in this case?
The real problem is: you have a long-running synchronous loop (or
CPU-bound task in general).
The real solution is: run it inside a thread pool.
By explicitly inserting context switches, you don't eliminate the
problem, you hide it.
asyncio loop is still busy on handling your CPU-bound task, it
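A minimal sketch of that thread-pool approach; `crunch` here is an invented stand-in for the CPU-bound work:

``` python
import asyncio

def crunch(data):
    # stand-in for the long-running synchronous, CPU-bound loop
    return sum(x * x for x in data)

async def main():
    loop = asyncio.get_running_loop()
    # None picks the loop's default ThreadPoolExecutor; the event loop
    # stays free to serve other tasks while crunch() runs in a thread
    return await loop.run_in_executor(None, crunch, range(100_000))

print(asyncio.run(main()))
```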
I'm not sure this is a good approach. For me `async for` is just the best way,
since it is explicit. When you see `async for`, you think «alright, the context will
switch somewhere inside, I am aware of this». If I get you right, though.
Exactly our case!
My position is the same as what njsmith (AFAIR) said somewhere about running file I/O
in threads: yes, it is faster to write a chunk directly from a coroutine than to write
a chunk from an executor, but this way you guarantee that there will be no «freeze».
On Fri, 14 Jun 2019 at 11:20, Andrew Svetlov
wrote:
> We need either both `asyncio.switch()` and `time.switch()`
> (`threading.switch()` maybe) or none of them.
>
> https://docs.python.org/3/library/asyncio-task.html#sleeping has the
> explicit sentence:
>
> sleep() always suspends the current
So -
Now thinking on the problem as a whole -
I think maybe a good way to address this is to put the logic of
"counting N iterations or X time and allowing a switch" - the logic you
had to explicitly mingle into your code in the first example - into
a function that could wrap the iterator of the `for`
We need either both `asyncio.switch()` and `time.switch()`
(`threading.switch()` maybe) or none of them.
https://docs.python.org/3/library/asyncio-task.html#sleeping has the
explicit sentence:
sleep() always suspends the current task, allowing other tasks to run.
On Fri, Jun 14, 2019 at 5:06
> it is very well known feature.
Or is it? Just because you know it, that does not mean it is universal -
it is not documented on time.sleep, threading.Thread, or asyncio.sleep
anyway.
I've never worked much on explicitly multi-threaded code, but in 15+ years
this is a pattern I had not seen
time.sleep(0) is used for a thread context switch; it is a very well
known feature.
await asyncio.sleep(0) does the same for async tasks.
Why do we need another API?
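For reference, the existing idiom in action (a small self-contained demo; two tasks that interleave only because each one yields with `sleep(0)`):

``` python
import asyncio

order = []

async def worker(name, steps):
    for _ in range(steps):
        order.append(name)
        await asyncio.sleep(0)  # cooperative yield; no actual delay

async def main():
    await asyncio.gather(worker("a", 3), worker("b", 3))

asyncio.run(main())
print(order)
```

Without the `sleep(0)`, each worker would run its whole loop before the other got a turn.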
On Fri, Jun 14, 2019 at 4:43 PM Joao S. O. Bueno wrote:
>
> Regardless of a mechanism to counting time, and etc...
>
> Maybe a plain
Regardless of a mechanism for counting time, etc...
Maybe a plain and simple addition to asyncio would be a
context-switching call that does what `asyncio.sleep(0)` does today?
It would feel better to write something like
`await asyncio.switch()` than an arbitrary `sleep`.
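The proposed spelling would be a one-liner on top of what exists today (hypothetical; `asyncio.switch` is not a real API):

``` python
import asyncio

async def switch():
    """Hypothetical helper: give the event loop a chance to run
    other ready tasks, with no implied delay."""
    await asyncio.sleep(0)

async def demo():
    await switch()
    return "done"

print(asyncio.run(demo()))
```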
On Fri, 14 Jun
Fortunately, asyncio provides this good universal default: 100ms, when the WARNING
appears. Nested loops can be solved with a context manager, which will share
`last_context_switch_time` between loops. But the main thing here is that this is
strictly optional, and when someone uses this thing he will
That is exactly the point of coroutines, but as I described above, there are
cases where the blocking code is too long and moving it to a thread makes it harder
to use.
On Fri, 14 Jun 2019 at 12:00, Nikita Melentev
wrote:
> > The problem is that the snippet itself is not very helpful.
>
> Explain please.
>
> The good thing is that, if this snippet will be somewhere (asyncio or
> docs), then user will not decide by its own about "what is a long running
> task",
On Fri, Jun 14, 2019 at 10:44 PM Nikita Melentev
wrote:
>
> The problem here is that even if I have a coroutine all code between «awaits»
> is blocking.
Isn't that kinda the point of coroutines? If you want more yield
points, you either insert more awaits, or you use threads instead.
ChrisA
The problem here is that even if I have a coroutine all code between «awaits»
is blocking.
``` python
async def foo():
    data = await connection.get()  # this part is OK: the loop handles the request while we wait
    # from here on, everything blocks:
    for item in data:  # this is 10 ** 6 items long
        do_sync_job(item)  # this
```
I may say something stupid, but aren't coroutines exactly what you are
looking for?
On Fri, Jun 14, 2019 at 13:07, Paul Moore wrote:
> On Fri, 14 Jun 2019 at 11:38, Nikita Melentev
> wrote:
> >
> > **Sorry, did not know markdown is supported**
>
> Oh cool! It's not "supported" in the sense
> On 14 Jun 2019, at 11:31, Steven D'Aprano wrote:
>
>> On Fri, Jun 14, 2019 at 08:02:15AM -, eminbugrasaral--- via Python-ideas
>> wrote:
>> class Base:
>>     pass
>>
>> class A(Base):
>>     pass
> [...]
>
>> While we can do `A == B`, or `B == C` or `B == B`, I would expect to
>> be
On Fri, 14 Jun 2019 at 11:38, Nikita Melentev wrote:
>
> **Sorry, did not know markdown is supported**
Oh cool! It's not "supported" in the sense that this is a mailing
list, and whether your client renders markdown is client-dependent
(mine doesn't for example). But it looks like Mailman3 does,
> The problem is that the snippet itself is not very helpful.
Explain please.
The good thing is that, if this snippet is somewhere (asyncio or the docs),
then the user will not decide on their own about "what is a long running task",
because a good default value will be there. This also reduces time
On Fri, Jun 14, 2019 at 12:39 PM Steven D'Aprano wrote:
> and no simple way to talk about *strict* subclass and superclass
> relationships without a verbose compound test:
>
> assert issubclass(parent, child) and child != parent
Well, why not go further then and make the following thing to be a
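The "verbose compound test" quoted above is easy enough to wrap today (a throwaway sketch; the helper name is invented):

``` python
def is_strict_subclass(sub, sup):
    # issubclass() is reflexive, so exclude the class itself
    return issubclass(sub, sup) and sub is not sup

class Base:
    pass

class A(Base):
    pass

print(is_strict_subclass(A, Base))  # True
print(is_strict_subclass(A, A))     # False
```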
Not sure how asyncio can help in this case.
It has a warning in debug mode already.
Adding `await asyncio.sleep(0)` is the correct fix for your case.
I don't think that the code should be a part of asyncio.
A recipe is a good idea maybe, not sure. The problem is that the
snippet itself is not very
**Sorry, did not know markdown is supported**
At work we faced a problem of long-running Python code. Our case was a short
task, but a huge number of iterations. Something like:
``` python
for x in data_list:
    # do 1ms non-io pure python task
```
So we block the loop for more than 100ms, or even
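One way to keep a loop like that from starving the event loop, per the `asyncio.sleep(0)` suggestion elsewhere in the thread (the function name, the `do_task` callback, and the chunk size of 1000 are all invented for illustration):

``` python
import asyncio

async def process(data_list, do_task):
    # same loop, but hand control back to the event loop every 1000 items
    for i, x in enumerate(data_list, 1):
        do_task(x)  # the ~1ms non-io pure python task
        if i % 1000 == 0:
            await asyncio.sleep(0)  # let other ready tasks run

results = []
asyncio.run(process(range(5_000), results.append))
print(len(results))
```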
At work we faced a problem of long-running Python code. Our case was a short
task, but a huge number of iterations. Something like:

    for x in data_list:
        # do 1ms non-io pure python task

So we block the loop for more than 100ms, or even 1000ms. The first naive solution
was "move this to a thread", so
On Fri, Jun 14, 2019 at 08:02:15AM -, eminbugrasaral--- via Python-ideas
wrote:
> class Base:
>     pass
>
> class A(Base):
>     pass
[...]
> While we can do `A == B`, or `B == C` or `B == B`, I would expect to
> be able to compare like this as well: `A >= B (if B is subclass or
> itself
Let's assume you have this model:
```
class Base:
    pass

class A(Base):
    pass

class B(Base):
    pass

class C(A):
    pass
```
While we can do `A == B`, or `B == C` or `B == B`, I would expect to be able to
compare like this as well: `A >= B` (if B is a subclass of A, or A itself), or `B <=
Stephen J. Turnbull wrote:
Note that signal matrices will almost certainly be a completely
different type from signals, so as far as the compiler is concerned
there's no conflict between "@=" for signal injection and "@=" for
signal matrix multiplication.
Except that if @= represents signal
Kyle Lahnakoski wrote:
Here is a half baked idea:
class A:
    def assign(self, other):
        # whatever this means

setattr(A, "<==", A.assign)
Some things that would need to be addressed to fully bake this idea:
* What determines the precedence of these new operators?
* How to