> could not figure out how to hook
> into it with SetCommMask, WaitCommEvent, and the overlapped structures. Yet
> another idea is to take what I have learned from the IocpProactor internals
> and copy and expose them in simplified form for my own implementation, though
> I'd still nee
e: ioports? iomem? usb? bt? etc.
If your aim is to achieve high-bandwidth or low-latency -- get close to hardware
If your aim is to support, let's say 100 ports at once -- one of the
two approaches above
If I couldn't guess your aim, please explain why `asyncio` in the first place.
Cheers,
Dima Tisnek
There's no interdependency in your code snippet.
If that's really the case, then why not refactor it into something like:
async def notify_all():
    async for x in queue:
        await handle_notification(x)

async def receive_all():
    async for x in receive:
        await handle_reception(x)
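To make the refactor concrete, here's a runnable sketch; `gen` stands in for `queue`/`receive` (any async iterable works), and the handler names are mine:

```python
import asyncio

async def consume(source, handler, sink):
    # one independent loop per source; no cross-dependency between them
    async for x in source:
        sink.append(handler(x))

async def gen(items):
    # stand-in for `queue` / `receive`
    for x in items:
        await asyncio.sleep(0)
        yield x

async def main():
    notifications, receptions = [], []
    # the two loops run concurrently but never touch each other's state
    await asyncio.gather(
        consume(gen([1, 2]), lambda x: ("note", x), notifications),
        consume(gen(["a"]), lambda x: ("recv", x), receptions),
    )
    return notifications, receptions

notifications, receptions = asyncio.run(main())
```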
Hi group,
I'm recently debugging a long-running asyncio program that appears to
get stuck about once a week.
The tools I've discovered so far are:
high level: `asyncio.all_tasks()` + `asyncio.Task.get_stack()`
low level: `loop._selector._fd_to_key`
What's missing is the middle level, i.e.
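For reference, the high-level pair above can be combined into a stack dump; a minimal sketch (the names `stuck`/`main` are mine):

```python
import asyncio

async def stuck():
    # simulates the task that wedged
    await asyncio.sleep(3600)

async def main():
    t = asyncio.ensure_future(stuck())
    await asyncio.sleep(0.01)
    names = []
    for task in asyncio.all_tasks():
        # a suspended coroutine contributes one frame: where it awaits
        names.extend(f.f_code.co_name for f in task.get_stack())
    t.cancel()
    return names

names = asyncio.run(main())
```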
End-user point of view, a.k.a. my 2c:
The more worrisome scenario: if "objects" from two event loops depend
on each other, that's unsolvable in the general case. On the other hand,
what the OP wanted was akin to DAG-like functionality or a locking
hierarchy. A naive implementation would block the caller
While on the subject of referenced documentation, I find that it too
conflates concurrency with parallelism.
I don't have a good fix in mind though. Any takers?
___
Async-sig mailing list
Async-sig@python.org
No, in this case fib(1) is resolved instantly, thus its caller is resolved
instantly, thus...
On Mon, 10 Dec 2018 at 9:28 PM, Pradip Caulagi wrote:
> I was wondering if every use of 'await' should return the control to
> event loop? So in this example -
>
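The point can be demonstrated: awaiting a plain coroutine is just a call, so control only returns to the loop at a real suspension point (this toy `fib` is mine, chosen because it awaits a lot without ever suspending):

```python
import asyncio

order = []

async def fib(n):
    # plain coroutine: awaiting it is a direct call, no suspension point
    if n < 2:
        return n
    return await fib(n - 1) + await fib(n - 2)

async def other():
    order.append("other")

async def main():
    t = asyncio.ensure_future(other())
    await fib(10)             # many awaits, yet the loop never runs `other`
    order.append("fib done")
    await t                   # first real suspension: now `other` runs

asyncio.run(main())
```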
What Bret said, here (perhaps) more concise:
async def main():
    f1 = ensure_future(say("two", 2))
    f2 = ensure_future(say("one", 1))
    # at this point both are running
    await f1
    await f2
Note that the current event loop is picked up automatically since Python
3.6; Futures are the higher-level interface here.
Looking into the internals of aiostream, it's meant to accept an async
sequence of generators, see advanced.py flatmap.
(perhaps some other function has to be used than merge().)
In which case, you could do something along the lines of:
async def tasks(some_queue):
    yield go()
    yield go()
You are not appending to the list that's being iterated ;)
tasks is an async generator, or possibly a custom object that overrides
__aiter__, __anext__, etc.
I'd say look at internals of aiostream's merge, it should not be too hard
to extend perhaps
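A naive merge can be approximated with plain asyncio; this is my own sketch of the idea, not aiostream's actual internals:

```python
import asyncio

async def merge(*sources):
    # naive merge: pump every source into one queue, yield as items arrive
    queue = asyncio.Queue()
    DONE = object()

    async def pump(src):
        async for item in src:
            await queue.put(item)
        await queue.put(DONE)

    pumps = [asyncio.ensure_future(pump(s)) for s in sources]
    remaining = len(pumps)
    while remaining:
        item = await queue.get()
        if item is DONE:
            remaining -= 1
        else:
            yield item

async def count(tag, n):
    for i in range(n):
        await asyncio.sleep(0)
        yield (tag, i)

async def main():
    return [item async for item in merge(count("a", 2), count("b", 1))]

items = asyncio.run(main())
```

(A production version would also cancel the pumps if the consumer stops early.)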
On Wed, 18 Jul 2018 at 9:01 PM, James Stidard wrote:
Hi group,
It seems that Python docs don't make a recommendation about which library
to use for asynchronous access to files when using asyncio.
Is there a canonical recommendation?
Is it a good idea?
There's at least one 3rd party library popular enough to deserve its own
Stack Overflow tag, however
My 2c: don't use py3.4; in fact don't use 3.5 either :)
If you decide to support older Python versions, it's only fair that a
separate implementation may be needed.
Re: overall problem, why not try the following:
wrap your individual tasks in async def, where each staggers, connects and
resolves
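Something like this, where `connect_one` is a stand-in for the real connection call:

```python
import asyncio

async def connect_one(host, stagger):
    # hypothetical wrapper: stagger, "connect", resolve
    await asyncio.sleep(stagger)
    return f"connected:{host}"

async def main():
    hosts = ["a", "b", "c"]
    tasks = [asyncio.ensure_future(connect_one(h, i * 0.01))
             for i, h in enumerate(hosts)]
    # gather preserves argument order in its results
    return await asyncio.gather(*tasks)

results = asyncio.run(main())
```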
My 2c after careful reading:
restarting tasks automatically (custom nursery example) is quite questionable:
* it's unexpected
* it's not generally safe (argument reuse, side effects)
* user's coroutine can be decorated to achieve the same effect
I'd say just remove this, it's not relevant to your
Perhaps it's good to distinguish between graceful shutdown signal
(cancel all head/logical tasks, or even all tasks, let finally blocks
run) and hard stop signal.
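The graceful variant can be sketched as: cancel all tasks, then gather them so their finally blocks actually run (worker names are mine):

```python
import asyncio

log = []

async def worker(name):
    try:
        await asyncio.sleep(3600)
    finally:
        # graceful shutdown: cancellation still runs cleanup
        log.append(f"{name} cleaned up")

async def main():
    tasks = [asyncio.ensure_future(worker(n)) for n in ("a", "b")]
    await asyncio.sleep(0.01)
    for t in tasks:           # the "cancel all logical tasks" path
        t.cancel()
    # let finally blocks run; collect the CancelledErrors
    await asyncio.gather(*tasks, return_exceptions=True)

asyncio.run(main())
```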
In the past, in synchronous code, I've used the following paradigm:

from signal import alarm

def custom_signal():
    alarm(5)  # escalate to a hard stop if graceful shutdown hangs
    raise KeyboardInterrupt()
Hi Ludovic,
I believe it's relatively straightforward to implement the core
functionality, if you can at first reduce it to:
* allow only one coro to wait on lock at a given time (i.e. one user
per process / event loop)
* decide explicitly if you want other coros to continue (I assume so,
as
Let me try to answer the question behind the question.
Like any code validation, it's a healthy mix of:
* unit tests [perhaps with
mock.patch_all_known_blocking_calls(side_effect=Exception)]
* good judgement [open("foo").read() technically blocks, but only a
problem on network filesystems]
*
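One more runtime option worth listing: asyncio's debug mode flags callbacks that block the loop for longer than `slow_callback_duration` (100 ms by default); a sketch:

```python
import asyncio
import logging
import time

warnings = []
handler = logging.Handler()
handler.emit = lambda record: warnings.append(record.getMessage())
logging.getLogger("asyncio").addHandler(handler)

async def main():
    time.sleep(0.2)  # a blocking call hiding inside a coroutine

# debug mode logs "Executing <Task ...> took 0.2xx seconds"
asyncio.run(main(), debug=True)
```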
I suppose the websocket case ought to follow conventions similar to kernel
TCP API where `close` returns immediately but continues to send packets
behind the scenes. It could look something like this:
with move_on_after(10):
    await get_ws_message(url)

async def get_ws_message(url):
    ...
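The "close returns immediately, keeps sending behind the scenes" convention can be sketched with a toy class (not a real websocket API):

```python
import asyncio

class ToyWS:
    """close() returns immediately; flushing continues in the background."""
    def __init__(self):
        self.flushed = False
        self._bg = None

    def close(self):
        # like kernel TCP close: schedule the flush, return right away
        self._bg = asyncio.ensure_future(self._flush())

    async def _flush(self):
        await asyncio.sleep(0.01)  # pretend to send remaining frames
        self.flushed = True

async def main():
    ws = ToyWS()
    ws.close()
    immediately = ws.flushed   # False: close() did not wait
    await ws._bg               # background flush still completes
    return immediately, ws.flushed

immediately, flushed = asyncio.run(main())
```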
Hi Laurent,
I'm still a dilettante, so take my comments with a grain of salt:
1. Target Python 3.6 only.
(i.e. drop 3.5; look at 3.7 obv, but you want users now)
(i.e. forget `yield from`, no one will remember/get it next year)
(if 2.7 or 3.3 must be supported, provide synch package)
> On Tue, Jul 4, 2017 at 1:04 PM Dima Tisnek <dim...@gmail.com> wrote:
>>
>> Come to think of it, what sane tests need is a custom event loop or clever
>> mocks around asyncio.sleep, asyncio.Condition.wait, etc. So that code under
>> test never sleeps.
>>
>
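For instance, a sketch where the code under test never actually sleeps (`slow_operation` is a made-up stand-in):

```python
import asyncio
from unittest import mock

async def slow_operation():
    # code under test: would normally take an hour
    await asyncio.sleep(3600)
    return "done"

async def no_sleep(delay, result=None):
    return result  # resolve instantly instead of sleeping

with mock.patch("asyncio.sleep", no_sleep):
    result = asyncio.run(slow_operation())
```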
>>>>>> for it. Hmm, README is pretty empty but we do use the library for
>>>>>> documenting aio-libs and aiohttp [2] itself
>>>>>>
>>>>>> We use ".. comethod:: connect(request)" for method and "cofunction"
com> wrote:
>
>> On Jul 1, 2017, at 6:49 AM, Dima Tisnek <dim...@gmail.com> wrote:
>>
>> There's an academic publication from Microsoft where they built a runtime
>> that would run each test really many times, where scheduler is rigged to
>> order runnable
Hi Chris,
This specific test is easy to write (mock first to return a resolved
future, 2nd to block and 3rd to assert False)
OTOH complexity of the general case is unbounded and generally exponential.
It's akin to testing multithreaded code.
(There's an academic publication from Microsoft where
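The described test might look like this; `drain` is a hypothetical stand-in for the code under test:

```python
import asyncio
from unittest import mock

async def drain(fetch):
    # hypothetical code under test: awaits fetch() in a loop
    while True:
        await fetch()

async def main():
    loop = asyncio.get_event_loop()
    first = loop.create_future()
    first.set_result(None)         # 1st call: already-resolved future
    second = loop.create_future()  # 2nd call: blocks forever
    # a 3rd call raises, failing the test outright
    fetch = mock.Mock(side_effect=[first, second, AssertionError("3rd call")])
    task = asyncio.ensure_future(drain(fetch))
    await asyncio.sleep(0.01)
    stuck = not task.done()        # consumed `first`, now parked on `second`
    task.cancel()
    return stuck

stuck = asyncio.run(main())
```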
Hi all,
I'm working to improve async docs, and I wonder if/how async methods
ought to be marked in the documentation, for example
library/async-sync.rst:
""" ... It [lock] has two basic methods, `acquire()` and `release()`. ... """
In fact, these methods are not symmetric: the former is
- self.cond.wait()
+ await self.cond.wait()
I've no tests for this :P
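For the record, the asymmetry in asyncio.Lock, runnable:

```python
import asyncio

async def main():
    lock = asyncio.Lock()
    await lock.acquire()   # coroutine: must be awaited
    lock.release()         # plain method: awaiting it would be an error
    # the idiomatic form hides the asymmetry:
    async with lock:
        pass
    return lock.locked()

held = asyncio.run(main())
```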
On 26 June 2017 at 21:37, Dima Tisnek <dim...@gmail.com> wrote:
> Chris, here's a simple RWLock implementation and analysis:
>
> ```
> import asyncio
>
>
> class RWLock:
> def __ini
ll() makes real-life use O(N^2), N being the number of
simultaneous write lock requests
Feel free to use it :)
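Since the snippet is truncated above, here is my own reconstruction in the same spirit (not the original code): notify_all() wakes every waiter on each release, which is where the O(N^2) cost for N simultaneous write lock requests comes from.

```python
import asyncio

class RWLock:
    """Naive sketch: readers share, a writer gets exclusivity."""

    def __init__(self):
        self._cond = asyncio.Condition()
        self._readers = 0
        self._writer = False

    async def acquire_read(self):
        async with self._cond:
            await self._cond.wait_for(lambda: not self._writer)
            self._readers += 1

    async def release_read(self):
        async with self._cond:
            self._readers -= 1
            self._cond.notify_all()  # wakes *all* waiters: hence O(N^2)

    async def acquire_write(self):
        async with self._cond:
            await self._cond.wait_for(
                lambda: not self._writer and self._readers == 0)
            self._writer = True

    async def release_write(self):
        async with self._cond:
            self._writer = False
            self._cond.notify_all()

async def demo():
    rw = RWLock()
    await rw.acquire_read()
    await rw.acquire_read()   # a second reader does not block
    await rw.release_read()
    await rw.release_read()
    await rw.acquire_write()  # writer proceeds once readers are gone
    held = rw._writer
    await rw.release_write()
    return held

writer_held = asyncio.run(demo())
```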
On 26 June 2017 at 20:21, Chris Jerdonek <chris.jerdo...@gmail.com> wrote:
> On Mon, Jun 26, 2017 at 10:02 AM, Dima Tisnek <dim...@gmail.com> wrote:
>> Chris, comi
Thanks Yuri for quick reply.
http://bugs.python.org/issue30773 created :)
On 26 June 2017 at 19:55, Yury Selivanov wrote:
>
>> On Jun 26, 2017, at 1:53 PM, Andrew Svetlov wrote:
>>
>> IIRC gather collects coroutines in arbitrary order, maybe it's
O.T.: when it comes to directories, you probably want hierarchical
locks rather than RW.
On 26 June 2017 at 11:28, Chris Jerdonek <chris.jerdo...@gmail.com> wrote:
> On Mon, Jun 26, 2017 at 1:43 AM, Dima Tisnek <dim...@gmail.com> wrote:
>> Perhaps you can share your use-case
Looks like a bug in the `ssl` module, not `asyncio`.
Refer to https://github.com/openssl/openssl/issues/710
IMO `ssl` module should be prepared for this.
I'd say post a bug to cpython and see what core devs have to say about it :)
Please note exact versions of python and openssl ofc.
my 2c:
Hi list,
sorry for being a bit late, but I only discovered that post recently...
I've a couple concerns:
1. update
now that 3.6 is out, the post should be rewritten or a new one made, mainly
because this post is what inquisitive minds find...
2. asyncio for whom?
I've presented asyncio tutorial of