Re: [python-tulip] asyncio.wait(FIRST_COMPLETED) returns more than one completion - 3.4rc2

2014-03-19 Thread Imran Geriskovan
A typical scenario...
Stateful resources recycled among multiple consumers.
The shortcut above looks nice on screen.
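
For context, a minimal sketch of the call in the subject line (tasks stands
for some list of tasks; this runs inside a coroutine):

from asyncio import wait, FIRST_COMPLETED

# 'done' can hold more than one task: every task that has finished by
# the time the waiter wakes up is reported together.
done, pending = yield from wait(tasks, return_when=FIRST_COMPLETED)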
Imran


[python-tulip] TLS/SSL Wrapping

2014-04-05 Thread Imran Geriskovan
Hi,
Normal sockets can be created, used for some communication, and then
wrapped with an SSLContext to proceed in encrypted mode.
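
In blocking code that sequence looks roughly like this (a sketch; the host,
port and negotiation are placeholders):

import socket
import ssl

sock = socket.create_connection(('example.com', 5222))
# ... some plaintext negotiation over 'sock', e.g. a STARTTLS exchange ...

ctx = ssl.create_default_context()
sock = ctx.wrap_socket(sock, server_hostname='example.com')
# From here on, the same connection proceeds encrypted.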

How can such a sequence be executed using asyncio?
The mode of communication is determined at creation time (i.e. when calling
'open_connection').
Is there any way to switch to TLS/SSL after that?

Regards,
Imran


Re: [python-tulip] TLS/SSL Wrapping

2014-04-06 Thread Imran Geriskovan
Thanks for the guidance.

selector_events.py : _SelectorSslTransport : __init__ wraps the socket
and postpones the handshake. There is not much in asyncio terms here..

It then delegates to '_on_handshake', which has most of the asyncio stuff..
However, the 'under the hood' modification you've pointed to is probably
beyond my abilities.

Actually I'm still thinking in terms of Streams, because they make life
easier.

Parallel to the partitioning above, I guess we need something like this
(a hypothetical API):

sstream = ssl_wrap(stream)
yield from sstream.do_handshake()

Regards,
Imran


[python-tulip] TLS handshake exception. Bug/Comments/HeartBleed?

2014-05-05 Thread Imran Geriskovan
This code snippet throws the exception below
(ip, port and host are set elsewhere):

from asyncio import open_connection
from socket import AF_INET
from ssl import create_default_context

ctx = create_default_context()
co = open_connection(ip, port, family=AF_INET, ssl=ctx,
                     server_hostname=host)
yield from co

Throws:
- ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed
(_ssl.c:598)
- Or sometimes Unknown CA

I'm using Debian Unstable. In fact, this worked fine before HeartBleed.
After the disclosure of HeartBleed I began to get this exception for some
sites, and now I get it for all sites. The situation is the same for both
3.4 and 3.4.1rc1.

Could it be a combination of:
- Mass certificate renewals around the net
- Lack of proper CA certificates on Debian during this period
- Bugs related to updates of OpenSSL, GnuTLS, etc.
- Bugs between Python and OpenSSL
- An asyncio issue
- etc, etc.. ?

Your comments and experiences are welcome..
Regards, Imran


Traceback (most recent call last):
  File "/_/_/_/xyz.py", line 666, in Open
    return (yield from co)
  File "/usr/lib/python3.4/asyncio/streams.py", line 61, in open_connection
    lambda: protocol, host, port, **kwds)
  File "/usr/lib/python3.4/asyncio/base_events.py", line 437, in create_connection
    sock, protocol_factory, ssl, server_hostname)
  File "/usr/lib/python3.4/asyncio/base_events.py", line 453, in _create_connection_transport
    yield from waiter
  File "/usr/lib/python3.4/asyncio/futures.py", line 348, in __iter__
    yield self  # This tells Task to wait for completion.
  File "/usr/lib/python3.4/asyncio/tasks.py", line 370, in _wakeup
    value = future.result()
  File "/usr/lib/python3.4/asyncio/futures.py", line 243, in result
    raise self._exception
  File "/usr/lib/python3.4/asyncio/selector_events.py", line 598, in _on_handshake
    self._sock.do_handshake()
  File "/usr/lib/python3.4/ssl.py", line 805, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:598)


Re: [python-tulip] TLS handshake exception. Bug/Comments/HeartBleed?

2014-05-06 Thread Imran Geriskovan
Honestly, I do expect some misconfiguration on my side, but I couldn't track
it down to its source. So I want to find out whether I'm the only one with
such a problem.

The machine is an up-to-date Debian Sid with all fresh updates.
sources.list: deb http://ftp.debian.org/debian unstable main contrib non-free

It has a typical installation with no customizations of Python,
OpenSSL, ca-certificates, etc. ca-certificates in particular is up to date.

A site example:
www.linkedin.com:443: can connect
static.licdn.com:443: cannot connect

Regards, Imran


Re: [python-tulip] TLS handshake exception. Bug/Comments/HeartBleed?

2014-05-06 Thread Imran Geriskovan
Thank you. Results for
echo | openssl s_client -CApath /etc/ssl/certs/ -connect static.licdn.com:443 | grep 'Verify return code':

www.linkedin.com:443: OK
static.licdn.com:443: Verify return code: 20 (unable to get local
issuer certificate)!
That's parallel to what asyncio also says.

Interestingly, Firefox (Iceweasel) does not complain with its usual
"This Connection is Untrusted" page when opening
https://static.licdn.com.
Is it a MITM setup that is detected by openssl/asyncio but not by Firefox?

Some other sites:
mail.google.com:443: OK for now, but was NOK for a while.
www.reddit.com:443: NOK. Firefox complains too,
though it says the cert is valid for *.akamaihd.net,
*.akamaihd-staging.net and a248.e.akamai.net.
An optimistic possibility may be a misconfigured CDN network plus the wave
of certificate renewals..

Imran


[python-tulip] SSL HandShake / Dynamic Certificates

2014-07-30 Thread Imran Geriskovan
One can start an SSL server with a static certificate like this
(handle, host and port are defined elsewhere):

from asyncio import async, start_server
from socket import AF_INET
from ssl import create_default_context, Purpose

ctx = create_default_context(Purpose.CLIENT_AUTH)
ctx.load_cert_chain('pem.crt')
async(start_server(handle, host, port, family=AF_INET, limit=8192, ssl=ctx))

However, if you need to use dynamic certificates, you must have
asynchronous access to the SSL handshake. But this is not currently
supported by asyncio.

I also remember that about 3 months ago we had some discussion
about creating a plain Stream and, at some point in the communication,
switching to SSL, which again needs an async SSL handshake.

Are there any developments towards supporting this capability?

Regards, Imran


Re: [python-tulip] SSL HandShake / Dynamic Certificates

2014-07-30 Thread Imran Geriskovan
 What is a dynamic certificate?
 Victor

Certificates are not dynamic after all.

What I mean is the SSL server providing different certificates to
different accepted clients. Pre-asyncio-era code is here:

from ssl import create_default_context, Purpose

ctx = create_default_context(Purpose.CLIENT_AUTH)
ctx.load_cert_chain('pem1.crt')  # or 'pem99.crt'
s = ctx.wrap_socket(s, server_side=True, do_handshake_on_connect=False)
...
s.do_handshake()


Anyway..
The request is to SSL-wrap a stream (sort of: switch to SSL mode after
creation) and have separate access to the handshake in asyncio.
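
For the per-client certificate part alone (without async handshake access),
the stdlib's SNI callback may already be enough; a minimal sketch, assuming
clients send a server name (the file and host names are placeholders):

from ssl import create_default_context, Purpose

def pick_cert(ssl_sock, server_name, initial_ctx):
    # Runs in the middle of the handshake, once the client's SNI is known.
    if server_name == 'special.example.org':
        alt = create_default_context(Purpose.CLIENT_AUTH)
        alt.load_cert_chain('pem99.crt')
        ssl_sock.context = alt  # swap in the per-client certificate

ctx = create_default_context(Purpose.CLIENT_AUTH)
ctx.load_cert_chain('pem1.crt')  # default certificate
ctx.set_servername_callback(pick_cert)

It still does not expose the handshake itself, though.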

Regards,


Re: [python-tulip] Is async object creation supported?

2015-05-16 Thread Imran Geriskovan
 So asynchronous object creation should be done by a factory coroutine,
 and not by calling the standard constructor machinery.

It may be done by a factory, by a second async helper method, or by any
other creative means. However, they all make the code more bureaucratic.
The elegance of a single-point __init__ is lost.
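
For reference, a minimal sketch of the factory-coroutine pattern being
suggested (fetch_data is a hypothetical coroutine):

import asyncio

class X:
    def __init__(self, data):
        # __init__ stays synchronous; it only stores the result.
        self.data = data

    @classmethod
    @asyncio.coroutine
    def create(cls):
        data = yield from fetch_data()  # hypothetical async source
        return cls(data)

# usage, inside a coroutine:  x = yield from X.create()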

Maybe the developers can consider allowing

x = yield from X()

as the async style will become more widespread with Python 3.5+
and resumable functions in C++.

Regards,
Imran


[python-tulip] Is async object creation supported?

2015-05-16 Thread Imran Geriskovan
Or am I doing something wrong?
X does not run. Y runs.
Thanks for comments..

x = yield from X()  # Does not work
y = Y()             # Works..
yield from y.init()

class X:
    def __init__(self):
        # 'yield from' makes __init__ a generator function, so calling X()
        # raises: TypeError: __init__() should return None, not 'generator'
        yield from some_async_code()

class Y:
    def __init__(self):
        some_sync_code()

    def init(self):
        yield from some_async_code()

(some_async_code / some_sync_code stand in for the actual code.)


Re: [python-tulip] Re: Cost of async call

2015-11-15 Thread Imran Geriskovan
Using the code below, I got ratios around 2.5-2.7.
Not so bad if used carefully.

I wonder how this figure will change in
upcoming releases. The reason I asked
the question was to start some discussion
about the ingredients of that cost.

In short: should we invest in it?

Regards,
Imran

from asyncio import get_event_loop
from datetime import datetime

now = datetime.now

def foo1():        # the plain function, called directly below
    pass

async def foo2():  # the coroutine, awaited below
    pass

async def main():

    t1 = now()
    for i in range(300):
        foo1()
    tot1 = now() - t1

    t1 = now()
    for i in range(300):
        await foo2()
    tot2 = now() - t1

    print(tot2 / tot1)

get_event_loop().run_until_complete(main())


On 11/15/15, Luca Sbardella <luca.sbarde...@gmail.com> wrote:
>
> On Thursday, November 12, 2015 at 5:16:22 PM UTC, Imran Geriskovan wrote:
>>
>> What is the cost/overhead difference
>> between these calls?
>>
>> await foo1()
>> foo2()
>>
>> async def foo1():
>>     pass
>>
>> def foo2():
>>     pass
>>
>>
> Double the time
> https://gist.github.com/lsbardel/cbe62fa5218ee0252188
>
>


Re: [python-tulip] Asynchronous console and interfaces

2015-12-03 Thread Imran Geriskovan
>> Regarding the "stream style", I can share some of my
>> experience:
>>
>> For code below, other than some limited portion
>> and awaits, can you see any line specific to asyncio? No?
>>
>> Well that's the point. With some little search replace,
>> and tweaks it can easily be transformed into
>> equivalent multithreaded version. Even to a C++ version.

>   3. implementing a protocol completely using asyncio coroutines and
> streams makes the implementation itself more readable, so it can be a goal
> in itself.

At the end, you got the idea.
It is readable, portable, generic, etc, etc..
"This" is what should be underlined in the asyncio documentation,
not the protocols, callbacks, futures, transports...


Re: [python-tulip] Add Timeout class to asyncio

2015-12-28 Thread Imran Geriskovan
Async/Sync Context Managers are especially neat for
Stream Style development. They make organization like
this possible and comprehensible, e.g.:

async with resp.send():
    with Page(resp, title):
        with Form(resp, postpath):
            self.menu(resp)
            await nav(resp, dpath, fname)
        self.show_clipboard(resp)

Maybe we can broaden the discussion to
extended use of Async Context Managers (CMs)
with asyncio.

A Timeout CM may be a good start.
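
A minimal sketch of how such a CM could work, assuming it cancels the
surrounding task from a call_later callback (my guess at the mechanics,
not the actual proposal):

import asyncio

class Timeout:

    def __init__(self, timeout, *, loop=None):
        self._timeout = timeout
        self._loop = loop or asyncio.get_event_loop()
        self._timed_out = False

    def __enter__(self):
        self._task = asyncio.Task.current_task(loop=self._loop)
        self._handle = self._loop.call_later(self._timeout, self._cancel)
        return self

    def __exit__(self, exc_type, exc, tb):
        self._handle.cancel()
        if self._timed_out and exc_type is asyncio.CancelledError:
            # The cancellation came from us: report it as a timeout.
            raise asyncio.TimeoutError from None
        return False

    def _cancel(self):
        self._timed_out = True
        self._task.cancel()

Used as in the quoted example below: the cancellation surfaces inside the
awaited call and is converted to TimeoutError on exit.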

Regards,
Imran


On 12/28/15, Andrew Svetlov  wrote:
> with Timeout(1.5):
>     # do long running task
>     await asyncio.sleep(1)
>
> I'll be happy to make a Pull Request after getting general agreement.


[python-tulip] Async Process

2016-01-31 Thread Imran Geriskovan
Below you can find an async SubProcess class which
derives from the Stream class sent previously.

Features:
1) Process pipes (see the example)
2) Context-manager style usage (see the example)

I would like to hear your experiences regarding
subprocess usage patterns. I'll be glad if a similar
approach gets included in the standard library.

Example:
This is an HTTP handler which dumps an HTML-formatted
man page into a chunked stream.
** Note the piping among the processes.

from re import sub  # used for the link rewriting below

async def manpage(stream, f):
    async with await create_proc(
            'groff -mandoc -Thtml -P -l -P -D -P /tmp/', 'rw') as groff:
        async with await create_proc('zcat ' + f, outstream=groff):
            while 1:
                r = await groff.readline()
                if not r:
                    break
                # Format links to other man pages
                r = sub(b'(.+?)\\((.+?)\\)', b'\\1(\\2)', r)
                await stream.flushchunk(r)


Source:
from asyncio import create_subprocess_exec, get_event_loop
from asyncio.subprocess import PIPE

from lib.setup import bsize
from lib.stream import Stream

async def create_proc(cmd, mode='r', outstream=None):
    stdin = PIPE if 'w' in mode else None
    stdout = PIPE if outstream or ('r' in mode) else None
    if type(cmd) is str:
        clist = cmd.split()
    else:
        clist = cmd
    aproc = await create_subprocess_exec(*clist, stdin=stdin,
                                         stdout=stdout, limit=bsize)
    return Proc(aproc, outstream)


class Proc(Stream):

    def __init__(self, aproc, outstream):
        Stream.__init__(self, aproc.stdout, aproc.stdin)
        self.aproc = aproc
        self.outstream = outstream

    async def readwrite(self):
        while 1:
            r = await self.read(bsize)
            if r:
                self.outstream.write(r)
            else:
                break
        self.outstream.close()

    async def __aenter__(self):
        if self.outstream:
            get_event_loop().create_task(self.readwrite())
        return self

    async def __aexit__(self, *args):
        if self.aproc.stdout:
            await self.read()  # ??
        await self.aproc.wait()


[python-tulip] Async Constructor (__ainit__)

2016-01-31 Thread Imran Geriskovan
Currently, class instances cannot
be created asynchronously in a direct way.

Yes, there are other means for async
creation (as we have discussed before:
factories, making __new__ a coroutine, etc.).

However, having all the creation logic under
the class definition, similar to
__init__, is more elegant.
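
As a minimal sketch of the idea, one can emulate an __ainit__ today with a
metaclass whose __call__ is a coroutine (all names here are illustrative,
not an actual proposal):

import asyncio

class AsyncInit(type):
    async def __call__(cls, *args, **kwargs):
        obj = cls.__new__(cls)
        await obj.__ainit__(*args, **kwargs)  # hypothetical async constructor
        return obj

class Resource(metaclass=AsyncInit):
    async def __ainit__(self, delay):
        await asyncio.sleep(delay)  # stands in for real async setup
        self.ready = True

async def main():
    r = await Resource(0.1)
    assert r.ready

asyncio.get_event_loop().run_until_complete(main())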

Regards,
Imran


Re: [python-tulip] Async Constructor (__ainit__)

2016-01-31 Thread Imran Geriskovan
Did you mean this?:

aresult = await someafunc()                 # Good
aobject = await AClass()                    # Bad!!
bobject = await somefuncreturningBObject()  # Good???

Interesting. What is the difference?
Is it Python's duty to make "Bad!!"-style
anti-patterns impossible?

I think microblog, sqlite, etc. have no relation to the subject.


Re: [python-tulip] Process + Threads + asyncio... has sense?

2016-04-25 Thread Imran Geriskovan
On 4/25/16, cr0hn cr0hn  wrote:
> I uploaded as GIST my PoC code, if anyone would like to see the code or
> send any improvement:
> https://gist.github.com/cr0hn/e88dfb1fe8ed0fbddf49185f419db4d8
> Regards,

Thanks for the work.

>> 2) You can't use any blocking call anywhere in an async server.
>> If you do, your WHOLE server is dead in the water until that
>> blocking call returns. Do you think that my design is faulty?
>> Then look at the SSH/TLS implementation of asyncio itself.
>> During the handshake, you are at the mercy of the openssh library.
>> Thus, it is impossible to build a medium- to high-load TLS server.
>> To do that safely and appropriately you need an asyncio
>> implementation of openssh!

It's openssl. Not ssh... Sorry..


Re: [python-tulip] Process + Threads + asyncio... has sense?

2016-04-18 Thread Imran Geriskovan
On 4/18/16, Gustavo Carneiro  wrote:
> I don't think you need the threads.
> 1. If your tasks are I/O bound, coroutines are a safer way to do things,
> and probably even have better performance;

Thread vs Coroutine context switching is an interesting topic.
Do you have any data for comparison?

Regards,
Imran


Re: [python-tulip] Process + Threads + asyncio... has sense?

2016-04-18 Thread Imran Geriskovan
>>> I don't think you need the threads.
>>> 1. If your tasks are I/O bound, coroutines are a safer way to do things,
>>> and probably even have better performance;
>>
>> Thread vs Coroutine context switching is an interesting topic.
>> Do you have any data for comparison?

> My 2cts:
> OS native (= non-green) threads are an OS scheduler driven, preemptive
> multitasking approach, necessarily with context switching overhead that
> is higher than a cooperative multitasking approach like asyncio event loop.
> Note: that is Twisted, not asyncio, but the latter should behave the
> same qualitatively.
> /Tobias

Linux OS threads come with an 8MB stack per thread, plus the switching
costs you mentioned.

A) Python threads are not real threads. It multiplexes "Python threads"
on a single OS thread. (Guido, can you correct me if I'm wrong,
and can you provide some info on the multiplexing/context switching of
"Python threads"?)

B) Whereas asyncio multiplexes coroutines on a "Python thread"?

The question is "Which one is more effective?". The answer is
of course dependent on the use case.

However, as a heavy user of coroutines, I'm beginning to think of going
back to "Python threads".. Anyway, that's a personal choice.

Now let's clarify the advantages and disadvantages of A and B..

Regards,
Imran


Re: [python-tulip] Process + Threads + asyncio... has sense?

2016-04-19 Thread Imran Geriskovan
> This is a very simple example, but it illustrates some of the problems with
> threading vs coroutines:
>1. With threads you need more locks, and the more locks you have: a) the
> lower the performance, and b) the greater the risk of introducing
> deadlocks;
> So please keep in mind that things are not as black and white as "which is
> faster".  There are other things to consider.


While handling mutually exclusive multithreaded I/O,
you don't need any locks. Aside from generalist advice,
the reasons for thinking of going back to threads are:

1) Awaits are viral. Async programming is kind of all-or-nothing:
you need all your I/O libraries to be async.

2) You can't use any blocking call anywhere in an async server.
If you do, your WHOLE server is dead in the water until that
blocking call returns. Do you think that my design is faulty?
Then look at the SSH/TLS implementation of asyncio itself.
During the handshake, you are at the mercy of the openssh library.
Thus, it is impossible to build a medium- to high-load TLS server.
To do that safely and appropriately you need an asyncio
implementation of openssh!

3) I appreciate the core idea of asyncio. However, it is not cheap.
It hardly justifies a whole new paradigm when you could simply
drop the "await"s, run the code as multithreaded, and preserve
compatibility with all the old libraries. And if you never bought into
the inverted async patterns, you also preserve your chances of
migrating to any other classical language.

4) The major downside of the thread approach is memory consumption:
8MB per thread on Linux. Other than that, OS threads are cheap
on Linux. (Windows is another story.) If your use case can afford
it, why not use it. (See the sketch below.)
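
For what it's worth, the 8MB is only the default stack reservation, and it
can be shrunk before spawning; a minimal sketch (worker is a placeholder):

import threading

def worker():
    pass  # placeholder for the real per-thread work

# Must be called before the threads are created: reserve 256 KiB
# per thread instead of the platform default (~8 MB on Linux).
threading.stack_size(256 * 1024)

threads = [threading.Thread(target=worker) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()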

Returning to the original subject of this message thread:
as cr...@cr0hn.com proposed, certain combinations of processes,
threads and coroutines definitely make sense..

Regards,


Re: [python-tulip] Curio

2016-10-23 Thread Imran Geriskovan
>> I'd be happy to see Curio in std library..

> Dave has repeatedly stated that he's not interested in maintaining
> Curio long term or keeping the API stable. That means it's not going
> to happen. It might make more sense to propose carefully designed
> additions to asyncio that aim to fill in the gaps you've found by
> using curio. This should focus on API functionality; the performance
> is being worked on separately, and there's also uvloop.

Providing direct async versions of blocking operations is the
key. That's it. Curio just does that. At the surface it's that simple.
Is providing this in asyncio possible?

I've carried around some hacks for asyncio streams. However,
covering async versions of all blocking use cases at some
point required dirty hacks reaching into asyncio
internals. Frankly, I've given up. I don't think a carefully designed
patch can solve it.

I've ended up with this:
async programming is good, as long as it is the mirror
image of the blocking version.

Inverting the control and then trying to get it back is not
a good design, as I found when I tried it once..


[python-tulip] Curio

2016-10-23 Thread Imran Geriskovan
As I noted in my previous posts in this group,
I mostly try to keep async code at parity with
blocking code. (Reasons: ease of streams-based
development, easy migration to compiled langs,
threads, etc, etc..)

For a couple of months I've been playing with
Curio, to which I'm now a total convert.

For async code based on Curio, you can almost
drop all "await/async" keywords with some
minor manipulations and, bang: you get a working
blocking version. Or you can go from the blocking
version to async, as the sketch below shows.
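
A sketch of what I mean, based on Curio's documented echo-server example
(the host and port are placeholders):

import curio

async def echo_client(client, addr):
    # Strip the async/await keywords (and swap curio.run for a plain
    # accept loop with threads) and this is the blocking-sockets version.
    async with client:
        while True:
            data = await client.recv(1024)
            if not data:
                break
            await client.sendall(data)

if __name__ == '__main__':
    curio.run(curio.tcp_server('', 25000, echo_client))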

Even better, you can mix async and blocking code
in hybrid combinations to get the best of both worlds.
(E.g. when using blocking libraries, performance
critical routines, etc..)

With enough effort this can also be done with
asyncio. However, with Curio this approach
is directly supported out of the box. And it's
a compact library without any bells and whistles.
It has nothing more than necessary to get
the job done.

Performance of pure async code is about 2x with
respect to asyncio, and your mileage may be
extended with hybrid blocking combinations.

I'd be happy to see Curio in std library..

Regards,
Imran


Re: [python-tulip] Curio

2016-10-23 Thread Imran Geriskovan
>> As I noted in my previous posts in this group,
>> I mostly try to keep async code at parity with
>> blocking code. (Reasons: ease of streams-based
>> development, easy migration to compiled langs,
>> threads, etc, etc..)
>>
>> For async code based on Curio, you can almost
>> drop all "await/async" keywords with some
>> minor manipulations and, bang: you get a working
>> blocking version. Or you can go from the blocking
>> version to async.

> Thanks for sharing your experiences.

You're welcome.

> So, it's the same motivation as for MicroPython's uasyncio which was
> presented on this list previously.

I think it is good to see such a class of async implementations.
I could even release my own version of hacks for running asyncio
in the same spirit. However, I found Curio much more elegant,
so I dumped most of them.

Once there is some settlement about the direction, with such async
engines around, I'd expect the Python standard library
to support a similar approach.

It is about much more than an async vs sync write().
Regards,
Imran