Re: [python-tulip] Process + Threads + asyncio... has sense?

2016-04-19 Thread Tobias Oberstein

Sorry, I should have been more explicit:

With Python (both CPython and PyPy), the least overhead / best 
performance (throughput) approach to network servers is:


Use a multi-process architecture with shared listening ports (Linux 
SO_REUSEPORT), with each process running an event loop (asyncio/Twisted).
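A rough sketch of that layout (assuming Linux 3.9+ for SO_REUSEPORT and a modern asyncio; the port number, the `make_reuseport_socket` helper and the trivial echo handler are invented for illustration):

```python
import asyncio
import multiprocessing
import socket

def make_reuseport_socket(port):
    """Create a listening TCP socket that can share `port` with other processes."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    sock.bind(("127.0.0.1", port))
    sock.listen(128)
    return sock

async def handle(reader, writer):
    # Trivial echo handler standing in for real application logic.
    data = await reader.read(1024)
    writer.write(data)
    await writer.drain()
    writer.close()

def worker(port):
    # Each worker process runs its own event loop on its own shared socket;
    # the kernel load-balances incoming connections across the processes.
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    loop.run_until_complete(
        asyncio.start_server(handle, sock=make_reuseport_socket(port)))
    loop.run_forever()

if __name__ == "__main__":
    for _ in range(multiprocessing.cpu_count()):
        multiprocessing.Process(target=worker, args=(8888,), daemon=True).start()
```

Because each process has its own GIL, this scales across cores without any shared-state locking between workers.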


I don't recommend using OS threads (of course) ;)

On 19.04.2016 at 23:51, Gustavo Carneiro wrote:

On 19 April 2016 at 22:02, Imran Geriskovan wrote:

>> A) Python threads are not real threads. It multiplexes "Python Threads"
>> on a single OS thread. (Guido, can you correct me if I'm wrong,
>> and can you provide some info on multiplexing/context switching of
>> "Python Threads"?)

> Sorry, you are wrong. Python threads map 1:1 to OS threads. They are as
> real as threads come (the GIL notwithstanding).

Ok then. Just to confirm for cpython:
- Among these OS threads, only one thread can run at a time due to GIL.

A thread releases the GIL (thus allowing any other thread to begin
execution) when waiting for blocking I/O. (http://www.dabeaz.com/python/GIL.pdf)
This is similar to what we do in asyncio with awaits.

Thus, multi-threaded I/O is the next best thing if we do not use
asyncio.

Then the question is still this: Which one is cheaper?
Thread overheads or asyncio overheads.


IMHO, that is the wrong question to ask; it doesn't matter that much.
What matters most is which one is safer.  Threads appear deceptively
simple... that is up to the point where you trigger a deadlock and your
whole application just freezes as a result.  Because threads need lots
and lots of locks everywhere.  Asyncio code also may need some locks,
but only a fraction, because for a lot of things you can get away with
not doing any locking.  For example, imagine a simple statistics class,
like this:

class MeanStat:
    def __init__(self):
        self.num_values = 0
        self.sum_values = 0

    def add_sample(self, value):
        self.num_values += 1
        self.sum_values += value

    @property
    def mean(self):
        return self.sum_values / self.num_values if self.num_values > 0 else 0


The code above can be used as is in asyncio applications.  You can call
MeanStat.add_sample() from multiple asyncio tasks at the same time
without any locking and you know the MeanStat.mean property will always
return a correct value.
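As a runnable illustration of that claim (repeating the class so the snippet is self-contained; the sampler tasks and their counts are made up for the demo), many concurrent asyncio tasks can hit the unlocked counters and the totals still come out exact, because add_sample() never awaits between the two increments:

```python
import asyncio

class MeanStat:
    def __init__(self):
        self.num_values = 0
        self.sum_values = 0

    def add_sample(self, value):
        # No lock: the two increments run atomically with respect to other
        # tasks, since a task can only be preempted at an await point.
        self.num_values += 1
        self.sum_values += value

    @property
    def mean(self):
        return self.sum_values / self.num_values if self.num_values > 0 else 0

async def sampler(stat, n):
    for _ in range(n):
        stat.add_sample(1)
        await asyncio.sleep(0)   # yield to the other tasks between samples

async def main():
    stat = MeanStat()
    # 10 tasks x 1000 samples, all interleaving on one event loop.
    await asyncio.gather(*(sampler(stat, 1000) for _ in range(10)))
    return stat

stat = asyncio.run(main())
print(stat.num_values, stat.mean)  # 10000 1.0
```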

However, if you try to do this in a threaded application without any
locking, you will get incorrect results (and what is annoying is that
you may not get incorrect results in development, but only in
production!), because one thread may be calling MeanStat.mean while the
sum_values/num_values expression ends up being evaluated in the middle
of another thread adding a sample:

    def add_sample(self, value):
        self.num_values += 1
        # <-- switches to another thread here: num_values was
        #     updated, but sum_values was not!
        self.sum_values += value

The correct way to fix that code with threading is to add locks:

class MeanStat:
    def __init__(self):
        self.lock = threading.Lock()
        self.num_values = 0
        self.sum_values = 0

    def add_sample(self, value):
        with self.lock:
            self.num_values += 1
            self.sum_values += value

    @property
    def mean(self):
        with self.lock:
            return self.sum_values / self.num_values if self.num_values > 0 else 0
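A quick sketch of the locked version exercised from real OS threads (class repeated so the snippet is self-contained; the thread and sample counts are arbitrary). With the lock, the counters stay consistent no matter how the threads interleave:

```python
import threading

class MeanStat:
    def __init__(self):
        self.lock = threading.Lock()
        self.num_values = 0
        self.sum_values = 0

    def add_sample(self, value):
        # The lock makes the two increments one atomic unit across threads.
        with self.lock:
            self.num_values += 1
            self.sum_values += value

    @property
    def mean(self):
        with self.lock:
            return self.sum_values / self.num_values if self.num_values > 0 else 0

stat = MeanStat()
threads = [
    threading.Thread(target=lambda: [stat.add_sample(2) for _ in range(10000)])
    for _ in range(8)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(stat.num_values, stat.mean)  # 80000 2.0
```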

This is a very simple example, but it illustrates some of the problems
with threading vs coroutines:

1. With threads you need more locks, and the more locks you have: a)
the lower the performance, and b) the greater the risk of introducing
deadlocks;

2. If you /forget/ that you need locks in some place (remember that
most code is not as simple as this example), you get race conditions:
code that /seems/ to work fine in development, but behaves strangely in
production: strange values being computed, crashes, deadlocks.

So please keep in mind that things are not as black and white as "which
is faster".  There are other things to consider.

--
Gustavo J. A. M. Carneiro
Gambit Research
"The universe is always one step beyond logic." -- Frank Herbert




Re: [python-tulip] Process + Threads + asyncio... has sense?

2016-04-19 Thread Tobias Oberstein

On 19.04.2016 at 23:02, Imran Geriskovan wrote:

A) Python threads are not real threads. It multiplexes "Python Threads"
on a single OS thread. (Guido, can you correct me if I'm wrong,
and can you provide some info on multiplexing/context switching of
"Python Threads"?)



Sorry, you are wrong. Python threads map 1:1 to OS threads. They are as
real as threads come (the GIL notwithstanding).


Ok then. Just to confirm for cpython:
- Among these OS threads, only one thread can run at a time due to GIL.

A thread releases the GIL (thus allowing any other thread to begin
execution) when waiting for blocking I/O. (http://www.dabeaz.com/python/GIL.pdf)
This is similar to what we do in asyncio with awaits.

Thus, multi-threaded I/O is the next best thing if we do not use asyncio.

Then the question is still this: Which one is cheaper?
Thread overheads or asyncio overheads.



The overhead of cooperative multitasking is smaller, but for maximum 
performance you need to combine it with preemptive multitasking, 
because saturating modern hardware requires high IO concurrency.


(I am leaving out stuff like Linux AIO in this discussion)




Re: [python-tulip] Process + Threads + asyncio... has sense?

2016-04-18 Thread Tobias Oberstein

On 18.04.2016 at 21:33, Imran Geriskovan wrote:

On 4/18/16, Gustavo Carneiro  wrote:

I don't think you need the threads.
1. If your tasks are I/O bound, coroutines are a safer way to do things,
and probably even have better performance;


Thread vs Coroutine context switching is an interesting topic.
Do you have any data for comparison?


My 2cts:

OS native (= non-green) threads are an OS scheduler driven, preemptive 
multitasking approach, necessarily with context switching overhead that 
is higher than a cooperative multitasking approach like asyncio event loop.


E.g., context switching between threads involves saving and restoring the 
whole CPU core register set. OS native threads also involve bouncing 
back and forth between kernel- and userspace.


Practical evidence: name one high performance network server that is 
using threads (and only threads), and not some event loop thing;)


You want N threads/processes, where N is related to the number of cores 
and/or the effective IO concurrency, _and_ each thread/process runs an 
event loop thing. And because of the GIL, you want processes, not 
threads, on (C)Python.


The effective IO concurrency depends on the number of IO queues your 
hardware supports (the NICs or the storage devices). The IO queues 
should have affinity to the (nearest) CPU core on an SMP system also.


For network, I once did some experiments of how far Python can go. Here 
is Python (PyPy) doing 630k HTTP requests/sec (12.6 GB/sec) using 40 cores:


https://github.com/crossbario/crossbarexamples/tree/master/benchmark/web

Note: that is Twisted, not asyncio, but the latter should behave the 
same qualitatively.


Cheers,
/Tobias



Regards,
Imran





[python-tulip] AutobahnPython: deprecate Python 2 / asyncio? Any real-world use?

2016-01-25 Thread Tobias Oberstein

Hi,

AutobahnPython supports running on Python 2 with asyncio - more 
precisely, by using Trollius, a backport of asyncio to Python 2.


Same for txaio: https://github.com/crossbario/txaio#platform-support

Victor, the creator of Trollius, is thinking of deprecating Trollius and 
is asking whether there is _real-world_ use of Trollius in _applications_.


Please see the thread here:

https://groups.google.com/forum/#!topic/python-tulip/mecVwhnVP0A

Since AutobahnPython depends on Trollius for Py2/asyncio support, I am 
forwarding the question. If Trollius is deprecated, AutobahnPython will 
(sooner or later) deprecate supporting that combination as well.


If you have an app using the combination, now would be a good time to 
speak up;)


Cheers,
/Tobias


On 25.01.2016 at 22:06, Victor Stinner wrote:

Hi,

2016-01-25 21:57 GMT+01:00 Tobias Oberstein <tobias.oberst...@gmail.com>:

Autobahn, a WebSocket/WAMP library supports running on top of Twisted or
asyncio (and Trollius for Py2):


Yeah, it's mentioned in https://trollius.readthedocs.org/libraries.html

AutobahnPython, Pulsar and Tornado can be used with Trollius. Ok. But
I'm more interested in feedback from final users, users building
applications with all these pieces.

I'm not sure that Trollius is used in practice :-)

Victor





Re: [python-tulip] Re: Writing unit tests for user code

2014-07-25 Thread Tobias Oberstein

On 25.07.2014 at 13:21, Victor Stinner wrote:

2014-07-25 11:30 GMT+02:00 Luca Sbardella luca.sbarde...@gmail.com:

Pulsar has an asynchronous testing framework too

http://quantmind.github.io/pulsar/apps/test.html

It uses the unittest.TestCase class from the standard library but can handle
test functions which return generators or asyncio/trollius Futures.
It should work with twisted with the tx decorator documented here

http://quantmind.github.io/pulsar/tutorials/webmail.html


Cool. Maybe you should extract the testing framework into a separate
module so other projects can use it?


+1
That would be useful!



Victor





Re: [python-tulip] Writing unit tests for user code

2014-07-25 Thread Tobias Oberstein

In Twisted, there is Trial, which provides an extended version of
unittest.TestCase with this feature:


The main unique feature of this testing class is the ability to return a
Deferred from a test-case method. A test method which returns a


snip


I strongly recommend you reconsider doing this.

The Twisted core developers have, for several years now, mostly eschewed
the usage of returning Deferreds from test cases. Instead, tests are


Are you referring to Twisted code or to _user_ code (on top of Twisted)?

I am asking since

http://twistedmatrix.com/documents/current/core/howto/trial.html#testing-a-protocol

seems to encourage writing unit tests for user code with test methods 
returning deferreds. In fact, this document is what I used for guidance. 
Is it outdated?



written by explicitly controlling all events, with careful use of test
doubles to replace certain features of the event loop. There’s also a
class in Trial, “SynchronousTestCase”, which offers utilities such as
synchronousResultOf(Deferred) -> result and
synchronousFailureOf(Deferred) -> result, which both assume the Deferred
has fired synchronously (which your test case can ensure). The testing
philosophy in Twisted is: if you need the global event loop to really
truly run, it’s not a unit test.

Trial’s return-a-Deferred feature is still useful in certain integration
test situations, where you really want to talk to the Internet or
interact with the user in your set of test suites, but for unit tests,
it’s possible to exercise all code paths before the test method returns.


Is there any document on this best practice - applied to testing user 
code, including protocols?


And: I'm sure Twisted developers will have their reasons for frowning 
upon tests returning deferreds, but could you briefly recap why?


Thanks for your hints,
/Tobias





--
Christopher Armstrong
http://twitter.com/radix
http://wordeology.com/





Re: [python-tulip] Writing unit tests for user code

2014-07-25 Thread Tobias Oberstein

And: I'm sure Twisted developers will have their reasons for frowning upon
tests returning deferreds, but could you briefly recap why?



It has the classic problem with bad tests: reliance on global state
which is not controlled by the test. Unit tests should be isolated, they
should not affect or rely on external state. When they do, you’re much
more likely to run into Spooky Action At A Distance[1]. It’s also hard
to debug when things go wrong — often you’re left with a frozen test
suite and no useful diagnostic output when there’s a bug.


I do understand that unit tests should not perform _actual_ networking 
(like a real TCP connection - whether remote or loopback). Networking 
introduces some non-determinism.


But why is it bad to test protocol/factories using _fake_ transports like

proto_helpers.StringTransport

?

Sorry for asking again .. but I feel it's important (for me;) to 
understand the actual problem here:


Is it a) doing networking in unit tests or b) doing any async code in 
unit tests (even when not doing any networking)?




As my friends David Reid and Alex Gaynor like to say: “Call a function,
pass it some arguments, look at the return value”.


1: https://en.wikipedia.org/wiki/Action_at_a_distance_%28computer_programming%29


Alright. I see. However, doesn't that leave big gaps in testing?

E.g., for integration tests, we (Autobahn|Python) have 
Autobahn|Testsuite, which actually tests interaction of components over 
the network.


For low-level (per function) testing, we have (synchronous) unit tests.

But there is a gap between those latter two.

It is this level of testing that I am missing.

Thanks again!
/Tobias



Re: [python-tulip] Release of Trollius 1.0

2014-07-22 Thread Tobias Oberstein

I'm very happy to announce the release of Trollius 1.0:


Oh, great! Congrats on this Victor!

/Tobias


Re: [python-tulip] Writing unit tests for user code

2014-07-22 Thread Tobias Oberstein

So I guess what I am after is six for asyncio/Twisted ;)

Isn't six for asyncio/Twisted covered by the fact that you can
implement one event loop in terms of another?  Idiomatic asyncio code


Yep, wrapping a network library for some protocol written for network 
framework X for users to run under framework Y would be another 
approach. We took a different one (below) ..



and idiomatic Twisted code can coexist on the same event loop without
any single component being written to a lowest common denominator that
supports both simultaneously without being idiomatic in either.  There
is a little work to be done at the boundaries to convert a Deferred into
something that can be yielded in an asyncio coroutine or vice versa, but
that seems to me to be preferable to trying to support multiple async
frameworks in one package.


For Autobahn, since it's a library, it would put one user community at a 
disadvantage by adding an additional dependency and run-time requirement 
on the other framework.


But it's not an issue. The dual support is already there for some time 
and I'm quite happy with it. It's about extending this dual support to 
unit tests.


Well, probably my situation is quite specific.

/Tobias


[python-tulip] Re: Trollius 0.3 beta: module renamed from asyncio to trollius

2014-06-01 Thread Tobias Oberstein
Hi,

I missed that discussion. 

IMHO, the renaming further complicates matters, since now the choice of 
networking library must be made consistently in both library and app code.

E.g. if we add

try:
    # Use Tulip on Python 3.3, or builtin asyncio on Python 3.4+
    import asyncio
except ImportError:
    # Use Trollius on Python <= 3.2
    import trollius as asyncio

to AutobahnPython, what if the user (running Py 3.4 with Trollius 
installed) does

import trollius as asyncio 

in his user code? Stuff will likely break.

A library (AutobahnPython) cannot know which one the user is running.

Either the choice is made canonically (and hence consistently between 
library and user code), or the choice is made by the library, and then user 
code would look like

from autobahn import asyncio

Mmh.

/Tobias



On Tuesday, 20 May 2014 at 15:55:25 UTC+2, Victor Stinner wrote:

 Hi, 

 I synchronized Trollius with Tulip 3.4.1. As discussed on this list, I 
 also chose to rename asyncio to trollius to make it possible to 
 use Trollius on Python 3.4+. 

 It is now more explicit that Trollius and Tulip are two different 
 projects which are almost the same but are different (yield from ... 
 vs yield From(...)). 

 It's still a beta version. I plan to release a new version this week. 
 I hesitate between the version 0.3 and the version 1.0. 

 It would be nice if someone can test the following projects which are 
 known to work on Tulip and Trollius: 

 - AutobahnPython 
 - Pulsar 
 - Tornado 
 - aiodns 

 You will need to modify the source code to add the following code at 
 the top of files using asyncio: 
 --- 
 try: 
     # Use Trollius on Python <= 3.2 
     import trollius as asyncio 
 except ImportError: 
     # Use Tulip on Python 3.3, or builtin asyncio on Python 3.4+ 
     import asyncio 
 --- 

 Victor 



[python-tulip] RPC + PubSub for asyncio

2014-04-11 Thread Tobias Oberstein

Hi,

I am happy to announce that

https://github.com/tavendo/AutobahnPython

now fully supports both WebSocket and WAMP on asyncio.

WAMP (http://wamp.ws) is an application protocol that provides RPC 
(remote procedure calls) and PubSub (publish & subscribe) messaging 
patterns. WAMP can run over WebSocket or other transports.


Here is what it looks like:

http://autobahn.ws/#show-me-some-code

These examples are for Twisted, but the asyncio variants look very similar:

https://github.com/tavendo/AutobahnPython/blob/master/examples/asyncio/wamp/wamplet/wamplet1/wamplet1/component1.py#L31

https://github.com/tavendo/AutobahnPython/tree/master/examples/asyncio/wamp/basic

===

As a side effect, Autobahn can now provide WebSocket + WAMP on Python 
3.4 without any dependencies (ok, 1 tiny dependency: six).


===

Autobahn now has first class support for WAMP on Python, JavaScript and C++:

https://github.com/tavendo/AutobahnJS#show-me-some-code
https://github.com/tavendo/AutobahnCpp#show-me-some-code

Note: Autobahn|Android is right now still lacking WAMP v2 support.

Cheers,
/Tobias


Re: [python-tulip] Creation of the new Trollius project

2014-01-07 Thread Tobias Oberstein

Hi Victor,

I've added support for Trollius in Autobahn 0.7.3

https://github.com/tavendo/AutobahnPython#python-support

pip install autobahn[asyncio]

will - as a dependency - install Trollius on Python 2, Tulip on Python 
3.3 and nothing on Python 3.4+.


Awesome!

Cheers,
/Tobias



Re: [python-tulip] Creation of the new Trollius project

2014-01-07 Thread Tobias Oberstein

  * Trollius coroutines must use raise Return(value), whereas Tulip simply


Could you explain why that particular construct of returning by 
raising was chosen?


It works, but looks strange ..

https://github.com/tavendo/AutobahnPython/blob/master/examples/asyncio/websocket/slowsquare/server_py2.py#L34

/Tobias


Re: [python-tulip] Creation of the new Trollius project

2014-01-07 Thread Tobias Oberstein

Not sure I fully get it:

Twisted's `inlineCallback` use regular raise for exceptions, but a special
`returnValue` function for returning

https://github.com/tavendo/AutobahnPython/blob/master/examples/twisted/websocket/slowsquare/server.py#L36

Is that a third alternative? [not a rhetorical question! ;)]


No, that function raises an exception. The disadvantage (in my mind)
compared to just using a raise statement is that with a plain raise the
control flow is clear to even the dumbest static analysis code (like
Emacs' python-mode).


Ahh. Yes.

http://twistedmatrix.com/trac/browser/tags/releases/twisted-13.2.0/twisted/internet/defer.py#L1063

[Sorry, I should have had a look into `returnValue` myself ..]

It's just sugared up in Twisted .. and one could simply do

def returnValue(value):
    raise asyncio.Return(value)

on top of Trollius.

But I agree on the control flow thing.

Thanks again,

/Tobias



Re: [python-tulip] Calling coroutines within asyncio.Protocol.data_received

2014-01-05 Thread Tobias Oberstein

And a transport deriving from asyncio.transports.Transport (a stream) might
in fact be unreliable?


I don't see any useful semantics for that.


It was more an example for why mere API (what functions with what 
parameters) and implied semantics are somewhat orthogonal.


The same API (asyncio.transports.DatagramTransport) can be implemented 
with different semantics (reliable vs. unreliable).





What are the exact semantics implied by Transport and DatagramTransport -
beyond the mere existence of certain functions syntactically?


You probably have to ask a more specific question.

The datagram API promises that if the sender sends two packets, and
both arrive, the receiver will see two packets, though not necessarily
in the same order (and one or both may be lost).


So my question boils down to: how can a class deriving from 
DatagramTransport _programmatically signal_ that it implements a 
stricter set of semantics than what you describe above: ordering + 
reliability?


Programmatically signal: one option would be by interface identity - but 
that you don't like. I suspect you will have your reasons for that - I 
won't open that can ;)


Another would be via (mandatory) class attributes:

DatagramTransport.is_reliable
DatagramTransport.is_ordered

A UDP implementation would have both False, WebSocket both True. I 
don't know of protocols that would have mixed values.


A third option would be ReliableDatagramTransport, that
- either derives from DatagramTransport with no API change at all, merely 
to signal its stricter semantics, or
- is a new class where we get rid of the `addr` parameters (which are 
unneeded for connected transports)
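The class-attribute option could be sketched like this (all names here - `is_reliable`, `is_ordered`, `ReliableDatagramTransport`, `wants_retransmit_logic` - are hypothetical, not part of asyncio):

```python
import asyncio

class ReliableDatagramTransport(asyncio.DatagramTransport):
    # Hypothetical capability flags advertising delivery semantics.
    is_reliable = True   # no frames are lost
    is_ordered = True    # frames arrive in send order

class UDPTransport(asyncio.DatagramTransport):
    is_reliable = False
    is_ordered = False

def wants_retransmit_logic(transport):
    # Application code branches on the advertised semantics instead of
    # relying on interface identity (isinstance checks).
    return not getattr(transport, "is_reliable", False)

print(wants_retransmit_logic(UDPTransport()))               # True
print(wants_retransmit_logic(ReliableDatagramTransport()))  # False
```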



The stream API promises that the receiver sees the bytes that the
sender sent, in the same order, until the connection is terminated or
broken, but it makes no promises about whether you'll get a separate
data_received() call for each byte or a single call for all bytes. In
particular there's no promise that the framing implied by the sender's
sequence of send() or write() calls is preserved -- the network may
repackage the bytes in arbitrary blocks.

By the way, there's nothing that would prevent you from defining your
own transport and protocol ABCs.
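Because the stream may repackage writes arbitrarily, receivers typically restore framing themselves. A minimal sketch of one common scheme, a 4-byte length prefix (`FrameBuffer` and `frame` are invented helpers, not asyncio APIs):

```python
import struct

class FrameBuffer:
    """Accumulates raw stream bytes and yields complete frames."""
    def __init__(self):
        self._buf = b""

    def feed(self, data):
        # Called with whatever chunk the transport delivered, e.g. from
        # data_received(); returns any frames completed by this chunk.
        self._buf += data
        frames = []
        while len(self._buf) >= 4:
            (length,) = struct.unpack("!I", self._buf[:4])
            if len(self._buf) < 4 + length:
                break  # frame not complete yet
            frames.append(self._buf[4:4 + length])
            self._buf = self._buf[4 + length:]
        return frames

def frame(payload):
    return struct.pack("!I", len(payload)) + payload

buf = FrameBuffer()
wire = frame(b"hello") + frame(b"world")
# Deliver the same bytes in awkward chunks, as TCP is allowed to:
got = []
for chunk in (wire[:3], wire[3:10], wire[10:]):
    got.extend(buf.feed(chunk))
print(got)  # [b'hello', b'world']
```

The sender's two frame() calls survive intact even though the receiver saw three unrelated chunks.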



Sure. It's all about trying to play nice and fit into the bigger 
picture. So things work together ..




Re: [python-tulip] Calling coroutines within asyncio.Protocol.data_received

2014-01-02 Thread Tobias Oberstein

What I was thinking of was actually a lower level of the HTTP
infrastructure, where you have to write something that parses the HTTP
protocol (and things like form input or other common request content
types). I'm personally pretty happy with the pull style HTTP parsing I
wrote as an example
(http://code.google.com/p/tulip/source/browse/examples/fetch3.py#112
-- note that this is a client but the server-side header parsing is
about the same).


Looks nice and clean. So this relies on StreamReader for the translation 
of the push-style that comes out of the low-level transport 
(data_received) to pull-style asyncio.StreamReader.readXXX(). Nice.


I thought about this a bit more and I think in the end it comes down
to one's preferred style for writing parsers. Take a parser for a
language like Python -- you can write it in pull style (the lexer
reads a character or perhaps a line at a time, the parser asks the
lexer for a token at a time) or in push style, using a parser
generator (like CPython's parser does). Actually, even there, you can
use one style for the lexer and another style for the parser.


Interesting analogy. Yes, seems language/syntax parsing a file is 
similar to protocol parsing a wire-level stream transport. I wonder 
about the sending leg: with language parsers, this would be probably 
the AST. With network protocols, it's more of producing a 2nd stream 
conforming again to the same syntax: for sending to the other peer.



Using push style, the state machine ends up being represented
explicitly in the form of state variables, e.g. am I parsing the
status line, am I parsing the headers, have I seen the end of the
headers, in addition to some buffers holding a representation of the
stuff you've already parsed (completed headers, request
method/path/version) and the stuff you haven't parsed yet (e.g. the
next incomplete line). Typically those have to be represented as
instance variables on the Protocol (or some higher-level object with a
similar role).

Using pull style, you can often represent the state implicitly in the
form of program location; e.g. an HTTP request/response parser could
start with a readline() call to read the initial request/response,
then a loop reading the headers until a blank line is found, perhaps
an inner loop to handle continuation lines. The buffers may be just
local variables.
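The pull style described above can be sketched against asyncio.StreamReader (in today's async/await spelling rather than yield from; the request format and helper names are simplified for illustration):

```python
import asyncio

async def read_request(reader):
    # State lives in program location: first the request line, then a
    # loop over header lines until the blank line; buffers are locals.
    request_line = (await reader.readline()).rstrip(b"\r\n")
    headers = {}
    while True:
        line = (await reader.readline()).rstrip(b"\r\n")
        if not line:            # blank line ends the header block
            break
        name, _, value = line.partition(b":")
        headers[name.strip().lower()] = value.strip()
    return request_line, headers

async def demo():
    # Feed a canned request instead of a real transport.
    reader = asyncio.StreamReader()
    reader.feed_data(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
    reader.feed_eof()
    return await read_request(reader)

request_line, headers = asyncio.run(demo())
print(request_line, headers)  # b'GET / HTTP/1.1' {b'host': b'example.com'}
```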


The ability to represent state machine states implicitly in program 
location instead of explicit variables indeed seems higher-level / more 
abstracted. I have never looked at it that way .. very interesting.


I am wondering what happens if you take timing constraints into account. 
E.g. with WebSocket, for DoS protection, one might want the initial 
opening handshake to finish within N seconds. Hence you want to check 
after N seconds whether state HANDSHAKE_FINISHED has been reached. A

yield from socket.read_handshake()

(simplified) will however just block indefinitely. So I need a 2nd 
coroutine for the timeout. And the timeout will need to check .. an 
instance variable for state. Or can I have a timing-out yield from?
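For what it's worth, asyncio does ship a timing-out wrapper: asyncio.wait_for() cancels the inner coroutine on timeout, so no second watchdog coroutine or shared state flag is needed. A minimal sketch in modern async/await spelling (the handshake coroutine is a stand-in that never completes):

```python
import asyncio

async def read_handshake():
    # Hypothetical stand-in for a handshake read that never completes.
    await asyncio.sleep(3600)

async def accept():
    try:
        # wait_for cancels read_handshake() if it exceeds the deadline.
        await asyncio.wait_for(read_handshake(), timeout=0.1)
        return "handshake ok"
    except asyncio.TimeoutError:
        return "handshake timeout: closing connection"

result = asyncio.run(accept())
print(result)  # handshake timeout: closing connection
```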



I've only written a small amount of Android code but I sure remember
that it felt nearly impossible to follow the logic of a moderately
complex Android app -- whereas in pull style your abstractions nicely
correspond to e.g. classes or methods (or just functions), in Android
even the simplest logic seemed to be spread across many different
classes, with the linkage between them often expressed separately
(sometimes even in XML or some other dynamic configuration that
requires the reader to switch languages). But I'll add that this was
perhaps due to being a beginner in the Android world (and I haven't
taken it up since).


That's also my experience (though I too have limited exposure to 
Android): it can get unwieldy pretty quickly.


How would you do a pull-style UI toolkit? Transforming each push-style 
callback for UI widgets into pull-style code


yield from button1.onclick()
# handle button1 click

or

evt = yield from ui.onevent()
if evt.target == button1 and evt.type == click:
  # handle button1 click

The latter leads to one massive, monolithic code block handling all UI 
interaction. The former leads to many small sequential looking code 
pieces .. similar to callbacks. And those distributed code pieces 
somehow need to interact with each other.


FWIW, the - for me - most comfortable and manageable way of doing UI is 
via reactive programming, e.g. in JavaScript http://knockoutjs.com/


Eg. say some x is changing asynchronously (like a UI input field 
widget) and some y needs to be changed _whenever_ x changes (like a 
UI label).


In reactive programming, I can basically write code

y = f(x)

and the reactive engine will _analyze_ that code, and hook up push-style 
callback code under the hood, so that _whenever_ x changes, f() is 
_automatically_ reapplied.
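A toy sketch of that idea (hand-wired rather than discovered by analyzing the code, unlike real engines such as Knockout or Trellis; the `Cell` class and its names are invented):

```python
class Cell:
    """An observable value: dependents re-run whenever it changes."""
    def __init__(self, value):
        self._value = value
        self._subscribers = []

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        self._value = new
        for fn in self._subscribers:
            fn(new)   # push-style callbacks under the hood

    def subscribe(self, fn):
        self._subscribers.append(fn)
        fn(self._value)   # apply f to the current value right away

x = Cell(2)
ys = []
x.subscribe(lambda v: ys.append(v * v))   # y = f(x) with f(v) = v*v
x.value = 3
x.value = 5
print(ys)  # [4, 9, 25]
```

The subscriber plays the role of y = f(x): the engine re-applies f on every change, and the user code never writes a callback registration by hand in a real reactive framework.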


Probably better explained here:
http://peak.telecommunity.com/DevCenter/Trellis

MS also seems to 

[python-tulip] AutobahnPython 0.7.0 released: supports Python 3 and asyncio

2014-01-02 Thread Tobias Oberstein


Hi,

I am happy to announce the release of AutobahnPython 0.7.0

https://github.com/tavendo/AutobahnPython
https://pypi.python.org/pypi/autobahn

This release now fully supports (with all Autobahn features) both 
Twisted (on Python 2/3) and asyncio (on Python 3.3+).


Here is an example that shows how to do WebSocket on both:

Twisted:
https://github.com/tavendo/AutobahnPython/tree/master/examples/twisted/websocket/echo

Asyncio:
https://github.com/tavendo/AutobahnPython/tree/master/examples/asyncio/websocket/echo

The application code is very similar or even identical.

Cheers,
/Tobias

PS:
Would it be possible to add Autobahn to the Wiki?

https://github.com/tavendo/AutobahnPython
=>
http://code.google.com/p/tulip/wiki/ThirdParty


Re: [python-tulip] Re: Calling coroutines within asyncio.Protocol.data_received

2013-12-28 Thread Tobias Oberstein

Hi Antoine,

In the meanwhile, I got Autobahn running on Python3/asyncio.

Here is a complete example of WebSocket client and server, running on 
asyncio


https://github.com/tavendo/AutobahnPython/tree/master/examples/asyncio/websocket/echo

and running on Twisted

https://github.com/tavendo/AutobahnPython/tree/master/examples/twisted/websocket/echo

Autobahn (http://autobahn.ws/) is a feature rich, compliant and 
performant WebSocket and WAMP client/server framework. And it's exciting 
that we can support both Twisted and asyncio at the same time;)



Also: will asyncio.async schedule via reentering the event loop or how?


Off the top of my head, no. But better confirm by reading the source
code :-)


The code seems to indicate that the Task will only run after the event 
loop is reentered


http://code.google.com/p/tulip/source/browse/asyncio/tasks.py#157

Or do I get this wrong?

Anyway, I will do some performance benchmarking to compare Twisted and 
asyncio ..


/Tobias


Re: [python-tulip] Re: Calling coroutines within asyncio.Protocol.data_received

2013-12-28 Thread Tobias Oberstein

Why not use an asyncio queue, so you can just have a loop in a task
instead of a recurring task?


Ah, ok. Thanks! I need to read up on asyncio queues.


Also: Autobahn now works with above design (and I have 95% shared code
between Twisted and asyncio), but is this how you intended asyncio to be
used, or am I misusing / not following best practice in some way? I am
totally new to asyncio, coming from Twisted ..


Seems to me as if perhaps you are trying to find ways to combine
asyncio operations to implement the primitives you are familiar with
from Twisted, rather than figuring out how to best solve your


Yeah, I've grown up event driven: from C++/ACE, C++/Boost/ASIO to Twisted.


underlying problem using asyncio. Using asyncio, your best approach is
to think of how you would do it in a sequential world using blocking
I/O, then use coroutines and yield-from for the blocking I/O, then
think about how to introduce some parallelism where you have two
independent blocking operations that don't depend on each other.



Indeed, event-driven (push style) feels more natural to me. Writing 
synchronous (pull-style) code is a stretch for me. In fact, other 
stuff I work with (besides Twisted) is also event-driven, like 
JavaScript or Android.


/Tobias