Author: Armin Rigo <[email protected]>
Branch: extradoc
Changeset: r5373:8fe97fa6f212
Date: 2014-07-23 10:51 +0200
http://bitbucket.org/pypy/extradoc/changeset/8fe97fa6f212/
Log: tweaks
diff --git a/talk/ep2014/stm/demo/bench-multiprocessing.py b/talk/ep2014/stm/demo/bench-multiprocessing.py
--- a/talk/ep2014/stm/demo/bench-multiprocessing.py
+++ b/talk/ep2014/stm/demo/bench-multiprocessing.py
@@ -8,7 +8,7 @@
subtotal += 1
return subtotal
-pool = Pool(4)
+pool = Pool(2)
results = pool.map(process, xrange(0, 5000000, 20000))
total = sum(results)
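For context, the whole benchmark around this hunk is roughly the following. This is a sketch: the body of `process` is reconstructed from the visible context lines, and it is written as Python 3, so `range` stands in for the original `xrange`:

```python
from multiprocessing import Pool

def process(start):
    # Reconstructed from the visible context lines: count the items
    # in one 20000-wide chunk of the work.
    subtotal = 0
    for _ in range(start, start + 20000):
        subtotal += 1
    return subtotal

if __name__ == "__main__":
    # The changeset shrinks the pool from 4 workers to 2.
    pool = Pool(2)
    results = pool.map(process, range(0, 5000000, 20000))
    total = sum(results)
    print(total)
```

Each of the 250 chunks contributes 20000, so `total` sums back to 5000000 regardless of the pool size; the change only affects how many worker processes share the chunks.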
diff --git a/talk/ep2014/stm/talk.html b/talk/ep2014/stm/talk.html
--- a/talk/ep2014/stm/talk.html
+++ b/talk/ep2014/stm/talk.html
@@ -535,7 +535,6 @@
<li>but <em>can be very coarse:</em><ul>
<li>two transactions can optimistically run in parallel</li>
<li>even if they both <em>acquire and release the same lock</em></li>
-<li>internally, drive the transaction lengths by the locks we acquire</li>
</ul>
</li>
</ul>
@@ -558,19 +557,30 @@
<li>this is not "everybody should use careful explicit threading
with all the locking issues"</li>
<li>instead, PyPy-STM pushes forward:<ul>
-<li>use a thread pool library</li>
+<li>make or use a thread pool library</li>
<li>coarse locking, inside that library only</li>
</ul>
</li>
</ul>
</div>
+<div class="slide" id="id4">
+<h1>PyPy-STM Programming Model</h1>
+<ul class="simple">
+<li>e.g.:<ul>
+<li><tt class="docutils literal">multiprocessing</tt>-like thread pool</li>
+<li>Twisted/Tornado/Bottle extension</li>
+<li>Stackless/greenlet/gevent extension</li>
+</ul>
+</li>
+</ul>
+</div>
<div class="slide" id="pypy-stm-status">
<h1>PyPy-STM status</h1>
<ul class="simple">
<li>current status:<ul>
<li>basics work</li>
<li>best case 25-40% overhead (much better than originally planned)</li>
-<li>parallelizing user locks not done yet (see "with atomic")</li>
+<li>app locks not done yet ("with atomic" workaround)</li>
<li>tons of things to improve</li>
<li>tons of things to improve</li>
<li>tons of things to improve</li>
@@ -604,7 +614,7 @@
<li>need tool to support this (debugger/profiler)</li>
</ul>
</li>
-<li>Performance hit: 25-40% over a plain PyPy-JIT (may be ok)</li>
+<li>Performance hit: 25-40% slower than a plain PyPy-JIT (may be ok)</li>
</ul>
</div>
<div class="slide" id="summary-pypy-stm">
@@ -678,7 +688,7 @@
a read flag set in some other thread</li>
</ul>
</div>
-<div class="slide" id="id4">
+<div class="slide" id="id5">
<h1>...</h1>
</div>
<div class="slide" id="thank-you">
diff --git a/talk/ep2014/stm/talk.rst b/talk/ep2014/stm/talk.rst
--- a/talk/ep2014/stm/talk.rst
+++ b/talk/ep2014/stm/talk.rst
@@ -176,8 +176,6 @@
- even if they both *acquire and release the same lock*
- - internally, drive the transaction lengths by the locks we acquire
-
Long Transactions
-----------------
@@ -201,11 +199,23 @@
* instead, PyPy-STM pushes forward:
- - use a thread pool library
+ - make or use a thread pool library
- coarse locking, inside that library only
+PyPy-STM Programming Model
+--------------------------
+
+* e.g.:
+
+ - ``multiprocessing``-like thread pool
+
+ - Twisted/Tornado/Bottle extension
+
+ - Stackless/greenlet/gevent extension
+
+
PyPy-STM status
---------------
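The "Programming Model" slide above names a `multiprocessing`-like thread pool as the target style. A minimal sketch of that style, using the standard `concurrent.futures` pool as a stand-in: on CPython the GIL serializes the workers, while the slides' point is that under PyPy-STM each task could run as an optimistic transaction in parallel:

```python
from concurrent.futures import ThreadPoolExecutor

def process(start):
    # Same shape of task as the demo benchmark: one coarse chunk of work.
    subtotal = 0
    for _ in range(start, start + 20000):
        subtotal += 1
    return subtotal

# Coarse-grained parallelism via a thread pool: the locking lives
# inside the pool library only, as the slides recommend.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(process, range(0, 5000000, 20000)))
total = sum(results)
```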
@@ -213,7 +223,7 @@
- basics work
- best case 25-40% overhead (much better than originally planned)
- - parallelizing user locks not done yet (see "with atomic")
+ - app locks not done yet ("with atomic" workaround)
- tons of things to improve
- tons of things to improve
- tons of things to improve
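The status slide mentions a "with atomic" workaround while app-level lock parallelization is not done. Roughly, a lock-protected critical section is replaced by an atomic block that PyPy-STM runs as one transaction. A sketch, with a hypothetical fallback: on interpreters without STM the block below emulates `atomic` with an ordinary lock, which keeps the semantics but serializes instead of parallelizing:

```python
import threading

try:
    # On PyPy-STM, an atomic context manager runs the block as a
    # single transaction (assumed to live in __pypy__.thread here).
    from __pypy__.thread import atomic
except ImportError:
    # Fallback emulation elsewhere: a plain re-entrant lock.
    atomic = threading.RLock()

counter = 0

def increment():
    global counter
    with atomic:
        counter += 1

threads = [threading.Thread(target=increment) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After the joins, `counter` is 100 under either implementation; only the degree of parallelism inside the atomic blocks differs.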
@@ -250,7 +260,7 @@
- need tool to support this (debugger/profiler)
-* Performance hit: 25-40% over a plain PyPy-JIT (may be ok)
+* Performance hit: 25-40% slower than a plain PyPy-JIT (may be ok)
Summary: PyPy-STM
_______________________________________________
pypy-commit mailing list
[email protected]
https://mail.python.org/mailman/listinfo/pypy-commit