Script 'mail_helper' called by obssrc
Hello community,

here is the log from the commit of package python-yappi for openSUSE:Factory 
checked in at 2022-06-04 23:27:21
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/python-yappi (Old)
 and      /work/SRC/openSUSE:Factory/.python-yappi.new.1548 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "python-yappi"

Sat Jun  4 23:27:21 2022 rev:12 rq:980765 version:1.3.5

Changes:
--------
--- /work/SRC/openSUSE:Factory/python-yappi/python-yappi.changes        
2021-10-16 22:48:23.936730279 +0200
+++ /work/SRC/openSUSE:Factory/.python-yappi.new.1548/python-yappi.changes      
2022-06-04 23:27:26.440780363 +0200
@@ -1,0 +2,8 @@
+Sat Jun  4 12:43:04 UTC 2022 - Dirk Müller <[email protected]>
+
+- update to 1.3.5:
+  * Use PyEval_GetLocals for getting locals in Py3.10 and up. 
+  * Fix cp->coroutines becomes NULL when head is removed 
+  * Remove pypistats dw count
+
+-------------------------------------------------------------------
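Note: the first item above maps to the new `_get_locals()` helper added in `yappi/_yappi.c` (see the diff further below): on Python 3.10 and later the profiler obtains the current frame's locals through `PyEval_GetLocals()` instead of converting the frame's fast locals by hand. A minimal, self-contained sketch of that version gate (the helper name here is illustrative, not the upstream one):

```c
#include <Python.h>
#include <frameobject.h>

/* Sketch of the version-gated locals lookup described in the changelog;
 * the upstream patch adds an equivalent _get_locals() to _yappi.c. */
static PyObject *
sketch_get_locals(PyFrameObject *fobj)
{
#if PY_VERSION_HEX >= 0x030A0000
    /* 3.10+: ask the interpreter for the locals mapping of the
     * currently executing frame (borrowed reference). */
    return PyEval_GetLocals();
#else
    /* Older CPython: materialize f_locals from the fast-locals array,
     * grab it, then sync any changes back (borrowed reference). */
    PyFrame_FastToLocals(fobj);
    PyObject *locals = fobj->f_locals;
    PyFrame_LocalsToFast(fobj, 0);
    return locals;
#endif
}
```

In the patched `_code2pit()` the caller only looks up the `self` entry in the returned dict, so a borrowed reference is sufficient in both branches.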

Old:
----
  yappi-1.3.3.tar.gz

New:
----
  yappi-1.3.5.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-yappi.spec ++++++
--- /var/tmp/diff_new_pack.MQQnl5/_old  2022-06-04 23:27:27.004780933 +0200
+++ /var/tmp/diff_new_pack.MQQnl5/_new  2022-06-04 23:27:27.008780937 +0200
@@ -1,7 +1,7 @@
 #
 # spec file for package python-yappi
 #
-# Copyright (c) 2021 SUSE LLC
+# Copyright (c) 2022 SUSE LLC
 #
 # All modifications and additions to the file contributed by third parties
 # remain the property of their copyright owners, unless otherwise agreed
@@ -20,7 +20,7 @@
 %define skip_python2 1
 %define skip_python36 1
 Name:           python-yappi
-Version:        1.3.3
+Version:        1.3.5
 Release:        0
 Summary:        Yet Another Python Profiler
 License:        MIT

++++++ yappi-1.3.3.tar.gz -> yappi-1.3.5.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/yappi-1.3.3/PKG-INFO new/yappi-1.3.5/PKG-INFO
--- old/yappi-1.3.3/PKG-INFO    2021-10-05 15:03:24.452152300 +0200
+++ new/yappi-1.3.5/PKG-INFO    2022-05-20 11:47:08.236228000 +0200
@@ -1,302 +1,11 @@
 Metadata-Version: 2.1
 Name: yappi
-Version: 1.3.3
+Version: 1.3.5
 Summary: Yet Another Python Profiler
 Home-page: https://github.com/sumerc/yappi
 Author: Sumer Cip
 Author-email: [email protected]
 License: MIT
-Description: <p align="center">
-            <img 
src="https://raw.githubusercontent.com/sumerc/yappi/master/Misc/logo.png"; 
alt="yappi">
-        </p>
-        
-        <h1 align="center">Yappi</h1>
-        <p align="center">
-            Yet Another Python Profiler, but this time 
<b>thread&coroutine&greenlet</b> aware.
-        </p>
-        
-        <p align="center">
-            <img 
src="https://www.travis-ci.org/sumerc/yappi.svg?branch=master";>
-            <img 
src="https://github.com/sumerc/yappi/workflows/CI/badge.svg?branch=master";>
-            <img src="https://img.shields.io/pypi/v/yappi.svg";>
-            <img src="https://img.shields.io/pypi/dw/yappi.svg";>
-            <img src="https://img.shields.io/pypi/pyversions/yappi.svg";>
-            <img 
src="https://img.shields.io/github/last-commit/sumerc/yappi.svg";>
-            <img src="https://img.shields.io/github/license/sumerc/yappi.svg";>
-        </p>
-        
-        ## Highlights
-        
-        - **Fast**: Yappi is fast. It is completely written in C and lots of 
love&care went into making it fast.
-        - **Unique**: Yappi supports multithreaded, 
[asyncio](https://github.com/sumerc/yappi/blob/master/doc/coroutine-profiling.md)
 and 
[gevent](https://github.com/sumerc/yappi/blob/master/doc/greenlet-profiling.md) 
profiling. Tagging/filtering multiple profiler results has interesting [use 
cases](https://github.com/sumerc/yappi/blob/master/doc/api.md#set_tag_callback).
-        - **Intuitive**: Profiler can be started/stopped and results can be obtained from any time and any thread.
-        - **Standards Compliant**: Profiler results can be saved in [callgrind](http://valgrind.org/docs/manual/cl-format.html) or [pstat](http://docs.python.org/3.4/library/profile.html#pstats.Stats) formats.
-        - **Rich in Feature set**: Profiler results can show either [Wall Time](https://en.wikipedia.org/wiki/Elapsed_real_time) or actual [CPU Time](http://en.wikipedia.org/wiki/CPU_time) and can be aggregated from different sessions. Various flags are defined for filtering and sorting profiler results.
-        - **Robust**: Yappi had seen years of production usage.
-        
-        ## Motivation
-        
-        CPython standard distribution comes with three deterministic 
profilers. `cProfile`, `Profile` and `hotshot`. `cProfile` is implemented as a 
C module based on `lsprof`, `Profile` is in pure Python and `hotshot` can be 
seen as a small subset of a cProfile. The major issue is that all of these 
profilers lack support for multi-threaded programs and CPU time.
-        
-        If you want to profile a  multi-threaded application, you must give an 
entry point to these profilers and then maybe merge the outputs. None of these 
profilers are designed to work on long-running multi-threaded applications. It 
is also not possible to profile an application that start/stop/retrieve traces 
on the fly with these profilers. 
-        
-        Now fast forwarding to 2019: With the latest improvements on `asyncio` 
library and asynchronous frameworks, most of the current profilers lacks the 
ability to show correct wall/cpu time or even call count information 
per-coroutine. Thus we need a different kind of approach to profile 
asynchronous code. Yappi, with v1.2 introduces the concept of `coroutine 
profiling`. With `coroutine-profiling`, you should be able to profile correct 
wall/cpu time and call count of your coroutine. (including the time spent in 
context switches, too). You can see details 
[here](https://github.com/sumerc/yappi/blob/master/doc/coroutine-profiling.md).
-        
-        
-        ## Installation
-        
-        Can be installed via PyPI
-        
-        ```
-        $ pip install yappi
-        ```
-        
-        OR from the source directly.
-        
-        ```
-        $ pip install git+https://github.com/sumerc/yappi#egg=yappi
-        ```
-        
-        ## Examples
-        
-        ### A simple example:
-        
-        ```python
-        import yappi
-        
-        def a():
-            for _ in range(10000000):  # do something CPU heavy
-                pass
-        
-        yappi.set_clock_type("cpu") # Use set_clock_type("wall") for wall time
-        yappi.start()
-        a()
-        
-        yappi.get_func_stats().print_all()
-        yappi.get_thread_stats().print_all()
-        '''
-        
-        Clock type: CPU
-        Ordered by: totaltime, desc
-        
-        name                                  ncall  tsub      ttot      tavg  
    
-        doc.py:5 a                            1      0.117907  0.117907  
0.117907
-        
-        name           id     tid              ttot      scnt        
-        _MainThread    0      139867147315008  0.118297  1
-        '''
-        ```
-        
-        ### Profile a multithreaded application:
-        
-        You can profile a multithreaded application via Yappi and can easily 
retrieve
-        per-thread profile information by filtering on `ctx_id` with 
`get_func_stats` API.
-        
-        ```python
-        import yappi
-        import time
-        import threading
-        
-        _NTHREAD = 3
-        
-        
-        def _work(n):
-            time.sleep(n * 0.1)
-        
-        
-        yappi.start()
-        
-        threads = []
-        # generate _NTHREAD threads
-        for i in range(_NTHREAD):
-            t = threading.Thread(target=_work, args=(i + 1, ))
-            t.start()
-            threads.append(t)
-        # wait all threads to finish
-        for t in threads:
-            t.join()
-        
-        yappi.stop()
-        
-        # retrieve thread stats by their thread id (given by yappi)
-        threads = yappi.get_thread_stats()
-        for thread in threads:
-            print(
-                "Function stats for (%s) (%d)" % (thread.name, thread.id)
-            )  # it is the Thread.__class__.__name__
-            yappi.get_func_stats(ctx_id=thread.id).print_all()
-        '''
-        Function stats for (Thread) (3)
-        
-        name                                  ncall  tsub      ttot      tavg
-        ..hon3.7/threading.py:859 Thread.run  1      0.000017  0.000062  
0.000062
-        doc3.py:8 _work                       1      0.000012  0.000045  
0.000045
-        
-        Function stats for (Thread) (2)
-        
-        name                                  ncall  tsub      ttot      tavg
-        ..hon3.7/threading.py:859 Thread.run  1      0.000017  0.000065  
0.000065
-        doc3.py:8 _work                       1      0.000010  0.000048  
0.000048
-        
-        
-        Function stats for (Thread) (1)
-        
-        name                                  ncall  tsub      ttot      tavg
-        ..hon3.7/threading.py:859 Thread.run  1      0.000010  0.000043  
0.000043
-        doc3.py:8 _work                       1      0.000006  0.000033  
0.000033
-        '''
-        ```
-        
-        ### Different ways to filter/sort stats:
-        
-        You can use `filter_callback` on `get_func_stats` API to filter on 
functions, modules
-        or whatever available in `YFuncStat` object.
-        
-        ```python
-        import package_a
-        import yappi
-        import sys
-        
-        def a():
-            pass
-        
-        def b():
-            pass
-        
-        yappi.start()
-        a()
-        b()
-        package_a.a()
-        yappi.stop()
-        
-        # filter by module object
-        current_module = sys.modules[__name__]
-        stats = yappi.get_func_stats(
-            filter_callback=lambda x: yappi.module_matches(x, [current_module])
-        )  # x is a yappi.YFuncStat object
-        stats.sort("name", "desc").print_all()
-        '''
-        Clock type: CPU
-        Ordered by: name, desc
-        
-        name                                  ncall  tsub      ttot      tavg
-        doc2.py:10 b                          1      0.000001  0.000001  
0.000001
-        doc2.py:6 a                           1      0.000001  0.000001  
0.000001
-        '''
-        
-        # filter by function object
-        stats = yappi.get_func_stats(
-            filter_callback=lambda x: yappi.func_matches(x, [a, b])
-        ).print_all()
-        '''
-        name                                  ncall  tsub      ttot      tavg
-        doc2.py:6 a                           1      0.000001  0.000001  
0.000001
-        doc2.py:10 b                          1      0.000001  0.000001  
0.000001
-        '''
-        
-        # filter by module name
-        stats = yappi.get_func_stats(filter_callback=lambda x: 'package_a' in 
x.module
-                                     ).print_all()
-        '''
-        name                                  ncall  tsub      ttot      tavg
-        package_a/__init__.py:1 a             1      0.000001  0.000001  
0.000001
-        '''
-        
-        # filter by function name
-        stats = yappi.get_func_stats(filter_callback=lambda x: 'a' in x.name
-                                     ).print_all()
-        '''
-        name                                  ncall  tsub      ttot      tavg
-        doc2.py:6 a                           1      0.000001  0.000001  
0.000001
-        package_a/__init__.py:1 a             1      0.000001  0.000001  
0.000001
-        '''
-        ```
-        
-        ### Profile an asyncio application:
-        
-        You can see that coroutine wall-time's are correctly profiled.
-        
-        ```python
-        import asyncio
-        import yappi
-        
-        async def foo():
-            await asyncio.sleep(1.0)
-            await baz()
-            await asyncio.sleep(0.5)
-        
-        async def bar():
-            await asyncio.sleep(2.0)
-        
-        async def baz():
-            await asyncio.sleep(1.0)
-        
-        yappi.set_clock_type("WALL")
-        with yappi.run():
-            asyncio.run(foo())
-            asyncio.run(bar())
-        yappi.get_func_stats().print_all()
-        '''
-        Clock type: WALL
-        Ordered by: totaltime, desc
-        
-        name                                  ncall  tsub      ttot      tavg  
    
-        doc4.py:5 foo                         1      0.000030  2.503808  
2.503808
-        doc4.py:11 bar                        1      0.000012  2.002492  
2.002492
-        doc4.py:15 baz                        1      0.000013  1.001397  
1.001397
-        '''
-        ```
-        
-        ### Profile a gevent application:
-        
-        You can use yappi to profile greenlet applications now!
-        
-        ```python
-        import yappi
-        from greenlet import greenlet
-        import time
-        
-        class GreenletA(greenlet):
-            def run(self):
-                time.sleep(1)
-        
-        yappi.set_context_backend("greenlet")
-        yappi.set_clock_type("wall")
-        
-        yappi.start(builtins=True)
-        a = GreenletA()
-        a.switch()
-        yappi.stop()
-        
-        yappi.get_func_stats().print_all()
-        '''
-        name                                  ncall  tsub      ttot      tavg
-        tests/test_random.py:6 GreenletA.run  1      0.000007  1.000494  
1.000494
-        time.sleep                            1      1.000487  1.000487  
1.000487
-        '''
-        ```
-        
-        ## Documentation
-        
-        - 
[Introduction](https://github.com/sumerc/yappi/blob/master/doc/introduction.md)
-        - [Clock 
Types](https://github.com/sumerc/yappi/blob/master/doc/clock_types.md)
-        - [API](https://github.com/sumerc/yappi/blob/master/doc/api.md)
-        - [Coroutine 
Profiling](https://github.com/sumerc/yappi/blob/master/doc/coroutine-profiling.md)
 _(new in 1.2)_
-        - [Greenlet 
Profiling](https://github.com/sumerc/yappi/blob/master/doc/greenlet-profiling.md)
 _(new in 1.3)_
-        
-          Note: Yes. I know I should be moving docs to readthedocs.io. Stay 
tuned!
-        
-        
-        ## Related Talks
-        
-          Special thanks to A.Jesse Jiryu Davis:
-        - [Python Performance Profiling: The Guts And The Glory (PyCon 
2015)](https://www.youtube.com/watch?v=4uJWWXYHxaM)
-        
-        ## PyCharm Integration
-        
-        Yappi is the default profiler in `PyCharm`. If you have Yappi 
installed, `PyCharm` will use it. See [the 
official](https://www.jetbrains.com/help/pycharm/profiler.html) documentation 
for more details.
-        
-        
 Keywords: python thread multithread profiler
 Platform: UNKNOWN
 Classifier: Development Status :: 5 - Production/Stable
@@ -317,3 +26,296 @@
 Classifier: Topic :: Software Development :: Libraries :: Python Modules
 Description-Content-Type: text/markdown
 Provides-Extra: test
+License-File: LICENSE
+
+<p align="center">
+    <img 
src="https://raw.githubusercontent.com/sumerc/yappi/master/Misc/logo.png"; 
alt="yappi">
+</p>
+
+<h1 align="center">Yappi</h1>
+<p align="center">
+    A tracing profiler that is <b>thread&coroutine&greenlet</b> aware.
+</p>
+
+<p align="center">
+    <img src="https://www.travis-ci.org/sumerc/yappi.svg?branch=master";>
+    <img 
src="https://github.com/sumerc/yappi/workflows/CI/badge.svg?branch=master";>
+    <img src="https://img.shields.io/pypi/v/yappi.svg";>
+    <img src="https://img.shields.io/pypi/pyversions/yappi.svg";>
+    <img src="https://img.shields.io/github/last-commit/sumerc/yappi.svg";>
+    <img src="https://img.shields.io/github/license/sumerc/yappi.svg";>
+</p>
+
+## Highlights
+
+- **Fast**: Yappi is fast. It is completely written in C and lots of love&care 
went into making it fast.
+- **Unique**: Yappi supports multithreaded, 
[asyncio](https://github.com/sumerc/yappi/blob/master/doc/coroutine-profiling.md)
 and 
[gevent](https://github.com/sumerc/yappi/blob/master/doc/greenlet-profiling.md) 
profiling. Tagging/filtering multiple profiler results has interesting [use 
cases](https://github.com/sumerc/yappi/blob/master/doc/api.md#set_tag_callback).
+- **Intuitive**: Profiler can be started/stopped and results can be obtained from any time and any thread.
+- **Standards Compliant**: Profiler results can be saved in [callgrind](http://valgrind.org/docs/manual/cl-format.html) or [pstat](http://docs.python.org/3.4/library/profile.html#pstats.Stats) formats.
+- **Rich in Feature set**: Profiler results can show either [Wall Time](https://en.wikipedia.org/wiki/Elapsed_real_time) or actual [CPU Time](http://en.wikipedia.org/wiki/CPU_time) and can be aggregated from different sessions. Various flags are defined for filtering and sorting profiler results.
+- **Robust**: Yappi had seen years of production usage.
+
+## Motivation
+
+CPython standard distribution comes with three deterministic profilers. 
`cProfile`, `Profile` and `hotshot`. `cProfile` is implemented as a C module 
based on `lsprof`, `Profile` is in pure Python and `hotshot` can be seen as a 
small subset of a cProfile. The major issue is that all of these profilers lack 
support for multi-threaded programs and CPU time.
+
+If you want to profile a  multi-threaded application, you must give an entry 
point to these profilers and then maybe merge the outputs. None of these 
profilers are designed to work on long-running multi-threaded applications. It 
is also not possible to profile an application that start/stop/retrieve traces 
on the fly with these profilers. 
+
+Now fast forwarding to 2019: With the latest improvements on `asyncio` library 
and asynchronous frameworks, most of the current profilers lacks the ability to 
show correct wall/cpu time or even call count information per-coroutine. Thus 
we need a different kind of approach to profile asynchronous code. Yappi, with 
v1.2 introduces the concept of `coroutine profiling`. With 
`coroutine-profiling`, you should be able to profile correct wall/cpu time and 
call count of your coroutine. (including the time spent in context switches, 
too). You can see details 
[here](https://github.com/sumerc/yappi/blob/master/doc/coroutine-profiling.md).
+
+
+## Installation
+
+Can be installed via PyPI
+
+```
+$ pip install yappi
+```
+
+OR from the source directly.
+
+```
+$ pip install git+https://github.com/sumerc/yappi#egg=yappi
+```
+
+## Examples
+
+### A simple example:
+
+```python
+import yappi
+
+def a():
+    for _ in range(10000000):  # do something CPU heavy
+        pass
+
+yappi.set_clock_type("cpu") # Use set_clock_type("wall") for wall time
+yappi.start()
+a()
+
+yappi.get_func_stats().print_all()
+yappi.get_thread_stats().print_all()
+'''
+
+Clock type: CPU
+Ordered by: totaltime, desc
+
+name                                  ncall  tsub      ttot      tavg      
+doc.py:5 a                            1      0.117907  0.117907  0.117907
+
+name           id     tid              ttot      scnt        
+_MainThread    0      139867147315008  0.118297  1
+'''
+```
+
+### Profile a multithreaded application:
+
+You can profile a multithreaded application via Yappi and can easily retrieve
+per-thread profile information by filtering on `ctx_id` with `get_func_stats` 
API.
+
+```python
+import yappi
+import time
+import threading
+
+_NTHREAD = 3
+
+
+def _work(n):
+    time.sleep(n * 0.1)
+
+
+yappi.start()
+
+threads = []
+# generate _NTHREAD threads
+for i in range(_NTHREAD):
+    t = threading.Thread(target=_work, args=(i + 1, ))
+    t.start()
+    threads.append(t)
+# wait all threads to finish
+for t in threads:
+    t.join()
+
+yappi.stop()
+
+# retrieve thread stats by their thread id (given by yappi)
+threads = yappi.get_thread_stats()
+for thread in threads:
+    print(
+        "Function stats for (%s) (%d)" % (thread.name, thread.id)
+    )  # it is the Thread.__class__.__name__
+    yappi.get_func_stats(ctx_id=thread.id).print_all()
+'''
+Function stats for (Thread) (3)
+
+name                                  ncall  tsub      ttot      tavg
+..hon3.7/threading.py:859 Thread.run  1      0.000017  0.000062  0.000062
+doc3.py:8 _work                       1      0.000012  0.000045  0.000045
+
+Function stats for (Thread) (2)
+
+name                                  ncall  tsub      ttot      tavg
+..hon3.7/threading.py:859 Thread.run  1      0.000017  0.000065  0.000065
+doc3.py:8 _work                       1      0.000010  0.000048  0.000048
+
+
+Function stats for (Thread) (1)
+
+name                                  ncall  tsub      ttot      tavg
+..hon3.7/threading.py:859 Thread.run  1      0.000010  0.000043  0.000043
+doc3.py:8 _work                       1      0.000006  0.000033  0.000033
+'''
+```
+
+### Different ways to filter/sort stats:
+
+You can use `filter_callback` on `get_func_stats` API to filter on functions, 
modules
+or whatever available in `YFuncStat` object.
+
+```python
+import package_a
+import yappi
+import sys
+
+def a():
+    pass
+
+def b():
+    pass
+
+yappi.start()
+a()
+b()
+package_a.a()
+yappi.stop()
+
+# filter by module object
+current_module = sys.modules[__name__]
+stats = yappi.get_func_stats(
+    filter_callback=lambda x: yappi.module_matches(x, [current_module])
+)  # x is a yappi.YFuncStat object
+stats.sort("name", "desc").print_all()
+'''
+Clock type: CPU
+Ordered by: name, desc
+
+name                                  ncall  tsub      ttot      tavg
+doc2.py:10 b                          1      0.000001  0.000001  0.000001
+doc2.py:6 a                           1      0.000001  0.000001  0.000001
+'''
+
+# filter by function object
+stats = yappi.get_func_stats(
+    filter_callback=lambda x: yappi.func_matches(x, [a, b])
+).print_all()
+'''
+name                                  ncall  tsub      ttot      tavg
+doc2.py:6 a                           1      0.000001  0.000001  0.000001
+doc2.py:10 b                          1      0.000001  0.000001  0.000001
+'''
+
+# filter by module name
+stats = yappi.get_func_stats(filter_callback=lambda x: 'package_a' in x.module
+                             ).print_all()
+'''
+name                                  ncall  tsub      ttot      tavg
+package_a/__init__.py:1 a             1      0.000001  0.000001  0.000001
+'''
+
+# filter by function name
+stats = yappi.get_func_stats(filter_callback=lambda x: 'a' in x.name
+                             ).print_all()
+'''
+name                                  ncall  tsub      ttot      tavg
+doc2.py:6 a                           1      0.000001  0.000001  0.000001
+package_a/__init__.py:1 a             1      0.000001  0.000001  0.000001
+'''
+```
+
+### Profile an asyncio application:
+
+You can see that coroutine wall-time's are correctly profiled.
+
+```python
+import asyncio
+import yappi
+
+async def foo():
+    await asyncio.sleep(1.0)
+    await baz()
+    await asyncio.sleep(0.5)
+
+async def bar():
+    await asyncio.sleep(2.0)
+
+async def baz():
+    await asyncio.sleep(1.0)
+
+yappi.set_clock_type("WALL")
+with yappi.run():
+    asyncio.run(foo())
+    asyncio.run(bar())
+yappi.get_func_stats().print_all()
+'''
+Clock type: WALL
+Ordered by: totaltime, desc
+
+name                                  ncall  tsub      ttot      tavg      
+doc4.py:5 foo                         1      0.000030  2.503808  2.503808
+doc4.py:11 bar                        1      0.000012  2.002492  2.002492
+doc4.py:15 baz                        1      0.000013  1.001397  1.001397
+'''
+```
+
+### Profile a gevent application:
+
+You can use yappi to profile greenlet applications now!
+
+```python
+import yappi
+from greenlet import greenlet
+import time
+
+class GreenletA(greenlet):
+    def run(self):
+        time.sleep(1)
+
+yappi.set_context_backend("greenlet")
+yappi.set_clock_type("wall")
+
+yappi.start(builtins=True)
+a = GreenletA()
+a.switch()
+yappi.stop()
+
+yappi.get_func_stats().print_all()
+'''
+name                                  ncall  tsub      ttot      tavg
+tests/test_random.py:6 GreenletA.run  1      0.000007  1.000494  1.000494
+time.sleep                            1      1.000487  1.000487  1.000487
+'''
+```
+
+## Documentation
+
+- 
[Introduction](https://github.com/sumerc/yappi/blob/master/doc/introduction.md)
+- [Clock Types](https://github.com/sumerc/yappi/blob/master/doc/clock_types.md)
+- [API](https://github.com/sumerc/yappi/blob/master/doc/api.md)
+- [Coroutine 
Profiling](https://github.com/sumerc/yappi/blob/master/doc/coroutine-profiling.md)
 _(new in 1.2)_
+- [Greenlet 
Profiling](https://github.com/sumerc/yappi/blob/master/doc/greenlet-profiling.md)
 _(new in 1.3)_
+
+  Note: Yes. I know I should be moving docs to readthedocs.io. Stay tuned!
+
+
+## Related Talks
+
+  Special thanks to A.Jesse Jiryu Davis:
+- [Python Performance Profiling: The Guts And The Glory (PyCon 
2015)](https://www.youtube.com/watch?v=4uJWWXYHxaM)
+
+## PyCharm Integration
+
+Yappi is the default profiler in `PyCharm`. If you have Yappi installed, 
`PyCharm` will use it. See [the 
official](https://www.jetbrains.com/help/pycharm/profiler.html) documentation 
for more details.
+
+
+
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/yappi-1.3.3/README.md new/yappi-1.3.5/README.md
--- old/yappi-1.3.3/README.md   2021-08-09 17:48:02.000000000 +0200
+++ new/yappi-1.3.5/README.md   2021-12-08 14:47:16.000000000 +0100
@@ -4,14 +4,13 @@
 
 <h1 align="center">Yappi</h1>
 <p align="center">
-    Yet Another Python Profiler, but this time 
<b>thread&coroutine&greenlet</b> aware.
+    A tracing profiler that is <b>thread&coroutine&greenlet</b> aware.
 </p>
 
 <p align="center">
     <img src="https://www.travis-ci.org/sumerc/yappi.svg?branch=master";>
     <img 
src="https://github.com/sumerc/yappi/workflows/CI/badge.svg?branch=master";>
     <img src="https://img.shields.io/pypi/v/yappi.svg";>
-    <img src="https://img.shields.io/pypi/dw/yappi.svg";>
     <img src="https://img.shields.io/pypi/pyversions/yappi.svg";>
     <img src="https://img.shields.io/github/last-commit/sumerc/yappi.svg";>
     <img src="https://img.shields.io/github/license/sumerc/yappi.svg";>
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/yappi-1.3.3/setup.py new/yappi-1.3.5/setup.py
--- old/yappi-1.3.3/setup.py    2021-10-04 09:59:24.000000000 +0200
+++ new/yappi-1.3.5/setup.py    2022-05-20 11:27:27.000000000 +0200
@@ -12,7 +12,7 @@
 
 HOMEPAGE = "https://github.com/sumerc/yappi";
 NAME = "yappi"
-VERSION = "1.3.3"
+VERSION = "1.3.5"
 _DEBUG = False  # compile/link code for debugging
 _PROFILE = False  # profile yappi itself
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/yappi-1.3.3/tests/test_asyncio.py 
new/yappi-1.3.5/tests/test_asyncio.py
--- old/yappi-1.3.3/tests/test_asyncio.py       2021-08-09 17:48:02.000000000 
+0200
+++ new/yappi-1.3.5/tests/test_asyncio.py       2021-12-08 14:41:40.000000000 
+0100
@@ -12,6 +12,33 @@
 
 class SingleThreadTests(YappiUnitTestCase):
 
+    def test_issue58(self):
+
+        @asyncio.coroutine
+        def mt(d):
+            t = asyncio.Task(async_sleep(3 + d))
+            yield from async_sleep(3)
+            yield from t
+
+        yappi.set_clock_type('wall')
+
+        with yappi.run():
+            asyncio.get_event_loop().run_until_complete(mt(-2))
+        r1 = '''
+        async_sleep 2      0  4.005451  2.002725
+        '''
+        stats = yappi.get_func_stats()
+        self.assert_traces_almost_equal(r1, stats)
+        yappi.clear_stats()
+
+        with yappi.run():
+            asyncio.get_event_loop().run_until_complete(mt(1))
+        r1 = '''
+        async_sleep 2      0  7.006886  3.503443
+        '''
+        stats = yappi.get_func_stats()
+        self.assert_traces_almost_equal(r1, stats)
+
     def test_recursive_coroutine(self):
 
         @asyncio.coroutine
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/yappi-1.3.3/yappi/_yappi.c 
new/yappi-1.3.5/yappi/_yappi.c
--- old/yappi-1.3.3/yappi/_yappi.c      2021-08-09 17:48:02.000000000 +0200
+++ new/yappi-1.3.5/yappi/_yappi.c      2022-05-20 11:23:48.000000000 +0200
@@ -625,6 +625,17 @@
     return ((_pit *)it->val);
 }
 
+static PyObject *_get_locals(PyFrameObject *fobj) {
+#if PY_MAJOR_VERSION >= 3 && PY_MINOR_VERSION >= 10
+    return PyEval_GetLocals();
+#else
+    PyFrame_FastToLocals(fobj);
+    PyObject* locals = fobj->f_locals;
+    PyFrame_LocalsToFast(fobj, 0);
+    return locals;
+#endif
+}
+
 // maps the PyCodeObject to our internal pit item via hash table.
 static _pit *
 _code2pit(PyFrameObject *fobj, uintptr_t current_tag)
@@ -658,12 +669,11 @@
     pit->fn_descriptor = (PyObject *)cobj;
     Py_INCREF(cobj);
 
-    PyFrame_FastToLocals(fobj);
     if (cobj->co_argcount) {
         const char *firstarg = 
PyStr_AS_CSTRING(PyTuple_GET_ITEM(cobj->co_varnames, 0));
 
         if (!strcmp(firstarg, "self")) {
-            PyObject* locals = fobj->f_locals;
+            PyObject* locals = _get_locals(fobj);
             if (locals) {
                 PyObject* self = PyDict_GetItemString(locals, "self");
                 if (self) {
@@ -685,8 +695,6 @@
         pit->name = cobj->co_name;
     }
 
-    PyFrame_LocalsToFast(fobj, 0);
-
     return pit;
 }
 
@@ -859,6 +867,18 @@
     return result;
 }
 
+static void
+_print_coros(_pit *cp)
+{
+    _coro *coro;
+
+    printf("Printing coroutines on %s...\n", PyStr_AS_CSTRING(cp->name));
+    coro = cp->coroutines;
+    while(coro) {
+        printf("CORO %s %p %lld\n", PyStr_AS_CSTRING(cp->name), coro->frame, 
coro->t0);
+        coro = (_coro *)coro->next;
+    }
+}
 
 static int 
 _coro_enter(_pit *cp, PyFrameObject *frame)
@@ -879,7 +899,7 @@
         coro = (_coro *)coro->next;
     }
 
-    //printf("CORO ENTER %s %p\n", PyStr_AS_CSTRING(cp->name), frame);
+    //printf("CORO ENTER %s %p %lld\n", PyStr_AS_CSTRING(cp->name), frame, 
tickcount());
 
     coro = ymalloc(sizeof(_coro));
     if (!coro) {
@@ -909,7 +929,7 @@
             return 0;
     }
 
-    //printf("CORO EXIT %s %p\n", PyStr_AS_CSTRING(cp->name), frame);
+    //printf("CORO EXIT %s %p %lld\n", PyStr_AS_CSTRING(cp->name), frame, 
tickcount());
 
     prev = NULL;
     coro = cp->coroutines;
@@ -919,9 +939,10 @@
             if (prev) {
                 prev->next = coro->next;
             } else {
-                cp->coroutines = NULL;
+                cp->coroutines = (_coro *)coro->next;
             }
             yfree(coro);
+            //printf("CORO EXIT(elapsed) %s %p %lld\n", 
PyStr_AS_CSTRING(cp->name), frame, tickcount()-_t0);
             return tickcount() - _t0;
         }
         prev = coro;
@@ -2179,6 +2200,7 @@
     test_timings = NULL;
 
     SUPPRESS_WARNING(_DebugPrintObjects);
+    SUPPRESS_WARNING(_print_coros);
 
     if (!_init_profiler()) {
         PyErr_SetString(YappiProfileError, "profiler cannot be initialized.");
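Note: the second changelog item ("Fix cp->coroutines becomes NULL when head is removed") is the one-line change in `_coro_exit()` above: when the matching coroutine record sits at the head of the per-function singly linked list, the head pointer is now advanced to `coro->next` rather than cleared, which previously dropped every remaining entry. A generic sketch of that removal pattern, using illustrative names instead of yappi's internal `_coro` struct:

```c
#include <stdlib.h>

/* Illustrative node type; the real code chains per-coroutine timing
 * records (_coro) off the function's profile entry. */
typedef struct node {
    void *key;
    struct node *next;
} node;

/* Unlink and free the node matching key, returning the (possibly new) head.
 * The fix: removing the head advances the head pointer to the next node
 * instead of setting it to NULL and losing the rest of the list. */
static node *
remove_node(node *head, void *key)
{
    node *prev = NULL;
    for (node *cur = head; cur != NULL; prev = cur, cur = cur->next) {
        if (cur->key == key) {
            if (prev)
                prev->next = cur->next;
            else
                head = cur->next;   /* was effectively: head = NULL */
            free(cur);
            return head;
        }
    }
    return head;   /* no match; list unchanged */
}
```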
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/yappi-1.3.3/yappi/yappi.egg-info/PKG-INFO 
new/yappi-1.3.5/yappi/yappi.egg-info/PKG-INFO
--- old/yappi-1.3.3/yappi/yappi.egg-info/PKG-INFO       2021-10-05 
15:03:24.000000000 +0200
+++ new/yappi-1.3.5/yappi/yappi.egg-info/PKG-INFO       2022-05-20 
11:47:08.000000000 +0200
@@ -1,302 +1,11 @@
 Metadata-Version: 2.1
 Name: yappi
-Version: 1.3.3
+Version: 1.3.5
 Summary: Yet Another Python Profiler
 Home-page: https://github.com/sumerc/yappi
 Author: Sumer Cip
 Author-email: [email protected]
 License: MIT
-Description: <p align="center">
-            <img 
src="https://raw.githubusercontent.com/sumerc/yappi/master/Misc/logo.png"; 
alt="yappi">
-        </p>
-        
-        <h1 align="center">Yappi</h1>
-        <p align="center">
-            Yet Another Python Profiler, but this time 
<b>thread&coroutine&greenlet</b> aware.
-        </p>
-        
-        <p align="center">
-            <img 
src="https://www.travis-ci.org/sumerc/yappi.svg?branch=master";>
-            <img 
src="https://github.com/sumerc/yappi/workflows/CI/badge.svg?branch=master";>
-            <img src="https://img.shields.io/pypi/v/yappi.svg";>
-            <img src="https://img.shields.io/pypi/dw/yappi.svg";>
-            <img src="https://img.shields.io/pypi/pyversions/yappi.svg";>
-            <img 
src="https://img.shields.io/github/last-commit/sumerc/yappi.svg";>
-            <img src="https://img.shields.io/github/license/sumerc/yappi.svg";>
-        </p>
-        
-        ## Highlights
-        
-        - **Fast**: Yappi is fast. It is completely written in C and lots of 
love&care went into making it fast.
-        - **Unique**: Yappi supports multithreaded, 
[asyncio](https://github.com/sumerc/yappi/blob/master/doc/coroutine-profiling.md)
 and 
[gevent](https://github.com/sumerc/yappi/blob/master/doc/greenlet-profiling.md) 
profiling. Tagging/filtering multiple profiler results has interesting [use 
cases](https://github.com/sumerc/yappi/blob/master/doc/api.md#set_tag_callback).
-        - **Intuitive**: Profiler can be started/stopped and results can be obtained from any time and any thread.
-        - **Standards Compliant**: Profiler results can be saved in [callgrind](http://valgrind.org/docs/manual/cl-format.html) or [pstat](http://docs.python.org/3.4/library/profile.html#pstats.Stats) formats.
-        - **Rich in Feature set**: Profiler results can show either [Wall Time](https://en.wikipedia.org/wiki/Elapsed_real_time) or actual [CPU Time](http://en.wikipedia.org/wiki/CPU_time) and can be aggregated from different sessions. Various flags are defined for filtering and sorting profiler results.
-        - **Robust**: Yappi had seen years of production usage.
-        
-        ## Motivation
-        
-        CPython standard distribution comes with three deterministic 
profilers. `cProfile`, `Profile` and `hotshot`. `cProfile` is implemented as a 
C module based on `lsprof`, `Profile` is in pure Python and `hotshot` can be 
seen as a small subset of a cProfile. The major issue is that all of these 
profilers lack support for multi-threaded programs and CPU time.
-        
-        If you want to profile a  multi-threaded application, you must give an 
entry point to these profilers and then maybe merge the outputs. None of these 
profilers are designed to work on long-running multi-threaded applications. It 
is also not possible to profile an application that start/stop/retrieve traces 
on the fly with these profilers. 
-        
-        Now fast forwarding to 2019: With the latest improvements on `asyncio` 
library and asynchronous frameworks, most of the current profilers lacks the 
ability to show correct wall/cpu time or even call count information 
per-coroutine. Thus we need a different kind of approach to profile 
asynchronous code. Yappi, with v1.2 introduces the concept of `coroutine 
profiling`. With `coroutine-profiling`, you should be able to profile correct 
wall/cpu time and call count of your coroutine. (including the time spent in 
context switches, too). You can see details 
[here](https://github.com/sumerc/yappi/blob/master/doc/coroutine-profiling.md).
-        
-        
-        ## Installation
-        
-        Can be installed via PyPI
-        
-        ```
-        $ pip install yappi
-        ```
-        
-        OR from the source directly.
-        
-        ```
-        $ pip install git+https://github.com/sumerc/yappi#egg=yappi
-        ```
-        
-        ## Examples
-        
-        ### A simple example:
-        
-        ```python
-        import yappi
-        
-        def a():
-            for _ in range(10000000):  # do something CPU heavy
-                pass
-        
-        yappi.set_clock_type("cpu") # Use set_clock_type("wall") for wall time
-        yappi.start()
-        a()
-        
-        yappi.get_func_stats().print_all()
-        yappi.get_thread_stats().print_all()
-        '''
-        
-        Clock type: CPU
-        Ordered by: totaltime, desc
-        
-        name                                  ncall  tsub      ttot      tavg  
    
-        doc.py:5 a                            1      0.117907  0.117907  
0.117907
-        
-        name           id     tid              ttot      scnt        
-        _MainThread    0      139867147315008  0.118297  1
-        '''
-        ```
-        
-        ### Profile a multithreaded application:
-        
-        You can profile a multithreaded application via Yappi and can easily 
retrieve
-        per-thread profile information by filtering on `ctx_id` with 
`get_func_stats` API.
-        
-        ```python
-        import yappi
-        import time
-        import threading
-        
-        _NTHREAD = 3
-        
-        
-        def _work(n):
-            time.sleep(n * 0.1)
-        
-        
-        yappi.start()
-        
-        threads = []
-        # generate _NTHREAD threads
-        for i in range(_NTHREAD):
-            t = threading.Thread(target=_work, args=(i + 1, ))
-            t.start()
-            threads.append(t)
-        # wait all threads to finish
-        for t in threads:
-            t.join()
-        
-        yappi.stop()
-        
-        # retrieve thread stats by their thread id (given by yappi)
-        threads = yappi.get_thread_stats()
-        for thread in threads:
-            print(
-                "Function stats for (%s) (%d)" % (thread.name, thread.id)
-            )  # it is the Thread.__class__.__name__
-            yappi.get_func_stats(ctx_id=thread.id).print_all()
-        '''
-        Function stats for (Thread) (3)
-        
-        name                                  ncall  tsub      ttot      tavg
-        ..hon3.7/threading.py:859 Thread.run  1      0.000017  0.000062  
0.000062
-        doc3.py:8 _work                       1      0.000012  0.000045  
0.000045
-        
-        Function stats for (Thread) (2)
-        
-        name                                  ncall  tsub      ttot      tavg
-        ..hon3.7/threading.py:859 Thread.run  1      0.000017  0.000065  
0.000065
-        doc3.py:8 _work                       1      0.000010  0.000048  
0.000048
-        
-        
-        Function stats for (Thread) (1)
-        
-        name                                  ncall  tsub      ttot      tavg
-        ..hon3.7/threading.py:859 Thread.run  1      0.000010  0.000043  
0.000043
-        doc3.py:8 _work                       1      0.000006  0.000033  
0.000033
-        '''
-        ```
-        
-        ### Different ways to filter/sort stats:
-        
-        You can use `filter_callback` on `get_func_stats` API to filter on 
functions, modules
-        or whatever available in `YFuncStat` object.
-        
-        ```python
-        import package_a
-        import yappi
-        import sys
-        
-        def a():
-            pass
-        
-        def b():
-            pass
-        
-        yappi.start()
-        a()
-        b()
-        package_a.a()
-        yappi.stop()
-        
-        # filter by module object
-        current_module = sys.modules[__name__]
-        stats = yappi.get_func_stats(
-            filter_callback=lambda x: yappi.module_matches(x, [current_module])
-        )  # x is a yappi.YFuncStat object
-        stats.sort("name", "desc").print_all()
-        '''
-        Clock type: CPU
-        Ordered by: name, desc
-        
-        name                                  ncall  tsub      ttot      tavg
-        doc2.py:10 b                          1      0.000001  0.000001  
0.000001
-        doc2.py:6 a                           1      0.000001  0.000001  
0.000001
-        '''
-        
-        # filter by function object
-        stats = yappi.get_func_stats(
-            filter_callback=lambda x: yappi.func_matches(x, [a, b])
-        ).print_all()
-        '''
-        name                                  ncall  tsub      ttot      tavg
-        doc2.py:6 a                           1      0.000001  0.000001  
0.000001
-        doc2.py:10 b                          1      0.000001  0.000001  
0.000001
-        '''
-        
-        # filter by module name
-        stats = yappi.get_func_stats(filter_callback=lambda x: 'package_a' in 
x.module
-                                     ).print_all()
-        '''
-        name                                  ncall  tsub      ttot      tavg
-        package_a/__init__.py:1 a             1      0.000001  0.000001  
0.000001
-        '''
-        
-        # filter by function name
-        stats = yappi.get_func_stats(filter_callback=lambda x: 'a' in x.name
-                                     ).print_all()
-        '''
-        name                                  ncall  tsub      ttot      tavg
-        doc2.py:6 a                           1      0.000001  0.000001  
0.000001
-        package_a/__init__.py:1 a             1      0.000001  0.000001  
0.000001
-        '''
-        ```
-        
-        ### Profile an asyncio application:
-        
-        You can see that coroutine wall-time's are correctly profiled.
-        
-        ```python
-        import asyncio
-        import yappi
-        
-        async def foo():
-            await asyncio.sleep(1.0)
-            await baz()
-            await asyncio.sleep(0.5)
-        
-        async def bar():
-            await asyncio.sleep(2.0)
-        
-        async def baz():
-            await asyncio.sleep(1.0)
-        
-        yappi.set_clock_type("WALL")
-        with yappi.run():
-            asyncio.run(foo())
-            asyncio.run(bar())
-        yappi.get_func_stats().print_all()
-        '''
-        Clock type: WALL
-        Ordered by: totaltime, desc
-        
-        name                                  ncall  tsub      ttot      tavg  
    
-        doc4.py:5 foo                         1      0.000030  2.503808  
2.503808
-        doc4.py:11 bar                        1      0.000012  2.002492  
2.002492
-        doc4.py:15 baz                        1      0.000013  1.001397  
1.001397
-        '''
-        ```
-        
-        ### Profile a gevent application:
-        
-        You can use yappi to profile greenlet applications now!
-        
-        ```python
-        import yappi
-        from greenlet import greenlet
-        import time
-        
-        class GreenletA(greenlet):
-            def run(self):
-                time.sleep(1)
-        
-        yappi.set_context_backend("greenlet")
-        yappi.set_clock_type("wall")
-        
-        yappi.start(builtins=True)
-        a = GreenletA()
-        a.switch()
-        yappi.stop()
-        
-        yappi.get_func_stats().print_all()
-        '''
-        name                                  ncall  tsub      ttot      tavg
-        tests/test_random.py:6 GreenletA.run  1      0.000007  1.000494  
1.000494
-        time.sleep                            1      1.000487  1.000487  
1.000487
-        '''
-        ```
-        
-        ## Documentation
-        
-        - 
[Introduction](https://github.com/sumerc/yappi/blob/master/doc/introduction.md)
-        - [Clock 
Types](https://github.com/sumerc/yappi/blob/master/doc/clock_types.md)
-        - [API](https://github.com/sumerc/yappi/blob/master/doc/api.md)
-        - [Coroutine 
Profiling](https://github.com/sumerc/yappi/blob/master/doc/coroutine-profiling.md)
 _(new in 1.2)_
-        - [Greenlet 
Profiling](https://github.com/sumerc/yappi/blob/master/doc/greenlet-profiling.md)
 _(new in 1.3)_
-        
-          Note: Yes. I know I should be moving docs to readthedocs.io. Stay 
tuned!
-        
-        
-        ## Related Talks
-        
-          Special thanks to A.Jesse Jiryu Davis:
-        - [Python Performance Profiling: The Guts And The Glory (PyCon 
2015)](https://www.youtube.com/watch?v=4uJWWXYHxaM)
-        
-        ## PyCharm Integration
-        
-        Yappi is the default profiler in `PyCharm`. If you have Yappi 
installed, `PyCharm` will use it. See [the 
official](https://www.jetbrains.com/help/pycharm/profiler.html) documentation 
for more details.
-        
-        
 Keywords: python thread multithread profiler
 Platform: UNKNOWN
 Classifier: Development Status :: 5 - Production/Stable
@@ -317,3 +26,296 @@
 Classifier: Topic :: Software Development :: Libraries :: Python Modules
 Description-Content-Type: text/markdown
 Provides-Extra: test
+License-File: LICENSE
+
+<p align="center">
+    <img 
src="https://raw.githubusercontent.com/sumerc/yappi/master/Misc/logo.png"; 
alt="yappi">
+</p>
+
+<h1 align="center">Yappi</h1>
+<p align="center">
+    A tracing profiler that is <b>thread&coroutine&greenlet</b> aware.
+</p>
+
+<p align="center">
+    <img src="https://www.travis-ci.org/sumerc/yappi.svg?branch=master";>
+    <img 
src="https://github.com/sumerc/yappi/workflows/CI/badge.svg?branch=master";>
+    <img src="https://img.shields.io/pypi/v/yappi.svg";>
+    <img src="https://img.shields.io/pypi/pyversions/yappi.svg";>
+    <img src="https://img.shields.io/github/last-commit/sumerc/yappi.svg";>
+    <img src="https://img.shields.io/github/license/sumerc/yappi.svg";>
+</p>
+
+## Highlights
+
+- **Fast**: Yappi is fast. It is completely written in C and lots of love&care 
went into making it fast.
+- **Unique**: Yappi supports multithreaded, 
[asyncio](https://github.com/sumerc/yappi/blob/master/doc/coroutine-profiling.md)
 and 
[gevent](https://github.com/sumerc/yappi/blob/master/doc/greenlet-profiling.md) 
profiling. Tagging/filtering multiple profiler results has interesting [use 
cases](https://github.com/sumerc/yappi/blob/master/doc/api.md#set_tag_callback).
+- **Intuitive**: Profiler can be started/stopped and results can be obtained from any time and any thread.
+- **Standards Compliant**: Profiler results can be saved in [callgrind](http://valgrind.org/docs/manual/cl-format.html) or [pstat](http://docs.python.org/3.4/library/profile.html#pstats.Stats) formats.
+- **Rich in Feature set**: Profiler results can show either [Wall Time](https://en.wikipedia.org/wiki/Elapsed_real_time) or actual [CPU Time](http://en.wikipedia.org/wiki/CPU_time) and can be aggregated from different sessions. Various flags are defined for filtering and sorting profiler results.
+- **Robust**: Yappi had seen years of production usage.
+
+## Motivation
+
+CPython standard distribution comes with three deterministic profilers. 
`cProfile`, `Profile` and `hotshot`. `cProfile` is implemented as a C module 
based on `lsprof`, `Profile` is in pure Python and `hotshot` can be seen as a 
small subset of a cProfile. The major issue is that all of these profilers lack 
support for multi-threaded programs and CPU time.
+
+If you want to profile a  multi-threaded application, you must give an entry 
point to these profilers and then maybe merge the outputs. None of these 
profilers are designed to work on long-running multi-threaded applications. It 
is also not possible to profile an application that start/stop/retrieve traces 
on the fly with these profilers. 
+
+Now fast forwarding to 2019: With the latest improvements on `asyncio` library 
and asynchronous frameworks, most of the current profilers lacks the ability to 
show correct wall/cpu time or even call count information per-coroutine. Thus 
we need a different kind of approach to profile asynchronous code. Yappi, with 
v1.2 introduces the concept of `coroutine profiling`. With 
`coroutine-profiling`, you should be able to profile correct wall/cpu time and 
call count of your coroutine. (including the time spent in context switches, 
too). You can see details 
[here](https://github.com/sumerc/yappi/blob/master/doc/coroutine-profiling.md).
+
+
+## Installation
+
+Can be installed via PyPI
+
+```
+$ pip install yappi
+```
+
+OR from the source directly.
+
+```
+$ pip install git+https://github.com/sumerc/yappi#egg=yappi
+```
+
+## Examples
+
+### A simple example:
+
+```python
+import yappi
+
+def a():
+    for _ in range(10000000):  # do something CPU heavy
+        pass
+
+yappi.set_clock_type("cpu") # Use set_clock_type("wall") for wall time
+yappi.start()
+a()
+
+yappi.get_func_stats().print_all()
+yappi.get_thread_stats().print_all()
+'''
+
+Clock type: CPU
+Ordered by: totaltime, desc
+
+name                                  ncall  tsub      ttot      tavg      
+doc.py:5 a                            1      0.117907  0.117907  0.117907
+
+name           id     tid              ttot      scnt        
+_MainThread    0      139867147315008  0.118297  1
+'''
+```
+
+### Profile a multithreaded application:
+
+You can profile a multithreaded application via Yappi and can easily retrieve
+per-thread profile information by filtering on `ctx_id` with `get_func_stats` 
API.
+
+```python
+import yappi
+import time
+import threading
+
+_NTHREAD = 3
+
+
+def _work(n):
+    time.sleep(n * 0.1)
+
+
+yappi.start()
+
+threads = []
+# generate _NTHREAD threads
+for i in range(_NTHREAD):
+    t = threading.Thread(target=_work, args=(i + 1, ))
+    t.start()
+    threads.append(t)
+# wait all threads to finish
+for t in threads:
+    t.join()
+
+yappi.stop()
+
+# retrieve thread stats by their thread id (given by yappi)
+threads = yappi.get_thread_stats()
+for thread in threads:
+    print(
+        "Function stats for (%s) (%d)" % (thread.name, thread.id)
+    )  # it is the Thread.__class__.__name__
+    yappi.get_func_stats(ctx_id=thread.id).print_all()
+'''
+Function stats for (Thread) (3)
+
+name                                  ncall  tsub      ttot      tavg
+..hon3.7/threading.py:859 Thread.run  1      0.000017  0.000062  0.000062
+doc3.py:8 _work                       1      0.000012  0.000045  0.000045
+
+Function stats for (Thread) (2)
+
+name                                  ncall  tsub      ttot      tavg
+..hon3.7/threading.py:859 Thread.run  1      0.000017  0.000065  0.000065
+doc3.py:8 _work                       1      0.000010  0.000048  0.000048
+
+
+Function stats for (Thread) (1)
+
+name                                  ncall  tsub      ttot      tavg
+..hon3.7/threading.py:859 Thread.run  1      0.000010  0.000043  0.000043
+doc3.py:8 _work                       1      0.000006  0.000033  0.000033
+'''
+```
+
+### Different ways to filter/sort stats:
+
+You can use `filter_callback` on `get_func_stats` API to filter on functions, 
modules
+or whatever available in `YFuncStat` object.
+
+```python
+import package_a
+import yappi
+import sys
+
+def a():
+    pass
+
+def b():
+    pass
+
+yappi.start()
+a()
+b()
+package_a.a()
+yappi.stop()
+
+# filter by module object
+current_module = sys.modules[__name__]
+stats = yappi.get_func_stats(
+    filter_callback=lambda x: yappi.module_matches(x, [current_module])
+)  # x is a yappi.YFuncStat object
+stats.sort("name", "desc").print_all()
+'''
+Clock type: CPU
+Ordered by: name, desc
+
+name                                  ncall  tsub      ttot      tavg
+doc2.py:10 b                          1      0.000001  0.000001  0.000001
+doc2.py:6 a                           1      0.000001  0.000001  0.000001
+'''
+
+# filter by function object
+stats = yappi.get_func_stats(
+    filter_callback=lambda x: yappi.func_matches(x, [a, b])
+).print_all()
+'''
+name                                  ncall  tsub      ttot      tavg
+doc2.py:6 a                           1      0.000001  0.000001  0.000001
+doc2.py:10 b                          1      0.000001  0.000001  0.000001
+'''
+
+# filter by module name
+stats = yappi.get_func_stats(filter_callback=lambda x: 'package_a' in x.module
+                             ).print_all()
+'''
+name                                  ncall  tsub      ttot      tavg
+package_a/__init__.py:1 a             1      0.000001  0.000001  0.000001
+'''
+
+# filter by function name
+stats = yappi.get_func_stats(filter_callback=lambda x: 'a' in x.name
+                             ).print_all()
+'''
+name                                  ncall  tsub      ttot      tavg
+doc2.py:6 a                           1      0.000001  0.000001  0.000001
+package_a/__init__.py:1 a             1      0.000001  0.000001  0.000001
+'''
+```
+
+### Profile an asyncio application:
+
+You can see that coroutine wall-time's are correctly profiled.
+
+```python
+import asyncio
+import yappi
+
+async def foo():
+    await asyncio.sleep(1.0)
+    await baz()
+    await asyncio.sleep(0.5)
+
+async def bar():
+    await asyncio.sleep(2.0)
+
+async def baz():
+    await asyncio.sleep(1.0)
+
+yappi.set_clock_type("WALL")
+with yappi.run():
+    asyncio.run(foo())
+    asyncio.run(bar())
+yappi.get_func_stats().print_all()
+'''
+Clock type: WALL
+Ordered by: totaltime, desc
+
+name                                  ncall  tsub      ttot      tavg      
+doc4.py:5 foo                         1      0.000030  2.503808  2.503808
+doc4.py:11 bar                        1      0.000012  2.002492  2.002492
+doc4.py:15 baz                        1      0.000013  1.001397  1.001397
+'''
+```
+
+### Profile a gevent application:
+
+You can use yappi to profile greenlet applications now!
+
+```python
+import yappi
+from greenlet import greenlet
+import time
+
+class GreenletA(greenlet):
+    def run(self):
+        time.sleep(1)
+
+yappi.set_context_backend("greenlet")
+yappi.set_clock_type("wall")
+
+yappi.start(builtins=True)
+a = GreenletA()
+a.switch()
+yappi.stop()
+
+yappi.get_func_stats().print_all()
+'''
+name                                  ncall  tsub      ttot      tavg
+tests/test_random.py:6 GreenletA.run  1      0.000007  1.000494  1.000494
+time.sleep                            1      1.000487  1.000487  1.000487
+'''
+```
+
+## Documentation
+
+- 
[Introduction](https://github.com/sumerc/yappi/blob/master/doc/introduction.md)
+- [Clock Types](https://github.com/sumerc/yappi/blob/master/doc/clock_types.md)
+- [API](https://github.com/sumerc/yappi/blob/master/doc/api.md)
+- [Coroutine 
Profiling](https://github.com/sumerc/yappi/blob/master/doc/coroutine-profiling.md)
 _(new in 1.2)_
+- [Greenlet 
Profiling](https://github.com/sumerc/yappi/blob/master/doc/greenlet-profiling.md)
 _(new in 1.3)_
+
+  Note: Yes. I know I should be moving docs to readthedocs.io. Stay tuned!
+
+
+## Related Talks
+
+  Special thanks to A.Jesse Jiryu Davis:
+- [Python Performance Profiling: The Guts And The Glory (PyCon 
2015)](https://www.youtube.com/watch?v=4uJWWXYHxaM)
+
+## PyCharm Integration
+
+Yappi is the default profiler in `PyCharm`. If you have Yappi installed, 
`PyCharm` will use it. See [the 
official](https://www.jetbrains.com/help/pycharm/profiler.html) documentation 
for more details.
+
+
+
