https://github.com/python/cpython/commit/c92a473a71a0c395df57d31cd49900057da3c25b
commit: c92a473a71a0c395df57d31cd49900057da3c25b
branch: 3.11
author: Serhiy Storchaka <storch...@gmail.com>
committer: serhiy-storchaka <storch...@gmail.com>
date: 2024-01-10T13:13:27Z
summary:

[3.11] gh-113664: Improve style of Big O notation (GH-113695) (GH-113910)

Use italics to make it look like mathematical formulas.
(cherry picked from commit a8629816c6c0e6770248a60529fd7c9ba08aad55)

files:
M Doc/faq/design.rst
M Doc/glossary.rst
M Doc/library/asyncio-policy.rst
M Doc/library/bisect.rst
M Doc/library/collections.rst
M Doc/library/contextvars.rst
M Doc/library/heapq.rst
M Doc/library/select.rst
M Doc/reference/datamodel.rst
M Doc/using/cmdline.rst
M Doc/whatsnew/2.3.rst
M Doc/whatsnew/2.7.rst
M Doc/whatsnew/3.3.rst
M Misc/NEWS.d/3.5.0a1.rst
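The pattern applied across all of these files is the same: the *O* is set in italics with reST emphasis, variables such as *n* are italicized too, and a backslash-escaped space (``\ ``) joins the emphasized ``*O*`` to the opening parenthesis without rendering a visible space. A before/after sketch of the markup, using the collections.rst hunk below as the example:

```rst
Old markup: Indexed access is O(1) at both ends but slows to O(n) in the middle.
New markup: Indexed access is *O*\ (1) at both ends but slows to *O*\ (*n*) in the middle.
```

Exponents keep the ``:sup:`` role, so quadratic complexity is written ``*O*\ (*n*\ :sup:`2`)``.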

diff --git a/Doc/faq/design.rst b/Doc/faq/design.rst
index 83c0152c85e84a..d0c136deabad76 100644
--- a/Doc/faq/design.rst
+++ b/Doc/faq/design.rst
@@ -449,7 +449,7 @@ on the key and a per-process seed; for example, "Python" could hash to
 to 1142331976.  The hash code is then used to calculate a location in an
 internal array where the value will be stored.  Assuming that you're storing
 keys that all have different hash values, this means that dictionaries take
-constant time -- O(1), in Big-O notation -- to retrieve a key.
+constant time -- *O*\ (1), in Big-O notation -- to retrieve a key.
 
 
 Why must dictionary keys be immutable?
diff --git a/Doc/glossary.rst b/Doc/glossary.rst
index ec6cec3acc7939..7dd178d5ad34f9 100644
--- a/Doc/glossary.rst
+++ b/Doc/glossary.rst
@@ -741,7 +741,7 @@ Glossary
    list
       A built-in Python :term:`sequence`.  Despite its name it is more akin
       to an array in other languages than to a linked list since access to
-      elements is O(1).
+      elements is *O*\ (1).
 
    list comprehension
       A compact way to process all or part of the elements in a sequence and
diff --git a/Doc/library/asyncio-policy.rst b/Doc/library/asyncio-policy.rst
index f846f76ca095f4..3cd0f1e9d23cf3 100644
--- a/Doc/library/asyncio-policy.rst
+++ b/Doc/library/asyncio-policy.rst
@@ -226,7 +226,7 @@ implementation used by the asyncio event loop:
 
   It works reliably even when the asyncio event loop is run in a non-main OS thread.

-   There is no noticeable overhead when handling a big number of children (*O(1)* each
+   There is no noticeable overhead when handling a big number of children (*O*\ (1) each
   time a child terminates), but starting a thread per process requires extra memory.
 
    This watcher is used by default.
@@ -246,7 +246,7 @@ implementation used by the asyncio event loop:
    watcher is installed.
 
    The solution is safe but it has a significant overhead when
-   handling a big number of processes (*O(n)* each time a
+   handling a big number of processes (*O*\ (*n*) each time a
    :py:data:`SIGCHLD` is received).
 
    .. versionadded:: 3.8
@@ -260,7 +260,7 @@ implementation used by the asyncio event loop:
    The watcher avoids disrupting other code spawning processes
    by polling every process explicitly on a :py:data:`SIGCHLD` signal.
 
-   This solution is as safe as :class:`MultiLoopChildWatcher` and has the same *O(N)*
+   This solution is as safe as :class:`MultiLoopChildWatcher` and has the same *O*\ (*n*)
    complexity but requires a running event loop in the main thread to work.
 
 .. class:: FastChildWatcher
@@ -270,7 +270,7 @@ implementation used by the asyncio event loop:
    processes and waiting for their termination.
 
    There is no noticeable overhead when handling a big number of
-   children (*O(1)* each time a child terminates).
+   children (*O*\ (1) each time a child terminates).
 
    This solution requires a running event loop in the main thread to work, as
    :class:`SafeChildWatcher`.
diff --git a/Doc/library/bisect.rst b/Doc/library/bisect.rst
index 75d16dc3e1021c..92372f12ca4425 100644
--- a/Doc/library/bisect.rst
+++ b/Doc/library/bisect.rst
@@ -79,7 +79,7 @@ The following functions are provided:
    To support inserting records in a table, the *key* function (if any) is
    applied to *x* for the search step but not for the insertion step.
 
-   Keep in mind that the ``O(log n)`` search is dominated by the slow O(n)
+   Keep in mind that the *O*\ (log *n*) search is dominated by the slow *O*\ (*n*)
    insertion step.
 
    .. versionchanged:: 3.10
@@ -99,7 +99,7 @@ The following functions are provided:
    To support inserting records in a table, the *key* function (if any) is
    applied to *x* for the search step but not for the insertion step.
 
-   Keep in mind that the ``O(log n)`` search is dominated by the slow O(n)
+   Keep in mind that the *O*\ (log *n*) search is dominated by the slow *O*\ (*n*)
    insertion step.
 
    .. versionchanged:: 3.10
@@ -115,7 +115,7 @@ thoughts in mind:
 * Bisection is effective for searching ranges of values.
   For locating specific values, dictionaries are more performant.
 
-* The *insort()* functions are ``O(n)`` because the logarithmic search step
+* The *insort()* functions are *O*\ (*n*) because the logarithmic search step
   is dominated by the linear time insertion step.
 
 * The search functions are stateless and discard key function results after
diff --git a/Doc/library/collections.rst b/Doc/library/collections.rst
index a29cc9530390bc..10cd1e8e2f1e1d 100644
--- a/Doc/library/collections.rst
+++ b/Doc/library/collections.rst
@@ -457,10 +457,10 @@ or subtracting from an empty counter.
    Deques are a generalization of stacks and queues (the name is pronounced "deck"
    and is short for "double-ended queue").  Deques support thread-safe, memory
    efficient appends and pops from either side of the deque with approximately the
-    same O(1) performance in either direction.
+    same *O*\ (1) performance in either direction.

    Though :class:`list` objects support similar operations, they are optimized for
-    fast fixed-length operations and incur O(n) memory movement costs for
+    fast fixed-length operations and incur *O*\ (*n*) memory movement costs for
     ``pop(0)`` and ``insert(0, v)`` operations which change both the size and
     position of the underlying data representation.
 
@@ -584,7 +584,7 @@ or subtracting from an empty counter.
 In addition to the above, deques support iteration, pickling, ``len(d)``,
``reversed(d)``, ``copy.copy(d)``, ``copy.deepcopy(d)``, membership testing with
 the :keyword:`in` operator, and subscript references such as ``d[0]`` to access
-the first element.  Indexed access is O(1) at both ends but slows to O(n) in
+the first element.  Indexed access is *O*\ (1) at both ends but slows to *O*\ (*n*) in
 the middle.  For fast random access, use lists instead.
 
 Starting in version 3.5, deques support ``__add__()``, ``__mul__()``,
diff --git a/Doc/library/contextvars.rst b/Doc/library/contextvars.rst
index 0ac2f3d85749b7..647832447de946 100644
--- a/Doc/library/contextvars.rst
+++ b/Doc/library/contextvars.rst
@@ -131,7 +131,7 @@ Manual Context Management
       ctx: Context = copy_context()
       print(list(ctx.items()))
 
-   The function has an O(1) complexity, i.e. works equally fast for
+   The function has an *O*\ (1) complexity, i.e. works equally fast for
    contexts with a few context variables and for contexts that have
    a lot of them.
 
diff --git a/Doc/library/heapq.rst b/Doc/library/heapq.rst
index 8b00f7b27959b6..ddbada13bddf5b 100644
--- a/Doc/library/heapq.rst
+++ b/Doc/library/heapq.rst
@@ -270,7 +270,7 @@ winner.  The simplest algorithmic way to remove it and find the "next" winner is
to move some loser (let's say cell 30 in the diagram above) into the 0 position,
 and then percolate this new 0 down the tree, exchanging values, until the
 invariant is re-established. This is clearly logarithmic on the total number of
-items in the tree. By iterating over all items, you get an O(n log n) sort.
+items in the tree. By iterating over all items, you get an *O*\ (*n* log *n*) sort.
 
 A nice feature of this sort is that you can efficiently insert new items while
the sort is going on, provided that the inserted items are not "better" than the
diff --git a/Doc/library/select.rst b/Doc/library/select.rst
index c2941e628d9d78..a0058046d0ce4c 100644
--- a/Doc/library/select.rst
+++ b/Doc/library/select.rst
@@ -185,8 +185,8 @@ The module defines the following:
 -----------------------------
 
 Solaris and derivatives have ``/dev/poll``. While :c:func:`!select` is
-O(highest file descriptor) and :c:func:`!poll` is O(number of file
-descriptors), ``/dev/poll`` is O(active file descriptors).
+*O*\ (*highest file descriptor*) and :c:func:`!poll` is *O*\ (*number of file
+descriptors*), ``/dev/poll`` is *O*\ (*active file descriptors*).
 
 ``/dev/poll`` behaviour is very close to the standard :c:func:`!poll`
 object.
@@ -381,8 +381,8 @@ scalability for network servers that service many, many clients at the same
time. :c:func:`!poll` scales better because the system call only requires listing
the file descriptors of interest, while :c:func:`!select` builds a bitmap, turns
 on bits for the fds of interest, and then afterward the whole bitmap has to be
-linearly scanned again. :c:func:`!select` is O(highest file descriptor), while
-:c:func:`!poll` is O(number of file descriptors).
+linearly scanned again. :c:func:`!select` is *O*\ (*highest file descriptor*), while
+:c:func:`!poll` is *O*\ (*number of file descriptors*).
 
 
 .. method:: poll.register(fd[, eventmask])
diff --git a/Doc/reference/datamodel.rst b/Doc/reference/datamodel.rst
index e04827e34f7927..54ba171ec7696c 100644
--- a/Doc/reference/datamodel.rst
+++ b/Doc/reference/datamodel.rst
@@ -1854,7 +1854,7 @@ Basic customization
 
       This is intended to provide protection against a denial-of-service caused
       by carefully chosen inputs that exploit the worst case performance of a
-      dict insertion, O(n\ :sup:`2`) complexity.  See
+      dict insertion, *O*\ (*n*\ :sup:`2`) complexity.  See
       http://ocert.org/advisories/ocert-2011-003.html for details.
 
       Changing hash values affects the iteration order of sets.
diff --git a/Doc/using/cmdline.rst b/Doc/using/cmdline.rst
index 5389d33023096d..0ddf2964ab11f4 100644
--- a/Doc/using/cmdline.rst
+++ b/Doc/using/cmdline.rst
@@ -366,7 +366,7 @@ Miscellaneous options
 
    Hash randomization is intended to provide protection against a
    denial-of-service caused by carefully chosen inputs that exploit the worst
-   case performance of a dict construction, O(n\ :sup:`2`) complexity.  See
+   case performance of a dict construction, *O*\ (*n*\ :sup:`2`) complexity.  See
    http://ocert.org/advisories/ocert-2011-003.html for details.
 
    :envvar:`PYTHONHASHSEED` allows you to set a fixed value for the hash
diff --git a/Doc/whatsnew/2.3.rst b/Doc/whatsnew/2.3.rst
index 01c8879ba83870..6127a5df966a18 100644
--- a/Doc/whatsnew/2.3.rst
+++ b/Doc/whatsnew/2.3.rst
@@ -1196,7 +1196,7 @@ Optimizations
 
 * Multiplication of large long integers is now much faster thanks to an
  implementation of Karatsuba multiplication, an algorithm that scales better than
-  the O(n\*n) required for the grade-school multiplication algorithm.  (Original
+  the *O*\ (*n*\ :sup:`2`) required for the grade-school multiplication algorithm.  (Original
   patch by Christopher A. Craig, and significantly reworked by Tim Peters.)
 
 * The ``SET_LINENO`` opcode is now gone.  This may provide a small speed
@@ -1308,7 +1308,7 @@ complete list of changes, or look through the CVS logs for all the details.
   partially sorted order such that, for every index *k*, ``heap[k] <=
  heap[2*k+1]`` and ``heap[k] <= heap[2*k+2]``.  This makes it quick to remove the
  smallest item, and inserting a new item while maintaining the heap property is
-  O(lg n).  (See https://xlinux.nist.gov/dads//HTML/priorityque.html for more
+  *O*\ (log *n*).  (See https://xlinux.nist.gov/dads//HTML/priorityque.html for more
   information about the priority queue data structure.)
 
  The :mod:`heapq` module provides :func:`~heapq.heappush` and :func:`~heapq.heappop` functions
diff --git a/Doc/whatsnew/2.7.rst b/Doc/whatsnew/2.7.rst
index 099090487ed94a..25206c143c340b 100644
--- a/Doc/whatsnew/2.7.rst
+++ b/Doc/whatsnew/2.7.rst
@@ -282,7 +282,7 @@ How does the :class:`~collections.OrderedDict` work?  It maintains a
 doubly linked list of keys, appending new keys to the list as they're inserted.
 A secondary dictionary maps keys to their corresponding list node, so
 deletion doesn't have to traverse the entire linked list and therefore
-remains O(1).
+remains *O*\ (1).
 
 The standard library now supports use of ordered dictionaries in several
 modules.
diff --git a/Doc/whatsnew/3.3.rst b/Doc/whatsnew/3.3.rst
index 3bc3d2f7c3740e..608a8b54c10ffd 100644
--- a/Doc/whatsnew/3.3.rst
+++ b/Doc/whatsnew/3.3.rst
@@ -174,7 +174,7 @@ Features
   b or c are now hashable.  (Contributed by Antoine Pitrou in :issue:`13411`.)
 
 * Arbitrary slicing of any 1-D arrays type is supported. For example, it
-  is now possible to reverse a memoryview in O(1) by using a negative step.
+  is now possible to reverse a memoryview in *O*\ (1) by using a negative step.
 
 API changes
 -----------
diff --git a/Misc/NEWS.d/3.5.0a1.rst b/Misc/NEWS.d/3.5.0a1.rst
index 96e59206cb1291..f793323f217070 100644
--- a/Misc/NEWS.d/3.5.0a1.rst
+++ b/Misc/NEWS.d/3.5.0a1.rst
@@ -2648,7 +2648,7 @@ module.
 .. nonce: THJSYB
 .. section: Library
 
-Changed FeedParser feed() to avoid O(N\ :sup:`2`) behavior when parsing long line.
+Changed FeedParser feed() to avoid *O*\ (*n*\ :sup:`2`) behavior when parsing long line.
 Original patch by Raymond Hettinger.
 
 ..
