https://github.com/python/cpython/commit/082998c3af5bfd163da5c0198cd10acf588a44ba
commit: 082998c3af5bfd163da5c0198cd10acf588a44ba
branch: 3.12
author: Miss Islington (bot) <31488909+miss-isling...@users.noreply.github.com>
committer: serhiy-storchaka <storch...@gmail.com>
date: 2024-01-10T13:08:08Z
summary:

[3.12] gh-113664: Improve style of Big O notation (GH-113695) (GH-113909)

Use cursive (italics) to make it look like mathematical formulas.
(cherry picked from commit a8629816c6c0e6770248a60529fd7c9ba08aad55)

Co-authored-by: Serhiy Storchaka <storch...@gmail.com>
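For context, the patch mechanically replaces plain-text or literal Big O expressions with italicized, math-style markup; the ``\ `` (backslash-space) escape in reST suppresses the space so the parenthesis hugs the italic *O*. A minimal before/after sketch (illustrative sentence, not taken verbatim from the patch):

```rst
.. Before: renders as upright monospace "O(n log n)"
A heapsort over *n* items costs ``O(n log n)`` comparisons.

.. After: renders like a math formula, with cursive O and n
A heapsort over *n* items costs *O*\ (*n* log *n*) comparisons.
```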

files:
M Doc/faq/design.rst
M Doc/glossary.rst
M Doc/library/asyncio-policy.rst
M Doc/library/bisect.rst
M Doc/library/collections.rst
M Doc/library/contextvars.rst
M Doc/library/heapq.rst
M Doc/library/select.rst
M Doc/reference/datamodel.rst
M Doc/using/cmdline.rst
M Doc/whatsnew/2.3.rst
M Doc/whatsnew/2.7.rst
M Doc/whatsnew/3.3.rst
M Misc/NEWS.d/3.12.0a1.rst
M Misc/NEWS.d/3.12.0a7.rst
M Misc/NEWS.d/3.5.0a1.rst

diff --git a/Doc/faq/design.rst b/Doc/faq/design.rst
index ae02c443e5938b..300e1b6cc40a58 100644
--- a/Doc/faq/design.rst
+++ b/Doc/faq/design.rst
@@ -451,7 +451,7 @@ on the key and a per-process seed; for example, ``'Python'`` could hash to
 to ``1142331976``.  The hash code is then used to calculate a location in an
 internal array where the value will be stored.  Assuming that you're storing
 keys that all have different hash values, this means that dictionaries take
-constant time -- O(1), in Big-O notation -- to retrieve a key.
+constant time -- *O*\ (1), in Big-O notation -- to retrieve a key.
 
 
 Why must dictionary keys be immutable?
diff --git a/Doc/glossary.rst b/Doc/glossary.rst
index a687086c8d2138..a5f01c4c466f0a 100644
--- a/Doc/glossary.rst
+++ b/Doc/glossary.rst
@@ -741,7 +741,7 @@ Glossary
    list
       A built-in Python :term:`sequence`.  Despite its name it is more akin
       to an array in other languages than to a linked list since access to
-      elements is O(1).
+      elements is *O*\ (1).
 
    list comprehension
       A compact way to process all or part of the elements in a sequence and
diff --git a/Doc/library/asyncio-policy.rst b/Doc/library/asyncio-policy.rst
index 0d7821e608ec98..346b740a8f757a 100644
--- a/Doc/library/asyncio-policy.rst
+++ b/Doc/library/asyncio-policy.rst
@@ -237,7 +237,7 @@ implementation used by the asyncio event loop:
 
   It works reliably even when the asyncio event loop is run in a non-main OS thread.
 
-   There is no noticeable overhead when handling a big number of children (*O(1)* each
+   There is no noticeable overhead when handling a big number of children (*O*\ (1) each
   time a child terminates), but starting a thread per process requires extra memory.
 
    This watcher is used by default.
@@ -257,7 +257,7 @@ implementation used by the asyncio event loop:
    watcher is installed.
 
    The solution is safe but it has a significant overhead when
-   handling a big number of processes (*O(n)* each time a
+   handling a big number of processes (*O*\ (*n*) each time a
    :py:data:`SIGCHLD` is received).
 
    .. versionadded:: 3.8
@@ -273,7 +273,7 @@ implementation used by the asyncio event loop:
    The watcher avoids disrupting other code spawning processes
    by polling every process explicitly on a :py:data:`SIGCHLD` signal.
 
-   This solution is as safe as :class:`MultiLoopChildWatcher` and has the same *O(N)*
+   This solution is as safe as :class:`MultiLoopChildWatcher` and has the same *O*\ (*n*)
    complexity but requires a running event loop in the main thread to work.
 
    .. deprecated:: 3.12
@@ -285,7 +285,7 @@ implementation used by the asyncio event loop:
    processes and waiting for their termination.
 
    There is no noticeable overhead when handling a big number of
-   children (*O(1)* each time a child terminates).
+   children (*O*\ (1) each time a child terminates).
 
    This solution requires a running event loop in the main thread to work, as
    :class:`SafeChildWatcher`.
diff --git a/Doc/library/bisect.rst b/Doc/library/bisect.rst
index c0923093c1cb06..31c79b91061591 100644
--- a/Doc/library/bisect.rst
+++ b/Doc/library/bisect.rst
@@ -79,7 +79,7 @@ The following functions are provided:
    To support inserting records in a table, the *key* function (if any) is
    applied to *x* for the search step but not for the insertion step.
 
-   Keep in mind that the ``O(log n)`` search is dominated by the slow O(n)
+   Keep in mind that the *O*\ (log *n*) search is dominated by the slow *O*\ (*n*)
    insertion step.
 
    .. versionchanged:: 3.10
@@ -99,7 +99,7 @@ The following functions are provided:
    To support inserting records in a table, the *key* function (if any) is
    applied to *x* for the search step but not for the insertion step.
 
-   Keep in mind that the ``O(log n)`` search is dominated by the slow O(n)
+   Keep in mind that the *O*\ (log *n*) search is dominated by the slow *O*\ (*n*)
    insertion step.
 
    .. versionchanged:: 3.10
@@ -115,7 +115,7 @@ thoughts in mind:
 * Bisection is effective for searching ranges of values.
   For locating specific values, dictionaries are more performant.
 
-* The *insort()* functions are ``O(n)`` because the logarithmic search step
+* The *insort()* functions are *O*\ (*n*) because the logarithmic search step
   is dominated by the linear time insertion step.
 
 * The search functions are stateless and discard key function results after
diff --git a/Doc/library/collections.rst b/Doc/library/collections.rst
index eec2330ca97fd0..1a3084ef107c88 100644
--- a/Doc/library/collections.rst
+++ b/Doc/library/collections.rst
@@ -458,10 +458,10 @@ or subtracting from an empty counter.
    Deques are a generalization of stacks and queues (the name is pronounced "deck"
     and is short for "double-ended queue").  Deques support thread-safe, memory
    efficient appends and pops from either side of the deque with approximately the
-    same O(1) performance in either direction.
+    same *O*\ (1) performance in either direction.
 
    Though :class:`list` objects support similar operations, they are optimized for
-    fast fixed-length operations and incur O(n) memory movement costs for
+    fast fixed-length operations and incur *O*\ (*n*) memory movement costs for
     ``pop(0)`` and ``insert(0, v)`` operations which change both the size and
     position of the underlying data representation.
 
@@ -585,7 +585,7 @@ or subtracting from an empty counter.
 In addition to the above, deques support iteration, pickling, ``len(d)``,
 ``reversed(d)``, ``copy.copy(d)``, ``copy.deepcopy(d)``, membership testing with
 the :keyword:`in` operator, and subscript references such as ``d[0]`` to access
-the first element.  Indexed access is O(1) at both ends but slows to O(n) in
+the first element.  Indexed access is *O*\ (1) at both ends but slows to *O*\ (*n*) in
 the middle.  For fast random access, use lists instead.
 
 Starting in version 3.5, deques support ``__add__()``, ``__mul__()``,
diff --git a/Doc/library/contextvars.rst b/Doc/library/contextvars.rst
index 0ac2f3d85749b7..647832447de946 100644
--- a/Doc/library/contextvars.rst
+++ b/Doc/library/contextvars.rst
@@ -131,7 +131,7 @@ Manual Context Management
       ctx: Context = copy_context()
       print(list(ctx.items()))
 
-   The function has an O(1) complexity, i.e. works equally fast for
+   The function has an *O*\ (1) complexity, i.e. works equally fast for
    contexts with a few context variables and for contexts that have
    a lot of them.
 
diff --git a/Doc/library/heapq.rst b/Doc/library/heapq.rst
index 8b00f7b27959b6..ddbada13bddf5b 100644
--- a/Doc/library/heapq.rst
+++ b/Doc/library/heapq.rst
@@ -270,7 +270,7 @@ winner.  The simplest algorithmic way to remove it and find the "next" winner is
to move some loser (let's say cell 30 in the diagram above) into the 0 position,
 and then percolate this new 0 down the tree, exchanging values, until the
 invariant is re-established. This is clearly logarithmic on the total number of
-items in the tree. By iterating over all items, you get an O(n log n) sort.
+items in the tree. By iterating over all items, you get an *O*\ (*n* log *n*) sort.
 
 A nice feature of this sort is that you can efficiently insert new items while
the sort is going on, provided that the inserted items are not "better" than the
diff --git a/Doc/library/select.rst b/Doc/library/select.rst
index c2941e628d9d78..a0058046d0ce4c 100644
--- a/Doc/library/select.rst
+++ b/Doc/library/select.rst
@@ -185,8 +185,8 @@ The module defines the following:
 -----------------------------
 
 Solaris and derivatives have ``/dev/poll``. While :c:func:`!select` is
-O(highest file descriptor) and :c:func:`!poll` is O(number of file
-descriptors), ``/dev/poll`` is O(active file descriptors).
+*O*\ (*highest file descriptor*) and :c:func:`!poll` is *O*\ (*number of file
+descriptors*), ``/dev/poll`` is *O*\ (*active file descriptors*).
 
 ``/dev/poll`` behaviour is very close to the standard :c:func:`!poll`
 object.
@@ -381,8 +381,8 @@ scalability for network servers that service many, many clients at the same
time. :c:func:`!poll` scales better because the system call only requires listing
the file descriptors of interest, while :c:func:`!select` builds a bitmap, turns
 on bits for the fds of interest, and then afterward the whole bitmap has to be
-linearly scanned again. :c:func:`!select` is O(highest file descriptor), while
-:c:func:`!poll` is O(number of file descriptors).
+linearly scanned again. :c:func:`!select` is *O*\ (*highest file descriptor*), while
+:c:func:`!poll` is *O*\ (*number of file descriptors*).
 
 
 .. method:: poll.register(fd[, eventmask])
diff --git a/Doc/reference/datamodel.rst b/Doc/reference/datamodel.rst
index 7308dcde1fafe3..fe39b82846039f 100644
--- a/Doc/reference/datamodel.rst
+++ b/Doc/reference/datamodel.rst
@@ -1871,7 +1871,7 @@ Basic customization
 
       This is intended to provide protection against a denial-of-service caused
       by carefully chosen inputs that exploit the worst case performance of a
-      dict insertion, O(n\ :sup:`2`) complexity.  See
+      dict insertion, *O*\ (*n*\ :sup:`2`) complexity.  See
       http://ocert.org/advisories/ocert-2011-003.html for details.
 
       Changing hash values affects the iteration order of sets.
diff --git a/Doc/using/cmdline.rst b/Doc/using/cmdline.rst
index de6a1843355976..9da28dafdd772c 100644
--- a/Doc/using/cmdline.rst
+++ b/Doc/using/cmdline.rst
@@ -369,7 +369,7 @@ Miscellaneous options
 
    Hash randomization is intended to provide protection against a
    denial-of-service caused by carefully chosen inputs that exploit the worst
-   case performance of a dict construction, O(n\ :sup:`2`) complexity.  See
+   case performance of a dict construction, *O*\ (*n*\ :sup:`2`) complexity.  See
    http://ocert.org/advisories/ocert-2011-003.html for details.
 
    :envvar:`PYTHONHASHSEED` allows you to set a fixed value for the hash
diff --git a/Doc/whatsnew/2.3.rst b/Doc/whatsnew/2.3.rst
index 6027e5799a2b24..b292b2ece0c7d0 100644
--- a/Doc/whatsnew/2.3.rst
+++ b/Doc/whatsnew/2.3.rst
@@ -1196,7 +1196,7 @@ Optimizations
 
 * Multiplication of large long integers is now much faster thanks to an
  implementation of Karatsuba multiplication, an algorithm that scales better than
-  the O(n\*n) required for the grade-school multiplication algorithm.  (Original
+  the *O*\ (*n*\ :sup:`2`) required for the grade-school multiplication algorithm.  (Original
   patch by Christopher A. Craig, and significantly reworked by Tim Peters.)
 
 * The ``SET_LINENO`` opcode is now gone.  This may provide a small speed
@@ -1308,7 +1308,7 @@ complete list of changes, or look through the CVS logs for all the details.
   partially sorted order such that, for every index *k*, ``heap[k] <=
  heap[2*k+1]`` and ``heap[k] <= heap[2*k+2]``.  This makes it quick to remove the
  smallest item, and inserting a new item while maintaining the heap property is
-  O(lg n).  (See https://xlinux.nist.gov/dads//HTML/priorityque.html for more
+  *O*\ (log *n*).  (See https://xlinux.nist.gov/dads//HTML/priorityque.html for more
   information about the priority queue data structure.)
 
  The :mod:`heapq` module provides :func:`~heapq.heappush` and :func:`~heapq.heappop` functions
diff --git a/Doc/whatsnew/2.7.rst b/Doc/whatsnew/2.7.rst
index 2f89b9b35e5345..9eb157c5299fea 100644
--- a/Doc/whatsnew/2.7.rst
+++ b/Doc/whatsnew/2.7.rst
@@ -282,7 +282,7 @@ How does the :class:`~collections.OrderedDict` work?  It maintains a
 doubly linked list of keys, appending new keys to the list as they're inserted.
 A secondary dictionary maps keys to their corresponding list node, so
 deletion doesn't have to traverse the entire linked list and therefore
-remains O(1).
+remains *O*\ (1).
 
 The standard library now supports use of ordered dictionaries in several
 modules.
diff --git a/Doc/whatsnew/3.3.rst b/Doc/whatsnew/3.3.rst
index ad5cd371919aa3..0f810c17cb9443 100644
--- a/Doc/whatsnew/3.3.rst
+++ b/Doc/whatsnew/3.3.rst
@@ -174,7 +174,7 @@ Features
   b or c are now hashable.  (Contributed by Antoine Pitrou in :issue:`13411`.)
 
 * Arbitrary slicing of any 1-D arrays type is supported. For example, it
-  is now possible to reverse a memoryview in O(1) by using a negative step.
+  is now possible to reverse a memoryview in *O*\ (1) by using a negative step.
 
 API changes
 -----------
diff --git a/Misc/NEWS.d/3.12.0a1.rst b/Misc/NEWS.d/3.12.0a1.rst
index 031ef2d3c24174..5bce9e1a9ecc37 100644
--- a/Misc/NEWS.d/3.12.0a1.rst
+++ b/Misc/NEWS.d/3.12.0a1.rst
@@ -1462,7 +1462,7 @@ expression, but there's no trailing brace. For example, f"{i=".
 .. nonce: Jf6gAj
 .. section: Core and Builtins
 
-Cache the result of :c:func:`PyCode_GetCode` function to restore the O(1)
+Cache the result of :c:func:`PyCode_GetCode` function to restore the *O*\ (1)
 lookup of the :attr:`~types.CodeType.co_code` attribute.
 
 ..
diff --git a/Misc/NEWS.d/3.12.0a7.rst b/Misc/NEWS.d/3.12.0a7.rst
index 1ef81747558857..f22050b0dc377b 100644
--- a/Misc/NEWS.d/3.12.0a7.rst
+++ b/Misc/NEWS.d/3.12.0a7.rst
@@ -429,7 +429,7 @@ an awaitable object. Patch by Kumar Aditya.
 Speed up setting or deleting mutable attributes on non-dataclass subclasses
 of frozen dataclasses. Due to the implementation of ``__setattr__`` and
 ``__delattr__`` for frozen dataclasses, this previously had a time
-complexity of ``O(n)``. It now has a time complexity of ``O(1)``.
+complexity of *O*\ (*n*). It now has a time complexity of *O*\ (1).
 
 ..
 
diff --git a/Misc/NEWS.d/3.5.0a1.rst b/Misc/NEWS.d/3.5.0a1.rst
index 96e59206cb1291..f793323f217070 100644
--- a/Misc/NEWS.d/3.5.0a1.rst
+++ b/Misc/NEWS.d/3.5.0a1.rst
@@ -2648,7 +2648,7 @@ module.
 .. nonce: THJSYB
 .. section: Library
 
-Changed FeedParser feed() to avoid O(N\ :sup:`2`) behavior when parsing long line.
+Changed FeedParser feed() to avoid *O*\ (*n*\ :sup:`2`) behavior when parsing long line.
 Original patch by Raymond Hettinger.
 
 ..

_______________________________________________
Python-checkins mailing list -- python-checkins@python.org
https://mail.python.org/mailman3/lists/python-checkins.python.org/