https://github.com/python/cpython/commit/25ae2045a8a6e36d2cc2f386314ee4fd68ccd379
commit: 25ae2045a8a6e36d2cc2f386314ee4fd68ccd379
branch: 3.13
author: Miss Islington (bot) <31488909+miss-isling...@users.noreply.github.com>
committer: encukou <encu...@gmail.com>
date: 2025-03-18T12:51:02+01:00
summary:

[3.13] gh-116666: Add "token" glossary term (GH-130888) (GH-131367)

gh-116666: Add "token" glossary term (GH-130888)

Add glossary entry for `token`, and link to it.
Avoid talking about tokens in the SyntaxError intro (errors.rst); at this point
tokenization is too much of a technical detail. (Even to an advanced reader,
the fact that a *single* token is highlighted isn't too relevant. Also, we don't
need to guarantee that it's a single token.)
(cherry picked from commit 30d52058493e07fd1d3efea960482f4001bd2f86)

Co-authored-by: Petr Viktorin <encu...@gmail.com>
Co-authored-by: Adam Turner <9087854+aa-tur...@users.noreply.github.com>

files:
M Doc/glossary.rst
M Doc/reference/lexical_analysis.rst
M Doc/tutorial/errors.rst
M Doc/tutorial/interactive.rst

diff --git a/Doc/glossary.rst b/Doc/glossary.rst
index 598f3961a1f321..1ca9d0f5e7b407 100644
--- a/Doc/glossary.rst
+++ b/Doc/glossary.rst
@@ -787,6 +787,10 @@ Glossary
       thread removes *key* from *mapping* after the test, but before the lookup.
       This issue can be solved with locks or by using the EAFP approach.
 
+   lexical analyzer
+
+      Formal name for the *tokenizer*; see :term:`token`.
+
    list
       A built-in Python :term:`sequence`.  Despite its name it is more akin
       to an array in other languages than to a linked list since access to
@@ -1278,6 +1282,17 @@ Glossary
       See also :term:`binary file` for a file object able to read and write
       :term:`bytes-like objects <bytes-like object>`.
 
+   token
+
+      A small unit of source code, generated by the
+      :ref:`lexical analyzer <lexical>` (also called the *tokenizer*).
+      Names, numbers, strings, operators,
+      newlines and similar are represented by tokens.
+
+      The :mod:`tokenize` module exposes Python's lexical analyzer.
+      The :mod:`token` module contains information on the various types
+      of tokens.
+
    triple-quoted string
       A string which is bound by three instances of either a quotation mark
       (") or an apostrophe (').  While they don't provide any functionality
diff --git a/Doc/reference/lexical_analysis.rst b/Doc/reference/lexical_analysis.rst
index bffef9db8fb632..6fbe922cad6a3f 100644
--- a/Doc/reference/lexical_analysis.rst
+++ b/Doc/reference/lexical_analysis.rst
@@ -8,8 +8,9 @@ Lexical analysis
 .. index:: lexical analysis, parser, token
 
 A Python program is read by a *parser*.  Input to the parser is a stream of
-*tokens*, generated by the *lexical analyzer*.  This chapter describes how the
-lexical analyzer breaks a file into tokens.
+:term:`tokens <token>`, generated by the *lexical analyzer* (also known as
+the *tokenizer*).
+This chapter describes how the lexical analyzer breaks a file into tokens.
 
 Python reads program text as Unicode code points; the encoding of a source file
 can be given by an encoding declaration and defaults to UTF-8, see :pep:`3120`
diff --git a/Doc/tutorial/errors.rst b/Doc/tutorial/errors.rst
index c01cb8c14a0360..bfb281c1b7d66a 100644
--- a/Doc/tutorial/errors.rst
+++ b/Doc/tutorial/errors.rst
@@ -24,11 +24,12 @@ complaint you get while you are still learning Python::
    SyntaxError: invalid syntax
 
 The parser repeats the offending line and displays little arrows pointing
-at the token in the line where the error was detected.  The error may be
-caused by the absence of a token *before* the indicated token.  In the
-example, the error is detected at the function :func:`print`, since a colon
-(``':'``) is missing before it.  File name and line number are printed so you
-know where to look in case the input came from a script.
+at the place where the error was detected.  Note that this is not always the
+place that needs to be fixed.  In the example, the error is detected at the
+function :func:`print`, since a colon (``':'``) is missing just before it.
+
+The file name (``<stdin>`` in our example) and line number are printed so you
+know where to look in case the input came from a file.
 
 
 .. _tut-exceptions:
diff --git a/Doc/tutorial/interactive.rst b/Doc/tutorial/interactive.rst
index 4e054c4e6c2c32..00e705f999f4b2 100644
--- a/Doc/tutorial/interactive.rst
+++ b/Doc/tutorial/interactive.rst
@@ -37,10 +37,10 @@ Alternatives to the Interactive Interpreter
 
 This facility is an enormous step forward compared to earlier versions of the
 interpreter; however, some wishes are left: It would be nice if the proper
-indentation were suggested on continuation lines (the parser knows if an indent
-token is required next).  The completion mechanism might use the interpreter's
-symbol table.  A command to check (or even suggest) matching parentheses,
-quotes, etc., would also be useful.
+indentation were suggested on continuation lines (the parser knows if an
+:data:`~token.INDENT` token is required next).  The completion mechanism might
+use the interpreter's symbol table.  A command to check (or even suggest)
+matching parentheses, quotes, etc., would also be useful.
 
 One alternative enhanced interactive interpreter that has been around for quite
 some time is IPython_, which features tab completion, object exploration and

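As a quick illustration of the tokenizer that the new glossary entry describes, the :mod:`tokenize` module from the standard library can be driven directly. This is a minimal sketch, not part of the commit; the snippet and variable names are illustrative:

```python
import io
import tokenize

# Tokenize a one-line snippet; generate_tokens() takes a readline callable.
source = "x = 1 + 2\n"
for tok in tokenize.generate_tokens(io.StringIO(source).readline):
    # tok_name maps numeric token types (from the token module) to names.
    print(tokenize.tok_name[tok.type], repr(tok.string))
```

This prints one line per token (NAME, OP, NUMBER, NEWLINE, ENDMARKER), showing how names, operators, numbers and newlines are each represented by tokens, as the glossary entry states.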
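The interactive.rst change references the :data:`~token.INDENT` token that the parser expects on continuation lines. A small sketch (again illustrative, not part of the commit) shows the lexical analyzer emitting INDENT and DEDENT tokens for an indented block:

```python
import io
import tokenize

# An if-statement with an indented body produces INDENT/DEDENT tokens.
source = "if True:\n    pass\n"
names = [tokenize.tok_name[t.type]
         for t in tokenize.generate_tokens(io.StringIO(source).readline)]
print(names)  # includes 'INDENT' before 'pass' and a matching 'DEDENT'
```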
_______________________________________________
Python-checkins mailing list -- python-checkins@python.org
https://mail.python.org/mailman3/lists/python-checkins.python.org/