Script 'mail_helper' called by obssrc
Hello community,

here is the log from the commit of package python-numexpr for openSUSE:Factory 
checked in at 2023-12-18 22:56:31
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/python-numexpr (Old)
 and      /work/SRC/openSUSE:Factory/.python-numexpr.new.9037 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "python-numexpr"

Mon Dec 18 22:56:31 2023 rev:22 rq:1133817 version:2.8.8

Changes:
--------
--- /work/SRC/openSUSE:Factory/python-numexpr/python-numexpr.changes    
2023-11-27 22:45:34.758712212 +0100
+++ /work/SRC/openSUSE:Factory/.python-numexpr.new.9037/python-numexpr.changes  
2023-12-18 22:56:34.933016150 +0100
@@ -1,0 +2,9 @@
+Mon Dec 18 09:23:43 UTC 2023 - Dirk Müller <[email protected]>
+
+- update to 2.8.8:
+  * Fix re_evaluate not taking global_dict as argument.
+  * Fix parsing of simple complex numbers.  Now,
+    `ne.evaluate('1.5j')` works.
+  * Fixes for upcoming NumPy 2.0
+
+-------------------------------------------------------------------
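
A minimal usage sketch of the two user-visible fixes listed above (not part of
the changelog; it assumes numexpr >= 2.8.8 and NumPy are installed, and the
array names `a` and `b` are only illustrative)::

    import numpy as np
    import numexpr as ne

    # Simple complex literals now parse (per the 2.8.8 changelog)
    print(ne.evaluate('1.5j'))

    # re_evaluate() now accepts and forwards global_dict, matching evaluate()
    a, b = np.arange(5.0), np.arange(5.0)
    ne.evaluate('a + 2*b', local_dict={'a': a}, global_dict={'b': b})
    print(ne.re_evaluate(local_dict={'a': a}, global_dict={'b': b}))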

Old:
----
  numexpr-2.8.7.tar.gz

New:
----
  numexpr-2.8.8.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-numexpr.spec ++++++
--- /var/tmp/diff_new_pack.pWSioN/_old  2023-12-18 22:56:35.809048232 +0100
+++ /var/tmp/diff_new_pack.pWSioN/_new  2023-12-18 22:56:35.813048379 +0100
@@ -16,8 +16,9 @@
 #
 
 
+%{?sle15_python_module_pythons}
 Name:           python-numexpr
-Version:        2.8.7
+Version:        2.8.8
 Release:        0
 Summary:        Numerical expression evaluator for NumPy
 License:        MIT
@@ -27,7 +28,6 @@
 BuildRequires:  %{python_module devel >= 3.7}
 BuildRequires:  %{python_module numpy-devel >= 1.13.3}
 BuildRequires:  %{python_module pip}
-BuildRequires:  %{python_module setuptools}
 BuildRequires:  %{python_module wheel}
 BuildRequires:  fdupes
 BuildRequires:  gcc-c++

++++++ numexpr-2.8.7.tar.gz -> numexpr-2.8.8.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/numexpr-2.8.7/ANNOUNCE.rst 
new/numexpr-2.8.8/ANNOUNCE.rst
--- old/numexpr-2.8.7/ANNOUNCE.rst      2023-09-26 10:01:22.000000000 +0200
+++ new/numexpr-2.8.8/ANNOUNCE.rst      2023-12-11 13:56:26.000000000 +0100
@@ -1,40 +1,31 @@
 ========================
-Announcing NumExpr 2.8.7
+Announcing NumExpr 2.8.8
 ========================
 
 Hi everyone,
 
-NumExpr 2.8.7 is a release to deal with issues related to downstream `pandas`
-and other projects where the sanitization blacklist was triggering issue in 
their
-evaluate. Hopefully, the new sanitization code would be much more robust now.
-
-For those who do not wish to have sanitization on by default, it can be changed
-by setting an environment variable, `NUMEXPR_SANITIZE=0`.
-
-If you use `pandas` in your packages it is advisable you pin
-
-`numexpr >= 2.8.7`
-
-in your requirements.
+NumExpr 2.8.8 is a release to deal mainly with issues appearing with
+upcoming `NumPy` 2.0.  Also, some small fixes (support for simple complex
+expressions like `ne.evaluate('1.5j')`) and improvements are included.
 
 Project documentation is available at:
 
 http://numexpr.readthedocs.io/
 
-Changes from 2.8.5 to 2.8.6
+Changes from 2.8.7 to 2.8.8
 ---------------------------
 
-* More permissive rules in sanitizing regular expression: allow to access 
digits
-  after the . with scientific notation.  Thanks to Thomas Vincent.
+* Fix re_evaluate not taking global_dict as argument. Thanks to Teng Liu
+  (@27rabbitlt).
 
-* Don't reject double underscores that are not at the start or end of a 
variable
-  name (pandas uses those), or scientific-notation numbers with digits after 
the
-  decimal point.  Thanks to Rebecca Palmer.
+* Fix parsing of simple complex numbers.  Now, `ne.evaluate('1.5j')` works.
+  Thanks to Teng Liu (@27rabbitlt).
 
-* Do not use `numpy.alltrue` in the test suite, as it has been deprecated
-  (replaced by `numpy.all`).  Thanks to Rebecca Chen.
+* Fixes for upcoming NumPy 2.0:
 
-* Wheels for Python 3.12.  Wheels for 3.7 and 3.8 are not generated anymore.
+  * Replace npy_cdouble with C++ complex. Thanks to Teng Liu (@27rabbitlt).
+  * Add NE_MAXARGS for future numpy change NPY_MAXARGS. Now it is set to 64
+    to match NumPy 2.0 value. Thanks to Teng Liu (@27rabbitlt).
 
 What's Numexpr?
 ---------------
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/numexpr-2.8.7/PKG-INFO new/numexpr-2.8.8/PKG-INFO
--- old/numexpr-2.8.7/PKG-INFO  2023-09-26 10:15:25.606307000 +0200
+++ new/numexpr-2.8.8/PKG-INFO  2023-12-11 14:06:59.466662000 +0100
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: numexpr
-Version: 2.8.7
+Version: 2.8.8
 Summary: Fast numerical expression evaluator for NumPy
 Home-page: https://github.com/pydata/numexpr
 Author: David M. Cooke, Francesc Alted, and others
@@ -30,8 +30,8 @@
 ======================================================
 
 :Author: David M. Cooke, Francesc Alted, and others.
-:Maintainer: Robert A. McLeod
-:Contact: [email protected]
+:Maintainer: Francesc Alted
+:Contact: [email protected]
 :URL: https://github.com/pydata/numexpr
 :Documentation: http://numexpr.readthedocs.io/en/latest/
 :Travis CI: |travis|
@@ -51,21 +51,6 @@
 .. |version| image:: https://img.shields.io/pypi/v/numexpr.png
         :target: https://pypi.python.org/pypi/numexpr
 
-IMPORTANT NOTE: NumExpr is looking for maintainers!
----------------------------------------------------
-
-After 5 years as a solo maintainer (and performing a most excellent work), 
Robert McLeod
-is asking for a well deserved break. So the NumExpr project is looking for a 
new
-maintainer for a package that is used in pandas, PyTables and many other 
packages.
-If have benefited of NumExpr capabilities in the past, and willing to 
contribute back to
-the community, we would be happy to hear about you!
-
-We are looking for someone that is knowledgeable about compiling extensions, 
and that is
-ready to spend some cycles in making releases (2 or 3 a year, maybe even 
less!).
-Interested? just open a new ticket here and we will help you onboarding!
-
-Thank you!
-
 
 What is NumExpr?
 ----------------
@@ -96,19 +81,19 @@
 into small chunks that easily fit in the cache of the CPU and passed
 to the virtual machine. The virtual machine then applies the
 operations on each chunk. It's worth noting that all temporaries and
-constants in the expression are also chunked. Chunks are distributed among 
-the available cores of the CPU, resulting in highly parallelized code 
+constants in the expression are also chunked. Chunks are distributed among
+the available cores of the CPU, resulting in highly parallelized code
 execution.
 
 The result is that NumExpr can get the most of your machine computing
 capabilities for array-wise computations. Common speed-ups with regard
 to NumPy are usually between 0.95x (for very simple expressions like
-:code:`'a + 1'`) and 4x (for relatively complex ones like :code:`'a*b-4.1*a > 
2.5*b'`), 
-although much higher speed-ups can be achieved for some functions  and complex 
+:code:`'a + 1'`) and 4x (for relatively complex ones like :code:`'a*b-4.1*a > 
2.5*b'`),
+although much higher speed-ups can be achieved for some functions  and complex
 math operations (up to 15x in some cases).
 
-NumExpr performs best on matrices that are too large to fit in L1 CPU cache. 
-In order to get a better idea on the different speed-ups that can be achieved 
+NumExpr performs best on matrices that are too large to fit in L1 CPU cache.
+In order to get a better idea on the different speed-ups that can be achieved
 on your platform, run the provided benchmarks.
 
 Installation
@@ -117,13 +102,13 @@
 From wheels
 ^^^^^^^^^^^
 
-NumExpr is available for install via `pip` for a wide range of platforms and 
-Python versions (which may be browsed at: 
https://pypi.org/project/numexpr/#files). 
+NumExpr is available for install via `pip` for a wide range of platforms and
+Python versions (which may be browsed at: 
https://pypi.org/project/numexpr/#files).
 Installation can be performed as::
 
     pip install numexpr
 
-If you are using the Anaconda or Miniconda distribution of Python you may 
prefer 
+If you are using the Anaconda or Miniconda distribution of Python you may 
prefer
 to use the `conda` package manager in this case::
 
     conda install numexpr
@@ -131,18 +116,18 @@
 From Source
 ^^^^^^^^^^^
 
-On most \*nix systems your compilers will already be present. However if you 
+On most \*nix systems your compilers will already be present. However if you
 are using a virtual environment with a substantially newer version of Python 
than
 your system Python you may be prompted to install a new version of `gcc` or 
`clang`.
 
-For Windows, you will need to install the Microsoft Visual C++ Build Tools 
-(which are free) first. The version depends on which version of Python you 
have 
+For Windows, you will need to install the Microsoft Visual C++ Build Tools
+(which are free) first. The version depends on which version of Python you have
 installed:
 
 https://wiki.python.org/moin/WindowsCompilers
 
-For Python 3.6+ simply installing the latest version of MSVC build tools 
should 
-be sufficient. Note that wheels found via pip do not include MKL support. 
Wheels 
+For Python 3.6+ simply installing the latest version of MSVC build tools should
+be sufficient. Note that wheels found via pip do not include MKL support. 
Wheels
 available via `conda` will have MKL, if the MKL backend is used for NumPy.
 
 See `requirements.txt` for the required version of NumPy.
@@ -160,19 +145,19 @@
 Enable Intel® MKL support
 ^^^^^^^^^^^^^^^^^^^^^^^^^
 
-NumExpr includes support for Intel's MKL library. This may provide better 
-performance on Intel architectures, mainly when evaluating transcendental 
-functions (trigonometrical, exponential, ...). 
-
-If you have Intel's MKL, copy the `site.cfg.example` that comes with the 
-distribution to `site.cfg` and edit the latter file to provide correct paths 
to 
-the MKL libraries in your system.  After doing this, you can proceed with the 
+NumExpr includes support for Intel's MKL library. This may provide better
+performance on Intel architectures, mainly when evaluating transcendental
+functions (trigonometrical, exponential, ...).
+
+If you have Intel's MKL, copy the `site.cfg.example` that comes with the
+distribution to `site.cfg` and edit the latter file to provide correct paths to
+the MKL libraries in your system.  After doing this, you can proceed with the
 usual building instructions listed above.
 
-Pay attention to the messages during the building process in order to know 
-whether MKL has been detected or not.  Finally, you can check the speed-ups on 
-your machine by running the `bench/vml_timing.py` script (you can play with 
-different parameters to the `set_vml_accuracy_mode()` and 
`set_vml_num_threads()` 
+Pay attention to the messages during the building process in order to know
+whether MKL has been detected or not.  Finally, you can check the speed-ups on
+your machine by running the `bench/vml_timing.py` script (you can play with
+different parameters to the `set_vml_accuracy_mode()` and 
`set_vml_num_threads()`
 functions in the script so as to see how it would affect performance).
 
 Usage
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/numexpr-2.8.7/README.rst new/numexpr-2.8.8/README.rst
--- old/numexpr-2.8.7/README.rst        2023-09-26 10:01:22.000000000 +0200
+++ new/numexpr-2.8.8/README.rst        2023-12-11 13:56:26.000000000 +0100
@@ -3,8 +3,8 @@
 ======================================================
 
 :Author: David M. Cooke, Francesc Alted, and others.
-:Maintainer: Robert A. McLeod
-:Contact: [email protected]
+:Maintainer: Francesc Alted
+:Contact: [email protected]
 :URL: https://github.com/pydata/numexpr
 :Documentation: http://numexpr.readthedocs.io/en/latest/
 :Travis CI: |travis|
@@ -24,21 +24,6 @@
 .. |version| image:: https://img.shields.io/pypi/v/numexpr.png
         :target: https://pypi.python.org/pypi/numexpr
 
-IMPORTANT NOTE: NumExpr is looking for maintainers!
----------------------------------------------------
-
-After 5 years as a solo maintainer (and performing a most excellent work), 
Robert McLeod
-is asking for a well deserved break. So the NumExpr project is looking for a 
new
-maintainer for a package that is used in pandas, PyTables and many other 
packages.
-If have benefited of NumExpr capabilities in the past, and willing to 
contribute back to
-the community, we would be happy to hear about you!
-
-We are looking for someone that is knowledgeable about compiling extensions, 
and that is
-ready to spend some cycles in making releases (2 or 3 a year, maybe even 
less!).
-Interested? just open a new ticket here and we will help you onboarding!
-
-Thank you!
-
 
 What is NumExpr?
 ----------------
@@ -69,19 +54,19 @@
 into small chunks that easily fit in the cache of the CPU and passed
 to the virtual machine. The virtual machine then applies the
 operations on each chunk. It's worth noting that all temporaries and
-constants in the expression are also chunked. Chunks are distributed among 
-the available cores of the CPU, resulting in highly parallelized code 
+constants in the expression are also chunked. Chunks are distributed among
+the available cores of the CPU, resulting in highly parallelized code
 execution.
 
 The result is that NumExpr can get the most of your machine computing
 capabilities for array-wise computations. Common speed-ups with regard
 to NumPy are usually between 0.95x (for very simple expressions like
-:code:`'a + 1'`) and 4x (for relatively complex ones like :code:`'a*b-4.1*a > 
2.5*b'`), 
-although much higher speed-ups can be achieved for some functions  and complex 
+:code:`'a + 1'`) and 4x (for relatively complex ones like :code:`'a*b-4.1*a > 
2.5*b'`),
+although much higher speed-ups can be achieved for some functions  and complex
 math operations (up to 15x in some cases).
 
-NumExpr performs best on matrices that are too large to fit in L1 CPU cache. 
-In order to get a better idea on the different speed-ups that can be achieved 
+NumExpr performs best on matrices that are too large to fit in L1 CPU cache.
+In order to get a better idea on the different speed-ups that can be achieved
 on your platform, run the provided benchmarks.
 
 Installation
@@ -90,13 +75,13 @@
 From wheels
 ^^^^^^^^^^^
 
-NumExpr is available for install via `pip` for a wide range of platforms and 
-Python versions (which may be browsed at: 
https://pypi.org/project/numexpr/#files). 
+NumExpr is available for install via `pip` for a wide range of platforms and
+Python versions (which may be browsed at: 
https://pypi.org/project/numexpr/#files).
 Installation can be performed as::
 
     pip install numexpr
 
-If you are using the Anaconda or Miniconda distribution of Python you may 
prefer 
+If you are using the Anaconda or Miniconda distribution of Python you may 
prefer
 to use the `conda` package manager in this case::
 
     conda install numexpr
@@ -104,18 +89,18 @@
 From Source
 ^^^^^^^^^^^
 
-On most \*nix systems your compilers will already be present. However if you 
+On most \*nix systems your compilers will already be present. However if you
 are using a virtual environment with a substantially newer version of Python 
than
 your system Python you may be prompted to install a new version of `gcc` or 
`clang`.
 
-For Windows, you will need to install the Microsoft Visual C++ Build Tools 
-(which are free) first. The version depends on which version of Python you 
have 
+For Windows, you will need to install the Microsoft Visual C++ Build Tools
+(which are free) first. The version depends on which version of Python you have
 installed:
 
 https://wiki.python.org/moin/WindowsCompilers
 
-For Python 3.6+ simply installing the latest version of MSVC build tools 
should 
-be sufficient. Note that wheels found via pip do not include MKL support. 
Wheels 
+For Python 3.6+ simply installing the latest version of MSVC build tools should
+be sufficient. Note that wheels found via pip do not include MKL support. 
Wheels
 available via `conda` will have MKL, if the MKL backend is used for NumPy.
 
 See `requirements.txt` for the required version of NumPy.
@@ -133,19 +118,19 @@
 Enable Intel® MKL support
 ^^^^^^^^^^^^^^^^^^^^^^^^^
 
-NumExpr includes support for Intel's MKL library. This may provide better 
-performance on Intel architectures, mainly when evaluating transcendental 
-functions (trigonometrical, exponential, ...). 
-
-If you have Intel's MKL, copy the `site.cfg.example` that comes with the 
-distribution to `site.cfg` and edit the latter file to provide correct paths 
to 
-the MKL libraries in your system.  After doing this, you can proceed with the 
+NumExpr includes support for Intel's MKL library. This may provide better
+performance on Intel architectures, mainly when evaluating transcendental
+functions (trigonometrical, exponential, ...).
+
+If you have Intel's MKL, copy the `site.cfg.example` that comes with the
+distribution to `site.cfg` and edit the latter file to provide correct paths to
+the MKL libraries in your system.  After doing this, you can proceed with the
 usual building instructions listed above.
 
-Pay attention to the messages during the building process in order to know 
-whether MKL has been detected or not.  Finally, you can check the speed-ups on 
-your machine by running the `bench/vml_timing.py` script (you can play with 
-different parameters to the `set_vml_accuracy_mode()` and 
`set_vml_num_threads()` 
+Pay attention to the messages during the building process in order to know
+whether MKL has been detected or not.  Finally, you can check the speed-ups on
+your machine by running the `bench/vml_timing.py` script (you can play with
+different parameters to the `set_vml_accuracy_mode()` and 
`set_vml_num_threads()`
 functions in the script so as to see how it would affect performance).
 
 Usage
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/numexpr-2.8.7/RELEASE_NOTES.rst 
new/numexpr-2.8.8/RELEASE_NOTES.rst
--- old/numexpr-2.8.7/RELEASE_NOTES.rst 2023-09-26 10:01:22.000000000 +0200
+++ new/numexpr-2.8.8/RELEASE_NOTES.rst 2023-12-11 13:56:26.000000000 +0100
@@ -2,6 +2,22 @@
 Release notes for NumExpr 2.8 series
 ====================================
 
+Changes from 2.8.7 to 2.8.8
+---------------------------
+
+* Fix re_evaluate not taking global_dict as argument. Thanks to Teng Liu
+  (@27rabbitlt).
+
+* Fix parsing of simple complex numbers.  Now, `ne.evaluate('1.5j')` works.
+  Thanks to Teng Liu (@27rabbitlt).
+
+* Fixes for upcoming NumPy 2.0:
+
+  * Replace npy_cdouble with C++ complex. Thanks to Teng Liu (@27rabbitlt).
+  * Add NE_MAXARGS for future numpy change NPY_MAXARGS. Now it is set to 64
+    to match NumPy 2.0 value. Thanks to Teng Liu (@27rabbitlt).
+
+
 Changes from 2.8.6 to 2.8.7
 ---------------------------
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/numexpr-2.8.7/bench/boolean_timing.py 
new/numexpr-2.8.8/bench/boolean_timing.py
--- old/numexpr-2.8.7/bench/boolean_timing.py   2023-09-26 10:01:22.000000000 
+0200
+++ new/numexpr-2.8.8/bench/boolean_timing.py   2023-12-11 13:56:26.000000000 
+0100
@@ -13,7 +13,7 @@
 import timeit
 import numpy
 
-array_size = 1000*1000
+array_size = 5_000_000
 iterations = 10
 
 numpy_ttime = []
@@ -23,6 +23,7 @@
 numexpr_sttime = []
 numexpr_nttime = []
 
+
 def compare_times(expr, nexpr):
     global numpy_ttime
     global numpy_sttime
@@ -64,7 +65,7 @@
     numexpr_stime = round(numexpr_timer.timeit(number=iterations), 4)
     numexpr_sttime.append(numexpr_stime)
     print("numexpr strided:", numexpr_stime/iterations, end=" ")
-    print("Speed-up of numexpr strided over numpy:", \
+    print("Speed-up of numexpr strided over numpy:",
           round(numpy_stime/numexpr_stime, 4))
 
     evalexpr = 'evaluate("%s", optimization="aggressive")' % expr
@@ -72,7 +73,7 @@
     numexpr_ntime = round(numexpr_timer.timeit(number=iterations), 4)
     numexpr_nttime.append(numexpr_ntime)
     print("numexpr unaligned:", numexpr_ntime/iterations, end=" ")
-    print("Speed-up of numexpr unaligned over numpy:", \
+    print("Speed-up of numexpr unaligned over numpy:",
           round(numpy_ntime/numexpr_ntime, 4))
 
 
@@ -113,7 +114,7 @@
 expressions.append('sqrt(i2**2 + f3**2) > 1')
 expressions.append('(i2>2) | ((f3**2>3) & ~(i2*f3<2))')
 
-def compare(expression=False):
+def compare(expression=None):
     if expression:
         compare_times(expression, 1)
         sys.exit(0)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/numexpr-2.8.7/bench/timing.py 
new/numexpr-2.8.8/bench/timing.py
--- old/numexpr-2.8.7/bench/timing.py   2023-09-26 10:01:22.000000000 +0200
+++ new/numexpr-2.8.8/bench/timing.py   2023-12-11 13:56:26.000000000 +0100
@@ -11,7 +11,7 @@
 from __future__ import print_function
 import timeit, numpy
 
-array_size = 1e6
+array_size = 5e6
 iterations = 2
 
 # Choose the type you want to benchmark
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/numexpr-2.8.7/bench/vml_timing.py 
new/numexpr-2.8.8/bench/vml_timing.py
--- old/numexpr-2.8.7/bench/vml_timing.py       2023-09-26 10:01:22.000000000 
+0200
+++ new/numexpr-2.8.8/bench/vml_timing.py       2023-12-11 13:56:26.000000000 
+0100
@@ -14,7 +14,7 @@
 import numpy
 import numexpr
 
-array_size = 1000*1000
+array_size = 5_000_000
 iterations = 10
 
 numpy_ttime = []
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/numexpr-2.8.7/numexpr/complex_functions.hpp 
new/numexpr-2.8.8/numexpr/complex_functions.hpp
--- old/numexpr-2.8.7/numexpr/complex_functions.hpp     2023-09-26 
10:01:22.000000000 +0200
+++ new/numexpr-2.8.8/numexpr/complex_functions.hpp     2023-12-11 
13:56:26.000000000 +0100
@@ -10,16 +10,17 @@
   See LICENSE.txt for details about copyright and rights to use.
 **********************************************************************/
 
-// TODO: Could just use std::complex<float> and std::complex<double>
+// Replace npy_cdouble with std::complex<double>
+#include <complex>
 
 /* constants */
-static npy_cdouble nc_1 = {1., 0.};
-static npy_cdouble nc_half = {0.5, 0.};
-static npy_cdouble nc_i = {0., 1.};
-static npy_cdouble nc_i2 = {0., 0.5};
+static std::complex<double> nc_1(1., 0.);
+static std::complex<double> nc_half(0.5, 0.);
+static std::complex<double> nc_i(0., 1.);
+static std::complex<double> nc_i2(0., 0.5);
 /*
-static npy_cdouble nc_mi = {0., -1.};
-static npy_cdouble nc_pi2 = {M_PI/2., 0.};
+static std::complex<double> nc_mi = {0., -1.};
+static std::complex<double> nc_pi2 = {M_PI/2., 0.};
 */
 
 /* *************************** WARNING *****************************
@@ -31,42 +32,42 @@
 */
 
 static void
-nc_assign(npy_cdouble *x, npy_cdouble *r)
+nc_assign(std::complex<double> *x, std::complex<double> *r)
 {
-  r->real = x->real;
-  r->imag = x->imag;
+  r->real(x->real());
+  r->imag(x->imag());
   return;
 }
 
 static void
-nc_sum(npy_cdouble *a, npy_cdouble *b, npy_cdouble *r)
+nc_sum(std::complex<double> *a, std::complex<double> *b, std::complex<double> 
*r)
 {
-    r->real = a->real + b->real;
-    r->imag = a->imag + b->imag;
+    r->real(a->real() + b->real());
+    r->imag(a->imag() + b->imag());
     return;
 }
 
 static void
-nc_diff(npy_cdouble *a, npy_cdouble *b, npy_cdouble *r)
+nc_diff(std::complex<double> *a, std::complex<double> *b, std::complex<double> 
*r)
 {
-    r->real = a->real - b->real;
-    r->imag = a->imag - b->imag;
+    r->real(a->real() - b->real());
+    r->imag(a->imag() - b->imag());
     return;
 }
 
 static void
-nc_neg(npy_cdouble *a, npy_cdouble *r)
+nc_neg(std::complex<double> *a, std::complex<double> *r)
 {
-    r->real = -a->real;
-    r->imag = -a->imag;
+    r->real(-a->real());
+    r->imag(-a->imag());
     return;
 }
 
 static void
-nc_conj(npy_cdouble *a, npy_cdouble *r)
+nc_conj(std::complex<double> *a, std::complex<double> *r)
 {
-    r->real = a->real;
-    r->imag = -a->imag;
+    r->real(a->real());
+    r->imag(-a->imag());
     return;
 }
 
@@ -85,109 +86,109 @@
 }
 
 static void
-nc_prod(npy_cdouble *a, npy_cdouble *b, npy_cdouble *r)
+nc_prod(std::complex<double> *a, std::complex<double> *b, std::complex<double> 
*r)
 {
-    double ar=a->real, br=b->real, ai=a->imag, bi=b->imag;
-    r->real = ar*br - ai*bi;
-    r->imag = ar*bi + ai*br;
+    double ar=a->real(), br=b->real(), ai=a->imag(), bi=b->imag();
+    r->real(ar*br - ai*bi);
+    r->imag(ar*bi + ai*br);
     return;
 }
 
 static void
-nc_quot(npy_cdouble *a, npy_cdouble *b, npy_cdouble *r)
+nc_quot(std::complex<double> *a, std::complex<double> *b, std::complex<double> 
*r)
 {
-    double ar=a->real, br=b->real, ai=a->imag, bi=b->imag;
+    double ar=a->real(), br=b->real(), ai=a->imag(), bi=b->imag();
     double d = br*br + bi*bi;
-    r->real = (ar*br + ai*bi)/d;
-    r->imag = (ai*br - ar*bi)/d;
+    r->real((ar*br + ai*bi)/d);
+    r->imag((ai*br - ar*bi)/d);
     return;
 }
 
 static void
-nc_sqrt(npy_cdouble *x, npy_cdouble *r)
+nc_sqrt(std::complex<double> *x, std::complex<double> *r)
 {
     double s,d;
-    if (x->real == 0. && x->imag == 0.)
+    if (x->real() == 0. && x->imag() == 0.)
         *r = *x;
     else {
-        s = sqrt((fabs(x->real) + hypot(x->real,x->imag))/2);
-        d = x->imag/(2*s);
-        if (x->real > 0.) {
-            r->real = s;
-            r->imag = d;
+        s = sqrt((fabs(x->real()) + hypot(x->real(),x->imag()))/2);
+        d = x->imag()/(2*s);
+        if (x->real() > 0.) {
+            r->real(s);
+            r->imag(d);
         }
-        else if (x->imag >= 0.) {
-            r->real = d;
-            r->imag = s;
+        else if (x->imag() >= 0.) {
+            r->real(d);
+            r->imag(s);
         }
         else {
-            r->real = -d;
-            r->imag = -s;
+            r->real(-d);
+            r->imag(-s);
         }
     }
     return;
 }
 
 static void
-nc_log(npy_cdouble *x, npy_cdouble *r)
+nc_log(std::complex<double> *x, std::complex<double> *r)
 {
-    double l = hypot(x->real,x->imag);
-    r->imag = atan2(x->imag, x->real);
-    r->real = log(l);
+    double l = hypot(x->real(),x->imag());
+    r->imag(atan2(x->imag(), x->real()));
+    r->real(log(l));
     return;
 }
 
 static void
-nc_log1p(npy_cdouble *x, npy_cdouble *r)
+nc_log1p(std::complex<double> *x, std::complex<double> *r)
 {
-    double l = hypot(x->real + 1.0,x->imag);
-    r->imag = atan2(x->imag, x->real + 1.0);
-    r->real = log(l);
+    double l = hypot(x->real() + 1.0,x->imag());
+    r->imag(atan2(x->imag(), x->real() + 1.0));
+    r->real(log(l));
     return;
 }
 
 static void
-nc_exp(npy_cdouble *x, npy_cdouble *r)
+nc_exp(std::complex<double> *x, std::complex<double> *r)
 {
-    double a = exp(x->real);
-    r->real = a*cos(x->imag);
-    r->imag = a*sin(x->imag);
+    double a = exp(x->real());
+    r->real(a*cos(x->imag()));
+    r->imag(a*sin(x->imag()));
     return;
 }
 
 static void
-nc_expm1(npy_cdouble *x, npy_cdouble *r)
+nc_expm1(std::complex<double> *x, std::complex<double> *r)
 {
-    double a = sin(x->imag / 2);
-    double b = exp(x->real);
-    r->real = expm1(x->real) * cos(x->imag) - 2 * a * a;
-    r->imag = b * sin(x->imag);
+    double a = sin(x->imag() / 2);
+    double b = exp(x->real());
+    r->real(expm1(x->real()) * cos(x->imag()) - 2 * a * a);
+    r->imag(b * sin(x->imag()));
     return;
 }
 
 static void
-nc_pow(npy_cdouble *a, npy_cdouble *b, npy_cdouble *r)
+nc_pow(std::complex<double> *a, std::complex<double> *b, std::complex<double> 
*r)
 {
     npy_intp n;
-    double ar=a->real, br=b->real, ai=a->imag, bi=b->imag;
+    double ar=a->real(), br=b->real(), ai=a->imag(), bi=b->imag();
 
     if (br == 0. && bi == 0.) {
-        r->real = 1.;
-        r->imag = 0.;
+        r->real(1.);
+        r->imag(0.);
         return;
     }
     if (ar == 0. && ai == 0.) {
-        r->real = 0.;
-        r->imag = 0.;
+        r->real(0.);
+        r->imag(0.);
         return;
     }
     if (bi == 0 && (n=(npy_intp)br) == br) {
         if (n > -100 && n < 100) {
-        npy_cdouble p, aa;
+        std::complex<double> p, aa;
         npy_intp mask = 1;
         if (n < 0) n = -n;
         aa = nc_1;
-        p.real = ar; p.imag = ai;
+        p.real(ar); p.imag(ai);
         while (1) {
             if (n & mask)
                 nc_prod(&aa,&p,&aa);
@@ -195,7 +196,7 @@
             if (n < mask || mask <= 0) break;
             nc_prod(&p,&p,&p);
         }
-        r->real = aa.real; r->imag = aa.imag;
+        r->real(aa.real()); r->imag(aa.imag());
         if (br < 0) nc_quot(&nc_1, r, r);
         return;
         }
@@ -210,19 +211,19 @@
 
 
 static void
-nc_prodi(npy_cdouble *x, npy_cdouble *r)
+nc_prodi(std::complex<double> *x, std::complex<double> *r)
 {
-    double xr = x->real;
-    r->real = -x->imag;
-    r->imag = xr;
+    double xr = x->real();
+    r->real(-x->imag());
+    r->imag(xr);
     return;
 }
 
 
 static void
-nc_acos(npy_cdouble *x, npy_cdouble *r)
+nc_acos(std::complex<double> *x, std::complex<double> *r)
 {
-    npy_cdouble a, *pa=&a;
+    std::complex<double> a, *pa=&a;
 
     nc_assign(x, pa);
     nc_prod(x,x,r);
@@ -240,9 +241,9 @@
 }
 
 static void
-nc_acosh(npy_cdouble *x, npy_cdouble *r)
+nc_acosh(std::complex<double> *x, std::complex<double> *r)
 {
-    npy_cdouble t, a, *pa=&a;
+    std::complex<double> t, a, *pa=&a;
 
     nc_assign(x, pa);
     nc_sum(x, &nc_1, &t);
@@ -260,9 +261,9 @@
 }
 
 static void
-nc_asin(npy_cdouble *x, npy_cdouble *r)
+nc_asin(std::complex<double> *x, std::complex<double> *r)
 {
-    npy_cdouble a, *pa=&a;
+    std::complex<double> a, *pa=&a;
     nc_prodi(x, pa);
     nc_prod(x, x, r);
     nc_diff(&nc_1, r, r);
@@ -280,9 +281,9 @@
 
 
 static void
-nc_asinh(npy_cdouble *x, npy_cdouble *r)
+nc_asinh(std::complex<double> *x, std::complex<double> *r)
 {
-    npy_cdouble a, *pa=&a;
+    std::complex<double> a, *pa=&a;
     nc_assign(x, pa);
     nc_prod(x, x, r);
     nc_sum(&nc_1, r, r);
@@ -296,9 +297,9 @@
 }
 
 static void
-nc_atan(npy_cdouble *x, npy_cdouble *r)
+nc_atan(std::complex<double> *x, std::complex<double> *r)
 {
-    npy_cdouble a, *pa=&a;
+    std::complex<double> a, *pa=&a;
     nc_diff(&nc_i, x, pa);
     nc_sum(&nc_i, x, r);
     nc_quot(r, pa, r);
@@ -311,9 +312,9 @@
 }
 
 static void
-nc_atanh(npy_cdouble *x, npy_cdouble *r)
+nc_atanh(std::complex<double> *x, std::complex<double> *r)
 {
-    npy_cdouble a, b, *pa=&a, *pb=&b;
+    std::complex<double> a, b, *pa=&a, *pb=&b;
     nc_assign(x, pa);
     nc_diff(&nc_1, pa, r);
     nc_sum(&nc_1, pa, pb);
@@ -327,20 +328,20 @@
 }
 
 static void
-nc_cos(npy_cdouble *x, npy_cdouble *r)
+nc_cos(std::complex<double> *x, std::complex<double> *r)
 {
-    double xr=x->real, xi=x->imag;
-    r->real = cos(xr)*cosh(xi);
-    r->imag = -sin(xr)*sinh(xi);
+    double xr=x->real(), xi=x->imag();
+    r->real(cos(xr)*cosh(xi));
+    r->imag(-sin(xr)*sinh(xi));
     return;
 }
 
 static void
-nc_cosh(npy_cdouble *x, npy_cdouble *r)
+nc_cosh(std::complex<double> *x, std::complex<double> *r)
 {
-    double xr=x->real, xi=x->imag;
-    r->real = cos(xi)*cosh(xr);
-    r->imag = sin(xi)*sinh(xr);
+    double xr=x->real(), xi=x->imag();
+    r->real(cos(xi)*cosh(xr));
+    r->imag(sin(xi)*sinh(xr));
     return;
 }
 
@@ -348,39 +349,39 @@
 #define M_LOG10_E 0.434294481903251827651128918916605082294397
 
 static void
-nc_log10(npy_cdouble *x, npy_cdouble *r)
+nc_log10(std::complex<double> *x, std::complex<double> *r)
 {
     nc_log(x, r);
-    r->real *= M_LOG10_E;
-    r->imag *= M_LOG10_E;
+    r->real(r->real() * M_LOG10_E);
+    r->imag(r->imag() * M_LOG10_E);
     return;
 }
 
 static void
-nc_sin(npy_cdouble *x, npy_cdouble *r)
+nc_sin(std::complex<double> *x, std::complex<double> *r)
 {
-    double xr=x->real, xi=x->imag;
-    r->real = sin(xr)*cosh(xi);
-    r->imag = cos(xr)*sinh(xi);
+    double xr=x->real(), xi=x->imag();
+    r->real(sin(xr)*cosh(xi));
+    r->imag(cos(xr)*sinh(xi));
     return;
 }
 
 static void
-nc_sinh(npy_cdouble *x, npy_cdouble *r)
+nc_sinh(std::complex<double> *x, std::complex<double> *r)
 {
-    double xr=x->real, xi=x->imag;
-    r->real = cos(xi)*sinh(xr);
-    r->imag = sin(xi)*cosh(xr);
+    double xr=x->real(), xi=x->imag();
+    r->real(cos(xi)*sinh(xr));
+    r->imag(sin(xi)*cosh(xr));
     return;
 }
 
 static void
-nc_tan(npy_cdouble *x, npy_cdouble *r)
+nc_tan(std::complex<double> *x, std::complex<double> *r)
 {
     double sr,cr,shi,chi;
     double rs,is,rc,ic;
     double d;
-    double xr=x->real, xi=x->imag;
+    double xr=x->real(), xi=x->imag();
     sr = sin(xr);
     cr = cos(xr);
     shi = sinh(xi);
@@ -390,18 +391,18 @@
     rc = cr*chi;
     ic = -sr*shi;
     d = rc*rc + ic*ic;
-    r->real = (rs*rc+is*ic)/d;
-    r->imag = (is*rc-rs*ic)/d;
+    r->real((rs*rc+is*ic)/d);
+    r->imag((is*rc-rs*ic)/d);
     return;
 }
 
 static void
-nc_tanh(npy_cdouble *x, npy_cdouble *r)
+nc_tanh(std::complex<double> *x, std::complex<double> *r)
 {
     double si,ci,shr,chr;
     double rs,is,rc,ic;
     double d;
-    double xr=x->real, xi=x->imag;
+    double xr=x->real(), xi=x->imag();
     si = sin(xi);
     ci = cos(xi);
     shr = sinh(xr);
@@ -411,16 +412,16 @@
     rc = ci*chr;
     ic = si*shr;
     d = rc*rc + ic*ic;
-    r->real = (rs*rc+is*ic)/d;
-    r->imag = (is*rc-rs*ic)/d;
+    r->real((rs*rc+is*ic)/d);
+    r->imag((is*rc-rs*ic)/d);
     return;
 }
 
 static void
-nc_abs(npy_cdouble *x, npy_cdouble *r)
+nc_abs(std::complex<double> *x, std::complex<double> *r)
 {
-    r->real = sqrt(x->real*x->real + x->imag*x->imag);
-    r->imag = 0;
+    r->real(sqrt(x->real()*x->real() + x->imag()*x->imag()));
+    r->imag(0);
 }
 
 #endif // NUMEXPR_COMPLEX_FUNCTIONS_HPP
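
The port above from `npy_cdouble` to `std::complex<double>` is meant to be
behavior-preserving. A small sketch of how one might sanity-check that from
Python (assuming numexpr 2.8.8 and NumPy; the test array is illustrative)::

    import numpy as np
    import numexpr as ne

    z = np.array([1 + 2j, -0.5 + 0.25j, 3 - 4j], dtype=np.complex128)
    # The rewritten nc_* kernels should agree with NumPy's complex math
    assert np.allclose(ne.evaluate('sqrt(z)'), np.sqrt(z))
    assert np.allclose(ne.evaluate('exp(z) * conj(z)'), np.exp(z) * np.conj(z))
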
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/numexpr-2.8.7/numexpr/interp_body.cpp 
new/numexpr-2.8.8/numexpr/interp_body.cpp
--- old/numexpr-2.8.7/numexpr/interp_body.cpp   2023-09-26 10:01:22.000000000 
+0200
+++ new/numexpr-2.8.8/numexpr/interp_body.cpp   2023-12-11 13:56:26.000000000 
+0100
@@ -199,7 +199,7 @@
         #define s3    ((char   *)x3+j*sb3)
         /* Some temporaries */
         double da, db;
-        npy_cdouble ca, cb;
+        std::complex<double> ca, cb;
 
         switch (op) {
 
@@ -432,19 +432,19 @@
                                                 (const MKL_Complex16*)x1,
                                                 (MKL_Complex16*)dest));
 #else
-            VEC_ARG1(ca.real = c1r;
-                     ca.imag = c1i;
+            VEC_ARG1(ca.real(c1r);
+                     ca.imag(c1i);
                      functions_cc[arg2](&ca, &ca);
-                     cr_dest = ca.real;
-                     ci_dest = ca.imag);
+                     cr_dest = ca.real();
+                     ci_dest = ca.imag());
 #endif
-        case OP_FUNC_CCCN: VEC_ARG2(ca.real = c1r;
-                                    ca.imag = c1i;
-                                    cb.real = c2r;
-                                    cb.imag = c2i;
+        case OP_FUNC_CCCN: VEC_ARG2(ca.real(c1r);
+                                    ca.imag(c1i);
+                                    cb.real(c2r);
+                                    cb.imag(c2i);
                                     functions_ccc[arg3](&ca, &cb, &ca);
-                                    cr_dest = ca.real;
-                                    ci_dest = ca.imag);
+                                    cr_dest = ca.real();
+                                    ci_dest = ca.imag());
 
         case OP_REAL_DC: VEC_ARG1(d_dest = c1r);
         case OP_IMAG_DC: VEC_ARG1(d_dest = c1i);
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/numexpr-2.8.7/numexpr/interpreter.cpp 
new/numexpr-2.8.8/numexpr/interpreter.cpp
--- old/numexpr-2.8.7/numexpr/interpreter.cpp   2023-09-26 10:01:22.000000000 
+0200
+++ new/numexpr-2.8.8/numexpr/interpreter.cpp   2023-12-11 13:56:26.000000000 
+0100
@@ -246,7 +246,7 @@
 
 
 
-typedef void (*FuncCCPtr)(npy_cdouble*, npy_cdouble*);
+typedef void (*FuncCCPtr)(std::complex<double>*, std::complex<double>*);
 
 FuncCCPtr functions_cc[] = {
 #define FUNC_CC(fop, s, f, ...) f,
@@ -295,7 +295,7 @@
 #endif
 
 
-typedef void (*FuncCCCPtr)(npy_cdouble*, npy_cdouble*, npy_cdouble*);
+typedef void (*FuncCCCPtr)(std::complex<double>*, std::complex<double>*, 
std::complex<double>*);
 
 FuncCCCPtr functions_ccc[] = {
 #define FUNC_CCC(fop, s, f) f,
@@ -980,10 +980,10 @@
 PyObject *
 NumExpr_run(NumExprObject *self, PyObject *args, PyObject *kwds)
 {
-    PyArrayObject *operands[NPY_MAXARGS];
-    PyArray_Descr *dtypes[NPY_MAXARGS], **dtypes_tmp;
+    PyArrayObject *operands[NE_MAXARGS];
+    PyArray_Descr *dtypes[NE_MAXARGS], **dtypes_tmp;
     PyObject *tmp, *ret;
-    npy_uint32 op_flags[NPY_MAXARGS];
+    npy_uint32 op_flags[NE_MAXARGS];
     NPY_CASTING casting = NPY_SAFE_CASTING;
     NPY_ORDER order = NPY_KEEPORDER;
     unsigned int i, n_inputs;
@@ -997,8 +997,8 @@
     bool reduction_outer_loop = false, need_output_buffering = false, 
full_reduction = false;
 
     // To specify axes when doing a reduction
-    int op_axes_values[NPY_MAXARGS][NPY_MAXDIMS],
-         op_axes_reduction_values[NPY_MAXARGS];
+    int op_axes_values[NE_MAXARGS][NPY_MAXDIMS],
+         op_axes_reduction_values[NE_MAXARGS];
     int *op_axes_ptrs[NPY_MAXDIMS];
     int oa_ndim = 0;
     int **op_axes = NULL;
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/numexpr-2.8.7/numexpr/necompiler.py 
new/numexpr-2.8.8/numexpr/necompiler.py
--- old/numexpr-2.8.7/numexpr/necompiler.py     2023-09-26 10:01:22.000000000 
+0200
+++ new/numexpr-2.8.8/numexpr/necompiler.py     2023-12-11 13:56:26.000000000 
+0100
@@ -265,7 +265,7 @@
 
 _flow_pat = r'[\;\[\:]'
 _dunder_pat = r'(^|[^\w])__[\w]+__($|[^\w])'
-_attr_pat = r'\.\b(?!(real|imag|\d*[eE]?[+-]?\d+)\b)'
+_attr_pat = r'\.\b(?!(real|imag|(\d*[eE]?[+-]?\d+)|\d*j)\b)'
 _blacklist_re = re.compile(f'{_flow_pat}|{_dunder_pat}|{_attr_pat}')
 
 def stringToExpression(s, types, context, sanitize: bool=True):
@@ -275,6 +275,7 @@
     # parse into its homebrew AST. This is to protect the call to `eval` below.
     # We forbid `;`, `:`. `[` and `__`, and attribute access via '.'.
     # We cannot ban `.real` or `.imag` however...
+    # We also cannot ban `.\d*j`, where `\d*` is some digits (or none), e.g. 
1.5j, 1.j
     if sanitize:
         no_whitespace = re.sub(r'\s+', '', s)
         if _blacklist_re.search(no_whitespace) is not None:
@@ -970,11 +971,12 @@
                  out=out, order=order, casting=casting, 
                  _frame_depth=_frame_depth, sanitize=sanitize, **kwargs)
     if e is None:
-        return re_evaluate(local_dict=local_dict, _frame_depth=_frame_depth)
+        return re_evaluate(local_dict=local_dict, global_dict=global_dict, 
_frame_depth=_frame_depth)
     else:
         raise e
     
 def re_evaluate(local_dict: Optional[Dict] = None, 
+                global_dict: Optional[Dict] = None,
                 _frame_depth: int=2) -> numpy.ndarray:
     """
     Re-evaluate the previous executed array expression without any check.
@@ -998,7 +1000,7 @@
     except KeyError:
         raise RuntimeError("A previous evaluate() execution was not found, 
please call `validate` or `evaluate` once before `re_evaluate`")
     argnames = _numexpr_last['argnames']
-    args = getArguments(argnames, local_dict, _frame_depth=_frame_depth)
+    args = getArguments(argnames, local_dict, global_dict, 
_frame_depth=_frame_depth)
     kwargs = _numexpr_last['kwargs']
     with evaluate_lock:
         return compiled_ex(*args, **kwargs)
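
For reference, the relaxed `_attr_pat` in the hunk above is what lets
imaginary literals such as `1.5j` through the sanitizer. A standalone sketch
of the old versus new pattern, using only the stdlib `re` module (variable
names mirror the diff)::

    import re

    _flow_pat = r'[\;\[\:]'
    _dunder_pat = r'(^|[^\w])__[\w]+__($|[^\w])'
    old_attr_pat = r'\.\b(?!(real|imag|\d*[eE]?[+-]?\d+)\b)'
    new_attr_pat = r'\.\b(?!(real|imag|(\d*[eE]?[+-]?\d+)|\d*j)\b)'

    old_re = re.compile(f'{_flow_pat}|{_dunder_pat}|{old_attr_pat}')
    new_re = re.compile(f'{_flow_pat}|{_dunder_pat}|{new_attr_pat}')

    print(bool(old_re.search('1.5j')))   # True  -> was rejected as unsafe
    print(bool(new_re.search('1.5j')))   # False -> now accepted
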
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/numexpr-2.8.7/numexpr/numexpr_config.hpp 
new/numexpr-2.8.8/numexpr/numexpr_config.hpp
--- old/numexpr-2.8.7/numexpr/numexpr_config.hpp        2023-09-26 
10:01:22.000000000 +0200
+++ new/numexpr-2.8.8/numexpr/numexpr_config.hpp        2023-12-11 
13:56:26.000000000 +0100
@@ -23,6 +23,10 @@
 // environment variable, "NUMEXPR_MAX_THREADS"
 #define DEFAULT_MAX_THREADS 64
 
+// Remove dependence on NPY_MAXARGS, which would be a runtime constant instead 
of compiletime
+// constant. If numpy raises NPY_MAXARGS, we should notice and raise this as 
well
+#define NE_MAXARGS 64
+
 #if defined(_WIN32)
   #include "win32/pthread.h"
   #include <process.h>
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/numexpr-2.8.7/numexpr/version.py 
new/numexpr-2.8.8/numexpr/version.py
--- old/numexpr-2.8.7/numexpr/version.py        2023-09-26 10:15:25.000000000 
+0200
+++ new/numexpr-2.8.8/numexpr/version.py        2023-12-11 14:06:59.000000000 
+0100
@@ -1,4 +1,4 @@
 # THIS FILE IS GENERATED BY `SETUP.PY`
-version = '2.8.7'
-numpy_build_version = '1.23.2'
+version = '2.8.8'
+numpy_build_version = '1.26.1'
 platform_machine = 'AMD64'
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/numexpr-2.8.7/numexpr.egg-info/PKG-INFO 
new/numexpr-2.8.8/numexpr.egg-info/PKG-INFO
--- old/numexpr-2.8.7/numexpr.egg-info/PKG-INFO 2023-09-26 10:15:25.000000000 
+0200
+++ new/numexpr-2.8.8/numexpr.egg-info/PKG-INFO 2023-12-11 14:06:59.000000000 
+0100
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: numexpr
-Version: 2.8.7
+Version: 2.8.8
 Summary: Fast numerical expression evaluator for NumPy
 Home-page: https://github.com/pydata/numexpr
 Author: David M. Cooke, Francesc Alted, and others
@@ -30,8 +30,8 @@
 ======================================================
 
 :Author: David M. Cooke, Francesc Alted, and others.
-:Maintainer: Robert A. McLeod
-:Contact: [email protected]
+:Maintainer: Francesc Alted
+:Contact: [email protected]
 :URL: https://github.com/pydata/numexpr
 :Documentation: http://numexpr.readthedocs.io/en/latest/
 :Travis CI: |travis|
@@ -51,21 +51,6 @@
 .. |version| image:: https://img.shields.io/pypi/v/numexpr.png
         :target: https://pypi.python.org/pypi/numexpr
 
-IMPORTANT NOTE: NumExpr is looking for maintainers!
----------------------------------------------------
-
-After 5 years as a solo maintainer (and performing a most excellent work), 
Robert McLeod
-is asking for a well deserved break. So the NumExpr project is looking for a 
new
-maintainer for a package that is used in pandas, PyTables and many other 
packages.
-If have benefited of NumExpr capabilities in the past, and willing to 
contribute back to
-the community, we would be happy to hear about you!
-
-We are looking for someone that is knowledgeable about compiling extensions, 
and that is
-ready to spend some cycles in making releases (2 or 3 a year, maybe even 
less!).
-Interested? just open a new ticket here and we will help you onboarding!
-
-Thank you!
-
 
 What is NumExpr?
 ----------------
@@ -96,19 +81,19 @@
 into small chunks that easily fit in the cache of the CPU and passed
 to the virtual machine. The virtual machine then applies the
 operations on each chunk. It's worth noting that all temporaries and
-constants in the expression are also chunked. Chunks are distributed among 
-the available cores of the CPU, resulting in highly parallelized code 
+constants in the expression are also chunked. Chunks are distributed among
+the available cores of the CPU, resulting in highly parallelized code
 execution.
 
 The result is that NumExpr can get the most of your machine computing
 capabilities for array-wise computations. Common speed-ups with regard
 to NumPy are usually between 0.95x (for very simple expressions like
-:code:`'a + 1'`) and 4x (for relatively complex ones like :code:`'a*b-4.1*a > 
2.5*b'`), 
-although much higher speed-ups can be achieved for some functions  and complex 
+:code:`'a + 1'`) and 4x (for relatively complex ones like :code:`'a*b-4.1*a > 
2.5*b'`),
+although much higher speed-ups can be achieved for some functions  and complex
 math operations (up to 15x in some cases).
 
-NumExpr performs best on matrices that are too large to fit in L1 CPU cache. 
-In order to get a better idea on the different speed-ups that can be achieved 
+NumExpr performs best on matrices that are too large to fit in L1 CPU cache.
+In order to get a better idea on the different speed-ups that can be achieved
 on your platform, run the provided benchmarks.
 
 Installation
@@ -117,13 +102,13 @@
 From wheels
 ^^^^^^^^^^^
 
-NumExpr is available for install via `pip` for a wide range of platforms and 
-Python versions (which may be browsed at: 
https://pypi.org/project/numexpr/#files). 
+NumExpr is available for install via `pip` for a wide range of platforms and
+Python versions (which may be browsed at: 
https://pypi.org/project/numexpr/#files).
 Installation can be performed as::
 
     pip install numexpr
 
-If you are using the Anaconda or Miniconda distribution of Python you may 
prefer 
+If you are using the Anaconda or Miniconda distribution of Python you may 
prefer
 to use the `conda` package manager in this case::
 
     conda install numexpr
@@ -131,18 +116,18 @@
 From Source
 ^^^^^^^^^^^
 
-On most \*nix systems your compilers will already be present. However if you 
+On most \*nix systems your compilers will already be present. However if you
 are using a virtual environment with a substantially newer version of Python 
than
 your system Python you may be prompted to install a new version of `gcc` or 
`clang`.
 
-For Windows, you will need to install the Microsoft Visual C++ Build Tools 
-(which are free) first. The version depends on which version of Python you 
have 
+For Windows, you will need to install the Microsoft Visual C++ Build Tools
+(which are free) first. The version depends on which version of Python you have
 installed:
 
 https://wiki.python.org/moin/WindowsCompilers
 
-For Python 3.6+ simply installing the latest version of MSVC build tools 
should 
-be sufficient. Note that wheels found via pip do not include MKL support. 
Wheels 
+For Python 3.6+ simply installing the latest version of MSVC build tools should
+be sufficient. Note that wheels found via pip do not include MKL support. 
Wheels
 available via `conda` will have MKL, if the MKL backend is used for NumPy.
 
 See `requirements.txt` for the required version of NumPy.
@@ -160,19 +145,19 @@
 Enable Intel® MKL support
 ^^^^^^^^^^^^^^^^^^^^^^^^^
 
-NumExpr includes support for Intel's MKL library. This may provide better 
-performance on Intel architectures, mainly when evaluating transcendental 
-functions (trigonometrical, exponential, ...). 
-
-If you have Intel's MKL, copy the `site.cfg.example` that comes with the 
-distribution to `site.cfg` and edit the latter file to provide correct paths 
to 
-the MKL libraries in your system.  After doing this, you can proceed with the 
+NumExpr includes support for Intel's MKL library. This may provide better
+performance on Intel architectures, mainly when evaluating transcendental
+functions (trigonometrical, exponential, ...).
+
+If you have Intel's MKL, copy the `site.cfg.example` that comes with the
+distribution to `site.cfg` and edit the latter file to provide correct paths to
+the MKL libraries in your system.  After doing this, you can proceed with the
 usual building instructions listed above.
 
-Pay attention to the messages during the building process in order to know 
-whether MKL has been detected or not.  Finally, you can check the speed-ups on 
-your machine by running the `bench/vml_timing.py` script (you can play with 
-different parameters to the `set_vml_accuracy_mode()` and 
`set_vml_num_threads()` 
+Pay attention to the messages during the building process in order to know
+whether MKL has been detected or not.  Finally, you can check the speed-ups on
+your machine by running the `bench/vml_timing.py` script (you can play with
+different parameters to the `set_vml_accuracy_mode()` and 
`set_vml_num_threads()`
 functions in the script so as to see how it would affect performance).
 
 Usage
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/numexpr-2.8.7/setup.cfg new/numexpr-2.8.8/setup.cfg
--- old/numexpr-2.8.7/setup.cfg 2023-09-26 10:15:25.606307000 +0200
+++ new/numexpr-2.8.8/setup.cfg 2023-12-11 14:06:59.466662000 +0100
@@ -1,6 +1,6 @@
 [metadata]
 name = numexpr
-version = 2.8.7
+version = 2.8.8
 description = Fast numerical expression evaluator for NumPy
 author = David M. Cooke, Francesc Alted, and others
 maintainer = Robert A. McLeod
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/numexpr-2.8.7/setup.py new/numexpr-2.8.8/setup.py
--- old/numexpr-2.8.7/setup.py  2023-09-26 10:01:22.000000000 +0200
+++ new/numexpr-2.8.8/setup.py  2023-12-11 13:56:26.000000000 +0100
@@ -11,15 +11,16 @@
 
 import os, os.path as op
 from setuptools import setup, Extension
-from setuptools.config import read_configuration
 import platform
+import configparser
 import numpy as np
 
 with open('requirements.txt') as f:
     requirements = f.read().splitlines()
 
 with open('numexpr/version.py', 'w') as fh:
-    cfg = read_configuration("setup.cfg")
+    cfg = configparser.ConfigParser()
+    cfg.read('setup.cfg')
     fh.write('# THIS FILE IS GENERATED BY `SETUP.PY`\n')
     fh.write("version = '%s'\n" % cfg['metadata']['version'])
     try:
@@ -54,12 +55,11 @@
     """
     Parses `site.cfg`, if it exists, to determine the location of Intel oneAPI 
MKL.
 
-    To compile NumExpr with MKL (VML) support, typically you need to copy the 
-    provided `site.cfg.example` to `site.cfg` and then edit the paths in the 
-    configuration lines for `include_dirs` and `library_dirs` paths to point 
+    To compile NumExpr with MKL (VML) support, typically you need to copy the
+    provided `site.cfg.example` to `site.cfg` and then edit the paths in the
+    configuration lines for `include_dirs` and `library_dirs` paths to point
     to the appropriate directories on your machine.
     """
-    import configparser
     site = configparser.ConfigParser()
     if not op.isfile('site.cfg'):
         return
@@ -77,7 +77,7 @@
             site['mkl']['libraries'].replace(os.pathsep, ',').split(','))
         def_macros.append(('USE_VML', None))
         print(f'FOUND MKL IMPORT')
-        
+
 
 def setup_package():
 

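The `setup.py` hunk above swaps the `setuptools.config.read_configuration`
helper for the stdlib `configparser`. Condensed, the new version-extraction
logic amounts to something like this (a sketch, assuming a `setup.cfg` with a
`[metadata]` section in the current directory)::

    import configparser

    # Read the package version straight from setup.cfg
    cfg = configparser.ConfigParser()
    cfg.read('setup.cfg')

    # Regenerate numexpr/version.py, as setup.py does
    with open('numexpr/version.py', 'w') as fh:
        fh.write('# THIS FILE IS GENERATED BY `SETUP.PY`\n')
        fh.write("version = '%s'\n" % cfg['metadata']['version'])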