Hello community,

here is the log from the commit of package python-uncertainties for 
openSUSE:Factory checked in at 2019-06-03 18:57:34
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/python-uncertainties (Old)
 and      /work/SRC/openSUSE:Factory/.python-uncertainties.new.5148 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "python-uncertainties"

Mon Jun  3 18:57:34 2019 rev:5 rq:707107 version:3.1.1

Changes:
--------
--- /work/SRC/openSUSE:Factory/python-uncertainties/python-uncertainties.changes    2018-10-31 13:20:39.835100971 +0100
+++ /work/SRC/openSUSE:Factory/.python-uncertainties.new.5148/python-uncertainties.changes    2019-06-03 18:57:36.296379125 +0200
@@ -1,0 +2,6 @@
+Mon Jun  3 09:24:16 UTC 2019 - Tomáš Chvátal <[email protected]>
+
+- Update to 3.1.1:
+  * No upstream changelog provided
+
+-------------------------------------------------------------------

Old:
----
  uncertainties-3.0.3.tar.gz

New:
----
  uncertainties-3.1.1.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-uncertainties.spec ++++++
--- /var/tmp/diff_new_pack.KPMgQv/_old  2019-06-03 18:57:38.800378196 +0200
+++ /var/tmp/diff_new_pack.KPMgQv/_new  2019-06-03 18:57:38.836378182 +0200
@@ -1,7 +1,7 @@
 #
 # spec file for package python-uncertainties
 #
-# Copyright (c) 2018 SUSE LINUX GmbH, Nuernberg, Germany.
+# Copyright (c) 2019 SUSE LINUX GmbH, Nuernberg, Germany.
 #
 # All modifications and additions to the file contributed by third parties
 # remain the property of their copyright owners, unless otherwise agreed
@@ -18,16 +18,21 @@
 
 %{?!python_module:%define python_module() python-%{**} python3-%{**}}
 Name:           python-uncertainties
-Version:        3.0.3
+Version:        3.1.1
 Release:        0
 Summary:        Uncertainties on the Quantities Involved (aka "Error Propagation")
 License:        BSD-3-Clause
 Group:          Development/Languages/Python
-URL:            http://pythonhosted.org/uncertainties/
+URL:            https://github.com/lebigot/uncertainties/
 Source:         https://files.pythonhosted.org/packages/source/u/uncertainties/uncertainties-%{version}.tar.gz
+BuildRequires:  %{python_module nose}
+BuildRequires:  %{python_module numpy}
 BuildRequires:  %{python_module setuptools}
 BuildRequires:  fdupes
 BuildRequires:  python-rpm-macros
+# Package uses 2to3 to generate python3 compatible code
+BuildRequires:  python3-testsuite
+BuildRequires:  python3-tools
 BuildArch:      noarch
 %if 0%{?suse_version}
 Recommends:     python-numpy
@@ -41,12 +46,19 @@
 
 %prep
 %setup -q -n uncertainties-%{version}
+# crazy directory layout
+rm -r uncertainties-py23
+mv uncertainties-py27 uncertainties
 
 %build
 %python_build
 
 %install
 %python_install
+%python_expand %fdupes %{buildroot}%{$python_sitelib}
+
+%check
+%python_expand PYTHONPATH=%{buildroot}%{$python_sitelib} $python setup.py nosetests -v
 
 %files %{python_files}
 %license LICENSE.txt

++++++ uncertainties-3.0.3.tar.gz -> uncertainties-3.1.1.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/uncertainties-3.0.3/PKG-INFO new/uncertainties-3.1.1/PKG-INFO
--- old/uncertainties-3.0.3/PKG-INFO    2018-10-28 23:11:47.000000000 +0100
+++ new/uncertainties-3.1.1/PKG-INFO    2019-05-30 20:02:29.000000000 +0200
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: uncertainties
-Version: 3.0.3
+Version: 3.1.1
 Summary: Transparent calculations with uncertainties on the quantities involved (aka error propagation); fast calculation of derivatives
 Home-page: http://uncertainties-python-package.readthedocs.io/
 Author: Eric O. LEBIGOT (EOL)
@@ -108,6 +108,10 @@
         
         Main changes:
         
+        - 3.1: Variables built through a correlation or covariance matrix, and that
+          have uncertainties that span many orders of magnitude are now 
+          calculated more accurately (improved ``correlated_values()`` and
+          ``correlated_values_norm()`` functions).
         - 3.0: Massive speedup for some operations involving large numbers of 
numbers with uncertainty, like ``sum(ufloat(1, 1) for _ in xrange(100000))`` 
(this is about 5,000 times faster than before).
         - 2.4.8: Friendlier completions in Python shells, etc.: internal 
functions should not appear anymore (for the user modules: ``uncertainties``, 
``uncertainties.umath`` and  ``uncertainties.unumpy``). Parsing the shorthand 
notation (e.g. ``3.1(2)``) now works with infinite values (e.g. ``-inf(inf)``); 
this mirrors the ability to print such numbers with uncertainty. The Particle 
Data Group rounding rule is applied in more cases (e.g. printing 724.2±26.2 now 
gives ``724±26``). The shorthand+LaTeX formatting of numbers with an infinite 
nominal value is fixed. ``uncertainties.unumpy.matrix`` now uses ``.std_devs`` 
instead of ``.std_devs()``, for consistency with floats with uncertainty 
(automatic conversion of code added to ``uncertainties.1to2``).
         - 2.4.7: String formatting now works for ``(-)inf+/-...`` numbers.
@@ -261,6 +265,7 @@
 Classifier: Programming Language :: Python :: 3.4
 Classifier: Programming Language :: Python :: 3.5
 Classifier: Programming Language :: Python :: 3.6
+Classifier: Programming Language :: Python :: 3.7
 Classifier: Programming Language :: Python :: Implementation :: Jython
 Classifier: Programming Language :: Python :: Implementation :: PyPy
 Classifier: Topic :: Education
@@ -271,7 +276,7 @@
 Classifier: Topic :: Software Development :: Libraries
 Classifier: Topic :: Software Development :: Libraries :: Python Modules
 Classifier: Topic :: Utilities
-Provides-Extra: all
-Provides-Extra: optional
-Provides-Extra: tests
 Provides-Extra: docs
+Provides-Extra: tests
+Provides-Extra: optional
+Provides-Extra: all
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/uncertainties-3.0.3/README.rst new/uncertainties-3.1.1/README.rst
--- old/uncertainties-3.0.3/README.rst  2018-10-28 23:09:33.000000000 +0100
+++ new/uncertainties-3.1.1/README.rst  2019-05-19 22:33:18.000000000 +0200
@@ -10,6 +10,9 @@
    :target: https://codecov.io/gh/lebigot/uncertainties/
 .. image:: https://readthedocs.org/projects/uncertainties-python-package/badge/?version=latest
    :target: http://uncertainties-python-package.readthedocs.io/en/latest/?badge=latest
+.. image:: https://img.shields.io/pypi/v/uncertainties.svg
+   :target: https://pypi.org/project/uncertainties/
+
    
 This is the ``uncertainties`` Python package, which performs **transparent
 calculations with uncertainties** (aka "error propagation"):
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/uncertainties-3.0.3/doc/index.rst new/uncertainties-3.1.1/doc/index.rst
--- old/uncertainties-3.0.3/doc/index.rst       2018-10-28 23:09:33.000000000 +0100
+++ new/uncertainties-3.1.1/doc/index.rst       2019-05-19 22:33:18.000000000 +0200
@@ -138,7 +138,7 @@
 
    conda install -c conda-forge uncertainties
 
-If you have `pip <hhttps://pypi.org/project/pip/`_, you can try to install
+If you have `pip <https://pypi.org/project/pip/>`_, you can try to install
 the latest version with
 
 .. code-block:: sh
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/uncertainties-3.0.3/doc/user_guide.rst new/uncertainties-3.1.1/doc/user_guide.rst
--- old/uncertainties-3.0.3/doc/user_guide.rst  2018-10-28 23:09:33.000000000 +0100
+++ new/uncertainties-3.1.1/doc/user_guide.rst  2019-05-19 22:33:18.000000000 +0200
@@ -134,8 +134,8 @@
 .. index:: correlations; detailed example
 
 
-Correlated variables
-====================
+Automatic correlations
+======================
 
 Correlations between variables are **automatically handled** whatever
 the number of variables involved, and whatever the complexity of the
@@ -350,10 +350,16 @@
 Use of a correlation matrix
 ---------------------------
 
-Alternatively, correlated values can be defined through a
-*correlation* matrix (the correlation matrix is the covariance matrix
-normalized with individual standard deviations; it has ones on its
-diagonal), along with a list of nominal values and standard deviations:
+Alternatively, correlated values can be defined through:
+
+- a sequence of nominal values and standard deviations, and
+- a *correlation* matrix between each variable of this sequence
+  (the correlation matrix is the covariance matrix
+  normalized with individual standard deviations; it has ones on its
+  diagonal)—in the form of a NumPy array-like object, e.g. a 
+  list of lists, or a NumPy array.
+
+Example: 
 
 >>> (u3, v3, sum3) = uncertainties.correlated_values_norm(
 ...     [(1, 0.1), (10, 0.1), (21, 0.22360679774997899)], corr_matrix)
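
The hunk above documents the new calling convention for correlated_values_norm(). As a rough, self-contained illustration (the correlation coefficients below are made-up stand-ins for the guide's corr_matrix, chosen so that the third value behaves like u + 2*v; only the function name, argument order, and the (nominal value, standard deviation) pairs come from the documentation):

    # Hedged sketch of the correlated_values_norm() usage described above;
    # requires NumPy and uncertainties >= 3.1.
    import numpy as np
    from uncertainties import correlated_values_norm

    # (nominal value, standard deviation) pairs, as in the documentation example:
    values_with_std_dev = [(1, 0.1), (10, 0.1), (21, 0.22360679774997899)]

    # Illustrative correlation matrix: the covariance matrix normalized by the
    # standard deviations, hence symmetric with ones on its diagonal.
    corr_matrix = np.array([
        [1.0,                0.0,                0.4472135954999579],
        [0.0,                1.0,                0.8944271909999159],
        [0.4472135954999579, 0.8944271909999159, 1.0],
    ])

    u3, v3, sum3 = correlated_values_norm(values_with_std_dev, corr_matrix)
    print(sum3)              # ~21.00+/-0.22: nominal value and std dev reproduced
    print(sum3 - u3 - 2*v3)  # essentially 0 with (near-)zero uncertainty:
                             # the encoded correlation is recovered
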
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/uncertainties-3.0.3/setup.py new/uncertainties-3.1.1/setup.py
--- old/uncertainties-3.0.3/setup.py    2018-10-28 23:09:33.000000000 +0100
+++ new/uncertainties-3.1.1/setup.py    2019-05-30 20:02:20.000000000 +0200
@@ -42,7 +42,7 @@
 # Common options for distutils/setuptools's setup():
 setup_options = dict(
     name='uncertainties',
-    version='3.0.3',
+    version='3.1.1',
     author='Eric O. LEBIGOT (EOL)',
     author_email='[email protected]',
     url='http://uncertainties-python-package.readthedocs.io/',
@@ -160,6 +160,10 @@
 
 Main changes:
 
+- 3.1: Variables built through a correlation or covariance matrix, and that
+  have uncertainties that span many orders of magnitude are now 
+  calculated more accurately (improved ``correlated_values()`` and
+  ``correlated_values_norm()`` functions).
 - 3.0: Massive speedup for some operations involving large numbers of numbers 
with uncertainty, like ``sum(ufloat(1, 1) for _ in xrange(100000))`` (this is 
about 5,000 times faster than before).
 - 2.4.8: Friendlier completions in Python shells, etc.: internal functions 
should not appear anymore (for the user modules: ``uncertainties``, 
``uncertainties.umath`` and  ``uncertainties.unumpy``). Parsing the shorthand 
notation (e.g. ``3.1(2)``) now works with infinite values (e.g. ``-inf(inf)``); 
this mirrors the ability to print such numbers with uncertainty. The Particle 
Data Group rounding rule is applied in more cases (e.g. printing 724.2±26.2 now 
gives ``724±26``). The shorthand+LaTeX formatting of numbers with an infinite 
nominal value is fixed. ``uncertainties.unumpy.matrix`` now uses ``.std_devs`` 
instead of ``.std_devs()``, for consistency with floats with uncertainty 
(automatic conversion of code added to ``uncertainties.1to2``).
 - 2.4.7: String formatting now works for ``(-)inf+/-...`` numbers.
@@ -320,6 +324,7 @@
         'Programming Language :: Python :: 3.4',
         'Programming Language :: Python :: 3.5',
         'Programming Language :: Python :: 3.6',
+        'Programming Language :: Python :: 3.7',
         'Programming Language :: Python :: Implementation :: Jython',
         'Programming Language :: Python :: Implementation :: PyPy',
         'Topic :: Education',
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/uncertainties-3.0.3/uncertainties-py23/__init__.py new/uncertainties-3.1.1/uncertainties-py23/__init__.py
--- old/uncertainties-3.0.3/uncertainties-py23/__init__.py      2018-10-28 23:08:27.000000000 +0100
+++ new/uncertainties-3.1.1/uncertainties-py23/__init__.py      2019-05-30 19:48:32.000000000 +0200
@@ -225,7 +225,7 @@
 from uncertainties.core import __all__  # For a correct help(uncertainties)
 
 # Numerical version:
-__version_info__ = (3, 0, 3)
+__version_info__ = (3, 1, 1)
 __version__ = '.'.join(map(str, __version_info__))
 
 __author__ = 'Eric O. LEBIGOT (EOL) <[email protected]>'
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/uncertainties-3.0.3/uncertainties-py23/core.py new/uncertainties-3.1.1/uncertainties-py23/core.py
--- old/uncertainties-3.0.3/uncertainties-py23/core.py  2018-10-28 23:08:27.000000000 +0100
+++ new/uncertainties-3.1.1/uncertainties-py23/core.py  2019-05-30 19:48:32.000000000 +0200
@@ -150,28 +150,84 @@
         The numbers with uncertainties returned depend on newly
         created, independent variables (Variable objects).
 
-        If 'tags' is not None, it must list the tag of each new
-        independent variable.
-
         nom_values -- sequence with the nominal (real) values of the
         numbers with uncertainties to be returned.
 
-        covariance_mat -- full covariance matrix of the returned
-        numbers with uncertainties (not the statistical correlation
-        matrix, i.e., not the normalized covariance matrix). For
-        example, the first element of this matrix is the variance of
-        the first returned number with uncertainty.
+        covariance_mat -- full covariance matrix of the returned numbers with
+        uncertainties. For example, the first element of this matrix is the
+        variance of the first number with uncertainty. This matrix must be a
+        NumPy array-like (list of lists, NumPy array, etc.). 
+
+        tags -- if 'tags' is not None, it must list the tag of each new
+        independent variable.
         """
 
+        # !!! It would in principle be possible to handle 0 variance
+        # variables by first selecting the sub-matrix that does not contain
+        # such variables (with the help of numpy.ix_()), and creating 
+        # them separately.
+        
+        std_devs = numpy.sqrt(numpy.diag(covariance_mat))
+
+        # For numerical stability reasons, we go through the correlation
+        # matrix, because it is insensitive to any change of scale in the
+        # quantities returned. However, care must be taken with 0 variance
+        # variables: calculating the correlation matrix cannot be simply done
+        # by dividing by standard deviations. We thus use specific
+        # normalization values, with no null value:
+        norm_vector = std_devs.copy()
+        norm_vector[norm_vector==0] = 1
+
+        return correlated_values_norm(
+            # !! The following zip() is a bit suboptimal: correlated_values()
+            # separates back the nominal values and the standard deviations:
+            zip(nom_values, std_devs),
+            covariance_mat/norm_vector/norm_vector[:,numpy.newaxis],
+            tags)
+
+    __all__.append('correlated_values')
+
+    def correlated_values_norm(values_with_std_dev, correlation_mat,
+                               tags=None):
+        '''
+        Return correlated values like correlated_values(), but takes
+        instead as input:
+
+        - nominal (float) values along with their standard deviation, and
+        - a correlation matrix (i.e. a normalized covariance matrix).
+
+        values_with_std_dev -- sequence of (nominal value, standard
+        deviation) pairs. The returned, correlated values have these
+        nominal values and standard deviations.
+
+        correlation_mat -- correlation matrix between the given values, except
+        that any value with a 0 standard deviation must have its correlations
+        set to 0, with a diagonal element set to an arbitrary value (something
+        close to 0-1 is recommended, for a better numerical precision).  When
+        no value has a 0 variance, this is the covariance matrix normalized by
+        standard deviations, and thus a symmetric matrix with ones on its
+        diagonal.  This matrix must be an NumPy array-like (list of lists,
+        NumPy array, etc.).
+
+        tags -- like for correlated_values().
+        '''
+
         # If no tags were given, we prepare tags for the newly created
         # variables:
         if tags is None:
-            tags = (None,) * len(nom_values)
+            tags = (None,) * len(values_with_std_dev)
 
+        (nominal_values, std_devs) = numpy.transpose(values_with_std_dev)
+
+        # We diagonalize the correlation matrix instead of the
+        # covariance matrix, because this is generally more stable
+        # numerically. In fact, the covariance matrix can have
+        # coefficients with arbitrary values, through changes of units
+        # of its input variables. This creates numerical instabilities.
+        #
         # The covariance matrix is diagonalized in order to define
         # the independent variables that model the given values:
-
-        (variances, transform) = numpy.linalg.eigh(covariance_mat)
+        (variances, transform) = numpy.linalg.eigh(correlation_mat)
 
         # Numerical errors might make some variances negative: we set
         # them to zero:
@@ -187,43 +243,19 @@
             Variable(0, sqrt(variance), tag)
             for (variance, tag) in zip(variances, tags))
 
+        # The coordinates of each new uncertainty as a function of the
+        # new variables must include the variable scale (standard deviation):
+        transform *= std_devs[:, numpy.newaxis] 
+        
         # Representation of the initial correlated values:
         values_funcs = tuple(
             AffineScalarFunc(
                 value,
-                LinearCombination(dict(itertools.izip(variables, coords))))
-            for (coords, value) in zip(transform, nom_values))
+                LinearCombination(dict(zip(variables, coords))))
+            for (coords, value) in zip(transform, nominal_values))
 
         return values_funcs
 
-    __all__.append('correlated_values')
-
-    def correlated_values_norm(values_with_std_dev, correlation_mat,
-                               tags=None):
-        '''
-        Return correlated values like correlated_values(), but takes
-        instead as input:
-
-        - nominal (float) values along with their standard deviation, and
-
-        - a correlation matrix (i.e. a normalized covariance matrix,
-          normalized with individual standard deviations).
-
-        values_with_std_dev -- sequence of (nominal value, standard
-        deviation) pairs. The returned, correlated values have these
-        nominal values and standard deviations.
-
-        correlation_mat -- correlation matrix (i.e. the normalized
-        covariance matrix, a matrix with ones on its diagonal).
-        '''
-
-        (nominal_values, std_devs) = numpy.transpose(values_with_std_dev)
-
-        return correlated_values(
-            nominal_values,
-            correlation_mat*std_devs*std_devs[numpy.newaxis].T,
-            tags)
-
     __all__.append('correlated_values_norm')
 
 ###############################################################################
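
For reference, the numerical-stability idea behind the rewritten correlated_values() above can be reproduced outside the library: divide the covariance matrix by the standard deviations (substituting 1 for any zero standard deviation) and diagonalize the resulting well-scaled matrix instead of the raw covariance matrix. A minimal standalone sketch that mirrors the diffed code (the helper name covariance_to_normalized is hypothetical, not part of the uncertainties API):

    # Standalone sketch of the normalization step introduced above.
    import numpy as np

    def covariance_to_normalized(covariance_mat):
        """Split a covariance matrix into standard deviations and the
        well-scaled, correlation-like matrix that is then diagonalized."""
        covariance_mat = np.asarray(covariance_mat, dtype=float)
        std_devs = np.sqrt(np.diag(covariance_mat))
        # Zero-variance entries cannot be divided out; they are normalized by 1:
        norm_vector = std_devs.copy()
        norm_vector[norm_vector == 0] = 1
        normalized = covariance_mat / norm_vector / norm_vector[:, np.newaxis]
        return std_devs, normalized

    # Covariance spanning ~80 orders of magnitude (cf. the new tests below):
    cov = np.diag([1e-70, 1e-70, 1e10])
    cov[0, 1] = cov[1, 0] = 0.9e-70
    std_devs, normalized = covariance_to_normalized(cov)
    print(normalized)  # all entries are of order 1, so the eigendecomposition
                       # is no longer dominated by the 1e10 term
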
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/uncertainties-3.0.3/uncertainties-py23/test_uncertainties.py new/uncertainties-3.1.1/uncertainties-py23/test_uncertainties.py
--- old/uncertainties-3.0.3/uncertainties-py23/test_uncertainties.py    2018-10-28 23:08:27.000000000 +0100
+++ new/uncertainties-3.1.1/uncertainties-py23/test_uncertainties.py    2019-05-30 19:48:32.000000000 +0200
@@ -2128,7 +2128,8 @@
         both their nominal value and uncertainty are equal (up to the
         given precision).
 
-        m1, m2 -- NumPy matrices.
+        m1, m2 -- NumPy arrays.
+
         precision -- precision passed through to
         uncertainties.test_uncertainties.numbers_close().
         """
@@ -2151,6 +2152,7 @@
             if not numbers_close(elmt1.std_dev,
                                  elmt2.std_dev, precision):
                 return False
+        
         return True
 
 
@@ -2273,6 +2275,44 @@
         assert numbers_close(corr_matrix[0,0], 1)
         assert numbers_close(corr_matrix[1,2], 2*v.std_dev/sum_value.std_dev)
 
+        ####################
+
+        # Test of numerical robustness despite wildly different
+        # orders of magnitude (see
+        # https://github.com/lebigot/uncertainties/issues/95):
+        cov = numpy.diag([1e-70, 1e-70, 1e10])
+        cov[0, 1] = cov[1, 0] = 0.9e-70
+        cov[[0, 1], 2] = -3e-34
+        cov[2, [0, 1]] = -3e-34
+        variables = uncert_core.correlated_values([0]*3, cov)
+
+        # Since the numbers are very small, we need to compare them
+        # in a stricter way, that handles the case of a 0 variance
+        # in `variables`:
+        assert numbers_close(
+                1e66*cov[0,0], 1e66*variables[0].s**2, tolerance=1e-5)
+        assert numbers_close(
+                1e66*cov[1,1], 1e66*variables[1].s**2, tolerance=1e-5)
+
+        ####################
+
+        # 0 variances are a bit special, since the correlation matrix
+        # cannot be calculated naively, so we test that there is no
+        # specific problem in this case:
+
+        cov = numpy.diag([0, 0, 10])
+        nom_values = [1, 2, 3]
+        variables = uncert_core.correlated_values(nom_values, cov)
+
+        for (variable, nom_value, variance) in zip(
+            variables, nom_values, cov.diagonal()):
+            
+            assert numbers_close(variable.n, nom_value)
+            assert numbers_close(variable.s**2, variance) 
+        
+        assert arrays_close(
+            cov,
+            numpy.array(uncert_core.covariance_matrix(variables)))
 
     def test_correlated_values_correlation_mat():
         '''
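
The zero-variance case added to the test above can also be exercised directly against the installed package. A hedged, self-contained rerun (assuming uncertainties 3.1.1 with NumPy; correlated_values() and covariance_matrix() are the documented top-level functions):

    # Standalone version of the new zero-variance round-trip check.
    import numpy as np
    from uncertainties import correlated_values, covariance_matrix

    cov = np.diag([0.0, 0.0, 10.0])   # two variables with exactly zero variance
    nom_values = [1.0, 2.0, 3.0]
    variables = correlated_values(nom_values, cov)

    for variable, nom_value, variance in zip(variables, nom_values, cov.diagonal()):
        assert np.isclose(variable.n, nom_value)      # nominal values preserved
        assert np.isclose(variable.s ** 2, variance)  # variances preserved, even 0

    # Round trip: the covariance matrix of the result matches the input.
    assert np.allclose(cov, np.array(covariance_matrix(variables)))
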
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/uncertainties-3.0.3/uncertainties-py27/__init__.py new/uncertainties-3.1.1/uncertainties-py27/__init__.py
--- old/uncertainties-3.0.3/uncertainties-py27/__init__.py      2018-10-28 23:00:31.000000000 +0100
+++ new/uncertainties-3.1.1/uncertainties-py27/__init__.py      2019-05-30 19:47:40.000000000 +0200
@@ -225,7 +225,7 @@
 from .core import __all__  # For a correct help(uncertainties)
 
 # Numerical version:
-__version_info__ = (3, 0, 3)
+__version_info__ = (3, 1, 1)
 __version__ = '.'.join(map(str, __version_info__))
 
 __author__ = 'Eric O. LEBIGOT (EOL) <[email protected]>'
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/uncertainties-3.0.3/uncertainties-py27/core.py new/uncertainties-3.1.1/uncertainties-py27/core.py
--- old/uncertainties-3.0.3/uncertainties-py27/core.py  2018-10-28 23:00:31.000000000 +0100
+++ new/uncertainties-3.1.1/uncertainties-py27/core.py  2019-05-30 19:47:40.000000000 +0200
@@ -143,28 +143,84 @@
         The numbers with uncertainties returned depend on newly
         created, independent variables (Variable objects).
 
-        If 'tags' is not None, it must list the tag of each new
-        independent variable.
-
         nom_values -- sequence with the nominal (real) values of the
         numbers with uncertainties to be returned.
 
-        covariance_mat -- full covariance matrix of the returned
-        numbers with uncertainties (not the statistical correlation
-        matrix, i.e., not the normalized covariance matrix). For
-        example, the first element of this matrix is the variance of
-        the first returned number with uncertainty.
+        covariance_mat -- full covariance matrix of the returned numbers with
+        uncertainties. For example, the first element of this matrix is the
+        variance of the first number with uncertainty. This matrix must be a
+        NumPy array-like (list of lists, NumPy array, etc.). 
+
+        tags -- if 'tags' is not None, it must list the tag of each new
+        independent variable.
         """
 
+        # !!! It would in principle be possible to handle 0 variance
+        # variables by first selecting the sub-matrix that does not contain
+        # such variables (with the help of numpy.ix_()), and creating 
+        # them separately.
+        
+        std_devs = numpy.sqrt(numpy.diag(covariance_mat))
+
+        # For numerical stability reasons, we go through the correlation
+        # matrix, because it is insensitive to any change of scale in the
+        # quantities returned. However, care must be taken with 0 variance
+        # variables: calculating the correlation matrix cannot be simply done
+        # by dividing by standard deviations. We thus use specific
+        # normalization values, with no null value:
+        norm_vector = std_devs.copy()
+        norm_vector[norm_vector==0] = 1
+
+        return correlated_values_norm(
+            # !! The following zip() is a bit suboptimal: correlated_values()
+            # separates back the nominal values and the standard deviations:
+            zip(nom_values, std_devs),
+            covariance_mat/norm_vector/norm_vector[:,numpy.newaxis],
+            tags)
+
+    __all__.append('correlated_values')
+
+    def correlated_values_norm(values_with_std_dev, correlation_mat,
+                               tags=None):
+        '''
+        Return correlated values like correlated_values(), but takes
+        instead as input:
+
+        - nominal (float) values along with their standard deviation, and
+        - a correlation matrix (i.e. a normalized covariance matrix).
+
+        values_with_std_dev -- sequence of (nominal value, standard
+        deviation) pairs. The returned, correlated values have these
+        nominal values and standard deviations.
+
+        correlation_mat -- correlation matrix between the given values, except
+        that any value with a 0 standard deviation must have its correlations
+        set to 0, with a diagonal element set to an arbitrary value (something
+        close to 0-1 is recommended, for a better numerical precision).  When
+        no value has a 0 variance, this is the covariance matrix normalized by
+        standard deviations, and thus a symmetric matrix with ones on its
+        diagonal.  This matrix must be an NumPy array-like (list of lists,
+        NumPy array, etc.).
+
+        tags -- like for correlated_values().
+        '''
+
         # If no tags were given, we prepare tags for the newly created
         # variables:
         if tags is None:
-            tags = (None,) * len(nom_values)
+            tags = (None,) * len(values_with_std_dev)
 
+        (nominal_values, std_devs) = numpy.transpose(values_with_std_dev)
+
+        # We diagonalize the correlation matrix instead of the
+        # covariance matrix, because this is generally more stable
+        # numerically. In fact, the covariance matrix can have
+        # coefficients with arbitrary values, through changes of units
+        # of its input variables. This creates numerical instabilities.
+        #
         # The covariance matrix is diagonalized in order to define
         # the independent variables that model the given values:
-
-        (variances, transform) = numpy.linalg.eigh(covariance_mat)
+        (variances, transform) = numpy.linalg.eigh(correlation_mat)
 
         # Numerical errors might make some variances negative: we set
         # them to zero:
@@ -180,43 +236,19 @@
             Variable(0, sqrt(variance), tag)
             for (variance, tag) in zip(variances, tags))
 
+        # The coordinates of each new uncertainty as a function of the
+        # new variables must include the variable scale (standard deviation):
+        transform *= std_devs[:, numpy.newaxis] 
+        
         # Representation of the initial correlated values:
         values_funcs = tuple(
             AffineScalarFunc(
                 value,
-                LinearCombination(dict(itertools.izip(variables, coords))))
-            for (coords, value) in zip(transform, nom_values))
+                LinearCombination(dict(zip(variables, coords))))
+            for (coords, value) in zip(transform, nominal_values))
 
         return values_funcs
 
-    __all__.append('correlated_values')
-
-    def correlated_values_norm(values_with_std_dev, correlation_mat,
-                               tags=None):
-        '''
-        Return correlated values like correlated_values(), but takes
-        instead as input:
-
-        - nominal (float) values along with their standard deviation, and
-
-        - a correlation matrix (i.e. a normalized covariance matrix,
-          normalized with individual standard deviations).
-
-        values_with_std_dev -- sequence of (nominal value, standard
-        deviation) pairs. The returned, correlated values have these
-        nominal values and standard deviations.
-
-        correlation_mat -- correlation matrix (i.e. the normalized
-        covariance matrix, a matrix with ones on its diagonal).
-        '''
-
-        (nominal_values, std_devs) = numpy.transpose(values_with_std_dev)
-
-        return correlated_values(
-            nominal_values,
-            correlation_mat*std_devs*std_devs[numpy.newaxis].T,
-            tags)
-
     __all__.append('correlated_values_norm')
 
 ###############################################################################
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/uncertainties-3.0.3/uncertainties-py27/test_uncertainties.py new/uncertainties-3.1.1/uncertainties-py27/test_uncertainties.py
--- old/uncertainties-3.0.3/uncertainties-py27/test_uncertainties.py    2018-10-28 23:00:31.000000000 +0100
+++ new/uncertainties-3.1.1/uncertainties-py27/test_uncertainties.py    2019-05-30 19:47:40.000000000 +0200
@@ -2132,7 +2132,8 @@
         both their nominal value and uncertainty are equal (up to the
         given precision).
 
-        m1, m2 -- NumPy matrices.
+        m1, m2 -- NumPy arrays.
+
         precision -- precision passed through to
         uncertainties.test_uncertainties.numbers_close().
         """
@@ -2155,6 +2156,7 @@
             if not numbers_close(elmt1.std_dev,
                                  elmt2.std_dev, precision):
                 return False
+        
         return True
 
 
@@ -2277,6 +2279,44 @@
         assert numbers_close(corr_matrix[0,0], 1)
         assert numbers_close(corr_matrix[1,2], 2*v.std_dev/sum_value.std_dev)
 
+        ####################
+
+        # Test of numerical robustness despite wildly different
+        # orders of magnitude (see
+        # https://github.com/lebigot/uncertainties/issues/95):
+        cov = numpy.diag([1e-70, 1e-70, 1e10])
+        cov[0, 1] = cov[1, 0] = 0.9e-70
+        cov[[0, 1], 2] = -3e-34
+        cov[2, [0, 1]] = -3e-34
+        variables = uncert_core.correlated_values([0]*3, cov)
+
+        # Since the numbers are very small, we need to compare them
+        # in a stricter way, that handles the case of a 0 variance
+        # in `variables`:
+        assert numbers_close(
+                1e66*cov[0,0], 1e66*variables[0].s**2, tolerance=1e-5)
+        assert numbers_close(
+                1e66*cov[1,1], 1e66*variables[1].s**2, tolerance=1e-5)
+
+        ####################
+
+        # 0 variances are a bit special, since the correlation matrix
+        # cannot be calculated naively, so we test that there is no
+        # specific problem in this case:
+
+        cov = numpy.diag([0, 0, 10])
+        nom_values = [1, 2, 3]
+        variables = uncert_core.correlated_values(nom_values, cov)
+
+        for (variable, nom_value, variance) in zip(
+            variables, nom_values, cov.diagonal()):
+            
+            assert numbers_close(variable.n, nom_value)
+            assert numbers_close(variable.s**2, variance) 
+        
+        assert arrays_close(
+            cov,
+            numpy.array(uncert_core.covariance_matrix(variables)))
 
     def test_correlated_values_correlation_mat():
         '''
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/uncertainties-3.0.3/uncertainties.egg-info/PKG-INFO new/uncertainties-3.1.1/uncertainties.egg-info/PKG-INFO
--- old/uncertainties-3.0.3/uncertainties.egg-info/PKG-INFO     2018-10-28 23:11:47.000000000 +0100
+++ new/uncertainties-3.1.1/uncertainties.egg-info/PKG-INFO     2019-05-30 20:02:28.000000000 +0200
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: uncertainties
-Version: 3.0.3
+Version: 3.1.1
 Summary: Transparent calculations with uncertainties on the quantities involved (aka error propagation); fast calculation of derivatives
 Home-page: http://uncertainties-python-package.readthedocs.io/
 Author: Eric O. LEBIGOT (EOL)
@@ -108,6 +108,10 @@
         
         Main changes:
         
+        - 3.1: Variables built through a correlation or covariance matrix, and that
+          have uncertainties that span many orders of magnitude are now 
+          calculated more accurately (improved ``correlated_values()`` and
+          ``correlated_values_norm()`` functions).
         - 3.0: Massive speedup for some operations involving large numbers of 
numbers with uncertainty, like ``sum(ufloat(1, 1) for _ in xrange(100000))`` 
(this is about 5,000 times faster than before).
         - 2.4.8: Friendlier completions in Python shells, etc.: internal 
functions should not appear anymore (for the user modules: ``uncertainties``, 
``uncertainties.umath`` and  ``uncertainties.unumpy``). Parsing the shorthand 
notation (e.g. ``3.1(2)``) now works with infinite values (e.g. ``-inf(inf)``); 
this mirrors the ability to print such numbers with uncertainty. The Particle 
Data Group rounding rule is applied in more cases (e.g. printing 724.2±26.2 now 
gives ``724±26``). The shorthand+LaTeX formatting of numbers with an infinite 
nominal value is fixed. ``uncertainties.unumpy.matrix`` now uses ``.std_devs`` 
instead of ``.std_devs()``, for consistency with floats with uncertainty 
(automatic conversion of code added to ``uncertainties.1to2``).
         - 2.4.7: String formatting now works for ``(-)inf+/-...`` numbers.
@@ -261,6 +265,7 @@
 Classifier: Programming Language :: Python :: 3.4
 Classifier: Programming Language :: Python :: 3.5
 Classifier: Programming Language :: Python :: 3.6
+Classifier: Programming Language :: Python :: 3.7
 Classifier: Programming Language :: Python :: Implementation :: Jython
 Classifier: Programming Language :: Python :: Implementation :: PyPy
 Classifier: Topic :: Education
@@ -271,7 +276,7 @@
 Classifier: Topic :: Software Development :: Libraries
 Classifier: Topic :: Software Development :: Libraries :: Python Modules
 Classifier: Topic :: Utilities
-Provides-Extra: all
-Provides-Extra: optional
-Provides-Extra: tests
 Provides-Extra: docs
+Provides-Extra: tests
+Provides-Extra: optional
+Provides-Extra: all
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/uncertainties-3.0.3/uncertainties.egg-info/requires.txt new/uncertainties-3.1.1/uncertainties.egg-info/requires.txt
--- old/uncertainties-3.0.3/uncertainties.egg-info/requires.txt 2018-10-28 23:11:47.000000000 +0100
+++ new/uncertainties-3.1.1/uncertainties.egg-info/requires.txt 2019-05-30 20:02:28.000000000 +0200
@@ -1,8 +1,8 @@
 
 [all]
-nose
 numpy
 sphinx
+nose
 
 [docs]
 sphinx

