Hello community,

Here is the log from the commit of package python-xarray for openSUSE:Factory, checked in at 2019-05-13 14:48:38.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/python-xarray (Old)
 and      /work/SRC/openSUSE:Factory/.python-xarray.new.5148 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "python-xarray"

Mon May 13 14:48:38 2019 rev:10 rq:697053 version:0.12.1

Changes:
--------
--- /work/SRC/openSUSE:Factory/python-xarray/python-xarray.changes      2019-03-18 10:42:50.047165098 +0100
+++ /work/SRC/openSUSE:Factory/.python-xarray.new.5148/python-xarray.changes    2019-05-13 14:48:40.402506854 +0200
@@ -1,0 +2,18 @@
+Tue Apr 23 09:44:22 UTC 2019 - Tomáš Chvátal <tchva...@suse.com>
+
+- Just use %pytest macro
+
+-------------------------------------------------------------------
+Sun Apr  7 11:37:34 UTC 2019 - Sebastian Wagner <sebix+novell....@sebix.at>
+
+- Update to version 0.12.1:
+ - Enhancements
+  - Allow ``expand_dims`` method to support inserting/broadcasting dimensions
+    with size > 1. (:issue:`2710`)
+ - Bug fixes
+  - Dataset.copy(deep=True) now creates a deep copy of the attrs (:issue:`2835`).
+  - Fix incorrect ``indexes`` resulting from various ``Dataset`` operations
+    (e.g., ``swap_dims``, ``isel``, ``reindex``, ``[]``) (:issue:`2842`,
+    :issue:`2856`).
+
+-------------------------------------------------------------------
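The ``expand_dims`` enhancement noted in the changelog above can be illustrated with a short sketch (assumes xarray >= 0.12.1 is installed; the array and dimension names are invented for illustration):

```python
import xarray as xr

# a 1-D array along dimension 'x'
da = xr.DataArray([1.0, 2.0], dims='x')

# dict values > 1 now insert a *broadcast* dimension of that size,
# not just a length-1 axis as in earlier releases
expanded = da.expand_dims({'y': 3})
```

Each of the three rows along the new ``y`` dimension is a broadcast copy of the original data.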

Old:
----
  xarray-0.12.0.tar.gz

New:
----
  xarray-0.12.1.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-xarray.spec ++++++
--- /var/tmp/diff_new_pack.tp5KcJ/_old  2019-05-13 14:48:41.870510506 +0200
+++ /var/tmp/diff_new_pack.tp5KcJ/_new  2019-05-13 14:48:41.870510506 +0200
@@ -19,7 +19,7 @@
 %{?!python_module:%define python_module() python-%{**} python3-%{**}}
 %define skip_python2 1
 Name:           python-xarray
-Version:        0.12.0
+Version:        0.12.1
 Release:        0
 Summary:        N-D labeled arrays and datasets in Python
 License:        Apache-2.0
@@ -65,8 +65,7 @@
 %python_expand %fdupes %{buildroot}%{$python_sitelib}
 
 %check
-#ignore netcdf fails for now, known upstream: https://github.com/pydata/xarray/issues/2050
-%python_expand py.test-%{$python_bin_suffix} xarray
+%pytest
 
 %files %{python_files}
 %doc README.rst

++++++ xarray-0.12.0.tar.gz -> xarray-0.12.1.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/xarray-0.12.0/PKG-INFO new/xarray-0.12.1/PKG-INFO
--- old/xarray-0.12.0/PKG-INFO  2019-03-16 05:02:46.000000000 +0100
+++ new/xarray-0.12.1/PKG-INFO  2019-04-05 03:32:35.000000000 +0200
@@ -1,6 +1,6 @@
 Metadata-Version: 1.2
 Name: xarray
-Version: 0.12.0
+Version: 0.12.1
 Summary: N-D labeled arrays and datasets in Python
 Home-page: https://github.com/pydata/xarray
 Author: xarray Developers
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/xarray-0.12.0/README.rst new/xarray-0.12.1/README.rst
--- old/xarray-0.12.0/README.rst        2019-01-26 23:13:47.000000000 +0100
+++ new/xarray-0.12.1/README.rst        2019-03-31 19:04:07.000000000 +0200
@@ -97,7 +97,7 @@
 Xarray and want to support our mission, please consider making a donation_
 to support our efforts.
 
-.. _donation: https://www.flipcause.com/secure/cause_pdetails/NDE2NTU=
+.. _donation: https://numfocus.salsalabs.org/donate-to-xarray/
 
 History
 -------
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/xarray-0.12.0/doc/data-structures.rst new/xarray-0.12.1/doc/data-structures.rst
--- old/xarray-0.12.0/doc/data-structures.rst   2019-03-14 06:28:28.000000000 +0100
+++ new/xarray-0.12.1/doc/data-structures.rst   2019-03-31 19:04:07.000000000 +0200
@@ -353,13 +353,6 @@
 This is particularly useful in an exploratory context, because you can
 tab-complete these variable names with tools like IPython.
 
-.. warning::
-
-  We are changing the behavior of iterating over a Dataset the next major
-  release of xarray, to only include data variables instead of both data
-  variables and coordinates. In the meantime, prefer iterating over
-  ``ds.data_vars`` or ``ds.coords``.
-
 Dictionary like methods
 ~~~~~~~~~~~~~~~~~~~~~~~
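The warning removed in the hunk above is obsolete because the iteration change has already landed: in current xarray, iterating a ``Dataset`` covers data variables only, while ``ds.data_vars`` and ``ds.coords`` remain the explicit spellings. A minimal sketch (the dataset contents are invented, and xarray is assumed to be installed):

```python
import xarray as xr

ds = xr.Dataset({'t': ('x', [1, 2])}, coords={'x': [10, 20]})

# explicit, unambiguous iteration targets
names = list(ds.data_vars)
coords = list(ds.coords)
```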
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/xarray-0.12.0/doc/index.rst new/xarray-0.12.1/doc/index.rst
--- old/xarray-0.12.0/doc/index.rst     2019-03-14 17:12:15.000000000 +0100
+++ new/xarray-0.12.1/doc/index.rst     2019-03-31 19:04:07.000000000 +0200
@@ -140,7 +140,7 @@
 Xarray and want to support our mission, please consider making a donation_
 to support our efforts.
 
-.. _donation: https://www.flipcause.com/secure/cause_pdetails/NDE2NTU=
+.. _donation: https://numfocus.salsalabs.org/donate-to-xarray/
 
 
 History
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/xarray-0.12.0/doc/whats-new.rst new/xarray-0.12.1/doc/whats-new.rst
--- old/xarray-0.12.0/doc/whats-new.rst 2019-03-16 05:00:29.000000000 +0100
+++ new/xarray-0.12.1/doc/whats-new.rst 2019-04-05 03:31:11.000000000 +0200
@@ -13,42 +13,97 @@
     import xarray as xr
     np.random.seed(123456)
 
+.. _whats-new.0.12.1:
+
+v0.12.1 (4 April 2019)
+----------------------
+
+Enhancements
+~~~~~~~~~~~~
+
+- Allow ``expand_dims`` method to support inserting/broadcasting dimensions
+  with size > 1. (:issue:`2710`)
+  By `Martin Pletcher <https://github.com/pletchm>`_.
+
+Bug fixes
+~~~~~~~~~
+
+- Dataset.copy(deep=True) now creates a deep copy of the attrs (:issue:`2835`).
+  By `Andras Gefferth <https://github.com/kefirbandi>`_.
+- Fix incorrect ``indexes`` resulting from various ``Dataset`` operations
+  (e.g., ``swap_dims``, ``isel``, ``reindex``, ``[]``) (:issue:`2842`,
+  :issue:`2856`).
+  By `Stephan Hoyer <https://github.com/shoyer>`_.
+
 .. _whats-new.0.12.0:
 
 v0.12.0 (15 March 2019)
 -----------------------
 
-Breaking changes
-~~~~~~~~~~~~~~~~
+Highlights include:
+
+- Removed support for Python 2. This is the first version of xarray that is
+  Python 3 only!
+- New :py:meth:`~xarray.DataArray.coarsen` and
+  :py:meth:`~xarray.DataArray.integrate` methods. See :ref:`comput.coarsen`
+  and :ref:`compute.using_coordinates` for details.
+- Many improvements to cftime support. See below for details.
+
+Deprecations
+~~~~~~~~~~~~
 
-- Remove support for Python 2. This is the first version of xarray that is
-  Python 3 only. (:issue:`1876`).
-  By `Joe Hamman <https://github.com/jhamman>`_.
 - The ``compat`` argument to ``Dataset`` and the ``encoding`` argument to
   ``DataArray`` are deprecated and will be removed in a future release.
   (:issue:`1188`)
   By `Maximilian Roos <https://github.com/max-sixty>`_.
 
-Enhancements
-~~~~~~~~~~~~
-- Added ability to open netcdf4/hdf5 file-like objects with ``open_dataset``.
-  Requires (h5netcdf>0.7 and h5py>2.9.0). (:issue:`2781`)
-  By `Scott Henderson <https://github.com/scottyhq>`_
+cftime related enhancements
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+- Resampling of standard and non-standard calendars indexed by
+  :py:class:`~xarray.CFTimeIndex` is now possible. (:issue:`2191`).
+  By `Jwen Fai Low <https://github.com/jwenfai>`_ and
+  `Spencer Clark <https://github.com/spencerkclark>`_.
+
+- Taking the mean of arrays of :py:class:`cftime.datetime` objects, and
+  by extension, use of :py:meth:`~xarray.DataArray.coarsen` with
+  :py:class:`cftime.datetime` coordinates is now possible. By `Spencer Clark
+  <https://github.com/spencerkclark>`_.
+
 - Internal plotting now supports ``cftime.datetime`` objects as time series.
   (:issue:`2164`)
   By `Julius Busecke <https://github.com/jbusecke>`_ and
   `Spencer Clark <https://github.com/spencerkclark>`_.
+
+- :py:meth:`~xarray.cftime_range` now supports QuarterBegin and QuarterEnd offsets (:issue:`2663`).
+  By `Jwen Fai Low <https://github.com/jwenfai>`_
+
+- :py:meth:`~xarray.open_dataset` now accepts a ``use_cftime`` argument, which
+  can be used to require that ``cftime.datetime`` objects are always used, or
+  never used when decoding dates encoded with a standard calendar.  This can be
+  used to ensure consistent date types are returned when using
+  :py:meth:`~xarray.open_mfdataset` (:issue:`1263`) and/or to silence
+  serialization warnings raised if dates from a standard calendar are found to
+  be outside the :py:class:`pandas.Timestamp`-valid range (:issue:`2754`).  By
+  `Spencer Clark <https://github.com/spencerkclark>`_.
+
+- :py:meth:`pandas.Series.dropna` is now supported for a
+  :py:class:`pandas.Series` indexed by a :py:class:`~xarray.CFTimeIndex`
+  (:issue:`2688`). By `Spencer Clark <https://github.com/spencerkclark>`_.
+
+Other enhancements
+~~~~~~~~~~~~~~~~~~
+
+- Added ability to open netcdf4/hdf5 file-like objects with ``open_dataset``.
+  Requires (h5netcdf>0.7 and h5py>2.9.0). (:issue:`2781`)
+  By `Scott Henderson <https://github.com/scottyhq>`_
 - Add ``data=False`` option to ``to_dict()`` methods. (:issue:`2656`)
   By `Ryan Abernathey <https://github.com/rabernat>`_
-- :py:meth:`~xarray.DataArray.coarsen` and
-  :py:meth:`~xarray.Dataset.coarsen` are newly added.
+- :py:meth:`DataArray.coarsen` and
+  :py:meth:`Dataset.coarsen` are newly added.
   See :ref:`comput.coarsen` for details.
   (:issue:`2525`)
   By `Keisuke Fujii <https://github.com/fujiisoup>`_.
-- Taking the mean of arrays of :py:class:`cftime.datetime` objects, and
-  by extension, use of :py:meth:`~xarray.DataArray.coarsen` with
-  :py:class:`cftime.datetime` coordinates is now possible. By `Spencer Clark
-  <https://github.com/spencerkclark>`_. 
 - Upsampling an array via interpolation with resample is now dask-compatible,
   as long as the array is not chunked along the resampling dimension.
   By `Spencer Clark <https://github.com/spencerkclark>`_.
@@ -57,32 +112,14 @@
   report showing what exactly differs between the two objects (dimensions /
   coordinates / variables / attributes)  (:issue:`1507`).
   By `Benoit Bovy <https://github.com/benbovy>`_.
-- Resampling of standard and non-standard calendars indexed by
-  :py:class:`~xarray.CFTimeIndex` is now possible. (:issue:`2191`).
-  By `Jwen Fai Low <https://github.com/jwenfai>`_ and
-  `Spencer Clark <https://github.com/spencerkclark>`_.
 - Add ``tolerance`` option to ``resample()`` methods ``bfill``, ``pad``,
   ``nearest``. (:issue:`2695`)
   By `Hauke Schulz <https://github.com/observingClouds>`_.
-- :py:meth:`~xarray.DataArray.integrate` and
-  :py:meth:`~xarray.Dataset.integrate` are newly added.
-  See :ref:`_compute.using_coordinates` for the detail.
+- :py:meth:`DataArray.integrate` and
+  :py:meth:`Dataset.integrate` are newly added.
+  See :ref:`compute.using_coordinates` for the detail.
   (:issue:`1332`)
   By `Keisuke Fujii <https://github.com/fujiisoup>`_.
-- :py:meth:`pandas.Series.dropna` is now supported for a
-  :py:class:`pandas.Series` indexed by a :py:class:`~xarray.CFTimeIndex`
-  (:issue:`2688`). By `Spencer Clark <https://github.com/spencerkclark>`_.
-- :py:meth:`~xarray.cftime_range` now supports QuarterBegin and QuarterEnd offsets (:issue:`2663`).
-  By `Jwen Fai Low <https://github.com/jwenfai>`_
-- :py:meth:`~xarray.open_dataset` now accepts a ``use_cftime`` argument, which
-  can be used to require that ``cftime.datetime`` objects are always used, or
-  never used when decoding dates encoded with a standard calendar.  This can be
-  used to ensure consistent date types are returned when using
-  :py:meth:`~xarray.open_mfdataset` (:issue:`1263`) and/or to silence
-  serialization warnings raised if dates from a standard calendar are found to
-  be outside the :py:class:`pandas.Timestamp`-valid range (:issue:`2754`).  By
-  `Spencer Clark <https://github.com/spencerkclark>`_.
-
 - Added :py:meth:`~xarray.Dataset.drop_dims` (:issue:`1949`).
   By `Kevin Squire <https://github.com/kmsquire>`_.
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/xarray-0.12.0/xarray/_version.py new/xarray-0.12.1/xarray/_version.py
--- old/xarray-0.12.0/xarray/_version.py        2019-03-16 05:02:46.000000000 +0100
+++ new/xarray-0.12.1/xarray/_version.py        2019-04-05 03:32:35.000000000 +0200
@@ -8,11 +8,11 @@
 
 version_json = '''
 {
- "date": "2019-03-15T21:02:04-0700",
+ "date": "2019-04-04T18:31:26-0700",
  "dirty": false,
  "error": null,
- "full-revisionid": "ad977c94eaaa1ad151bb46f2dad319566261c282",
- "version": "0.12.0"
+ "full-revisionid": "aa6abb592ac2464170459ca96409398ec8b4593a",
+ "version": "0.12.1"
 }
 '''  # END VERSION_JSON
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/xarray-0.12.0/xarray/backends/api.py new/xarray-0.12.1/xarray/backends/api.py
--- old/xarray-0.12.0/xarray/backends/api.py    2019-03-16 04:58:49.000000000 +0100
+++ new/xarray-0.12.1/xarray/backends/api.py    2019-03-31 19:04:35.000000000 +0200
@@ -230,7 +230,7 @@
     decode_coords : bool, optional
         If True, decode the 'coordinates' attribute to identify coordinates in
         the resulting dataset.
-    engine : {'netcdf4', 'scipy', 'pydap', 'h5netcdf', 'pynio', 'cfgrib',
+    engine : {'netcdf4', 'scipy', 'pydap', 'h5netcdf', 'pynio', 'cfgrib', \
         'pseudonetcdf'}, optional
         Engine to use when reading files. If not provided, the default engine
         is chosen based on available dependencies, with a preference for
@@ -445,7 +445,7 @@
     decode_coords : bool, optional
         If True, decode the 'coordinates' attribute to identify coordinates in
         the resulting dataset.
-    engine : {'netcdf4', 'scipy', 'pydap', 'h5netcdf', 'pynio', 'cfgrib'},
+    engine : {'netcdf4', 'scipy', 'pydap', 'h5netcdf', 'pynio', 'cfgrib'}, \
         optional
         Engine to use when reading files. If not provided, the default engine
         is chosen based on available dependencies, with a preference for
@@ -584,7 +584,7 @@
         If provided, call this function on each dataset prior to concatenation.
         You can find the file-name from which each dataset was loaded in
         ``ds.encoding['source']``.
-    engine : {'netcdf4', 'scipy', 'pydap', 'h5netcdf', 'pynio', 'cfgrib'},
+    engine : {'netcdf4', 'scipy', 'pydap', 'h5netcdf', 'pynio', 'cfgrib'}, \
         optional
         Engine to use when reading files. If not provided, the default engine
         is chosen based on available dependencies, with a preference for
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/xarray-0.12.0/xarray/backends/lru_cache.py new/xarray-0.12.1/xarray/backends/lru_cache.py
--- old/xarray-0.12.0/xarray/backends/lru_cache.py      2019-01-26 23:17:19.000000000 +0100
+++ new/xarray-0.12.1/xarray/backends/lru_cache.py      2019-03-31 19:04:35.000000000 +0200
@@ -1,8 +1,9 @@
 import collections
+import collections.abc
 import threading
 
 
-class LRUCache(collections.MutableMapping):
+class LRUCache(collections.abc.MutableMapping):
     """Thread-safe LRUCache based on an OrderedDict.
 
     All dict operations (__getitem__, __setitem__, __contains__) update the
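The hunk above switches the base class to ``collections.abc.MutableMapping``, the spelling that has been correct since Python 3.3 (the bare ``collections`` alias was removed in Python 3.10). The pattern can be sketched with a minimal, hypothetical OrderedDict-backed cache in the same spirit; this is not xarray's actual implementation and omits its thread safety:

```python
import collections.abc
from collections import OrderedDict

class TinyLRU(collections.abc.MutableMapping):
    """Hypothetical LRU cache sketch: most-recently-used keys kept at the end."""

    def __init__(self, maxsize):
        self._cache = OrderedDict()
        self._maxsize = maxsize

    def __getitem__(self, key):
        value = self._cache[key]
        self._cache.move_to_end(key)  # mark as most recently used
        return value

    def __setitem__(self, key, value):
        self._cache[key] = value
        self._cache.move_to_end(key)
        while len(self._cache) > self._maxsize:
            self._cache.popitem(last=False)  # evict least recently used

    def __delitem__(self, key):
        del self._cache[key]

    def __iter__(self):
        return iter(self._cache)

    def __len__(self):
        return len(self._cache)
```

Subclassing the ABC means ``__contains__``, ``get``, ``update`` and friends come for free from the mixin methods.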
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/xarray-0.12.0/xarray/coding/cftime_offsets.py new/xarray-0.12.1/xarray/coding/cftime_offsets.py
--- old/xarray-0.12.0/xarray/coding/cftime_offsets.py   2019-03-14 06:28:28.000000000 +0100
+++ new/xarray-0.12.1/xarray/coding/cftime_offsets.py   2019-03-31 19:04:35.000000000 +0200
@@ -41,15 +41,19 @@
 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
 import re
+import typing
 from datetime import timedelta
 from functools import partial
-from typing import ClassVar, Optional
 
 import numpy as np
 
+from ..core.pycompat import TYPE_CHECKING
 from .cftimeindex import CFTimeIndex, _parse_iso8601_with_reso
 from .times import format_cftime_datetime
 
+if TYPE_CHECKING:
+    from typing import ClassVar, Optional
+
 
 def get_date_type(calendar):
     """Return the cftime date type for a given calendar name."""
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/xarray-0.12.0/xarray/core/alignment.py new/xarray-0.12.1/xarray/core/alignment.py
--- old/xarray-0.12.0/xarray/core/alignment.py  2019-02-08 05:46:01.000000000 +0100
+++ new/xarray-0.12.1/xarray/core/alignment.py  2019-04-05 03:30:53.000000000 +0200
@@ -298,7 +298,7 @@
           * nearest: use nearest valid index value
     tolerance : optional
         Maximum distance between original and new labels for inexact matches.
-        The values of the index at the matching locations most satisfy the
+        The values of the index at the matching locations must satisfy the
         equation ``abs(index[indexer] - target) <= tolerance``.
     copy : bool, optional
         If ``copy=True``, data in the return values is always copied. If
@@ -315,36 +315,51 @@
     """
     from .dataarray import DataArray
 
+    # create variables for the new dataset
+    reindexed = OrderedDict()  # type: OrderedDict[Any, Variable]
+
     # build up indexers for assignment along each dimension
     int_indexers = {}
-    targets = OrderedDict()  # type: OrderedDict[Any, pd.Index]
+    new_indexes = OrderedDict(indexes)
     masked_dims = set()
     unchanged_dims = set()
 
-    # size of reindexed dimensions
-    new_sizes = {}
+    for dim, indexer in indexers.items():
+        if isinstance(indexer, DataArray) and indexer.dims != (dim,):
+            warnings.warn(
+                "Indexer has dimensions {0:s} that are different "
+                "from that to be indexed along {1:s}. "
+                "This will behave differently in the future.".format(
+                    str(indexer.dims), dim),
+                FutureWarning, stacklevel=3)
+
+        target = new_indexes[dim] = utils.safe_cast_to_index(indexers[dim])
+
+        if dim in indexes:
+            index = indexes[dim]
 
-    for name, index in indexes.items():
-        if name in indexers:
             if not index.is_unique:
                 raise ValueError(
                     'cannot reindex or align along dimension %r because the '
-                    'index has duplicate values' % name)
-
-            target = utils.safe_cast_to_index(indexers[name])
-            new_sizes[name] = len(target)
+                    'index has duplicate values' % dim)
 
             int_indexer = get_indexer_nd(index, target, method, tolerance)
 
             # We uses negative values from get_indexer_nd to signify
             # values that are missing in the index.
             if (int_indexer < 0).any():
-                masked_dims.add(name)
+                masked_dims.add(dim)
             elif np.array_equal(int_indexer, np.arange(len(index))):
-                unchanged_dims.add(name)
+                unchanged_dims.add(dim)
+
+            int_indexers[dim] = int_indexer
 
-            int_indexers[name] = int_indexer
-            targets[name] = target
+        if dim in variables:
+            var = variables[dim]
+            args = (var.attrs, var.encoding)  # type: tuple
+        else:
+            args = ()
+        reindexed[dim] = IndexVariable((dim,), target, *args)
 
     for dim in sizes:
         if dim not in indexes and dim in indexers:
@@ -356,25 +371,6 @@
                     'index because its size %r is different from the size of '
                     'the new index %r' % (dim, existing_size, new_size))
 
-    # create variables for the new dataset
-    reindexed = OrderedDict()  # type: OrderedDict[Any, Variable]
-
-    for dim, indexer in indexers.items():
-        if isinstance(indexer, DataArray) and indexer.dims != (dim,):
-            warnings.warn(
-                "Indexer has dimensions {0:s} that are different "
-                "from that to be indexed along {1:s}. "
-                "This will behave differently in the future.".format(
-                    str(indexer.dims), dim),
-                FutureWarning, stacklevel=3)
-
-        if dim in variables:
-            var = variables[dim]
-            args = (var.attrs, var.encoding)  # type: tuple
-        else:
-            args = ()
-        reindexed[dim] = IndexVariable((dim,), indexers[dim], *args)
-
     for name, var in variables.items():
         if name not in indexers:
             key = tuple(slice(None)
@@ -395,9 +391,6 @@
 
             reindexed[name] = new_var
 
-    new_indexes = OrderedDict(indexes)
-    new_indexes.update(targets)
-
     return reindexed, new_indexes
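The ``tolerance`` semantics spelled out in the corrected docstring earlier in this file (``abs(index[indexer] - target) <= tolerance``) mirror pandas' indexer lookup, which can be sketched directly (the index values below are invented, and pandas is assumed to be installed):

```python
import pandas as pd

idx = pd.Index([0.0, 1.0, 2.0])

# 1.1 is within tolerance of index value 1.0 -> position 1
hit = idx.get_indexer([1.1], method='nearest', tolerance=0.2)

# 1.5 is farther than 0.2 from every index value -> -1 (missing),
# the negative marker the reindex code masks out
miss = idx.get_indexer([1.5], method='nearest', tolerance=0.2)
```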
 
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/xarray-0.12.0/xarray/core/computation.py new/xarray-0.12.1/xarray/core/computation.py
--- old/xarray-0.12.0/xarray/core/computation.py        2019-03-14 06:28:28.000000000 +0100
+++ new/xarray-0.12.1/xarray/core/computation.py        2019-03-31 19:04:35.000000000 +0200
@@ -4,21 +4,22 @@
 import functools
 import itertools
 import operator
+import typing
 from collections import Counter, OrderedDict
 from distutils.version import LooseVersion
 from typing import (
     AbstractSet, Any, Callable, Iterable, List, Mapping, Optional, Sequence,
-    Tuple, TYPE_CHECKING, Union,
-)
+    Tuple, Union)
 
 import numpy as np
 
 from . import duck_array_ops, utils
 from .alignment import deep_align
 from .merge import expand_and_merge_variables
-from .pycompat import dask_array_type
+from .pycompat import TYPE_CHECKING, dask_array_type
 from .utils import is_dict_like
 from .variable import Variable
+
 if TYPE_CHECKING:
     from .dataset import Dataset
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/xarray-0.12.0/xarray/core/dataarray.py new/xarray-0.12.1/xarray/core/dataarray.py
--- old/xarray-0.12.0/xarray/core/dataarray.py  2019-03-14 17:12:15.000000000 +0100
+++ new/xarray-0.12.1/xarray/core/dataarray.py  2019-04-05 03:30:53.000000000 +0200
@@ -1,13 +1,14 @@
 import functools
+import sys
 import warnings
 from collections import OrderedDict
 
 import numpy as np
 import pandas as pd
 
+from ..plot.plot import _PlotMethods
 from . import (
     computation, dtypes, groupby, indexing, ops, resample, rolling, utils)
-from ..plot.plot import _PlotMethods
 from .accessors import DatetimeAccessor
 from .alignment import align, reindex_like_indexers
 from .common import AbstractArray, DataWithCoords
@@ -230,9 +231,6 @@
             coords, dims = _infer_coords_and_dims(data.shape, coords, dims)
             variable = Variable(dims, data, attrs, encoding, fastpath=True)
 
-        # uncomment for a useful consistency check:
-        # assert all(isinstance(v, Variable) for v in coords.values())
-
         # These fully describe a DataArray
         self._variable = variable
         self._coords = coords
@@ -1138,7 +1136,7 @@
         ds = self._to_temp_dataset().swap_dims(dims_dict)
         return self._from_temp_dataset(ds)
 
-    def expand_dims(self, dim, axis=None):
+    def expand_dims(self, dim=None, axis=None, **dim_kwargs):
         """Return a new object with an additional axis (or axes) inserted at
         the corresponding position in the array shape.
 
@@ -1147,21 +1145,53 @@
 
         Parameters
         ----------
-        dim : str or sequence of str.
+        dim : str, sequence of str, dict, or None
             Dimensions to include on the new variable.
-            dimensions are inserted with length 1.
+            If provided as str or sequence of str, then dimensions are inserted
+            with length 1. If provided as a dict, then the keys are the new
+            dimensions and the values are either integers (giving the length of
+            the new dimensions) or sequence/ndarray (giving the coordinates of
+            the new dimensions). **WARNING** for python 3.5, if ``dim`` is
+            dict-like, then it must be an ``OrderedDict``. This is to ensure
+            that the order in which the dims are given is maintained.
         axis : integer, list (or tuple) of integers, or None
             Axis position(s) where new axis is to be inserted (position(s) on
             the result array). If a list (or tuple) of integers is passed,
             multiple axes are inserted. In this case, dim arguments should be
             same length list. If axis=None is passed, all the axes will be
             inserted to the start of the result array.
+        **dim_kwargs : int or sequence/ndarray
+            The keywords are arbitrary dimensions being inserted and the values
+            are either the lengths of the new dims (if int is given), or their
+            coordinates. Note, this is an alternative to passing a dict to the
+            dim kwarg and will only be used if dim is None. **WARNING** for
+            python 3.5 ``dim_kwargs`` is not available.
 
         Returns
         -------
         expanded : same type as caller
             This object, but with an additional dimension(s).
         """
+        if isinstance(dim, int):
+            raise TypeError('dim should be str or sequence of strs or dict')
+        elif isinstance(dim, str):
+            dim = OrderedDict(((dim, 1),))
+        elif isinstance(dim, (list, tuple)):
+            if len(dim) != len(set(dim)):
+                raise ValueError('dims should not contain duplicate values.')
+            dim = OrderedDict(((d, 1) for d in dim))
+
+        # TODO: get rid of the below code block when python 3.5 is no longer
+        #   supported.
+        python36_plus = sys.version_info[0] == 3 and sys.version_info[1] > 5
+        not_ordereddict = dim is not None and not isinstance(dim, OrderedDict)
+        if not python36_plus and not_ordereddict:
+            raise TypeError("dim must be an OrderedDict for python <3.6")
+        elif not python36_plus and dim_kwargs:
+            raise ValueError("dim_kwargs isn't available for python <3.6")
+        dim_kwargs = OrderedDict(dim_kwargs)
+
+        dim = either_dict_or_kwargs(dim, dim_kwargs, 'expand_dims')
         ds = self._to_temp_dataset().expand_dims(dim, axis)
         return self._from_temp_dataset(ds)
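The dim-normalization at the top of the new ``expand_dims`` body can be distilled into a standalone sketch; this is a simplified, hypothetical mirror of the logic in the hunk above, with the Python 3.5 compatibility checks omitted:

```python
from collections import OrderedDict

def normalize_expand_dims(dim):
    """Coerce str / sequence / dict 'dim' inputs to an OrderedDict of sizes."""
    if isinstance(dim, int):
        raise TypeError('dim should be str or sequence of strs or dict')
    if isinstance(dim, str):
        return OrderedDict([(dim, 1)])       # single new length-1 dimension
    if isinstance(dim, (list, tuple)):
        if len(dim) != len(set(dim)):
            raise ValueError('dims should not contain duplicate values.')
        return OrderedDict((d, 1) for d in dim)
    return dim                               # already dict-like: name -> size
```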
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/xarray-0.12.0/xarray/core/dataset.py new/xarray-0.12.1/xarray/core/dataset.py
--- old/xarray-0.12.0/xarray/core/dataset.py    2019-03-14 17:12:15.000000000 +0100
+++ new/xarray-0.12.1/xarray/core/dataset.py    2019-04-05 03:30:53.000000000 +0200
@@ -1,45 +1,44 @@
 import copy
 import functools
 import sys
+import typing
 import warnings
 from collections import OrderedDict, defaultdict
 from collections.abc import Mapping
 from distutils.version import LooseVersion
 from numbers import Number
 from typing import (
-    Any, Callable, Dict, List, Optional, Set, Tuple, TypeVar, TYPE_CHECKING,
-    Union,
-)
+    Any, Callable, Dict, List, Optional, Set, Tuple, TypeVar, Union)
 
 import numpy as np
 import pandas as pd
 
 import xarray as xr
 
+from ..coding.cftimeindex import _parse_array_of_cftime_strings
 from . import (
     alignment, dtypes, duck_array_ops, formatting, groupby, indexing, ops,
     pdcompat, resample, rolling, utils)
-from ..coding.cftimeindex import _parse_array_of_cftime_strings
 from .alignment import align
 from .common import (
     ALL_DIMS, DataWithCoords, ImplementsDatasetReduce,
     _contains_datetime_like_objects)
 from .coordinates import (
     DatasetCoordinates, LevelCoordinatesSource, assert_coordinate_consistent,
-    remap_label_indexers,
-)
+    remap_label_indexers)
 from .duck_array_ops import datetime_to_numeric
 from .indexes import Indexes, default_indexes, isel_variable_and_index
 from .merge import (
     dataset_merge_method, dataset_update_method, merge_data_and_coords,
     merge_variables)
 from .options import OPTIONS, _get_keep_attrs
-from .pycompat import dask_array_type
+from .pycompat import TYPE_CHECKING, dask_array_type
 from .utils import (
-    Frozen, SortedKeysDict, _check_inplace,
-    decode_numpy_dict_values, either_dict_or_kwargs, ensure_us_time_resolution,
-    hashable, maybe_wrap_array)
+    Frozen, SortedKeysDict, _check_inplace, decode_numpy_dict_values,
+    either_dict_or_kwargs, ensure_us_time_resolution, hashable, is_dict_like,
+    maybe_wrap_array)
 from .variable import IndexVariable, Variable, as_variable, broadcast_variables
+
 if TYPE_CHECKING:
     from .dataarray import DataArray
 
@@ -916,7 +915,9 @@
             variables = OrderedDict((k, v.copy(deep=deep, data=data.get(k)))
                                     for k, v in self._variables.items())
 
-        return self._replace(variables)
+        attrs = copy.deepcopy(self._attrs) if deep else copy.copy(self._attrs)
+
+        return self._replace(variables, attrs=attrs)
 
     @property
     def _level_coords(self):
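The one-line fix in the hunk above hinges on the shallow-vs-deep distinction for mutable attrs. In plain stdlib terms, with an invented attrs dict:

```python
import copy

# attrs are a dict whose values may themselves be mutable
attrs = {'history': ['created']}

shallow = copy.copy(attrs)      # deep=False: the nested list is shared
deep = copy.deepcopy(attrs)     # deep=True: the nested list is duplicated

attrs['history'].append('modified')
```

Mutating the original leaks into the shallow copy but not the deep one, which is the behavior :issue:`2835` asked ``Dataset.copy(deep=True)`` to guarantee.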
@@ -937,6 +938,7 @@
         """
         variables = OrderedDict()  # type: OrderedDict[Any, Variable]
         coord_names = set()
+        indexes = OrderedDict()  # type: OrderedDict[Any, pd.Index]
 
         for name in names:
             try:
@@ -947,6 +949,8 @@
                 variables[var_name] = var
                 if ref_name in self._coord_names or ref_name in self.dims:
                     coord_names.add(var_name)
+                if (var_name,) == var.dims:
+                    indexes[var_name] = var.to_index()
 
         needed_dims = set()  # type: set
         for v in variables.values():
@@ -958,12 +962,8 @@
             if set(self.variables[k].dims) <= needed_dims:
                 variables[k] = self._variables[k]
                 coord_names.add(k)
-
-        if self._indexes is None:
-            indexes = None
-        else:
-            indexes = OrderedDict((k, v) for k, v in self._indexes.items()
-                                  if k in coord_names)
+                if k in self.indexes:
+                    indexes[k] = self.indexes[k]
 
         return self._replace(variables, coord_names, dims, indexes=indexes)
 
@@ -1502,9 +1502,13 @@
             raise ValueError("dimensions %r do not exist" % invalid)
 
         # all indexers should be int, slice, np.ndarrays, or Variable
-        indexers_list = []
+        indexers_list = []  # type: List[Tuple[Any, Union[slice, Variable]]]
         for k, v in indexers.items():
-            if isinstance(v, (slice, Variable)):
+            if isinstance(v, slice):
+                indexers_list.append((k, v))
+                continue
+
+            if isinstance(v, Variable):
                 pass
             elif isinstance(v, DataArray):
                 v = v.variable
@@ -1523,14 +1527,19 @@
                         v = _parse_array_of_cftime_strings(v, index.date_type)
 
                 if v.ndim == 0:
-                    v = as_variable(v)
+                    v = Variable((), v)
                 elif v.ndim == 1:
-                    v = as_variable((k, v))
+                    v = IndexVariable((k,), v)
                 else:
                     raise IndexError(
                         "Unlabeled multi-dimensional array cannot be "
                         "used for indexing: {}".format(k))
+
+            if v.ndim == 1:
+                v = v.to_index_variable()
+
             indexers_list.append((k, v))
+
         return indexers_list
 
     def _get_indexers_coords_and_indexes(self, indexers):
@@ -1630,7 +1639,7 @@
 
             if name in self.indexes:
                 new_var, new_index = isel_variable_and_index(
-                    var, self.indexes[name], var_indexers)
+                    name, var, self.indexes[name], var_indexers)
                 if new_index is not None:
                     indexes[name] = new_index
             else:
@@ -2116,15 +2125,20 @@
         indexes = OrderedDict(
             (k, v) for k, v in obj.indexes.items() if k not in indexers)
         selected = self._replace_with_new_dims(
-            variables, coord_names, indexes=indexes)
+            variables.copy(), coord_names, indexes=indexes)
 
         # attach indexer as coordinate
         variables.update(indexers)
+        indexes.update(
+            (k, v.to_index()) for k, v in indexers.items() if v.dims == (k,)
+        )
+
         # Extract coordinates from indexers
         coord_vars, new_indexes = (
             selected._get_indexers_coords_and_indexes(coords))
         variables.update(coord_vars)
         indexes.update(new_indexes)
+
         coord_names = (set(variables)
                        .intersection(obj._coord_names)
                        .union(coord_vars))
@@ -2308,28 +2322,24 @@
         coord_names.update(dims_dict.values())
 
         variables = OrderedDict()
+        indexes = OrderedDict()
         for k, v in self.variables.items():
             dims = tuple(dims_dict.get(dim, dim) for dim in v.dims)
             if k in result_dims:
                 var = v.to_index_variable()
+                if k in self.indexes:
+                    indexes[k] = self.indexes[k]
+                else:
+                    indexes[k] = var.to_index()
             else:
                 var = v.to_base_variable()
             var.dims = dims
             variables[k] = var
 
-        indexes = OrderedDict()
-        for k, v in self.indexes.items():
-            if k in dims_dict:
-                new_name = dims_dict[k]
-                new_index = variables[k].to_index()
-                indexes[new_name] = new_index
-            else:
-                indexes[k] = v
-
         return self._replace_with_new_dims(variables, coord_names,
                                            indexes=indexes, inplace=inplace)
 
-    def expand_dims(self, dim, axis=None):
+    def expand_dims(self, dim=None, axis=None, **dim_kwargs):
         """Return a new object with an additional axis (or axes) inserted at
         the corresponding position in the array shape.
 
@@ -2338,15 +2348,27 @@
 
         Parameters
         ----------
-        dim : str or sequence of str.
+        dim : str, sequence of str, dict, or None
             Dimensions to include on the new variable.
-            dimensions are inserted with length 1.
+            If provided as str or sequence of str, then dimensions are inserted
+            with length 1. If provided as a dict, then the keys are the new
+            dimensions and the values are either integers (giving the length of
+            the new dimensions) or sequence/ndarray (giving the coordinates of
+            the new dimensions). **WARNING** for python 3.5, if ``dim`` is
+            dict-like, then it must be an ``OrderedDict``. This is to ensure
+            that the order in which the dims are given is maintained.
         axis : integer, list (or tuple) of integers, or None
             Axis position(s) where new axis is to be inserted (position(s) on
             the result array). If a list (or tuple) of integers is passed,
             multiple axes are inserted. In this case, dim arguments should be
-            the same length list. If axis=None is passed, all the axes will
-            be inserted to the start of the result array.
+            same length list. If axis=None is passed, all the axes will be
+            inserted to the start of the result array.
+        **dim_kwargs : int or sequence/ndarray
+            The keywords are arbitrary dimensions being inserted and the values
+            are either the lengths of the new dims (if int is given), or their
+            coordinates. Note, this is an alternative to passing a dict to the
+            dim kwarg and will only be used if dim is None. **WARNING** for
+            python 3.5 ``dim_kwargs`` is not available.
 
         Returns
         -------
@@ -2354,10 +2376,25 @@
             This object, but with an additional dimension(s).
         """
         if isinstance(dim, int):
-            raise ValueError('dim should be str or sequence of strs or dict')
+            raise TypeError('dim should be str or sequence of strs or dict')
+        elif isinstance(dim, str):
+            dim = OrderedDict(((dim, 1),))
+        elif isinstance(dim, (list, tuple)):
+            if len(dim) != len(set(dim)):
+                raise ValueError('dims should not contain duplicate values.')
+            dim = OrderedDict(((d, 1) for d in dim))
+
+        # TODO: get rid of the below code block when python 3.5 is no longer
+        #   supported.
+        python36_plus = sys.version_info[0] == 3 and sys.version_info[1] > 5
+        not_ordereddict = dim is not None and not isinstance(dim, OrderedDict)
+        if not python36_plus and not_ordereddict:
+            raise TypeError("dim must be an OrderedDict for python <3.6")
+        elif not python36_plus and dim_kwargs:
+            raise ValueError("dim_kwargs isn't available for python <3.6")
+
+        dim = either_dict_or_kwargs(dim, dim_kwargs, 'expand_dims')
 
-        if isinstance(dim, str):
-            dim = [dim]
         if axis is not None and not isinstance(axis, (list, tuple)):
             axis = [axis]
 
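The Python 3.5 guard in the hunk above hinges on dict ordering; its logic can be reproduced standalone with nothing beyond the standard library (a minimal sketch, the function name is illustrative and not part of xarray):

```python
import sys
from collections import OrderedDict

def check_dim_argument(dim, dim_kwargs):
    # Mirrors the guard above: before Python 3.6, plain dicts and keyword
    # arguments do not preserve insertion order, so only an OrderedDict is
    # accepted for `dim`, and `dim_kwargs` is rejected outright.
    # (Tuple comparison is an equivalent spelling of the version check.)
    python36_plus = sys.version_info[:2] >= (3, 6)
    if (not python36_plus and dim is not None
            and not isinstance(dim, OrderedDict)):
        raise TypeError("dim must be an OrderedDict for python <3.6")
    if not python36_plus and dim_kwargs:
        raise ValueError("dim_kwargs isn't available for python <3.6")
    return True

# Always safe: an OrderedDict preserves order on every supported version.
check_dim_argument(OrderedDict([("y", 2), ("z", 1)]), {})
```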
@@ -2376,13 +2413,28 @@
                     '{dim} already exists as coordinate or'
                     ' variable name.'.format(dim=d))
 
-        if len(dim) != len(set(dim)):
-            raise ValueError('dims should not contain duplicate values.')
-
         variables = OrderedDict()
+        coord_names = self._coord_names.copy()
+        # If dim is a dict, then ensure that the values are either integers
+        # or iterables.
+        for k, v in dim.items():
+            if hasattr(v, "__iter__"):
+                # If the value for the new dimension is an iterable, then
+                # save the coordinates to the variables dict, and set the
+                # value within the dim dict to the length of the iterable
+                # for later use.
+                variables[k] = xr.IndexVariable((k,), v)
+                coord_names.add(k)
+                dim[k] = variables[k].size
+            elif isinstance(v, int):
+                pass  # Do nothing if the dimensions value is just an int
+            else:
+                raise TypeError('The value of new dimension {k} must be '
+                                'an iterable or an int'.format(k=k))
+
         for k, v in self._variables.items():
             if k not in dim:
-                if k in self._coord_names:  # Do not change coordinates
+                if k in coord_names:  # Do not change coordinates
                     variables[k] = v
                 else:
                     result_ndim = len(v.dims) + len(axis)
@@ -2400,11 +2452,13 @@
                                          ' values.')
                     # We need to sort them to make sure `axis` equals to the
                     # axis positions of the result array.
-                    zip_axis_dim = sorted(zip(axis_pos, dim))
+                    zip_axis_dim = sorted(zip(axis_pos, dim.items()))
+
+                    all_dims = list(zip(v.dims, v.shape))
+                    for d, c in zip_axis_dim:
+                        all_dims.insert(d, c)
+                    all_dims = OrderedDict(all_dims)
 
-                    all_dims = list(v.dims)
-                    for a, d in zip_axis_dim:
-                        all_dims.insert(a, d)
                     variables[k] = v.set_dims(all_dims)
             else:
                 # If dims includes a label of a non-dimension coordinate,
@@ -2412,10 +2466,10 @@
                 variables[k] = v.set_dims(k)
 
         new_dims = self._dims.copy()
-        for d in dim:
-            new_dims[d] = 1
+        new_dims.update(dim)
 
-        return self._replace(variables, dims=new_dims)
+        return self._replace_vars_and_dims(
+            variables, dims=new_dims, coord_names=coord_names)
 
     def set_index(self, indexes=None, append=False, inplace=None,
                   **indexes_kwargs):
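With the new dict form, `expand_dims` maps each new dimension name to either an int (its size) or a sequence of coordinate labels (size taken from its length). That size resolution can be sketched without xarray (`expand_dims_shape` is a hypothetical helper, not the library API):

```python
from collections import OrderedDict

def expand_dims_shape(dims, new):
    # Resolve the sizes of the expanded result: each entry of `new` is
    # either an int (a plain size, no coordinates) or an iterable of
    # coordinate labels (the size is its length). New dimensions are
    # prepended, mirroring the default axis=None behaviour above.
    out = OrderedDict()
    for name, value in new.items():
        if isinstance(value, int):
            out[name] = value
        elif hasattr(value, "__iter__"):
            out[name] = len(list(value))
        else:
            raise TypeError("The value of new dimension {k} must be "
                            "an iterable or an int".format(k=name))
    out.update(dims)  # existing dimensions keep their sizes
    return out

sizes = expand_dims_shape(OrderedDict([("x", 3)]),
                          OrderedDict([("d", 4), ("e", ["l", "m", "n"])]))
# d and e are inserted in front of x; e's size comes from its three labels.
```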
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/xarray-0.12.0/xarray/core/indexes.py new/xarray-0.12.1/xarray/core/indexes.py
--- old/xarray-0.12.0/xarray/core/indexes.py    2019-02-07 03:57:56.000000000 +0100
+++ new/xarray-0.12.1/xarray/core/indexes.py    2019-04-05 03:30:53.000000000 +0200
@@ -1,6 +1,6 @@
 import collections.abc
 from collections import OrderedDict
-from typing import Any, Iterable, Mapping, Optional, Tuple, Union
+from typing import Any, Hashable, Iterable, Mapping, Optional, Tuple, Union
 
 import pandas as pd
 
@@ -59,6 +59,7 @@
 
 
 def isel_variable_and_index(
+    name: Hashable,
     variable: Variable,
     index: pd.Index,
     indexers: Mapping[Any, Union[slice, Variable]],
@@ -75,8 +76,8 @@
 
     new_variable = variable.isel(indexers)
 
-    if new_variable.ndim != 1:
-        # can't preserve a index if result is not 0D
+    if new_variable.dims != (name,):
+        # can't preserve a index if result has new dimensions
         return new_variable, None
 
     # we need to compute the new index
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/xarray-0.12.0/xarray/core/merge.py new/xarray-0.12.1/xarray/core/merge.py
--- old/xarray-0.12.0/xarray/core/merge.py      2019-02-08 05:46:01.000000000 +0100
+++ new/xarray-0.12.1/xarray/core/merge.py      2019-03-31 19:04:35.000000000 +0200
@@ -1,18 +1,19 @@
+import typing
 from collections import OrderedDict
-
-from typing import (
-    Any, Dict, List, Mapping, Optional, Set, Tuple, TYPE_CHECKING, Union,
-)
+from typing import Any, Dict, List, Mapping, Optional, Set, Tuple, Union
 
 import pandas as pd
 
 from .alignment import deep_align
+from .pycompat import TYPE_CHECKING
 from .utils import Frozen
 from .variable import (
     Variable, as_variable, assert_unique_multiindex_level_names)
+
 if TYPE_CHECKING:
     from .dataset import Dataset
 
+
 PANDAS_TYPES = (pd.Series, pd.DataFrame, pd.Panel)
 
 _VALID_COMPAT = Frozen({'identical': 0,
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/xarray-0.12.0/xarray/core/pycompat.py new/xarray-0.12.1/xarray/core/pycompat.py
--- old/xarray-0.12.0/xarray/core/pycompat.py   2019-01-26 23:17:19.000000000 +0100
+++ new/xarray-0.12.1/xarray/core/pycompat.py   2019-03-31 19:04:35.000000000 +0200
@@ -1,4 +1,6 @@
 # flake8: noqa
+import sys
+import typing
 
 import numpy as np
 
@@ -10,3 +12,7 @@
     dask_array_type = (dask.array.Array,)
 except ImportError:  # pragma: no cover
     dask_array_type = ()
+
+# Ensure we have some more recent additions to the typing module.
+# Note that TYPE_CHECKING itself is not available on Python 3.5.1.
+TYPE_CHECKING = sys.version >= '3.5.3' and typing.TYPE_CHECKING
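The `pycompat` shim above back-fills `typing.TYPE_CHECKING`, which only appeared in Python 3.5.3. A minimal sketch of the same pattern (using a `sys.version_info` tuple comparison in place of the string comparison above; the helper name is illustrative):

```python
import sys
import typing

# False at runtime, True only while a static type checker analyses the
# module; the version test short-circuits so the attribute lookup is
# skipped on interpreters that predate typing.TYPE_CHECKING (3.5.3).
TYPE_CHECKING = sys.version_info >= (3, 5, 3) and typing.TYPE_CHECKING

if TYPE_CHECKING:
    # Imports needed only for annotations go here; they cost nothing at
    # runtime but remain visible to the type checker.
    from typing import Mapping

def count_keys(mapping):
    # type: (Mapping) -> int
    return len(mapping)
```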
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/xarray-0.12.0/xarray/core/variable.py new/xarray-0.12.1/xarray/core/variable.py
--- old/xarray-0.12.0/xarray/core/variable.py   2019-03-14 17:12:15.000000000 +0100
+++ new/xarray-0.12.1/xarray/core/variable.py   2019-03-31 19:04:35.000000000 +0200
@@ -1,8 +1,8 @@
 import functools
 import itertools
+import typing
 from collections import OrderedDict, defaultdict
 from datetime import timedelta
-from typing import Tuple, Type, Union
 
 import numpy as np
 import pandas as pd
@@ -15,9 +15,14 @@
     BasicIndexer, OuterIndexer, PandasIndexAdapter, VectorizedIndexer,
     as_indexable)
 from .options import _get_keep_attrs
-from .pycompat import dask_array_type, integer_types
-from .utils import (OrderedSet, either_dict_or_kwargs,
-                    decode_numpy_dict_values, ensure_us_time_resolution)
+from .pycompat import TYPE_CHECKING, dask_array_type, integer_types
+from .utils import (
+    OrderedSet, decode_numpy_dict_values, either_dict_or_kwargs,
+    ensure_us_time_resolution)
+
+if TYPE_CHECKING:
+    from typing import Tuple, Type, Union
+
 
 try:
     import dask.array as da
@@ -1597,7 +1602,7 @@
                             "prior to calling this method.")
 
         axis = self.get_axis_num(dim)
-        func = bn.nanrankdata if self.dtype.kind is 'f' else bn.rankdata
+        func = bn.nanrankdata if self.dtype.kind == 'f' else bn.rankdata
         ranked = func(self.data, axis=axis)
         if pct:
             count = np.sum(~np.isnan(self.data), axis=axis, keepdims=True)
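The one-line fix in `variable.py` above swaps `is` for `==`: `is` tests object identity, which only coincidentally succeeds for interned strings. A quick demonstration (CPython behaviour; identity of equal strings is not guaranteed by the language):

```python
import operator

kind = "".join(["f", "l"])           # a fresh str object whose value is "fl"
assert kind == "fl"                  # equality compares values: always correct
assert not operator.is_(kind, "fl")  # identity differs here despite equal
                                     # values (CPython does not intern this
                                     # runtime-built string)
```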
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/xarray-0.12.0/xarray/testing.py new/xarray-0.12.1/xarray/testing.py
--- old/xarray-0.12.0/xarray/testing.py 2019-01-26 23:17:19.000000000 +0100
+++ new/xarray-0.12.1/xarray/testing.py 2019-04-05 03:30:53.000000000 +0200
@@ -1,8 +1,12 @@
 """Testing functions exposed to the user API"""
+from collections import OrderedDict
+
 import numpy as np
+import pandas as pd
 
 from xarray.core import duck_array_ops
 from xarray.core import formatting
+from xarray.core.indexes import default_indexes
 
 
 def _decode_string_data(data):
@@ -143,8 +147,37 @@
                         .format(type(a)))
 
 
-def assert_combined_tile_ids_equal(dict1, dict2):
-    assert len(dict1) == len(dict2)
-    for k, v in dict1.items():
-        assert k in dict2.keys()
-        assert_equal(dict1[k], dict2[k])
+def _assert_indexes_invariants_checks(indexes, possible_coord_variables, dims):
+    import xarray as xr
+
+    assert isinstance(indexes, OrderedDict), indexes
+    assert all(isinstance(v, pd.Index) for v in indexes.values()), \
+        {k: type(v) for k, v in indexes.items()}
+
+    index_vars = {k for k, v in possible_coord_variables.items()
+                  if isinstance(v, xr.IndexVariable)}
+    assert indexes.keys() <= index_vars, (set(indexes), index_vars)
+
+    # Note: when we support non-default indexes, these checks should be opt-in
+    # only!
+    defaults = default_indexes(possible_coord_variables, dims)
+    assert indexes.keys() == defaults.keys(), \
+        (set(indexes), set(defaults))
+    assert all(v.equals(defaults[k]) for k, v in indexes.items()), \
+        (indexes, defaults)
+
+
+def _assert_indexes_invariants(a):
+    """Separate helper function for checking indexes invariants only."""
+    import xarray as xr
+
+    if isinstance(a, xr.DataArray):
+        if a._indexes is not None:
+            _assert_indexes_invariants_checks(a._indexes, a._coords, a.dims)
+    elif isinstance(a, xr.Dataset):
+        if a._indexes is not None:
+            _assert_indexes_invariants_checks(
+                a._indexes, a._variables, a._dims)
+    elif isinstance(a, xr.Variable):
+        # no indexes
+        pass
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/xarray-0.12.0/xarray/tests/__init__.py new/xarray-0.12.1/xarray/tests/__init__.py
--- old/xarray-0.12.0/xarray/tests/__init__.py  2019-03-16 04:58:49.000000000 +0100
+++ new/xarray-0.12.1/xarray/tests/__init__.py  2019-04-05 03:30:53.000000000 +0200
@@ -13,8 +13,7 @@
 from xarray.core import utils
 from xarray.core.options import set_options
 from xarray.core.indexing import ExplicitlyIndexed
-from xarray.testing import (assert_equal, assert_identical,  # noqa: F401
-                            assert_allclose, assert_combined_tile_ids_equal)
+import xarray.testing
 from xarray.plot.utils import import_seaborn
 
 try:
@@ -180,3 +179,25 @@
     if base is None:
         base = array
     return base
+
+
+# Internal versions of xarray's test functions that validate additional
+# invariants
+# TODO: add more invariant checks.
+
+def assert_equal(a, b):
+    xarray.testing.assert_equal(a, b)
+    xarray.testing._assert_indexes_invariants(a)
+    xarray.testing._assert_indexes_invariants(b)
+
+
+def assert_identical(a, b):
+    xarray.testing.assert_identical(a, b)
+    xarray.testing._assert_indexes_invariants(a)
+    xarray.testing._assert_indexes_invariants(b)
+
+
+def assert_allclose(a, b, **kwargs):
+    xarray.testing.assert_allclose(a, b, **kwargs)
+    xarray.testing._assert_indexes_invariants(a)
+    xarray.testing._assert_indexes_invariants(b)
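The test-suite change above layers extra invariant checks on top of xarray's public assert helpers. The delegation pattern is generic and can be sketched on its own (all names here are illustrative, not xarray's):

```python
def with_invariants(base_assert, *checks):
    # Wrap an assertion function so that, after it passes, each invariant
    # check also runs against both arguments: the same shape as the
    # assert_equal/assert_identical/assert_allclose wrappers above.
    def wrapped(a, b, **kwargs):
        base_assert(a, b, **kwargs)
        for check in checks:
            check(a)
            check(b)
    return wrapped

def base_equal(a, b):
    assert a == b

def non_negative(x):
    assert x >= 0

assert_equal_checked = with_invariants(base_equal, non_negative)
assert_equal_checked(3, 3)   # passes both equality and the invariant
```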
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/xarray-0.12.0/xarray/tests/test_combine.py new/xarray-0.12.1/xarray/tests/test_combine.py
--- old/xarray-0.12.0/xarray/tests/test_combine.py      2019-03-14 06:28:28.000000000 +0100
+++ new/xarray-0.12.1/xarray/tests/test_combine.py      2019-04-05 03:30:53.000000000 +0200
@@ -13,7 +13,7 @@
     _infer_tile_ids_from_nested_list, _new_tile_id)
 
 from . import (
-    InaccessibleArray, assert_array_equal, assert_combined_tile_ids_equal,
+    InaccessibleArray, assert_array_equal,
     assert_equal, assert_identical, raises_regex, requires_dask)
 from .test_dataset import create_test_data
 
@@ -418,6 +418,13 @@
         assert_identical(expected, actual)
 
 
+def assert_combined_tile_ids_equal(dict1, dict2):
+    assert len(dict1) == len(dict2)
+    for k, v in dict1.items():
+        assert k in dict2.keys()
+        assert_equal(dict1[k], dict2[k])
+
+
 class TestTileIDsFromNestedList(object):
     def test_1d(self):
         ds = create_test_data
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/xarray-0.12.0/xarray/tests/test_dataarray.py new/xarray-0.12.1/xarray/tests/test_dataarray.py
--- old/xarray-0.12.0/xarray/tests/test_dataarray.py    2019-03-16 04:58:49.000000000 +0100
+++ new/xarray-0.12.1/xarray/tests/test_dataarray.py    2019-03-31 19:04:35.000000000 +0200
@@ -3,6 +3,7 @@
 from collections import OrderedDict
 from copy import deepcopy
 from textwrap import dedent
+import sys
 
 import numpy as np
 import pandas as pd
@@ -1303,7 +1304,7 @@
                           coords={'x': np.linspace(0.0, 1.0, 3)},
                           attrs={'key': 'entry'})
 
-        with raises_regex(ValueError, 'dim should be str or'):
+        with raises_regex(TypeError, 'dim should be str or'):
             array.expand_dims(0)
         with raises_regex(ValueError, 'lengths of dim and axis'):
             # dims and axis argument should be the same length
@@ -1328,6 +1329,16 @@
         array.expand_dims(dim=['y', 'z'], axis=[2, -4])
         array.expand_dims(dim=['y', 'z'], axis=[2, 3])
 
+        array = DataArray(np.random.randn(3, 4), dims=['x', 'dim_0'],
+                          coords={'x': np.linspace(0.0, 1.0, 3)},
+                          attrs={'key': 'entry'})
+        with pytest.raises(TypeError):
+            array.expand_dims(OrderedDict((("new_dim", 3.2),)))
+
+        # Attempt to use both dim and kwargs
+        with pytest.raises(ValueError):
+            array.expand_dims(OrderedDict((("d", 4),)), e=4)
+
     def test_expand_dims(self):
         array = DataArray(np.random.randn(3, 4), dims=['x', 'dim_0'],
                           coords={'x': np.linspace(0.0, 1.0, 3)},
@@ -1392,6 +1403,46 @@
         roundtripped = actual.squeeze(['z'], drop=False)
         assert_identical(array, roundtripped)
 
+    def test_expand_dims_with_greater_dim_size(self):
+        array = DataArray(np.random.randn(3, 4), dims=['x', 'dim_0'],
+                          coords={'x': np.linspace(0.0, 1.0, 3), 'z': 1.0},
+                          attrs={'key': 'entry'})
+        # For python 3.5 and earlier this has to be an ordered dict, to
+        # maintain insertion order.
+        actual = array.expand_dims(
+            OrderedDict((('y', 2), ('z', 1), ('dim_1', ['a', 'b', 'c']))))
+
+        expected_coords = OrderedDict((
+            ('y', [0, 1]), ('z', [1.0]), ('dim_1', ['a', 'b', 'c']),
+            ('x', np.linspace(0, 1, 3)), ('dim_0', range(4))))
+        expected = DataArray(array.values * np.ones([2, 1, 3, 3, 4]),
+                             coords=expected_coords,
+                             dims=list(expected_coords.keys()),
+                             attrs={'key': 'entry'}
+                             ).drop(['y', 'dim_0'])
+        assert_identical(expected, actual)
+
+        # Test with kwargs instead of passing dict to dim arg.
+
+        # TODO: only the code under the if-statement is needed when python 3.5
+        #   is no longer supported.
+        python36_plus = sys.version_info[0] == 3 and sys.version_info[1] > 5
+        if python36_plus:
+            other_way = array.expand_dims(dim_1=['a', 'b', 'c'])
+
+            other_way_expected = DataArray(
+                array.values * np.ones([3, 3, 4]),
+                coords={'dim_1': ['a', 'b', 'c'],
+                        'x': np.linspace(0, 1, 3),
+                        'dim_0': range(4), 'z': 1.0},
+                dims=['dim_1', 'x', 'dim_0'],
+                attrs={'key': 'entry'}).drop('dim_0')
+            assert_identical(other_way_expected, other_way)
+        else:
+            # In python 3.5, using dim_kwargs should raise a ValueError.
+            with raises_regex(ValueError, "dim_kwargs isn't"):
+                array.expand_dims(e=["l", "m", "n"])
+
     def test_set_index(self):
         indexes = [self.mindex.get_level_values(n) for n in self.mindex.names]
         coords = {idx.name: ('x', idx) for idx in indexes}
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/xarray-0.12.0/xarray/tests/test_dataset.py new/xarray-0.12.1/xarray/tests/test_dataset.py
--- old/xarray-0.12.0/xarray/tests/test_dataset.py      2019-03-14 17:12:15.000000000 +0100
+++ new/xarray-0.12.1/xarray/tests/test_dataset.py      2019-04-05 03:30:53.000000000 +0200
@@ -1885,6 +1885,7 @@
 
     def test_copy(self):
         data = create_test_data()
+        data.attrs['Test'] = [1, 2, 3]
 
         for copied in [data.copy(deep=False), copy(data)]:
             assert_identical(data, copied)
@@ -1899,12 +1900,18 @@
             copied['foo'] = ('z', np.arange(5))
             assert 'foo' not in data
 
+            copied.attrs['foo'] = 'bar'
+            assert 'foo' not in data.attrs
+            assert data.attrs['Test'] is copied.attrs['Test']
+
         for copied in [data.copy(deep=True), deepcopy(data)]:
             assert_identical(data, copied)
             for k, v0 in data.variables.items():
                 v1 = copied.variables[k]
                 assert v0 is not v1
 
+            assert data.attrs['Test'] is not copied.attrs['Test']
+
     def test_copy_with_data(self):
         orig = create_test_data()
         new_data = {k: np.random.randn(*v.shape)
@@ -2002,14 +2009,11 @@
         assert_identical(expected, actual)
         assert isinstance(actual.variables['y'], IndexVariable)
         assert isinstance(actual.variables['x'], Variable)
+        assert actual.indexes['y'].equals(pd.Index(list('abc')))
 
         roundtripped = actual.swap_dims({'y': 'x'})
         assert_identical(original.set_coords('y'), roundtripped)
 
-        actual = original.copy()
-        actual = actual.swap_dims({'x': 'y'})
-        assert_identical(expected, actual)
-
         with raises_regex(ValueError, 'cannot swap'):
             original.swap_dims({'y': 'x'})
         with raises_regex(ValueError, 'replacement dimension'):
@@ -2033,7 +2037,24 @@
         with raises_regex(ValueError, 'already exists'):
             original.expand_dims(dim=['z'])
 
-    def test_expand_dims(self):
+        original = Dataset({'x': ('a', np.random.randn(3)),
+                            'y': (['b', 'a'], np.random.randn(4, 3)),
+                            'z': ('a', np.random.randn(3))},
+                           coords={'a': np.linspace(0, 1, 3),
+                                   'b': np.linspace(0, 1, 4),
+                                   'c': np.linspace(0, 1, 5)},
+                           attrs={'key': 'entry'})
+        with raises_regex(TypeError, 'value of new dimension'):
+            original.expand_dims(OrderedDict((("d", 3.2),)))
+
+        # TODO: only the code under the if-statement is needed when python 3.5
+        #   is no longer supported.
+        python36_plus = sys.version_info[0] == 3 and sys.version_info[1] > 5
+        if python36_plus:
+            with raises_regex(ValueError, 'both keyword and positional'):
+                original.expand_dims(OrderedDict((("d", 4),)), e=4)
+
+    def test_expand_dims_int(self):
         original = Dataset({'x': ('a', np.random.randn(3)),
                             'y': (['b', 'a'], np.random.randn(4, 3))},
                            coords={'a': np.linspace(0, 1, 3),
@@ -2066,6 +2087,92 @@
         roundtripped = actual.squeeze('z')
         assert_identical(original, roundtripped)
 
+    def test_expand_dims_coords(self):
+        original = Dataset({'x': ('a', np.array([1, 2, 3]))})
+        expected = Dataset(
+            {'x': (('b', 'a'), np.array([[1, 2, 3], [1, 2, 3]]))},
+            coords={'b': [1, 2]},
+        )
+        actual = original.expand_dims(OrderedDict(b=[1, 2]))
+        assert_identical(expected, actual)
+        assert 'b' not in original._coord_names
+
+    def test_expand_dims_existing_scalar_coord(self):
+        original = Dataset({'x': 1}, {'a': 2})
+        expected = Dataset({'x': (('a',), [1])}, {'a': [2]})
+        actual = original.expand_dims('a')
+        assert_identical(expected, actual)
+
+    def test_isel_expand_dims_roundtrip(self):
+        original = Dataset({'x': (('a',), [1])}, {'a': [2]})
+        actual = original.isel(a=0).expand_dims('a')
+        assert_identical(actual, original)
+
+    def test_expand_dims_mixed_int_and_coords(self):
+        # Test expanding one dimension to have size > 1 that doesn't have
+        # coordinates, and also expanding another dimension to have size > 1
+        # that DOES have coordinates.
+        original = Dataset({'x': ('a', np.random.randn(3)),
+                            'y': (['b', 'a'], np.random.randn(4, 3))},
+                           coords={'a': np.linspace(0, 1, 3),
+                                   'b': np.linspace(0, 1, 4),
+                                   'c': np.linspace(0, 1, 5)})
+
+        actual = original.expand_dims(
+            OrderedDict((("d", 4), ("e", ["l", "m", "n"]))))
+
+        expected = Dataset(
+            {'x': xr.DataArray(original['x'].values * np.ones([4, 3, 3]),
+                               coords=dict(d=range(4),
+                                           e=['l', 'm', 'n'],
+                                           a=np.linspace(0, 1, 3)),
+                               dims=['d', 'e', 'a']).drop('d'),
+             'y': xr.DataArray(original['y'].values * np.ones([4, 3, 4, 3]),
+                               coords=dict(d=range(4),
+                                           e=['l', 'm', 'n'],
+                                           b=np.linspace(0, 1, 4),
+                                           a=np.linspace(0, 1, 3)),
+                               dims=['d', 'e', 'b', 'a']).drop('d')},
+            coords={'c': np.linspace(0, 1, 5)})
+        assert_identical(actual, expected)
+
+    @pytest.mark.skipif(
+        sys.version_info[:2] > (3, 5),
+        reason="we only raise these errors for Python 3.5",
+    )
+    def test_expand_dims_kwargs_python35(self):
+        original = Dataset({'x': ('a', np.random.randn(3))})
+        with raises_regex(ValueError, "dim_kwargs isn't"):
+            original.expand_dims(e=["l", "m", "n"])
+        with raises_regex(TypeError, "must be an OrderedDict"):
+            original.expand_dims({'e': ["l", "m", "n"]})
+
+    @pytest.mark.skipif(
+        sys.version_info[:2] < (3, 6),
+        reason='keyword arguments are only ordered on Python 3.6+',
+    )
+    def test_expand_dims_kwargs_python36plus(self):
+        original = Dataset({'x': ('a', np.random.randn(3)),
+                            'y': (['b', 'a'], np.random.randn(4, 3))},
+                           coords={'a': np.linspace(0, 1, 3),
+                                   'b': np.linspace(0, 1, 4),
+                                   'c': np.linspace(0, 1, 5)},
+                           attrs={'key': 'entry'})
+        other_way = original.expand_dims(e=["l", "m", "n"])
+        other_way_expected = Dataset(
+            {'x': xr.DataArray(original['x'].values * np.ones([3, 3]),
+                               coords=dict(e=['l', 'm', 'n'],
+                                           a=np.linspace(0, 1, 3)),
+                               dims=['e', 'a']),
+             'y': xr.DataArray(original['y'].values * np.ones([3, 4, 3]),
+                               coords=dict(e=['l', 'm', 'n'],
+                                           b=np.linspace(0, 1, 4),
+                                           a=np.linspace(0, 1, 3)),
+                               dims=['e', 'b', 'a'])},
+            coords={'c': np.linspace(0, 1, 5)},
+            attrs={'key': 'entry'})
+        assert_identical(other_way_expected, other_way)
+
     def test_set_index(self):
         expected = create_test_multiindex()
         mindex = expected['x'].to_index()
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/xarray-0.12.0/xarray/tests/test_interp.py new/xarray-0.12.1/xarray/tests/test_interp.py
--- old/xarray-0.12.0/xarray/tests/test_interp.py       2019-03-14 06:28:28.000000000 +0100
+++ new/xarray-0.12.1/xarray/tests/test_interp.py       2019-04-05 03:30:53.000000000 +0200
@@ -291,7 +291,7 @@
     if use_dask:
         da = get_example_data(3)
     else:
-        da = get_example_data(1)
+        da = get_example_data(0)
 
     result = da.interp(x=[-1, 1, 3], kwargs={'fill_value': 0.0})
     assert not np.isnan(result.values).any()
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/xarray-0.12.0/xarray.egg-info/PKG-INFO new/xarray-0.12.1/xarray.egg-info/PKG-INFO
--- old/xarray-0.12.0/xarray.egg-info/PKG-INFO  2019-03-16 05:02:37.000000000 +0100
+++ new/xarray-0.12.1/xarray.egg-info/PKG-INFO  2019-04-05 03:32:27.000000000 +0200
@@ -1,6 +1,6 @@
 Metadata-Version: 1.2
 Name: xarray
-Version: 0.12.0
+Version: 0.12.1
 Summary: N-D labeled arrays and datasets in Python
 Home-page: https://github.com/pydata/xarray
 Author: xarray Developers