Hello community,

here is the log from the commit of package python-jplephem for openSUSE:Factory 
checked in at 2019-01-21 10:48:35
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/python-jplephem (Old)
 and      /work/SRC/openSUSE:Factory/.python-jplephem.new.28833 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "python-jplephem"

Mon Jan 21 10:48:35 2019 rev:2 rq:664284 version:2.9

Changes:
--------
--- /work/SRC/openSUSE:Factory/python-jplephem/python-jplephem.changes 2017-06-12 15:34:12.809282897 +0200
+++ /work/SRC/openSUSE:Factory/.python-jplephem.new.28833/python-jplephem.changes 2019-01-21 10:49:17.656096904 +0100
@@ -1,0 +2,38 @@
+Thu Jan 10 00:31:25 UTC 2019 - Jan Engelhardt <jeng...@inai.de>
+
+- Use noun phrase for summary.
+
+-------------------------------------------------------------------
+Fri Jan  4 16:33:04 UTC 2019 - Todd R <toddrme2...@gmail.com>
+
+- Update to 2.9
+  * add load_array()
+- Update to 2.8
+  * single memory map instead of many
+- Update to 2.7
+  * Slight tweaks to the documentation
+  * Add messages during excerpt operation
+  * Add excerpt to the command line
+  * Add subcommand for printing comment area
+  * Add test for “daf” subcommand
+  * Add subcommands to jplephem command line
+  * Read as little during excerpting as possible
+  * Start work on excerpt()
+  * Two tweaks to write DAF files more accurately
+  * Better test: array can take up only part of record
+  * Slight tweaks to code
+  * To fix the build, bid a fond farewell to Python 2.6
+  * Git ignore tmp*.py experimental scripts
+  * Full tests of DAF from BytesIO and from real file
+  * Start writing a comprehensive test of DAF class
+  * Avoid antipattern of attribute that shows up later
+  * Add routine for writing a new DAF file summary
+  * Switch DAF class to cached Struct objects 
+  * Introduce the idea of simply read()ing floats, too
+  * Mark `ephem.py` as deprecated
+  * Remove unused import
+  * Make README test instructions more complete
+  * Add note to README about how to run the tests
+  * Add link to Skyfield to README 
+
+-------------------------------------------------------------------

Old:
----
  jplephem-2.6.tar.gz

New:
----
  jplephem-2.9.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-jplephem.spec ++++++
--- /var/tmp/diff_new_pack.1ryRfG/_old  2019-01-21 10:49:21.988091618 +0100
+++ /var/tmp/diff_new_pack.1ryRfG/_new  2019-01-21 10:49:21.988091618 +0100
@@ -1,7 +1,7 @@
 #
 # spec file for package python-jplephem
 #
-# Copyright (c) 2017 SUSE LINUX GmbH, Nuernberg, Germany.
+# Copyright (c) 2019 SUSE LINUX GmbH, Nuernberg, Germany.
 #
 # All modifications and additions to the file contributed by third parties
 # remain the property of their copyright owners, unless otherwise agreed
@@ -12,31 +12,30 @@
 # license that conforms to the Open Source Definition (Version 1.9)
 # published by the Open Source Initiative.
 
-# Please submit bugfixes or comments via http://bugs.opensuse.org/
+# Please submit bugfixes or comments via https://bugs.opensuse.org/
+#
 
 
 %{?!python_module:%define python_module() python-%{**} python3-%{**}}
-%bcond_without test
 Name:           python-jplephem
-Version:        2.6
+Version:        2.9
 Release:        0
+Summary:        Planet position predictor using a JPL ephemeris
 License:        MIT
-Summary:        Use a JPL ephemeris to predict planet positions
-Url:            https://github.com/brandon-rhodes/python-jplephem/
 Group:          Development/Languages/Python
+Url:            https://github.com/brandon-rhodes/python-jplephem/
 Source:         https://files.pythonhosted.org/packages/source/j/jplephem/jplephem-%{version}.tar.gz
 # Test files
 Source10:       http://naif.jpl.nasa.gov/pub/naif/generic_kernels/spk/planets/a_old_versions/de405.bsp
 Source11:       http://naif.jpl.nasa.gov/pub/naif/generic_kernels/spk/planets/a_old_versions/de421.bsp
 Source12:       http://naif.jpl.nasa.gov/pub/naif/generic_kernels/spk/planets/de430.bsp
 Source13:       ftp://ssd.jpl.nasa.gov/pub/eph/planets/test-data/430/testpo.430
-BuildRequires:  fdupes
-BuildRequires:  python-rpm-macros
 BuildRequires:  %{python_module devel}
-BuildRequires:  %{python_module setuptools}
 BuildRequires:  %{python_module numpy}
+BuildRequires:  %{python_module setuptools}
+BuildRequires:  fdupes
+BuildRequires:  python-rpm-macros
 Requires:       python-numpy
-BuildRoot:      %{_tmppath}/%{name}-%{version}-build
 BuildArch:      noarch
 
 %python_subpackages
@@ -60,13 +59,10 @@
 %python_install
 %python_expand %fdupes %{buildroot}%{$python_sitelib}
 
-%if %{with test}
 %check
 %python_exec -m jplephem.jpltest
-%endif
 
 %files %{python_files}
-%defattr(-,root,root,-)
 %{python_sitelib}/*
 
 %changelog




++++++ jplephem-2.6.tar.gz -> jplephem-2.9.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/jplephem-2.6/LICENSE.txt new/jplephem-2.9/LICENSE.txt
--- old/jplephem-2.6/LICENSE.txt        1970-01-01 01:00:00.000000000 +0100
+++ new/jplephem-2.9/LICENSE.txt        2019-01-04 04:57:52.000000000 +0100
@@ -0,0 +1,19 @@
+Copyright 2012-2018 Brandon Rhodes (bran...@rhodesmill.org)
+
+Permission is hereby granted, free of charge, to any person obtaining a
+copy of this software and associated documentation files (the "Software"),
+to deal in the Software without restriction, including without limitation
+the rights to use, copy, modify, merge, publish, distribute, sublicense,
+and/or sell copies of the Software, and to permit persons to whom the
+Software is furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+DEALINGS IN THE SOFTWARE.
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/jplephem-2.6/PKG-INFO new/jplephem-2.9/PKG-INFO
--- old/jplephem-2.6/PKG-INFO   2016-12-20 06:40:00.000000000 +0100
+++ new/jplephem-2.9/PKG-INFO   2019-01-04 05:35:00.000000000 +0100
@@ -1,6 +1,6 @@
 Metadata-Version: 1.1
 Name: jplephem
-Version: 2.6
+Version: 2.9
 Summary: Use a JPL ephemeris to predict planet positions.
 Home-page: UNKNOWN
 Author: Brandon Rhodes
@@ -42,16 +42,33 @@
         Command Line Tool
         -----------------
         
-        If you have downloaded a ``.bsp`` file and want to learn what ephemeris
-        segments are stored inside of it, you can have ``jplephem`` print them
-        out by invoking the module directly from the command line::
-        
-            python -m jplephem de430.bsp
-        
-        This will print out a summary identical to the one shown in the
-        following section, but without requiring that you type and run any
-        Python code.
+        If you have downloaded a ``.bsp`` file, you can run ``jplephem`` from
+        the command line to display the data inside of it::
         
+            python -m jplephem comment de430.bsp
+            python -m jplephem daf de430.bsp
+            python -m jplephem spk de430.bsp
+        
+        You can also take a large ephemeris and produce a smaller excerpt by
+        limiting the range of dates that it covers::
+        
+            python -m jplephem excerpt 2018/1/1 2018/4/1 de421.bsp outjup.bsp
+        
+        If the input ephemeris is a URL, then `jplephem` will try to save
+        bandwidth by fetching only the blocks of the remote file that are
+        necessary to cover the dates you have specified.  For example, the
+        Jupiter satellite ephemeris `jup310.bsp` is famously large, weighing
+        in at nearly a gigabyte.  But if all you need are Jupiter's satellites
+        for a few months, you can download considerably less data::
+        
+            $ python -m jplephem excerpt 2018/1/1 2018/4/1 \
+                https://naif.jpl.nasa.gov/pub/naif/generic_kernels/spk/satellites/jup310.bsp \
+                excerpt.bsp
+            $ ls -lh excerpt.bsp
+            -rw-r----- 1 brandon brandon 1.2M Feb 11 13:36 excerpt.bsp
+        
+        In this case only about one-thousandth of the ephemeris's data needed
+        to be downloaded, a download which took less than one minute.
         
         Getting Started With DE430
         --------------------------
@@ -183,6 +200,13 @@
            |  segment.end_i - index where segment ends
           ...
         
+        * If you want to access the raw coefficients, use the segment
+          ``load_array()`` method.  It returns two floats and a NumPy array:
+        
+          >>> initial_epoch, interval_length, coefficients = segment.load_array()
+          >>> print(coefficients.shape)
+          (3, 100448, 13)
+        
         * The square-bracket lookup mechanism ``kernel[3,399]`` is a
           non-standard convenience that returns only the last matching segment
           in the file.  While the SPK standard does say that the last segment
@@ -200,7 +224,6 @@
           the position, and then only proceed to the velocity once you are sure
           that the light-time error is now small enough.
         
-        
         High-Precision Dates
         --------------------
         
@@ -286,6 +309,22 @@
         Changelog
         ---------
         
+        **2019 January 3 — Version 2.9**
+        
+        * Added the ``load_array()`` method to the segment class.
+        
+        **2018 July 22 — Version 2.8**
+        
+        * Switched to making a single memory map of the entire file, to avoid
+          running out of file descriptors when users load an ephemeris with
+          hundreds of segments.
+        
+        **2018 February 11 — Version 2.7**
+        
+        * Expanded the command line tool, most notably with the ability to fetch
+          over HTTP only those sections of a large ephemeris that cover a
+          specific range of dates, producing a smaller ``.bsp`` file.
+        
         **2016 December 19 — Version 2.6**
         
         * Fixed the ability to invoke the module from the command line with
@@ -378,10 +417,11 @@
 Classifier: Intended Audience :: Science/Research
 Classifier: License :: OSI Approved :: MIT License
 Classifier: Programming Language :: Python :: 2
-Classifier: Programming Language :: Python :: 2.6
 Classifier: Programming Language :: Python :: 2.7
 Classifier: Programming Language :: Python :: 3
 Classifier: Programming Language :: Python :: 3.3
 Classifier: Programming Language :: Python :: 3.4
+Classifier: Programming Language :: Python :: 3.5
+Classifier: Programming Language :: Python :: 3.6
 Classifier: Topic :: Scientific/Engineering :: Astronomy
 Requires: numpy
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/jplephem-2.6/README.md new/jplephem-2.9/README.md
--- old/jplephem-2.6/README.md  1970-01-01 01:00:00.000000000 +0100
+++ new/jplephem-2.9/README.md  2018-02-03 18:27:37.000000000 +0100
@@ -0,0 +1,29 @@
+
+Welcome to the repository for the `jplephem` Python library!
+
+The package is a Python implementation of the math that standard JPL
+ephemerides use to predict raw (x,y,z) planetary positions.  It is one
+of the foundations of the Skyfield astronomy library for Python:
+
+http://rhodesmill.org/skyfield/
+
+But you can also use `jplephem` standalone to generate raw vectors.  If
+that is your use case, then simply head over to its documentation and
+download link on the Python Package Index:
+
+https://pypi.python.org/pypi/jplephem
+
+If you want to install it with `conda`, there is a recipe at:
+
+https://github.com/conda-forge/jplephem-feedstock
+
+This repository is where `jplephem` is maintained.  You will find its
+source code beneath the `jplephem` directory that sits alongside the
+`setup.py` file.  You can run its tests with:
+
+    wget https://naif.jpl.nasa.gov/pub/naif/generic_kernels/spk/planets/a_old_versions/de405.bsp
+    wget https://naif.jpl.nasa.gov/pub/naif/generic_kernels/spk/planets/a_old_versions/de421.bsp
+    pip install de421
+    python -m unittest discover jplephem
+
+Enjoy!
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/jplephem-2.6/jplephem/__init__.py new/jplephem-2.9/jplephem/__init__.py
--- old/jplephem-2.6/jplephem/__init__.py       2016-12-20 06:39:32.000000000 +0100
+++ new/jplephem-2.9/jplephem/__init__.py       2019-01-04 05:34:01.000000000 +0100
@@ -37,16 +37,33 @@
 Command Line Tool
 -----------------
 
-If you have downloaded a ``.bsp`` file and want to learn what ephemeris
-segments are stored inside of it, you can have ``jplephem`` print them
-out by invoking the module directly from the command line::
-
-    python -m jplephem de430.bsp
-
-This will print out a summary identical to the one shown in the
-following section, but without requiring that you type and run any
-Python code.
+If you have downloaded a ``.bsp`` file, you can run ``jplephem`` from
+the command line to display the data inside of it::
 
+    python -m jplephem comment de430.bsp
+    python -m jplephem daf de430.bsp
+    python -m jplephem spk de430.bsp
+
+You can also take a large ephemeris and produce a smaller excerpt by
+limiting the range of dates that it covers::
+
+    python -m jplephem excerpt 2018/1/1 2018/4/1 de421.bsp outjup.bsp
+
+If the input ephemeris is a URL, then `jplephem` will try to save
+bandwidth by fetching only the blocks of the remote file that are
+necessary to cover the dates you have specified.  For example, the
+Jupiter satellite ephemeris `jup310.bsp` is famously large, weighing in
+at nearly a gigabyte.  But if all you need are Jupiter's satellites for a
+few months, you can download considerably less data::
+
+    $ python -m jplephem excerpt 2018/1/1 2018/4/1 \\
+        https://naif.jpl.nasa.gov/pub/naif/generic_kernels/spk/satellites/jup310.bsp \\
+        excerpt.bsp
+    $ ls -lh excerpt.bsp
+    -rw-r----- 1 brandon brandon 1.2M Feb 11 13:36 excerpt.bsp
+
+In this case only about one-thousandth of the ephemeris's data needed to
+be downloaded, a download which took less than one minute.
 
 Getting Started With DE430
 --------------------------
@@ -178,6 +195,13 @@
    |  segment.end_i - index where segment ends
   ...
 
+* If you want to access the raw coefficients, use the segment
+  ``load_array()`` method.  It returns two floats and a NumPy array:
+
+  >>> initial_epoch, interval_length, coefficients = segment.load_array()
+  >>> print(coefficients.shape)
+  (3, 100448, 13)
+
 * The square-bracket lookup mechanism ``kernel[3,399]`` is a
   non-standard convenience that returns only the last matching segment
   in the file.  While the SPK standard does say that the last segment
@@ -195,7 +219,6 @@
   the position, and then only proceed to the velocity once you are sure
   that the light-time error is now small enough.
 
-
 High-Precision Dates
 --------------------
 
@@ -281,6 +304,22 @@
 Changelog
 ---------
 
+**2019 January 3 — Version 2.9**
+
+* Added the ``load_array()`` method to the segment class.
+
+**2018 July 22 — Version 2.8**
+
+* Switched to making a single memory map of the entire file, to avoid
+  running out of file descriptors when users load an ephemeris with
+  hundreds of segments.
+
+**2018 February 11 — Version 2.7**
+
+* Expanded the command line tool, most notably with the ability to fetch
+  over HTTP only those sections of a large ephemeris that cover a
+  specific range of dates, producing a smaller ``.bsp`` file.
+
 **2016 December 19 — Version 2.6**
 
 * Fixed the ability to invoke the module from the command line with
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/jplephem-2.6/jplephem/__main__.py new/jplephem-2.9/jplephem/__main__.py
--- old/jplephem-2.6/jplephem/__main__.py       2016-12-20 06:21:31.000000000 +0100
+++ new/jplephem-2.9/jplephem/__main__.py       2018-02-11 18:05:05.000000000 +0100
@@ -1,3 +1,4 @@
-from sys import argv
+import sys
 from .commandline import main
-print(main(argv[1:]))
+sys.stdout.write(main(sys.argv[1:]))
+sys.exit(0)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/jplephem-2.6/jplephem/binary_pck.py new/jplephem-2.9/jplephem/binary_pck.py
--- old/jplephem-2.6/jplephem/binary_pck.py     1970-01-01 01:00:00.000000000 +0100
+++ new/jplephem-2.9/jplephem/binary_pck.py     2018-10-01 14:53:48.000000000 +0200
@@ -0,0 +1,193 @@
+"""Compute things from a NASA SPICE binary PCK kernel file.
+
+ftp://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/req/pck.html
+
+"""
+from numpy import array, empty, empty_like, rollaxis
+from .daf import DAF
+from .names import target_names
+
+T0 = 2451545.0
+S_PER_DAY = 86400.0
+
+
+def jd(seconds):
+    """Convert a number of seconds since J2000 to a Julian Date."""
+    return T0 + seconds / S_PER_DAY
+
+
+class BinaryPCK(object):
+    """A JPL binary PCK (extension ``.bcp``) kernel.
+
+    You can load a binary PCK file by specifying its filename::
+
+        kernel = BinaryPCK.open('moon_pa_de421_1900-2050.bpc')
+
+    Run ``print(kernel)`` to see which segments are inside and iterate
+    across ``kernel.segments`` to access them each in turn.
+
+    To see the text comments, call ``kernel.comments()``.
+
+    """
+    def __init__(self, daf):
+        self.daf = daf
+        self.segments = [Segment(self.daf, source, descriptor)
+                         for source, descriptor in self.daf.summaries()]
+
+    @classmethod
+    def open(cls, path):
+        """Open the file at `path` and return a binary PCK instance."""
+        return cls(DAF(open(path, 'rb')))
+
+    def close(self):
+        """Close this file."""
+        self.daf.file.close()
+        for segment in self.segments:
+            if hasattr(segment, '_data'):
+                del segment._data  # TODO: explicitly close each memory map
+
+    def __str__(self):
+        daf = self.daf
+        d = lambda b: b.decode('latin-1')
+        lines = (str(segment) for segment in self.segments)
+        return 'File type {0} and format {1} with {2} segments:\n{3}'.format(
+            d(daf.locidw), d(daf.locfmt), len(self.segments), '\n'.join(lines))
+
+    def comments(self):
+        """Return the file comments, as a string."""
+        return self.daf.comments()
+
+
+class Segment(object):
+    """A single segment of a binary PCK file.
+
+    There are several items of information about each segment that are
+    loaded from the underlying PCK file, and made available as object
+    attributes:
+
+    segment.source - official ephemeris name, like 'DE-0430LE-0430'
+    segment.start_second - initial epoch, as seconds from J2000
+    segment.end_second - final epoch, as seconds from J2000
+    segment.start_jd - start_second, converted to a Julian Date
+    segment.end_jd - end_second, converted to a Julian Date
+    segment.body - integer body identifier
+    segment.frame - integer frame identifier
+    segment.data_type - integer data type identifier
+    segment.start_i - index where segment starts
+    segment.end_i - index where segment ends
+
+    """
+    def __init__(self, daf, source, descriptor):
+        self.daf = daf
+        self.source = source
+        (self.start_second, self.end_second, self.body, self.frame,
+         self.data_type, self.start_i, self.end_i) = descriptor
+        self.start_jd = jd(self.start_second)
+        self.end_jd = jd(self.end_second)
+        self._data = None
+
+    def __str__(self):
+        return self.describe(verbose=False)
+
+    def describe(self, verbose=True):
+        """Return a textual description of the segment."""
+        body = titlecase(target_names.get(self.body, 'Unknown body'))
+        text = ('{0.start_jd:.2f}..{0.end_jd:.2f} frame={0.frame}'
+                '  {1} ({0.body})'.format(self, body))
+        if verbose:
+            text += ('\n  data_type={0.data_type} source={1}'
+                     .format(self, self.source.decode('ascii')))
+        return text
+
+    def _load(self):
+        """Map the coefficients into memory using a NumPy array.
+
+        """
+        if self.data_type == 2:
+            component_count = 3
+        else:
+            raise ValueError('only binary PCK data type 2 is supported')
+
+        init, intlen, rsize, n = self.daf.read_array(self.end_i - 3, self.end_i)
+        initial_epoch = jd(init)
+        interval_length = intlen / S_PER_DAY
+        coefficient_count = int(rsize - 2) // component_count
+        coefficients = self.daf.map_array(self.start_i, self.end_i - 4)
+
+        coefficients.shape = (int(n), int(rsize))
+        coefficients = coefficients[:,2:]  # ignore MID and RADIUS elements
+        coefficients.shape = (int(n), component_count, coefficient_count)
+        coefficients = rollaxis(coefficients, 1)
+        return initial_epoch, interval_length, coefficients
+
+    def compute(self, tdb, tdb2, derivative=True):
+        """Generate angles and derivatives for time `tdb` plus `tdb2`.
+
+        If ``derivative`` is true, return a tuple containing both the
+        angle and its derivative; otherwise simply return the angles.
+
+        """
+        scalar = not getattr(tdb, 'shape', 0) and not getattr(tdb2, 'shape', 0)
+        if scalar:
+            tdb = array((tdb,))
+
+        data = self._data
+        if data is None:
+            self._data = data = self._load()
+
+        initial_epoch, interval_length, coefficients = data
+        component_count, n, coefficient_count = coefficients.shape
+
+        # Subtracting tdb before adding tdb2 affords greater precision.
+        index, offset = divmod((tdb - initial_epoch) + tdb2, interval_length)
+        index = index.astype(int)
+
+        if (index < 0).any() or (index > n).any():
+            final_epoch = initial_epoch + interval_length * n
+            raise ValueError('segment only covers dates %.1f through %.1f'
+                            % (initial_epoch, final_epoch))
+
+        omegas = (index == n)
+        index[omegas] -= 1
+        offset[omegas] += interval_length
+
+        coefficients = coefficients[:,index]
+
+        # Chebyshev polynomial.
+
+        T = empty((coefficient_count, len(index)))
+        T[0] = 1.0
+        T[1] = t1 = 2.0 * offset / interval_length - 1.0
+        twot1 = t1 + t1
+        for i in range(2, coefficient_count):
+            T[i] = twot1 * T[i-1] - T[i-2]
+
+        components = (T.T * coefficients).sum(axis=2)
+        if scalar:
+            components = components[:,0]
+
+        if not derivative:
+            return components
+
+        # Chebyshev differentiation.
+
+        dT = empty_like(T)
+        dT[0] = 0.0
+        dT[1] = 1.0
+        if coefficient_count > 2:
+            dT[2] = twot1 + twot1
+            for i in range(3, coefficient_count):
+                dT[i] = twot1 * dT[i-1] - dT[i-2] + T[i-1] + T[i-1]
+        dT *= 2.0
+        dT /= interval_length
+
+        rates = (dT.T * coefficients).sum(axis=2)
+        if scalar:
+            rates = rates[:,0]
+
+        return components, rates
+
+
+def titlecase(name):
+    """Title-case body `name` if it looks safe to do so."""
+    return name if name.startswith(('1', 'C/', 'DSS-')) else name.title()
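An editorial aside on the new `binary_pck.py` above: the Chebyshev loop in `Segment.compute()` is the standard three-term recurrence. A minimal standalone sketch (the `cheb_eval` helper is hypothetical, not part of jplephem) checked against NumPy's own Chebyshev evaluator:

```python
import numpy as np
from numpy.polynomial.chebyshev import chebval

def cheb_eval(coeffs, t1):
    """Evaluate sum_i coeffs[i] * T_i(t1) using the same recurrence as
    Segment.compute(): T[0]=1, T[1]=t1, T[i] = 2*t1*T[i-1] - T[i-2]."""
    T = np.empty(len(coeffs))
    T[0] = 1.0
    T[1] = t1
    twot1 = t1 + t1
    for i in range(2, len(coeffs)):
        T[i] = twot1 * T[i-1] - T[i-2]
    return (T * coeffs).sum()

coeffs = np.array([1.0, -0.5, 0.25, 0.125])
t1 = 0.3  # time already normalized into [-1, 1], as compute() does
print(abs(cheb_eval(coeffs, t1) - chebval(t1, coeffs)) < 1e-12)  # True
```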
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/jplephem-2.6/jplephem/commandline.py new/jplephem-2.9/jplephem/commandline.py
--- old/jplephem-2.6/jplephem/commandline.py    2016-12-20 06:21:17.000000000 +0100
+++ new/jplephem-2.9/jplephem/commandline.py    2018-02-11 19:31:21.000000000 +0100
@@ -1,13 +1,108 @@
+"""The `python -m jplephem` command line."""
+
+from __future__ import print_function
+
 import argparse
+import sys
 from .daf import DAF
+from .excerpter import RemoteFile, write_excerpt
 from .spk import SPK
 
 def main(args):
     parser = argparse.ArgumentParser(
         prog='python -m jplephem',
         description='Describe an SPK kernel',
-        )
-    parser.add_argument('path', help='Path to a .bsp SPICE kernel file')
+    )
+    subparsers = parser.add_subparsers()
+
+    p = subparsers.add_parser(
+        'comment',
+        help="Print a file's comment blocks",
+    )
+    p.set_defaults(func=comment)
+    p.add_argument('path', help='Path to a SPICE file')
+
+    p = subparsers.add_parser(
+        'daf',
+        help="List a file's raw segment descriptors",
+    )
+    p.set_defaults(func=daf_segments)
+    p.add_argument('path', help='Path to a SPICE file')
+
+    p = subparsers.add_parser(
+        'excerpt',
+        help="Create an SPK covering a narrower range of dates",
+    )
+    p.set_defaults(func=excerpt)
+    p.add_argument('start_date', help='Start date yyyy/mm/dd', type=parse_date)
+    p.add_argument('end_date', help='End date yyyy/mm/dd', type=parse_date)
+    p.add_argument('path_or_url', help='Local filename or remote URL')
+    p.add_argument('output_path', help='Output file to create')
+
+    p = subparsers.add_parser(
+        'spk',
+        help="List the segments in an SPK file",
+    )
+    p.set_defaults(func=spk_segments)
+    p.add_argument('path', help='Path to a .bsp SPICE kernel file')
+
     args = parser.parse_args(args)
+    func = getattr(args, 'func', None)
+    if func is None:
+        parser.print_help()
+        sys.exit(2)
+
+    lines = list(func(args))
+    if lines and not lines[-1].endswith('\n'):
+        lines.append('')
+    return '\n'.join(lines)
+
+def comment(args):
+    with open(args.path, 'rb') as f:
+        d = DAF(f)
+        yield d.comments()
+
+def daf_segments(args):
+    with open(args.path, 'rb') as f:
+        d = DAF(f)
+        for i, (name, values) in enumerate(d.summaries()):
+            yield '{:2d} {} {}'.format(i + 1, name.decode('latin-1'),
+                                       ' '.join(repr(v) for v in values))
+
+def excerpt(args):
+    if args.path_or_url.startswith(('http://', 'https://')):
+        url = args.path_or_url
+        f = RemoteFile(url)
+        spk = SPK(DAF(f))
+        with open(args.output_path, 'w+b') as output_file:
+            write_excerpt(spk, output_file, args.start_date, args.end_date)
+    else:
+        path = args.path_or_url
+        with open(path, 'rb') as f:
+            spk = SPK(DAF(f))
+            with open(args.output_path, 'w+b') as output_file:
+                write_excerpt(spk, output_file, args.start_date, args.end_date)
+    return ()
+
+def spk_segments(args):
     with open(args.path, 'rb') as f:
-        return(str(SPK(DAF(f))))
+        yield str(SPK(DAF(f)))
+
+def parse_date(s):
+    try:
+        fields = [int(f) for f in s.split('/')]
+    except ValueError:
+        fields = []
+    if len(fields) < 1 or len(fields) > 3:
+        E = argparse.ArgumentTypeError
+        raise E('specify each date as YYYY or YYYY/MM or YYYY/MM/DD')
+    return julian_day(*fields)
+
+def julian_day(year, month=1, day=1):
+    """Given a proleptic Gregorian calendar date, return a Julian day int."""
+    janfeb = month < 3
+    return (day
+            + 1461 * (year + 4800 - janfeb) // 4
+            + 367 * (month - 2 + janfeb * 12) // 12
+            - 3 * ((year + 4900 - janfeb) // 100) // 4
+            - 32075)
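The `julian_day()` routine added above is the classic Fliegel–Van Flandern integer conversion from a Gregorian date to a Julian day number. A quick sanity check, reimplemented here only for illustration:

```python
def julian_day(year, month=1, day=1):
    # Same integer arithmetic as the new jplephem.commandline.julian_day.
    janfeb = month < 3
    return (day
            + 1461 * (year + 4800 - janfeb) // 4
            + 367 * (month - 2 + janfeb * 12) // 12
            - 3 * ((year + 4900 - janfeb) // 100) // 4
            - 32075)

print(julian_day(2000, 1, 1))  # 2451545 (the J2000 epoch)
print(julian_day(1970, 1, 1))  # 2440588 (the Unix epoch)
```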
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/jplephem-2.6/jplephem/daf.py new/jplephem-2.9/jplephem/daf.py
--- old/jplephem-2.6/jplephem/daf.py    2016-12-10 19:06:39.000000000 +0100
+++ new/jplephem-2.9/jplephem/daf.py    2018-07-22 22:13:01.000000000 +0200
@@ -3,10 +3,11 @@
 http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/FORTRAN/req/daf.html
 
 """
+import io
 import mmap
-import struct
 import sys
-from numpy import ndarray
+from struct import Struct
+from numpy import array as numpy_array, ndarray
 
 FTPSTR = b'FTPSTR:\r:\n:\r\n:\r\x00:\x81:\x10\xce:ENDFTP'  # FTP test string
 LOCFMT = {b'BIG-IEEE': '>', b'LTL-IEEE': '<'}
@@ -26,12 +27,17 @@
             raise ValueError('file_object must be opened in binary "b" mode')
 
         self.file = file_object
+        self._map = None
+        self._array = None
 
         file_record = self.read_record(1)
 
         def unpack():
-            (self.nd, self.ni, self.locifn, self.fward, self.bward, self.free
-            ) = struct.unpack(self.endian + 'II60sIII', file_record[8:88])
+            fmt = self.endian + '8sII60sIII8s603s28s297s'
+            self.file_record_struct = Struct(fmt)
+            (locidw, self.nd, self.ni, self.locifn, self.fward, self.bward,
+             self.free, locfmt, self.prenul, self.ftpstr, self.pstnul
+            ) = self.file_record_struct.unpack(file_record)
 
         self.locidw = file_record[:8].upper().rstrip()
 
@@ -55,17 +61,34 @@
             raise ValueError('file starts with {0!r}, not "NAIF/DAF" or "DAF/"'
                              .format(self.locidw))
 
-        summary_size = self.nd + (self.ni + 1) // 2
-        self.summary_step = 8 * summary_size
-        self.summary_format = self.endian + 'd' * self.nd + 'i' * self.ni
-        self.summary_length = struct.calcsize(self.summary_format)
-        self.locifn = self.locifn.upper().rstrip()
+        self.locifn_text = self.locifn.rstrip()
+
+        summary_format = 'd' * self.nd + 'i' * self.ni
+
+        self.summary_control_struct = Struct(self.endian + 'ddd')
+        self.summary_struct = struct = Struct(self.endian + summary_format)
+        self.summary_length = length = struct.size
+        self.summary_step = length + (-length % 8) # pad to 8 bytes
+        self.summaries_per_record = (1024 - 8 * 3) // self.summary_step
 
     def read_record(self, n):
         """Return record `n` as 1,024 bytes; records are indexed from 1."""
-        self.file.seek(n * K - 1024)
+        self.file.seek(n * K - K)
         return self.file.read(K)
 
+    def write_record(self, n, data):
+        """Write `data` to file record `n`; records are indexed from 1."""
+        self.file.seek(n * K - K)
+        return self.file.write(data)
+
+    def write_file_record(self):
+        data = self.file_record_struct.pack(
+            self.locidw.ljust(8, b' '), self.nd, self.ni, self.locifn,
+            self.fward, self.bward, self.free, self.locfmt,
+            self.prenul, self.ftpstr, self.pstnul,
+        )
+        self.write_record(1, data)
+
     def map_words(self, start, end):
         """Return a memory-map of the elements `start` through `end`.
 
@@ -76,11 +99,19 @@
         number of extra bytes at the beginning of the return value.
 
         """
-        fileno = self.file.fileno() # requires a true file object
         i, j = 8 * start - 8, 8 * end
-        skip = i % mmap.ALLOCATIONGRANULARITY
-        r = mmap.ACCESS_READ
-        m = mmap.mmap(fileno, length=j-i+skip, access=r, offset=i-skip)
+        try:
+            fileno = self.file.fileno()
+        except (AttributeError, io.UnsupportedOperation):
+            fileno = None
+        if fileno is None:
+            skip = 0
+            self.file.seek(i)
+            m = self.file.read(j - i)
+        else:
+            skip = i % mmap.ALLOCATIONGRANULARITY
+            r = mmap.ACCESS_READ
+            m = mmap.mmap(fileno, length=j-i+skip, access=r, offset=i-skip)
         if sys.version_info > (3,):
             m = memoryview(m)  # so further slicing can return views
         return m, skip
@@ -98,35 +129,130 @@
         except UnicodeDecodeError:
             raise ValueError('DAF file comment area is not ASCII text')
 
+    def read_array(self, start, end):
+        """Return floats from `start` to `end` inclusive, indexed from 1.
+
+        The entire range of floats is immediately read into memory from
+        the file, making this efficient for small sequences of floats
+        whose values are all needed immediately.
+
+        """
+        f = self.file
+        f.seek(8 * (start - 1))
+        length = 1 + end - start
+        data = f.read(8 * length)
+        return ndarray(length, self.endian + 'd', data)
+
     def map_array(self, start, end):
-        """Return floats from `start` to `end` inclusive, indexed from 1."""
-        data, skip = self.map_words(start, end)
-        skip //= 8
-        return ndarray(end - start + 1 + skip, self.endian + 'd', data)[skip:]
+        """Return floats from `start` to `end` inclusive, indexed from 1.
 
-    def summaries(self):
-        """Yield (name, (value, value, ...)) for each summary in the file."""
+        Instead of pausing to load all of the floats into RAM, this
+        routine creates a memory map which will load data from the file
+        only as it is accessed, and then will let it expire back out to
+        disk later.  This is very efficient for large data sets to which
+        you need random access.
+
+        """
+        if self._array is None:
+            self._map, skip = self.map_words(1, self.free - 1)
+            assert skip == 0
+            self._array = ndarray(self.free - 1, self.endian + 'd', self._map)
+        return self._array[start - 1 : end]
 
+    def summary_records(self):
+        """Yield (record_number, n_summaries, record_data) for each record.
+
+        Readers will only use the last two values in each tuple.
+        Writers can update the record using the `record_number`.
+
+        """
         record_number = self.fward
+        unpack = self.summary_control_struct.unpack
+        while record_number:
+            data = self.read_record(record_number)
+            next_number, previous_number, n_summaries = unpack(data[:24])
+            yield record_number, n_summaries, data
+            record_number = int(next_number)
+
+    def summaries(self):
+        """Yield (name, (value, value, ...)) for each summary in the file."""
         length = self.summary_length
         step = self.summary_step
+        for record_number, n_summaries, summary_data in self.summary_records():
+            name_data = self.read_record(record_number + 1)
+            for i in range(0, int(n_summaries) * step, step):
+                j = self.summary_control_struct.size + i
+                name = name_data[i:i+step].strip()
+                data = summary_data[j:j+length]
+                values = self.summary_struct.unpack(data)
+                yield name, values
 
-        while record_number:
-            summary_record = self.read_record(record_number)
-            name_record = self.read_record(record_number + 1)
+    def map(self, summary_values):
+        """Return the array of floats described by a summary.
 
-            next_number, previous_number, n_summaries = struct.unpack(
-                self.endian + 'ddd', summary_record[:24])
+        Instead of pausing to load all of the floats into RAM, this
+        routine creates a memory map which will load data from the file
+        only as it is accessed, and then will let it expire back out to
+        disk later.  This is very efficient for large data sets to which
+        you need random access.
 
-            for i in range(0, int(n_summaries) * step, step):
-                j = i + 24
-                name = name_record[i:i+step].strip()
-                data = summary_record[j:j+length]
-                values = struct.unpack(self.summary_format, data)
-                yield name, values
+        """
+        return self.map_array(summary_values[-2], summary_values[-1])
 
-            record_number = int(next_number)
+    def add_array(self, name, values, array):
+        """Add a new array to the DAF file.
 
+        The summary will be initialized with the `name` and `values`,
+        and will have its start word and end word fields set to point to
+        where the `array` of floats has been appended to the file.
 
-NAIF_DAF = DAF  # a separate class supported NAIF/DAF format in jplephem 2.2
+        """
+        f = self.file
+        scs = self.summary_control_struct
 
+        record_number = self.bward
+        data = bytearray(self.read_record(record_number))
+        next_record, previous_record, n_summaries = scs.unpack(data[:24])
+
+        if n_summaries < self.summaries_per_record:
+            summary_record = record_number
+            name_record = summary_record + 1
+            data[:24] = scs.pack(next_record, previous_record, n_summaries + 1)
+            self.write_record(summary_record, data)
+        else:
+            summary_record = ((self.free - 1) * 8 + 1023) // 1024 + 1
+            name_record = summary_record + 1
+            free_record = summary_record + 2
+
+            n_summaries = 0
+            data[:24] = scs.pack(summary_record, previous_record, n_summaries)
+            self.write_record(record_number, data)
+
+            summaries = scs.pack(0, record_number, 1).ljust(1024, b'\0')
+            names = b'\0' * 1024
+            self.write_record(summary_record, summaries)
+            self.write_record(name_record, names)
+
+            self.bward = summary_record
+            self.free = (free_record - 1) * 1024 // 8 + 1
+
+        start_word = self.free
+        f.seek((start_word - 1) * 8)
+        array = numpy_array(array)  # TODO: force correct endian
+        f.write(array.view())
+        end_word = f.tell() // 8
+
+        self.free = end_word + 1
+        self.write_file_record()
+
+        values = values[:self.nd + self.ni - 2] + (start_word, end_word)
+
+        base = 1024 * (summary_record - 1)
+        offset = int(n_summaries) * self.summary_step
+        f.seek(base + scs.size + offset)
+        f.write(self.summary_struct.pack(*values))
+        f.seek(base + 1024 + offset)
+        f.write(name[:self.summary_length].ljust(self.summary_step, b' '))
+
+
+NAIF_DAF = DAF  # a separate class supported NAIF/DAF format in jplephem 2.2
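
The record and summary arithmetic that the new DAF code relies on (1,024-byte records indexed from 1, summaries padded to 8-byte boundaries after 24 control bytes) can be sketched in plain Python. The constants mirror the diff above, but the helper names here are illustrative only:

```python
RECORD_SIZE = 1024      # DAF records are 1,024 bytes, indexed from 1
CONTROL_BYTES = 8 * 3   # next, previous, n_summaries stored as three doubles

def summary_step(summary_length):
    """Pad a summary's byte length up to the next multiple of 8."""
    return summary_length + (-summary_length % 8)

def summaries_per_record(summary_length):
    """How many padded summaries fit after the 24 control bytes."""
    return (RECORD_SIZE - CONTROL_BYTES) // summary_step(summary_length)

def record_offset(n):
    """Byte offset of record `n`; records are indexed from 1."""
    return n * RECORD_SIZE - RECORD_SIZE

# For ND=2, NI=3 the packed summary is 2 doubles + 3 ints = 28 bytes:
print(summary_step(28))           # 32
print(summaries_per_record(28))   # 31
print(record_offset(3))           # 2048
```

This is why `read_record(3)` in the tests above lands on the first summary record: records 1 and 2 (file record and comment area) occupy the first 2,048 bytes.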
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/jplephem-2.6/jplephem/ephem.py new/jplephem-2.9/jplephem/ephem.py
--- old/jplephem-2.6/jplephem/ephem.py  2016-12-10 19:06:39.000000000 +0100
+++ new/jplephem-2.9/jplephem/ephem.py  2018-02-03 21:10:31.000000000 +0100
@@ -1,5 +1,14 @@
-"""Compute positions from an ephemeris installed as a Python package."""
+"""Compute positions from an ephemeris installed as a Python package.
 
+Note: This entire module is DEPRECATED.  The idea of distributing JPL
+ephemerides as Python packages proved to be impractical (they were much
+too large for the Python Package Index to easily store and distribute),
+and it forced `jplephem` users to get their ephemerides from a different
+source than mainline astronomers, who use SPICE files.  This package's
+documentation now recommends avoiding this old code, and using the
+features now built directly into the `SPK` class instead.
+
+"""
 import os
 import numpy as np
 
@@ -9,7 +18,7 @@
 
 
 class Ephemeris(object):
-    """A JPL planetary ephemeris that, given dates, computes positions."""
+    """[DEPRECATED] JPL planetary ephemeris for computing positions on dates."""
 
     def __init__(self, module):
         self.name = module.__name__.upper()
@@ -26,18 +35,18 @@
         self.sets = {}
 
     def path(self, filename):
-        """Compute the path to a particular file in the ephemeris."""
+        """[DEPRECATED] Compute the path to a particular file in the ephemeris."""
         return os.path.join(self.dirpath, filename)
 
     def load(self, name):
-        """Load the polynomial series for `name` and return it."""
+        """[DEPRECATED] Load the polynomial series for `name` and return it."""
         s = self.sets.get(name)
         if s is None:
             self.sets[name] = s = np.load(self.path('jpl-%s.npy' % name))
         return s
 
     def position(self, name, tdb, tdb2=0.0):
-        """Compute the position of `name` at time ``tdb [+ tdb2]``.
+        """[DEPRECATED] Compute the position of `name` at time ``tdb [+ tdb2]``.
 
         The position is returned as a NumPy array ``[x y z]``.
 
@@ -59,7 +68,7 @@
         return self.position_from_bundle(bundle)
 
     def position_and_velocity(self, name, tdb, tdb2=0.0):
-        """Compute the position and velocity of `name` at ``tdb [+ tdb2]``.
+        """[DEPRECATED] Compute the position and velocity of `name` at ``tdb [+ tdb2]``.
 
         The position and velocity are returned in a 2-tuple::
 
@@ -85,7 +94,7 @@
         return position, velocity
 
     def compute(self, name, tdb):
-        """Legacy routine that concatenates position and velocity vectors.
+        """[DEPRECATED] Legacy routine that concatenates position and velocity vectors.
 
         This routine is deprecated.  Use the methods `position()` and
         `position_and_velocity()` instead.  This method follows the same
@@ -101,7 +110,7 @@
         return np.concatenate((position, velocity))
 
     def compute_bundle(self, name, tdb, tdb2=0.0):
-        """Return a tuple of coefficients and parameters for `tdb`.
+        """[DEPRECATED] Return a tuple of coefficients and parameters for `tdb`.
 
         The return value is a tuple that bundles together the
         coefficients and other Chebyshev intermediate values that are
@@ -162,13 +171,13 @@
         return bundle
 
     def position_from_bundle(self, bundle):
-        """Return position, given the `coefficient_bundle()` return value."""
+        """[DEPRECATED] Return position, given the `coefficient_bundle()` return value."""
 
         coefficients, days_per_set, T, twot1 = bundle
         return (T.T * coefficients).sum(axis=2)
 
     def velocity_from_bundle(self, bundle):
-        """Return velocity, given the `coefficient_bundle()` return value."""
+        """[DEPRECATED] Return velocity, given the `coefficient_bundle()` return value."""
 
         coefficients, days_per_set, T, twot1 = bundle
         coefficient_count = coefficients.shape[2]
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/jplephem-2.6/jplephem/excerpter.py new/jplephem-2.9/jplephem/excerpter.py
--- old/jplephem-2.6/jplephem/excerpter.py      1970-01-01 01:00:00.000000000 +0100
+++ new/jplephem-2.9/jplephem/excerpter.py      2018-02-11 19:47:55.000000000 +0100
@@ -0,0 +1,83 @@
+"""Extract data for a specific date range from an SPK file."""
+
+from sys import stderr
+try:
+    from urllib.request import URLopener
+except ImportError:
+    from urllib import URLopener
+
+from numpy import copy
+from .daf import DAF
+from .spk import S_PER_DAY, T0
+
+clip_lower = max
+clip_upper = min
+
+def _seconds(jd):
+    """Convert a Julian Date to a number of seconds since J2000."""
+    return (jd - T0) * S_PER_DAY
+
+def write_excerpt(input_spk, output_file, start_jd, end_jd):
+    start_seconds = _seconds(start_jd)
+    end_seconds = _seconds(end_jd)
+    old = input_spk.daf
+
+    # Copy the file record and the comments verbatim.
+    f = output_file
+    f.seek(0)
+    f.truncate()
+    for n in range(1, old.fward):
+        data = old.read_record(n)
+        f.write(data)
+
+    # Start an initial summary and name block.
+    summary_data = b'\0' * 1024
+    name_data = b' ' * 1024
+    f.write(summary_data)
+    f.write(name_data)
+
+    d = DAF(f)
+    d.fward = d.bward = old.fward
+    d.free = (d.fward + 1) * (1024 // 8) + 1
+    d.write_file_record()
+
+    # Copy over an excerpt of each array.
+    for name, values in old.summaries():
+        start, end = values[-2], values[-1]
+        init, intlen, rsize, n = old.read_array(end - 3, end)
+        rsize = int(rsize)
+
+        i = int(clip_lower(0, (start_seconds - init) // intlen))
+        j = int(clip_upper(n, (end_seconds - init) // intlen + 1))
+        init = init + i * intlen
+        n = j - i
+
+        extra = 4     # enough room to rebuild [init intlen rsize n]
+        excerpt = copy(old.read_array(
+            start + rsize * i,
+            start + rsize * j + extra - 1,
+        ))
+        excerpt[-4:] = (init, intlen, rsize, n)
+        values = (init, init + n * intlen) + values[2:]
+        d.add_array(b'X' + name[1:], values, excerpt)
+
+class RemoteFile(object):
+    def __init__(self, url):
+        self.opener = URLopener()
+        self.url = url
+        self.filename = url.rstrip('/').rsplit('/', 1)[-1]
+        self.offset = 0
+
+    def seek(self, offset, whence=0):
+        assert whence == 0
+        self.offset = offset
+
+    def read(self, size):
+        start = self.offset
+        end = start + size - 1
+        assert end > start
+        h = 'Range', 'bytes={}-{}'.format(start, end)
+        stderr.write('Fetching {} {}\n'.format(self.filename, h[1]))
+        self.opener.addheaders.append(h)
+        data = self.opener.open(self.url).read()
+        return data
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/jplephem-2.6/jplephem/spk.py new/jplephem-2.9/jplephem/spk.py
--- old/jplephem-2.6/jplephem/spk.py    2016-12-10 19:06:39.000000000 +0100
+++ new/jplephem-2.9/jplephem/spk.py    2019-01-04 05:15:39.000000000 +0100
@@ -48,7 +48,9 @@
         self.daf.file.close()
         for segment in self.segments:
             if hasattr(segment, '_data'):
-                del segment._data  # TODO: explicitly close each memory map
+                del segment._data
+        self.daf._array = None
+        self.daf._map = None
 
     def __str__(self):
         daf = self.daf
@@ -65,6 +67,12 @@
         """Return the file comments, as a string."""
         return self.daf.comments()
 
+    def __enter__(self):
+        return self
+
+    def __exit__(self, exc_type, exc_val, exc_tb):
+        self.close()
+
 
 class Segment(object):
     """A single segment of an SPK file.
@@ -93,6 +101,7 @@
          self.frame, self.data_type, self.start_i, self.end_i) = descriptor
         self.start_jd = jd(self.start_second)
         self.end_jd = jd(self.end_second)
+        self._data = None
 
     def __str__(self):
         return self.describe(verbose=False)
@@ -128,7 +137,7 @@
         else:
             raise ValueError('only SPK data types 2 and 3 are supported')
 
-        init, intlen, rsize, n = self.daf.map_array(self.end_i - 3, self.end_i)
+        init, intlen, rsize, n = self.daf.read_array(self.end_i - 3, self.end_i)
         initial_epoch = jd(init)
         interval_length = intlen / S_PER_DAY
         coefficient_count = int(rsize - 2) // component_count
@@ -140,6 +149,12 @@
         coefficients = rollaxis(coefficients, 1)
         return initial_epoch, interval_length, coefficients
 
+    def load_array(self):
+        data = self._data
+        if data is None:
+            self._data = data = self._load()
+        return data
+
     def generate(self, tdb, tdb2):
         """Generate components and differentials for time `tdb` plus `tdb2`.
 
@@ -155,12 +170,11 @@
         if scalar:
             tdb = array((tdb,))
 
-        try:
-            initial_epoch, interval_length, coefficients = self._data
-        except AttributeError:
-            self._data = self._load()
-            initial_epoch, interval_length, coefficients = self._data
+        data = self._data
+        if data is None:
+            self._data = data = self._load()
 
+        initial_epoch, interval_length, coefficients = data
         component_count, n, coefficient_count = coefficients.shape
 
         # Subtracting tdb before adding tdb2 affords greater precision.
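
The `generate()` change above replaces the `try`/`except AttributeError` lookup with the simpler cache-on-first-use pattern that `load_array()` also uses, made possible by initializing `_data = None` in the constructor. A minimal sketch of the pattern, with a hypothetical `Loader` class standing in for `Segment` and a stub `_load()`:

```python
class Loader(object):
    """Cache-on-first-use, mirroring Segment.load_array() above."""

    def __init__(self):
        self._data = None  # declared up front, avoiding the antipattern
        self.loads = 0     # of an attribute that only appears later

    def _load(self):
        # Stand-in for the expensive read of Chebyshev coefficients.
        self.loads += 1
        return (1.0, 2.0, [3.0])

    def load_array(self):
        data = self._data
        if data is None:
            self._data = data = self._load()
        return data

loader = Loader()
loader.load_array()
loader.load_array()
print(loader.loads)  # 1 -- _load() ran only once
```

The `if data is None` test is also cheaper and clearer than catching `AttributeError`, which could accidentally swallow an unrelated attribute error raised inside `_load()` itself.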
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/jplephem-2.6/jplephem/test.py new/jplephem-2.9/jplephem/test.py
--- old/jplephem-2.6/jplephem/test.py   2016-12-20 06:32:42.000000000 +0100
+++ new/jplephem-2.9/jplephem/test.py   2019-01-04 05:21:07.000000000 +0100
@@ -9,11 +9,13 @@
 
 """
 import numpy as np
-import subprocess
+import tempfile
 from functools import partial
+from io import BytesIO
 from jplephem import Ephemeris, commandline
-from jplephem.daf import NAIF_DAF
+from jplephem.daf import DAF, FTPSTR, NAIF_DAF
 from jplephem.spk import SPK
+from struct import Struct
 try:
     from unittest import SkipTest, TestCase
 except ImportError:
@@ -39,6 +41,150 @@
     }
 
 
+class TestDAFBytesIO(TestCase):
+    def sample_daf(self):
+        word = Struct('d').pack
+        integer = Struct('i').pack
+        return BytesIO(b''.join([
+            # Record 1 - File Record
+            b'DAF/SPK ',
+            b'\x02\x00\x00\x00', # ND
+            b'\x03\x00\x00\x00', # NI
+            b'Internal Name'.ljust(60, b' '), # LOCIFN
+            b'\x03\x00\x00\x00', # FWARD
+            b'\x07\x00\x00\x00', # BWARD
+            b'\x01\x04\x00\x00', # FREE
+            b'LTL-IEEE', # LOCFMT
+            b'\0' * 603, # PRENUL
+            FTPSTR,
+            b'\0' * 297, # PSTNUL
+
+            # Record 2
+            b'Comment Record'.ljust(1024, b'\0'),
+
+            # Record 3 - first Summary Record
+            b''.join([
+                word(7), # next summary record
+                word(0), # previous summary record
+                word(1), # number of summaries
+                word(101),
+                word(202),
+                integer(303),
+                integer(1024 * 4 // 8 + 1), # Record 5 start
+                integer(1024 * 5 // 8),     # Record 5 end
+                integer(0),
+            ]).ljust(1024, b'\0'),
+
+            # Record 4 - first Name Record
+            b'Summary Name 1'.ljust(1024, b' '),
+
+            # Record 5
+            word(1001) * 128,
+
+            # Record 6
+            word(2002) * 128,
+
+            # Record 7 - second Summary Record
+            b''.join([
+                word(0), # next summary record
+                word(3), # previous summary record
+                word(1), # number of summaries
+                word(111),
+                word(222),
+                integer(333),
+                integer(1024 * 5 // 8 + 1), # Record 6 start
+                integer(1024 * 6 // 8),     # Record 6 end
+                integer(0),
+            ]).ljust(1024, b'\0'),
+
+            # Record 8 - second Name Record
+            b'Summary Name 2'.ljust(1024, b' '),
+        ]))
+
+    def test_header(self):
+        f = self.sample_daf()
+        d = DAF(f)
+        eq = self.assertEqual
+        eq(d.locidw, b'DAF/SPK')
+        eq(d.nd, 2)
+        eq(d.ni, 3)
+        eq(d.locifn_text, b'Internal Name')
+        eq(d.fward, 3)
+        eq(d.bward, 7)
+        eq(d.free, 0x401)
+        eq(d.locfmt, b'LTL-IEEE')
+
+    def test_segments(self):
+        f = self.sample_daf()
+        d = DAF(f)
+
+        summaries = list(d.summaries())
+        eq = self.assertEqual
+        eq(len(summaries), 2)
+        eq(summaries[0], (b'Summary Name 1', (101.0, 202.0, 303, 513, 640)))
+        eq(summaries[1], (b'Summary Name 2', (111.0, 222.0, 333, 641, 768)))
+
+        eq = self.assertSequenceEqual
+        eq(list(d.map(summaries[0][1])), [1001.0] * 128)
+        eq(list(d.map(summaries[1][1])), [2002.0] * 128)
+
+    def test_add_segment(self):
+        f = self.sample_daf()
+        d = DAF(f)
+
+        d.add_array(b'Summary Name 3', (121.0, 232.0, 343), [3003.0] * 128)
+
+        summaries = list(d.summaries())
+        eq = self.assertEqual
+        eq(len(summaries), 3)
+        eq(summaries[0], (b'Summary Name 1', (101.0, 202.0, 303, 513, 640)))
+        eq(summaries[1], (b'Summary Name 2', (111.0, 222.0, 333, 641, 768)))
+        eq(summaries[2], (b'Summary Name 3', (121.0, 232.0, 343, 1025, 1152)))
+
+        eq = self.assertSequenceEqual
+        eq(list(d.map(summaries[0][1])), [1001.0] * 128)
+        eq(list(d.map(summaries[1][1])), [2002.0] * 128)
+        eq(list(d.map(summaries[2][1])), [3003.0] * 128)
+
+    def test_add_segment_when_summary_block_is_full(self):
+        f = self.sample_daf()
+        d = DAF(f)
+
+        # Update n_summaries of final summary block to full.
+        d.file.seek(6 * 1024 + 16)
+        d.file.write(Struct('d').pack(d.summaries_per_record))
+
+        d.add_array(b'Summary Name 3', (121.0, 232.0, 343), [3003.0] * 200)
+
+        # Reset n_summaries of that block back to its real value.
+        d.file.seek(6 * 1024 + 16)
+        d.file.write(Struct('d').pack(1))
+
+        summaries = list(d.summaries())
+        eq = self.assertEqual
+        eq(len(summaries), 3)
+        eq(summaries[0], (b'Summary Name 1', (101.0, 202.0, 303, 513, 640)))
+        eq(summaries[1], (b'Summary Name 2', (111.0, 222.0, 333, 641, 768)))
+        eq(summaries[2], (b'Summary Name 3', (121.0, 232.0, 343, 1281, 1480)))
+
+        eq = self.assertSequenceEqual
+        eq(list(d.map(summaries[0][1])), [1001.0] * 128)
+        eq(list(d.map(summaries[1][1])), [2002.0] * 128)
+        eq(list(d.map(summaries[2][1])), [3003.0] * 200)
+
+
+class TestDAFRealFile(TestDAFBytesIO):
+    # Where "Real" = "written to disk with a real file descriptor
+    # instead of an in-memory BytesIO".
+
+    def sample_daf(self):
+        bytes_io = super(TestDAFRealFile, self).sample_daf()
+        f = tempfile.NamedTemporaryFile(mode='w+b', prefix='jplephem_test')
+        f.write(bytes_io.getvalue())
+        f.seek(0)
+        return f
+
+
 class _CommonTests(object):
 
     def check0(self, xyz, xyzdot=None):
@@ -176,6 +322,11 @@
   '2414864.50..2471184.50  Solar System Barycenter (0) -> Mars Barycenter (4)'
   '\n  frame=1 data_type=2 source=DE-0421LE-0421')
 
+    def test_loading_array(self):
+        segment = self.spk[0,4]
+        initial_epoch, interval_length, coefficients = segment.load_array()
+        self.assertEqual(coefficients.shape, (3, 1760, 11))
+
 
 class LegacyTests(_CommonTests, TestCase):
 
@@ -219,20 +370,43 @@
 class NAIF_DAF_Tests(TestCase):
 
     def test_single_position(self):
-        kernel = SPK(NAIF_DAF(open('de405.bsp', 'rb')))
-        x, y, z = kernel[0,4].compute(2457061.5)
-        # Expect rough agreement with a DE430 position from our README:
-        self.assertAlmostEqual(x, 2.05700211e+08, delta=2.0)
-        self.assertAlmostEqual(y, 4.25141646e+07, delta=2.0)
-        self.assertAlmostEqual(z, 1.39379183e+07, delta=2.0)
-        kernel.close()
+        with SPK(NAIF_DAF(open('de405.bsp', 'rb'))) as kernel:
+            x, y, z = kernel[0,4].compute(2457061.5)
+            # Expect rough agreement with a DE430 position from our README:
+            self.assertAlmostEqual(x, 2.05700211e+08, delta=2.0)
+            self.assertAlmostEqual(y, 4.25141646e+07, delta=2.0)
+            self.assertAlmostEqual(z, 1.39379183e+07, delta=2.0)
 
 
 class CommandLineTests(TestCase):
     maxDiff = 9999
 
-    def test_command_line(self):
-        self.assertEqual(commandline.main(['de405.bsp']), """\
+    def test_comment_command(self):
+        output = commandline.main(['comment', 'de405.bsp'])
+        self.assertEqual(output[:30], '; de405.bsp LOG FILE\n;\n; Creat')
+        self.assertEqual(output[-30:], "rom Standish's DE405 memo <<<\n")
+
+    def test_daf_command(self):
+        self.assertEqual(commandline.main(['daf', 'de405.bsp']), """\
+ 1 DE-405 -1577879958.8160586 1577880064.1839132 1 0 1 2 1409 202316
+ 2 DE-405 -1577879958.8160586 1577880064.1839132 2 0 1 2 202317 275376
+ 3 DE-405 -1577879958.8160586 1577880064.1839132 3 0 1 2 275377 368983
+ 4 DE-405 -1577879958.8160586 1577880064.1839132 4 0 1 2 368984 408957
+ 5 DE-405 -1577879958.8160586 1577880064.1839132 5 0 1 2 408958 438653
+ 6 DE-405 -1577879958.8160586 1577880064.1839132 6 0 1 2 438654 464923
+ 7 DE-405 -1577879958.8160586 1577880064.1839132 7 0 1 2 464924 487767
+ 8 DE-405 -1577879958.8160586 1577880064.1839132 8 0 1 2 487768 510611
+ 9 DE-405 -1577879958.8160586 1577880064.1839132 9 0 1 2 510612 533455
+10 DE-405 -1577879958.8160586 1577880064.1839132 10 0 1 2 533456 613364
+11 DE-405 -1577879958.8160586 1577880064.1839132 301 3 1 2 613365 987780
+12 DE-405 -1577879958.8160586 1577880064.1839132 399 3 1 2 987781 1362196
+13 DE-405 -1577879958.8160586 1577880064.1839132 199 1 1 2 1362197 1362208
+14 DE-405 -1577879958.8160586 1577880064.1839132 299 2 1 2 1362209 1362220
+15 DE-405 -1577879958.8160586 1577880064.1839132 499 4 1 2 1362221 1362232
+""")
+
+    def test_spk_command(self):
+        self.assertEqual(commandline.main(['spk', 'de405.bsp']), """\
 File type NAIF/DAF and format BIG-IEEE with 15 segments:
 2433282.50..2469807.50  Solar System Barycenter (0) -> Mercury Barycenter (1)
 2433282.50..2469807.50  Solar System Barycenter (0) -> Venus Barycenter (2)
@@ -248,4 +422,5 @@
 2433282.50..2469807.50  Earth Barycenter (3) -> Earth (399)
 2433282.50..2469807.50  Mercury Barycenter (1) -> Mercury (199)
 2433282.50..2469807.50  Venus Barycenter (2) -> Venus (299)
-2433282.50..2469807.50  Mars Barycenter (4) -> Mars (499)""")
+2433282.50..2469807.50  Mars Barycenter (4) -> Mars (499)
+""")
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/jplephem-2.6/setup.py new/jplephem-2.9/setup.py
--- old/jplephem-2.6/setup.py   2016-12-20 06:39:00.000000000 +0100
+++ new/jplephem-2.9/setup.py   2019-01-04 05:32:27.000000000 +0100
@@ -8,7 +8,7 @@
 description, long_description = jplephem.__doc__.split('\n', 1)
 
 setup(name = 'jplephem',
-      version = '2.6',
+      version = '2.9',
       description = description,
       long_description = long_description,
       license = 'MIT',
@@ -19,11 +19,12 @@
         'Intended Audience :: Science/Research',
         'License :: OSI Approved :: MIT License',
         'Programming Language :: Python :: 2',
-        'Programming Language :: Python :: 2.6',
         'Programming Language :: Python :: 2.7',
         'Programming Language :: Python :: 3',
         'Programming Language :: Python :: 3.3',
         'Programming Language :: Python :: 3.4',
+        'Programming Language :: Python :: 3.5',
+        'Programming Language :: Python :: 3.6',
         'Topic :: Scientific/Engineering :: Astronomy',
         ],
       packages = ['jplephem'],

