[Numpy-discussion] cPickle.loads and Numeric

2014-02-25 Thread Chris
I have some old code that uses cPickle.loads which used to work, but now
reports an error in loading the module Numeric. Since Numeric has been
replaced by numpy, this makes sense, but how can I get cPickle.loads to
work? I tested the code again on an older machine and it works fine
there, but I'd like to get it working again on a modern set-up as well.

Thanks!

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] cPickle.loads and Numeric

2014-02-25 Thread Pierre Haessig
Hi,

Le 25/02/2014 09:19, Chris a écrit :
 I have some old code that uses cPickle.loads which used to work, but now
 reports an error in loading the module Numeric. Since Numeric has been
 replaced by numpy, this makes sense, but, how can I get cPickle.loads to
 work? I tested the code again on an older machine and it works fine
 there, but, I'd like to get it working again on a modern set-up as well.

 Thanks!

Do you have big archives of pickled arrays?

I have the feeling that your question is related to this SO question:
http://stackoverflow.com/questions/2121874/python-pickling-after-changing-a-modules-directory
From the accepted SO answer, I gather that it is not easy to manually
edit the pickled files (except in the case of the ASCII pickle protocol).

So if you still have an old setup that can open the pickled arrays, I
would suggest using it to convert them to a format that is more
appropriate for long-term archiving. Maybe a simple text format (CSV?)
or HDF5, depending on the volume and the complexity (but I'm not a
specialist in data archiving)
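On the old machine, a conversion script along these lines could do it (just a sketch: the filenames are hypothetical, and on an old Python 2 / Numeric setup you would import cPickle instead of pickle):

```python
import pickle

import numpy as np


def pickle_to_csv(src, dst):
    # Load the legacy pickled array and re-save it as plain CSV,
    # which any future tool will be able to read.
    with open(src, 'rb') as f:
        arr = np.asarray(pickle.load(f))
    np.savetxt(dst, arr, delimiter=',')
```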

best,
Pierre


[Numpy-discussion] Custom floating point representation to IEEE 754 double

2014-02-25 Thread Daniele Nicolodi
Hello,

I'm dealing with an instrument that transfers numerical values through
an RS232 port in a custom (?) floating point representation (56 bits, 4
bits exponent and 52 bits significand).

Of course I need to convert this format to a standard IEEE 754 double to
be able to do anything useful with it.  I came up with this simple code:

  def tofloat(data):
      # 56 bits floating point representation
      # 4 bits exponent
      # 52 bits significand
      d = frombytes(data)
      l = 56
      p = l - 4
      e = int(d >> p) + 17
      v = 0
      for i in xrange(p):
          b = (d >> i) & 0x01
          v += b * pow(2, i - p + e)
      return v

where frombytes() is a simple function that assembles 7 bytes read from
the serial port into an integer for easing the manipulation:

  def frombytes(bytes):
      # convert from bytes string
      value = 0
      for i, b in enumerate(reversed(bytes)):
          value += b * (1 << (i * 8))
      return value

I believe that tofloat() can be simplified a bit, but apart from
optimizations (and cythonization) of this code, is there any simpler way
of achieving this?
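Since the loop just accumulates significand bits, sum(b_i * 2**(i - p + e)) equals m * 2**(e - p), where m is the 52-bit significand, so the whole conversion collapses to one math.ldexp call. A sketch, keeping the same +17 exponent bias and most-significant-byte-first assembly as the code above:

```python
import math


def tofloat_ldexp(data):
    # data: sequence of 7 byte values, most significant byte first
    d = 0
    for byte in data:
        d = (d << 8) | byte
    p = 52                      # significand bits
    e = (d >> p) + 17           # 4-bit exponent with the same +17 bias
    m = d & ((1 << p) - 1)      # 52-bit significand
    # m * 2**(e - p), scaled exactly by adjusting the exponent
    return math.ldexp(m, e - p)
```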

Thanks. Cheers,
Daniele


Re: [Numpy-discussion] cPickle.loads and Numeric

2014-02-25 Thread Robert Kern
On Tue, Feb 25, 2014 at 8:19 AM, Chris chris.ma...@gmail.com wrote:
 I have some old code that uses cPickle.loads which used to work, but now
 reports an error in loading the module Numeric. Since Numeric has been
 replaced by numpy, this makes sense, but, how can I get cPickle.loads to
 work? I tested the code again on an older machine and it works fine
 there, but, I'd like to get it working again on a modern set-up as well.

It's relatively straightforward to subclass Unpickler to redirect it
when it goes to look for the array constructor that it expects from
the Numeric module.


from cStringIO import StringIO
import pickle

import numpy as np


TEST_NUMERIC_PICKLE = ('\x80\x02cNumeric\narray_constructor\nq\x01(K\x05\x85U'
   '\x01lU(\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00'
   '\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x03'
   '\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00'
   '\x00\x00K\x01tRq\x02.')


# Constant from Numeric.
LittleEndian = 1

def array_constructor(shape, typecode, thestr, Endian=LittleEndian):
    """The old Numeric array constructor for pickle, recast for numpy."""
    if typecode == 'O':
        x = np.array(thestr, 'O')
    else:
        x = np.fromstring(thestr, typecode)
    x.shape = shape
    if LittleEndian != Endian:
        # Numeric's method was byteswapped(); numpy's is byteswap()
        return x.byteswap()
    else:
        return x


class NumericUnpickler(pickle.Unpickler):
    """Allow loading of pickles containing Numeric arrays and
    converting them to numpy arrays.
    """

    def find_class(self, module, name):
        """Return the constructor callable for a given class.

        Overridden to handle Numeric.array_constructor specially.
        """
        if module == 'Numeric' and name == 'array_constructor':
            return array_constructor
        else:
            return pickle.Unpickler.find_class(self, module, name)


def load(fp):
    return NumericUnpickler(fp).load()


def loads(pickle_string):
    fp = StringIO(pickle_string)
    return NumericUnpickler(fp).load()


if __name__ == '__main__':
    import sys
    print loads(TEST_NUMERIC_PICKLE)
    # Look, Ma! No Numeric!
    assert 'Numeric' not in sys.modules

-- 
Robert Kern


[Numpy-discussion] ANN: SfePy 2014.1

2014-02-25 Thread Robert Cimrman
I am pleased to announce release 2014.1 of SfePy.

Description
---
SfePy (simple finite elements in Python) is software for solving
systems of coupled partial differential equations by the finite element
method. The code is based on the NumPy and SciPy packages. It is
distributed under the new BSD license.

Home page: http://sfepy.org
Mailing list: http://groups.google.com/group/sfepy-devel
Git (source) repository, issue tracker, wiki: http://github.com/sfepy

Highlights of this release
--
- sfepy.fem was split to separate FEM-specific and general modules
- lower memory usage by creating active DOF connectivities directly from field
   connectivities
- new handling of field and variable shapes
- clean up: many obsolete modules were removed, all module names follow naming
   conventions

For full release notes see http://docs.sfepy.org/doc/release_notes.html#id1
(rather long and technical).

Best regards,
Robert Cimrman and Contributors (*)

(*) Contributors to this release (alphabetical order):

Vladimír Lukeš, Matyáš Novák, Jaroslav Vondřejc


[Numpy-discussion] numpy.random.geometric is shifted

2014-02-25 Thread Alan G Isaac
Just got momentarily snagged by not checking the
excellent documentation, which clearly says that
numpy provides the shifted geometric.  I'm wondering
why? Who else does?  (Not Mathematica, Matlab, Maple,
or Octave.)

Thanks,
Alan Isaac


Re: [Numpy-discussion] numpy.random.geometric is shifted

2014-02-25 Thread Robert Kern
On Tue, Feb 25, 2014 at 4:06 PM, Alan G Isaac alan.is...@gmail.com wrote:
 Just got momentarily snagged by not checking the
 excellent documentation, which clearly says that
 numpy provides the shifted geometric.  I'm wondering
 why?

As with most such questions, because the reference I was working from
defined it that way and gave the algorithms with that convention.

http://luc.devroye.org/rnbookindex.html
http://luc.devroye.org/chapter_ten.pdf

Page 498.

-- 
Robert Kern


Re: [Numpy-discussion] cPickle.loads and Numeric

2014-02-25 Thread Benjamin Root
Just to echo this sentiment a bit. I seem to recall reading somewhere that
pickles are not intended to be long-term archives as there is no guarantee
that a pickle made in one version of python would work in another version,
much less between different versions of the same (or similar) packages.

Ben Root


On Tue, Feb 25, 2014 at 3:39 AM, Pierre Haessig pierre.haes...@crans.orgwrote:

 Hi,

 Le 25/02/2014 09:19, Chris a écrit :
  I have some old code that uses cPickle.loads which used to work, but now
  reports an error in loading the module Numeric. Since Numeric has been
  replaced by numpy, this makes sense, but, how can I get cPickle.loads to
  work? I tested the code again on an older machine and it works fine
  there, but, I'd like to get it working again on a modern set-up as well.
 
  Thanks!
 
 Do you have big archives of pickled arrays ?

 I have the feeling that your question is related to this SO question:

 http://stackoverflow.com/questions/2121874/python-pickling-after-changing-a-modules-directory
 From the accepted SO answer, I'm getting that it is not easy to manually
 edit the pickled files (except in the case of the ASCII pickle protocol)

 So if you still have an old setup that can open the pickled arrays, I
 would suggest to use it to convert it to a format that is more
 appropriate to long term archiving. Maybe a simple text format (CSV ?)
 or HDF5 depending on the volume and the complexity (but I'm not a
 specialist data archiving)

 best,
 Pierre





Re: [Numpy-discussion] cPickle.loads and Numeric

2014-02-25 Thread Alexander Belopolsky
On Tue, Feb 25, 2014 at 11:29 AM, Benjamin Root ben.r...@ou.edu wrote:

 I seem to recall reading somewhere that pickles are not intended to be
 long-term archives as there is no guarantee that a pickle made in one
 version of python would work in another version, much less between
 different versions of the same (or similar) packages.


That's not true of the Python core and stdlib.  Python developers strive to
maintain backward compatibility, and any instance of a newer Python failing to
read older pickles would be considered a bug.  This is even true across the 2.x
/ 3.x line.

Your mileage with 3rd-party packages, especially 10+ year old ones, may vary.


Re: [Numpy-discussion] cPickle.loads and Numeric

2014-02-25 Thread Julian Taylor
On Tue, Feb 25, 2014 at 5:41 PM, Alexander Belopolsky ndar...@mac.com wrote:

 On Tue, Feb 25, 2014 at 11:29 AM, Benjamin Root ben.r...@ou.edu wrote:

 I seem to recall reading somewhere that pickles are not intended to be
 long-term archives as there is no guarantee that a pickle made in one
 version of python would work in another version, much less between different
 versions of the same (or similar) packages.


 That's not true about Python core and stdlib.  Python developers strive to
 maintain backward compatibility and any instance of newer python failing to
 read older pickles would be considered a bug.  This is even true across 2.x
 / 3.x line.

 You mileage with 3rd party packages, especially 10+ years old ones may vary.

The promise to keep compatibility still does not make pickle a good
format for long-term storage. Pickles are a processing format bound to
one specific tool, and it is not trivial to read them with any other.
The same applies to HDF5: it may work well now, but there is no
guarantee anyone will be able to read it in 50 years, when we have
moved on to the next generation of data storage formats.

For long-term storage, simpler formats like FITS [0] are much more suitable.
Writing a basic FITS parser in any language is easy. In return, it
is not the best format for data processing.

[0] http://fits.gsfc.nasa.gov/fits_standard.html


Re: [Numpy-discussion] cPickle.loads and Numeric

2014-02-25 Thread Peter Cock
On Tue, Feb 25, 2014 at 4:41 PM, Alexander Belopolsky ndar...@mac.com wrote:

 On Tue, Feb 25, 2014 at 11:29 AM, Benjamin Root ben.r...@ou.edu wrote:

 I seem to recall reading somewhere that pickles are not intended to be
 long-term archives as there is no guarantee that a pickle made in one
 version of python would work in another version, much less between different
 versions of the same (or similar) packages.

 That's not true about Python core and stdlib.  Python developers strive to
 maintain backward compatibility and any instance of newer python failing to
 read older pickles would be considered a bug.  This is even true across 2.x
 / 3.x line.

 You mileage with 3rd party packages, especially 10+ years old ones may vary.

As an example of a 10+ year old project, Biopython has accidentally
broken some pickled objects from older versions of Biopython.

Accidental breakages aside, I personally would not use pickle for
long-term storage. Domain-specific data formats, or something simple
like tabular data or JSON, seem safer.

Peter


Re: [Numpy-discussion] cPickle.loads and Numeric

2014-02-25 Thread Jonathan T. Niehof
On 02/25/2014 09:41 AM, Alexander Belopolsky wrote:
 That's not true about Python core and stdlib.  Python developers strive
 to maintain backward compatibility and any instance of newer python
 failing to read older pickles would be considered a bug.  This is even
 true across 2.x / 3.x line.

Note that this doesn't extend to forward compatibility--the default 
pickling format in Python 3 isn't readable in Python 2, and for numpy in 
particular, even version 0 pickles of numpy arrays from Python 3 aren't 
readable in Python 2.

-- 
Jonathan Niehof
ISR-3 Space Data Systems
Los Alamos National Laboratory
MS-D466
Los Alamos, NM 87545

Phone: 505-667-9595
email: jnie...@lanl.gov

Correspondence /
Technical data or Software Publicly Available


Re: [Numpy-discussion] cPickle.loads and Numeric

2014-02-25 Thread Raul Cota
Robert is right: you can always implement your own function.

What version of numpy and Python are you using?

There may be something you can add to your numpy installation related to
the old Numeric support, which I believe is now deprecated.

Raul




On 25/02/2014 4:28 AM, Robert Kern wrote:
 On Tue, Feb 25, 2014 at 8:19 AM, Chris chris.ma...@gmail.com wrote:
 I have some old code that uses cPickle.loads which used to work, but now
 reports an error in loading the module Numeric. Since Numeric has been
 replaced by numpy, this makes sense, but, how can I get cPickle.loads to
 work? I tested the code again on an older machine and it works fine
 there, but, I'd like to get it working again on a modern set-up as well.
 It's relatively straightforward to subclass Unpickler to redirect it
 when it goes to look for the array constructor that it expects from
 the Numeric module.


  [Robert's code example snipped]




[Numpy-discussion] shortcut nonzero?

2014-02-25 Thread Alan G Isaac
Is there a shortcut version for finding the first (k) instance(s) of nonzero 
entries?
I'm thinking of Matlab's `find(X,k)`:
http://www.mathworks.com/help/matlab/ref/find.html
Easy enough to write of course.

I thought `flatnonzero` would be the obvious place for this,
but it does not have a `first=k` option.
Is such an option worth suggesting?

Thanks,
Alan Isaac


Re: [Numpy-discussion] shortcut nonzero?

2014-02-25 Thread Yuxiang Wang
Hi Alan,

If you are only dealing with a 1d array, what about:

np.nonzero(your_array)[0][:k]

?

-Shawn


On Tue, Feb 25, 2014 at 2:20 PM, Alan G Isaac alan.is...@gmail.com wrote:

 Is there a shortcut version for finding the first (k) instance(s) of
 nonzero entries?
 I'm thinking of Matlab's `find(X,k)`:
 http://www.mathworks.com/help/matlab/ref/find.html
 Easy enough to write of course.

 I thought `flatnonzero` would be the obvious place for this,
 but it does not have a `first=k` option.
 Is such an option worth suggesting?

 Thanks,
 Alan Isaac




-- 
Yuxiang Shawn Wang
Gerling Research Lab
University of Virginia
yw...@virginia.edu
+1 (434) 284-0836
https://sites.google.com/a/virginia.edu/yw5aj/


Re: [Numpy-discussion] shortcut nonzero?

2014-02-25 Thread Daniele Nicolodi
 On Tue, Feb 25, 2014 at 2:20 PM, Alan G Isaac alan.is...@gmail.com
 mailto:alan.is...@gmail.com wrote:
 
 Is there a shortcut version for finding the first (k) instance(s) of
 nonzero entries?
 I'm thinking of Matlab's `find(X,k)`:
 http://www.mathworks.com/help/matlab/ref/find.html
 Easy enough to write of course.
 
 I thought `flatnonzero` would be the obvious place for this,
 but it does not have a `first=k` option.
 Is such an option worth suggesting?

On 25/02/2014 20:28, Yuxiang Wang wrote:
 Hi Alan,
 If you are only dealing with 1d array, What about:

 np.nonzero(your_array)[0][:k]

I believe that Alan is looking for a solution that does not need to
iterate over the whole array to extract only the first k occurrences.
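One way to get that short-circuiting today is a chunked scan; a hypothetical helper (not an existing NumPy API) that stops as soon as k hits are found:

```python
import numpy as np


def flatnonzero_first(a, k, chunk=4096):
    # Scan the flattened array chunk by chunk, stopping once the
    # first k nonzero indices have been collected.
    a = np.asarray(a).ravel()
    out = []
    for start in range(0, a.size, chunk):
        idx = np.flatnonzero(a[start:start + chunk])
        out.extend((start + idx[:k - len(out)]).tolist())
        if len(out) >= k:
            break
    return np.array(out, dtype=np.intp)
```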

PS: avoid top posting, please.

Cheers,
Daniele


Re: [Numpy-discussion] 1.8.1 release

2014-02-25 Thread Carl Kleffner
I built wheels for 32-bit and 64-bit (Windows, OpenBLAS) and put them here:
https://drive.google.com/folderview?id=0B4DmELLTwYmlX05WSWpYVWJfRjg&usp=sharing
Due to a shortage of time I cannot give much more detailed information before
the 1st of March.

Carl


2014-02-25 1:53 GMT+01:00 Chris Barker chris.bar...@noaa.gov:

 What's up with the OpenBLAS work?

 Any chance that might make it into official binaries? Or is it just too
 fresh?

 Also -- from an off-hand comment in the thread it looked like OpenBLAS
 could provide a library that selects for optimized code at run-time
 depending on hardware -- this would solve the superpack problem with
 wheels, which would be really nice...

 Or did I dream that?

 -Chris



 On Mon, Feb 24, 2014 at 12:40 PM, Matthew Brett 
 matthew.br...@gmail.comwrote:

 Hi,

 On Sun, Feb 23, 2014 at 10:26 AM, Charles R Harris
 charlesr.har...@gmail.com wrote:
  Hi All,
 
  A lot of fixes have gone into the 1.8.x branch and it looks about time
 to do
  a bugfix release. There are a couple of important bugfixes still to
  backport, but if all goes well next weekend, March 1, looks like a good
  target date. So give the current 1.8.x branch a try so as to check that
 it
  covers your most urgent bugfix needs.

 I'd like to volunteer to make a .whl build for Mac.   Is there
 anything special I should do to coordinate with y'all?  It would be
 very good to put it up on pypi for seamless pip install...

 Thanks a lot,

 Matthew




 --

 Christopher Barker, Ph.D.
 Oceanographer

 Emergency Response Division
 NOAA/NOS/ORR(206) 526-6959   voice
 7600 Sand Point Way NE   (206) 526-6329   fax
 Seattle, WA  98115   (206) 526-6317   main reception

 chris.bar...@noaa.gov





[Numpy-discussion] assigning full precision values to longdouble scalars

2014-02-25 Thread Scott Ransom
Hi All,

So I have a need to use longdouble numpy scalars in an application, and
I need to be able to reliably set long-double precision values in them.
Currently I don't see an easy way to do that.  For example:

In [19]: numpy.longdouble(1.12345678901234567890)
Out[19]: 1.1234567890123456912

Note the loss of those last couple digits.

In [20]: numpy.float(1.12345678901234567890)
Out[20]: 1.1234567890123457

In [21]: numpy.longdouble(1.12345678901234567890) - 
numpy.float(1.12345678901234567890)
Out[21]: 0.0

And so internally they are identical.

In this case, the string appears to be converted to a C double (i.e. 
numpy float) before being assigned to the numpy scalar.  And therefore 
it loses precision.

Is there a good way of setting longdouble values?  Is this a numpy bug?

I was considering using a tiny cython wrapper of strtold() to do a 
conversion from a string to a long double, but it seems like this is 
basically what should be happening internally in numpy in the above example!
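One thing worth checking on a given setup is whether passing the value as a string behaves differently from the float literal; this is version- and platform-dependent (some numpy versions also route strings through a C double), so the sketch below makes no promises:

```python
import numpy as np

# The literal is parsed by Python into a C double before numpy sees it.
via_float = np.longdouble(1.12345678901234567890)

# The string may (or may not) be parsed directly with strtold.
via_str = np.longdouble('1.12345678901234567890')

# A nonzero difference means the string route kept extra digits; zero
# means it also truncated to double (or longdouble == double here).
print(via_str - via_float)
```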

Thanks,

Scott

-- 
Scott M. RansomAddress:  NRAO
Phone:  (434) 296-0320   520 Edgemont Rd.
email:  sran...@nrao.edu Charlottesville, VA 22903 USA
GPG Fingerprint: 06A9 9553 78BE 16DB 407B  FFCA 9BFA B6FF FFD3 2989


[Numpy-discussion] Help Understanding Indexing Behavior

2014-02-25 Thread JB
At the risk of igniting a flame war...can someone please help me understand
the indexing behavior of NumPy? I will readily admit I come from a Matlab
background, but I appreciate the power of Python and am trying to learn more.

From a Matlab user's perspective, the behavior of indexing in NumPy seems
very bizarre. For example, if I define an array: 

x = np.array([1,2,3,4,5,6,7,8,9,10])

If I want the first 5 elements, what do I do? Well, I say to myself, Python
is zero-based, whereas Matlab is one-based, so if I want the values 1 - 5,
then I want to index 0 - 4. So I type: x[0:4]

And get in return: array([1, 2, 3, 4]). So I got the first value of my
array, but I did not get the 5th value of the array. So the start index
needs to be zero-based, but the end index needs to be one-based. Or to put
it another way, if I type x[4] and x[0:4], the 4 means different things
depending on which set of brackets you're looking at!

It's hard for me to see this as anything but extremely confusing. Can someone
explain this more clearly? Feel free to post links if you'd like. I know
this has been discussed ad nauseam online; I just haven't found any of the
explanations satisfactory (or sufficiently clear, at any rate).




Re: [Numpy-discussion] Custom floating point representation to IEEE 754 double

2014-02-25 Thread Oscar Benjamin
On 25 February 2014 11:08, Daniele Nicolodi dani...@grinta.net wrote:
 Hello,

 I'm dealing with an instrument that transfers numerical values through
 an RS232 port in a custom (?) floating point representation (56 bits, 4
 bits exponent and 52 bits significand).

 Of course I need to convert this format to a standard IEEE 754 double to
 be able to do anything useful with it.  I came up with this simple code:

   def tofloat(data):
       # 56 bits floating point representation
       # 4 bits exponent
       # 52 bits significand
       d = frombytes(data)
       l = 56
       p = l - 4
       e = int(d >> p) + 17
       v = 0
       for i in xrange(p):
           b = (d >> i) & 0x01
           v += b * pow(2, i - p + e)
       return v

 where frombytes() is a simple function that assembles 7 bytes read from
 the serial port into an integer for easing the manipulation:

   def frombytes(bytes):
       # convert from bytes string
       value = 0
       for i, b in enumerate(reversed(bytes)):
           value += b * (1 << (i * 8))
       return value

 I believe that tofloat() can be simplified a bit, but except
 optimizations (and cythonization) of this code, there is any simpler way
 of achieving this?

My first approach would be that if you have an int and you want it as
bits then you can use the bin() function e.g.:

>>> bin(1234)
'0b10011010010'

You can then slice and reconstruct as ints with

>>> int('0b101', 2)
5

Similarly my first port of call for simplicity would be to just do
float(Fraction(mantissa, 2**exponent)). It doesn't really lend itself
to cythonisation but it should be accurate and easy enough to
understand.
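As a toy illustration of the Fraction route (the numbers here are made up, not from the instrument):

```python
from fractions import Fraction

mantissa, exponent = 5, 3
value = float(Fraction(mantissa, 2 ** exponent))  # exactly 5 / 8
```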


Oscar


Re: [Numpy-discussion] Help Understanding Indexing Behavior

2014-02-25 Thread Julian Taylor
On 26.02.2014 00:04, JB wrote:
 At the risk of igniting a flame war...can someone please help me understand
 the indexing behavior of NumPy? I will readily I admit I come from a Matlab
 background, but I appreciate the power of Python and am trying to learn more. 
 
From a Matlab user's perspective, the behavior of indexing in NumPy seems
 very bizarre. For example, if I define an array: 
 
 x = np.array([1,2,3,4,5,6,7,8,9,10])
 
 If I want the first 5 elements, what do I do? Well, I say to myself, Python
 is zero-based, whereas Matlab is one-based, so if I want the values 1 - 5,
 then I want to index 0 - 4. So I type: x[0:4]
 
 And get in return: array([1, 2, 3, 4]). So I got the first value of my
 array, but I did not get the 5th value of the array. So the start index
 needs to be zero-based, but the end index needs to be one-based. Or to put
 it another way, if I type x[4] and x[0:4], the 4 means different things
 depending on which set of brackets you're looking at!
 
 It's hard for me to see this as anything by extremely confusing. Can someone
 explain this more clearly. Feel free to post links if you'd like. I know
 this has been discussed ad nauseam online; I just haven't found any of the
 explanations satisfactory (or sufficiently clear, at any rate).
 
 

numpy indexing is like conventional C indexing beginning from inclusive
0 to exclusive upper bound: [0, 5[. So the selection length is upper
bound - lower bound.
as a for loop:
for (i = 0; i < 5; i++)
 select(i);

This is the same way Python treats slices.

in comparison one based indexing is usually inclusive 1 to inclusive
upper bound: [1, 4]. So the selection length is upper bound - lower
bound + 1.
for (i = 1; i <= 4; i++)
 select(i);
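The same convention in numpy, as a quick check:

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
first_five = x[0:5]            # half-open: indices 0, 1, 2, 3, 4
assert first_five.tolist() == [1, 2, 3, 4, 5]
assert len(x[2:7]) == 7 - 2    # length is upper bound - lower bound
```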


Re: [Numpy-discussion] Help Understanding Indexing Behavior

2014-02-25 Thread Aaron O'Leary
Think of the python indices as the edges of the boxes, whereas the
matlab indices are the boxes themselves.

matlab: [1][2][3][4]

python: 0[ ]1[ ]2[ ]3[ ]4[ ]5

you need to do 0:5 in python or you won't contain all the boxes!

On 25 February 2014 23:04, JB jonathan.j.b...@gmail.com wrote:
 At the risk of igniting a flame war...can someone please help me understand
 the indexing behavior of NumPy? I will readily I admit I come from a Matlab
 background, but I appreciate the power of Python and am trying to learn more.

 From a Matlab user's perspective, the behavior of indexing in NumPy seems
 very bizarre. For example, if I define an array:

 x = np.array([1,2,3,4,5,6,7,8,9,10])

 If I want the first 5 elements, what do I do? Well, I say to myself, Python
 is zero-based, whereas Matlab is one-based, so if I want the values 1 - 5,
 then I want to index 0 - 4. So I type: x[0:4]

 And get in return: array([1, 2, 3, 4]). So I got the first value of my
 array, but I did not get the 5th value of the array. So the start index
 needs to be zero-based, but the end index needs to be one-based. Or to put
 it another way, if I type x[4] and x[0:4], the 4 means different things
 depending on which set of brackets you're looking at!

 It's hard for me to see this as anything by extremely confusing. Can someone
 explain this more clearly. Feel free to post links if you'd like. I know
 this has been discussed ad nauseam online; I just haven't found any of the
 explanations satisfactory (or sufficiently clear, at any rate).




Re: [Numpy-discussion] Custom floating point representation to IEEE 754 double

2014-02-25 Thread Daniele Nicolodi
On 26/02/2014 00:12, Oscar Benjamin wrote:
 On 25 February 2014 11:08, Daniele Nicolodi dani...@grinta.net wrote:
 Hello,

 I'm dealing with an instrument that transfers numerical values through
 an RS232 port in a custom (?) floating point representation (56 bits, 4
 bits exponent and 52 bits significand).

 Of course I need to convert this format to a standard IEEE 754 double to
 be able to do anything useful with it.  I came up with this simple code:

 
 My first approach would be that if you have an int and you want it as
 bits then you can use the bin() function e.g.:
 bin(1234)
 '0b10011010010'
 
 You can then slice and reconstruct as ints with
 int('0b101', 2)
 5

How would that be helpful?  I believe it is much more computationally
expensive than relying on simple integer math, especially with an eye
to cythonization.

 Similarly my first port of call for simplicity would be to just do
 float(Fraction(mantissa, 2**exponent)). It doesn't really lend itself
 to cythonisation but it should be accurate and easy enough to
 understand.

"Simpler" in my original email has to be read as "involving fewer
operations and thus more efficient", not "simpler to understand"; indeed,
it is already a simple implementation of the definition. What I would like
to know is whether there are some smart shortcuts to make the computation
more efficient.

Cheers,
Daniele



Re: [Numpy-discussion] Help Understanding Indexing Behavior

2014-02-25 Thread Eelco Hoogendoorn
To elaborate on what Julian wrote: it is indeed simply a convention;
slices/ranges in python are from the start to one-past-the-end. The reason
for the emergence of this convention is that C code using iterators looks
most natural this way. This manifests in a simple for (i = 0; i < 5; i++),
but also when specifying a slice of a linked list, for instance. We don't
want to terminate the loop when we are just arriving at the last item; we
want to terminate a loop when we have gone past the last item. Also, the
length of a range is simply end-start under this convention; no breaking
your head over -1 or +1. Such little nudges of elegance pop up all over C
code; and that's where the convention comes from. Same as zero-based
indexing; just a convention, and if you are going to pick a convention you
might as well pick one that minimizes the number of required operations.
Anything but zero-based indexing will require additional integer math to
find an array element, given its base pointer.
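The "length is end - start" and one-past-the-end properties described above can be checked directly:

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])

# with half-open slices, the length is simply stop - start...
assert len(x[2:7]) == 7 - 2

# ...and adjacent slices tile the array with no overlap and no gap
assert np.array_equal(np.concatenate([x[:4], x[4:]]), x)

print("ok")
```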


On Wed, Feb 26, 2014 at 12:15 AM, Julian Taylor 
jtaylor.deb...@googlemail.com wrote:

 On 26.02.2014 00:04, JB wrote:
  At the risk of igniting a flame war...can someone please help me
 understand
  the indexing behavior of NumPy? I will readily admit I come from a
 Matlab
  background, but I appreciate the power of Python and am trying to learn
 more.
 
 From a Matlab user's perspective, the behavior of indexing in NumPy seems
  very bizarre. For example, if I define an array:
 
  x = np.array([1,2,3,4,5,6,7,8,9,10])
 
  If I want the first 5 elements, what do I do? Well, I say to myself,
 Python
  is zero-based, whereas Matlab is one-based, so if I want the values 1 -
 5,
  then I want to index 0 - 4. So I type: x[0:4]
 
  And get in return: array([1, 2, 3, 4]). So I got the first value of my
  array, but I did not get the 5th value of the array. So the start index
  needs to be zero-based, but the end index needs to be one-based. Or to
 put
  it another way, if I type x[4] and x[0:4], the 4 means different things
  depending on which set of brackets you're looking at!
 
  It's hard for me to see this as anything but extremely confusing. Can
 someone
  explain this more clearly. Feel free to post links if you'd like. I know
  this has been discussed ad nauseam online; I just haven't found any of
 the
  explanations satisfactory (or sufficiently clear, at any rate).
 
 

 numpy indexing is like conventional C indexing, beginning from inclusive
 0 to exclusive upper bound: [0, 5). So the selection length is upper
 bound - lower bound.
 as a for loop:
 for (i = 0; i < 5; i++)
  select(i);

 This is the same way Python treats slices.

 in comparison one based indexing is usually inclusive 1 to inclusive
 upper bound: [1, 4]. So the selection length is upper bound - lower
 bound + 1.
 for (i = 1; i <= 4; i++)
  select(i);


Re: [Numpy-discussion] assigning full precision values to longdouble scalars

2014-02-25 Thread Sebastian Berg
On Di, 2014-02-25 at 17:52 -0500, Scott Ransom wrote:
 Hi All,
 
 So I have a need to use longdouble numpy scalars in an application, and 
 I need to be able to reliably set long-double precision values in them. 
   Currently I don't see an easy way to do that.  For example:
 
 In [19]: numpy.longdouble(1.12345678901234567890)
 Out[19]: 1.1234567890123456912
 
 Note the loss of those last couple digits.
 
 In [20]: numpy.float(1.12345678901234567890)
 Out[20]: 1.1234567890123457
 
 In [21]: numpy.longdouble(1.12345678901234567890) - 
 numpy.float(1.12345678901234567890)
 Out[21]: 0.0
 
 And so internally they are identical.
 
 In this case, the string appears to be converted to a C double (i.e. 
 numpy float) before being assigned to the numpy scalar.  And therefore 
 it loses precision.
 
 Is there a good way of setting longdouble values?  Is this a numpy bug?
 

Yes, I think this is a bug (I haven't checked): we use the Python parsing
functions where possible, but for longdouble a Python float (a C double)
is obviously not enough. A hack would be to split it into two:
np.float128(1.1234567890) + np.float128(1234567890e-something)

Though it would be better for the numpy parser to parse the full
precision when given a string.

- Sebastian

 I was considering using a tiny cython wrapper of strtold() to do a 
 conversion from a string to a long double, but it seems like this is 
 basically what should be happening internally in numpy in the above example!
 
 Thanks,
 
 Scott
 


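The split hack Sebastian describes can be sketched as follows. Whether the two results actually differ depends on the platform: on x86 Linux, longdouble is 80-bit extended precision and the split sum carries extra digits, while on platforms where longdouble is just a double (e.g. MSVC builds) both come out identical, so this is only illustrative:

```python
import numpy as np

# Single conversion: the literal is parsed as a C double first, so the
# trailing digits are lost before numpy ever sees them.
single = np.longdouble(1.12345678901234567890)

# Split hack: each half survives the intermediate double conversion well
# enough that the long-double sum recovers more significant digits
# (assuming longdouble is wider than double on this platform).
split = np.longdouble(1.1234567890) + np.longdouble(1234567890e-20)

print(single)
print(split)
```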


Re: [Numpy-discussion] Help Understanding Indexing Behavior

2014-02-25 Thread Daniele Nicolodi
On 26/02/2014 00:04, JB wrote:
 At the risk of igniting a flame war...can someone please help me understand
 the indexing behavior of NumPy? I will readily admit I come from a Matlab
 background, but I appreciate the power of Python and am trying to learn more. 
 
From a Matlab user's perspective, the behavior of indexing in NumPy seems
 very bizarre. For example, if I define an array: 
 
 x = np.array([1,2,3,4,5,6,7,8,9,10])
 
 If I want the first 5 elements, what do I do? Well, I say to myself, Python
 is zero-based, whereas Matlab is one-based, so if I want the values 1 - 5,
 then I want to index 0 - 4. So I type: x[0:4]

The Python slicing syntax a:b defines the interval [a, b), while the
Matlab syntax defines the interval [a, b].

This post from Guido van Rossum (the creator of Python) explains the
choice of zero indexing and of this particular slice notation:

https://plus.google.com/115212051037621986145/posts/YTUxbXYZyfi

I actually find how Python works more straightforward: obtaining the
first n elements of array x is simply x[:n], and obtaining n elements
starting at index i is x[i:i+n].
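Those two idioms, spelled out:

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])

# first n elements: the stop index equals the count
n = 5
print(x[:n])       # -> [1 2 3 4 5]

# n elements starting at index i: no +1/-1 bookkeeping needed
i = 3
print(x[i:i+n])    # -> [4 5 6 7 8]
```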

Cheers,
Daniele



Re: [Numpy-discussion] Help Understanding Indexing Behavior

2014-02-25 Thread Charles R Harris
On Tue, Feb 25, 2014 at 6:01 PM, Daniele Nicolodi dani...@grinta.netwrote:

 On 26/02/2014 00:04, JB wrote:
  At the risk of igniting a flame war...can someone please help me
 understand
  the indexing behavior of NumPy? I will readily admit I come from a
 Matlab
  background, but I appreciate the power of Python and am trying to learn
 more.
 
 From a Matlab user's perspective, the behavior of indexing in NumPy seems
  very bizarre. For example, if I define an array:
 
  x = np.array([1,2,3,4,5,6,7,8,9,10])
 
  If I want the first 5 elements, what do I do? Well, I say to myself,
 Python
  is zero-based, whereas Matlab is one-based, so if I want the values 1 -
 5,
  then I want to index 0 - 4. So I type: x[0:4]

 The Python slicing syntax a:b defines the interval [a, b), while the
 Matlab syntax defines the interval [a, b].

 This post from Guido van Rossum (the creator of Python) explains the
 choice of zero indexing and of this particular slice notation:

 https://plus.google.com/115212051037621986145/posts/YTUxbXYZyfi

 I actually find how Python works more straightforward: obtaining the
 first n elements of array x is simply x[:n], and obtaining n elements
 starting at index i is x[i:i+n].


To enlarge just a bit: as said, Python indexing comes from C, while Matlab
indexing comes from Fortran/matrix conventions. If you look at how Fortran
compiles, it translates to zero-based under the hood, starting with a
pointer to memory one location before the actual array data, so C just got
rid of that little wart.

Chuck