On 15 Feb 2016, at 6:55 pm, Jeff Reback wrote:
>
> https://github.com/pydata/pandas/releases/tag/v0.18.0rc1
Ah, think I forgot about the ‘releases’ pages.
Built on OS X 10.10 + 10.11 with python 2.7.11, 3.4.4 and 3.5.1.
17 errors in the test suite + 1 failure with
On 14 Feb 2016, at 1:53 am, Jeff Reback wrote:
>
> I'm pleased to announce the availability of the first release candidate of
> Pandas 0.18.0.
> Please try this RC and report any issues here: Pandas Issues
> We will be releasing officially in 1-2 weeks or so.
>
Thanks,
> On 31 Jan 2016, at 9:48 am, Sebastian Berg <sebast...@sipsolutions.net> wrote:
>
> On Sat, 2016-01-30 at 20:27 +0100, Derek Homeier wrote:
>> On 27 Jan 2016, at 1:10 pm, Sebastian Berg <
>> sebast...@sipsolutions.net> wrote:
>>>
>>> O
On 27 Jan 2016, at 1:10 pm, Sebastian Berg wrote:
>
> On Wed, 2016-01-27 at 11:19 +, Nadav Horesh wrote:
>> Why is the dot function/method slower than @ on Python 3.5.1? Tested
>> from the latest 1.11 maintenance branch.
>>
>
> The explanation I think is that you
On 27 Jan 2016, at 2:58 AM, Charles R Harris wrote:
>
> FWIW, the maintenance/1.11.x branch (there is no tag for the beta?) builds
> and passes all tests with Python 2.7.11
> and 3.5.1 on Mac OS X 10.10.
>
>
> You probably didn't fetch the tags, if they can't be
Hi Chuck,
> I'm pleased to announce that Numpy 1.11.0b1 is now available on sourceforge.
> This is a source release as the mingw32 toolchain is broken. Please test it
> out and report any errors that you discover. Hopefully we can do better with
> 1.11.0 than we did with 1.10.0 ;)
the tarball
On 16 Dec 2015, at 8:22 PM, Matthew Brett wrote:
>
>>> In [4]: %time testx = np.linalg.solve(testA, testb)
>>> CPU times: user 1min, sys: 468 ms, total: 1min 1s
>>> Wall time: 15.3 s
>>>
>>>
>>> so, it looks like you will need to buy a MKL license separately (which
>>>
On 3 Nov 2015, at 6:03 pm, Chris Barker - NOAA Federal
wrote:
>
> I was more aiming to point out a situation where the NumPy's text file reader
> was significantly better than the Pandas version, so we would want to make
> sure that we properly benchmark any significant
On 9 Apr 2015, at 9:41 pm, Andrew Collette andrew.colle...@gmail.com wrote:
Congrats! Also btw, you might want to switch to a new subject line format
for these emails -- the mention of Python 2.5 getting hdf5 support made me
do a serious double take before I figured out what was going on, and
On 26 Oct 2014, at 02:21 pm, Eelco Hoogendoorn hoogendoorn.ee...@gmail.com
wrote:
I'm not sure why the memory doubling is necessary. Isn't it possible to
preallocate the arrays and write to them? I suppose this might be inefficient
though, in case you end up reading only a small subset of
On 4 Oct 2014, at 08:37 pm, Ariel Rokem aro...@gmail.com wrote:
import numpy as np
np.__version__
'1.9.0'
np.gradient(np.array([[1, 2, 6], [3, 4, 5]], dtype=np.float))
[array([[ 2.,  2., -1.],
        [ 2.,  2., -1.]]),
 array([[-0.5,  2.5,  5.5],
        [ 1. ,  1. ,  1. ]])]
On the
Hi Ariel,
I think that the docstring in 1.9 is fine (has the 1.9 result). The docs
online (for all of numpy) are still on version 1.8, though.
I think that enabling the old behavior might be useful, if only so that I can
write code that behaves consistently across these two versions of
On 26 Aug 2014, at 09:05 pm, Adrian Altenhoff adrian.altenh...@inf.ethz.ch
wrote:
But you are right that the problem with using the first_values, which should
of course be valid,
somehow stems from the use of usecols, it seems that in that loop
for (i, conv) in user_converters.items():
Hi Adrian,
I tried to load data from a csv file into numpy using genfromtxt. I need
only a subset of the columns and want to apply some conversions to the
data. attached is a minimal script showing the error.
In brief, I want to load columns 1,2 and 4. But in the converter
function for the
Hi Adrian,
not sure whether to call it a bug; the error seems to arise before reading
any actual data
(even on reading from an empty string); when genfromtxt is checking the
filling_values used
to substitute missing or invalid data it is apparently testing on default
testing values of 1
On 5 Aug 2014, at 11:27 pm, Matthew Brett matthew.br...@gmail.com wrote:
OSX wheels built and tested and uploaded OK :
http://wheels.scikit-image.org
https://travis-ci.org/matthew-brett/numpy-atlas-binaries/builds/31747958
Will test against the scipy stack later on today.
Built and
On 29 Jul 2014, at 02:43 pm, Robert Kern robert.k...@gmail.com wrote:
On Tue, Jul 29, 2014 at 12:47 PM, José Luis Mietta
joseluismie...@yahoo.com.ar wrote:
Robert, thanks for your help!
Now I have:
* Q nodes (Q stick-stick intersections)
* a list 'NODES'=[(x,y,i,j)_1,,
On 18 Jul 2014, at 01:07 pm, josef.p...@gmail.com wrote:
Are the problems with sending out the messages with the mailing lists?
I'm getting some replies without original messages, and in some threads I
don't get replies, missing part of the discussions.
There seem to be problems with the
Hi all,
I was just having a new look into the mess that is, imo, the support for
automatic line-ending recognition in genfromtxt, and more generally, the
Python file openers.
I am glad at least reading gzip files is no longer entirely broken in
Python 3, but actually detecting in particular
On 30 Jun 2014, at 04:39 pm, Nathaniel Smith n...@pobox.com wrote:
On Mon, Jun 30, 2014 at 12:33 PM, Julian Taylor
jtaylor.deb...@googlemail.com wrote:
genfromtxt and loadtxt need an almost full rewrite to fix the botched
python3 conversion of these functions. There are a couple threads
On 30 Jun 2014, at 04:56 pm, Nathaniel Smith n...@pobox.com wrote:
A real need, which had also been discussed at length, is a truly performant
text IO
function (i.e. one using a compiled ASCII number parser, and optimally also
a more
memory-efficient one), but unfortunately all people
On 30.06.2014, at 23:10, Jeff Reback jeffreb...@gmail.com wrote:
In pandas 0.14.0, generic whitespace IS parsed via the c-parser, e.g.
specifying '\s+' as a separator. Not sure when you were playing last with
pandas, but the c-parser has been in place since late 2012. (version 0.8.0)
On 13.11.2013, at 3:07AM, Charles R Harris charlesr.har...@gmail.com wrote:
Python 2.4 fixes at https://github.com/numpy/numpy/pull/4049.
Thanks for the fixes; builds under OS X 10.5 now as well. There are two test
errors (or maybe a nose problem?):
NumPy version 1.7.2rc1
NumPy is installed
Hi,
On 03.11.2013, at 5:42PM, Julian Taylor jtaylor.deb...@googlemail.com wrote:
I'm happy to announce the release candidate of Numpy 1.7.2.
This is a bugfix only release supporting Python 2.4 - 2.7 and 3.1 - 3.3.
on OS X 10.5, build and tests succeed for Python 2.5-3.3, but Python 2.4.4
On 23.09.2013, at 7:03PM, Charles R Harris charlesr.har...@gmail.com wrote:
I have gotten no feedback on the removal of the numarray and oldnumeric
packages. Consequently the removal will take place on 9/28. Scream now or
never...
The only thing I'd care about is the nd_image subpackage,
On 05.06.2013, at 9:52AM, Ted To rainexpec...@theo.to wrote:
From the list archives (2011), I noticed that there is a bug in the
python gzip module that causes genfromtxt to fail with python 2 but this
bug is not a problem for python 3. When I tried to use genfromtxt and
python 3 with a
On 10.05.2013, at 1:20PM, Sudheer Joseph sudheer.jos...@yahoo.com wrote:
If someone has a quick way I would like to learn from them or get a
reference
where the formatting part is described which was
my intention while posting here. As I have been using fortran I just tried
to use it
On 10.05.2013, at 2:51PM, Daniele Nicolodi dani...@grinta.net wrote:
If you wish to format numpy arrays preceding them with a variable name,
the following is a possible solution that gives the same formatting as
in your example:
import numpy as np
import sys
def format(out, v, name):
Dear Sudheer,
On 07.05.2013, at 11:14AM, Sudheer Joseph sudheer.jos...@yahoo.com wrote:
I need to print few arrays in a tabular form for example below
array IL has 25 elements, is there an easy way to print this as 5x5 comma
separated table? in python
IL=[]
for i in
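A possible sketch of that 5x5 comma-separated printing (assuming IL holds the 25 values; reshape the data and let savetxt handle the rows):

```python
import numpy as np
from io import StringIO

IL = list(range(25))  # stand-in for the 25-element array

buf = StringIO()
# reshape to 5x5 and write comma-separated rows
np.savetxt(buf, np.reshape(IL, (5, 5)), fmt='%g', delimiter=', ')
table = buf.getvalue()
print(table)
```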
On 12.04.2013, at 2:14AM, Charles R Harris charlesr.har...@gmail.com wrote:
On Thu, Apr 11, 2013 at 5:49 PM, Colin J. Williams cjwilliam...@gmail.com
wrote:
On 11/04/2013 7:20 PM, Paul Hobson wrote:
On Wed, Apr 3, 2013 at 4:28 PM, Doug Coleman doug.cole...@gmail.com wrote:
Also, gmail
On 14.02.2013, at 3:55PM, Steve Spicklemire st...@spvi.com wrote:
I got Xcode 4.6 from the App Store. I don't think it's the SDK since the
python 2.7 version builds fine. It's just the 3.2 version that doesn't have
the -I/Library/Frameworks/Python.Framework/Versions/3.2/include/python3.2m in
On 06.12.2012, at 12:40AM, Mark Bakker wrote:
I guess I wasn't explicit enough.
Say I have an array with 100 numbers and I want to write it to a file
with 6 numbers on each line (and hence, only 4 on the last line).
Can I use savetxt to do that?
What other easy tool does numpy have to do
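savetxt writes one row per line, so one hedged workaround is to reshape the values that fill complete 6-wide rows and write the short remainder separately:

```python
import numpy as np
from io import StringIO

a = np.arange(100.0)   # assumed example data: 100 numbers
buf = StringIO()
n = len(a) // 6 * 6    # number of values filling complete 6-wide rows
np.savetxt(buf, a[:n].reshape(-1, 6), fmt='%g')
if n < len(a):
    # write the leftover values (here 4 of them) as one final row
    np.savetxt(buf, a[n:].reshape(1, -1), fmt='%g')
lines = buf.getvalue().splitlines()
```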
On 29.11.2012, at 1:21AM, Robert Love wrote:
I have a file with thousands of lines like this:
Signal was returned in 204 microseconds
Signal was returned in 184 microseconds
Signal was returned in 199 microseconds
Signal was returned in 4274 microseconds
Signal was returned in 202
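One hedged approach: the number is always the fifth whitespace-separated field, so genfromtxt with usecols can pull it out directly:

```python
import numpy as np
from io import StringIO

# stand-in for the actual file
text = StringIO("Signal was returned in 204 microseconds\n"
                "Signal was returned in 184 microseconds\n"
                "Signal was returned in 199 microseconds\n")
# field index 4 (0-based) holds the number of microseconds
times = np.genfromtxt(text, usecols=(4,))
```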
On 27.07.2012, at 3:27PM, Benjamin Root wrote:
I would prefer not to use: from xxx import *,
because of the name pollution.
The name convention that I copied above facilitates avoiding the pollution.
In the same spirit, I've used:
import pylab as plb
But in that same spirit,
On 27 Jul 2012, at 17:58, Tony Yu wrote:
On Fri, Jul 27, 2012 at 11:39 AM, Derek Homeier
de...@astro.physik.uni-goettingen.de wrote:
On 27.07.2012, at 3:27PM, Benjamin Root wrote:
I would prefer not to use: from xxx import *,
because of the name pollution.
The name
On 27.07.2012, at 8:30PM, Fernando Perez fperez@gmail.com wrote:
On Fri, Jul 27, 2012 at 9:43 AM, Derek Homeier
de...@astro.physik.uni-goettingen.de wrote:
thanks, that was exactly what I was looking for - together with
c.TerminalIPythonApp.exec_lines = ['import sys
On 29 May 2012, at 15:00, Mark Bakker wrote:
Why does isscalar('hello') return True?
I thought it would check for a number?
No, it checks for something that is of 'scalar type', which probably can be
translated as 'not equivalent to an array'. Since strings can form numpy
arrays,
I guess
On 29 May 2012, at 15:42, Nathaniel Smith wrote:
I note the fine distinction between np.isscalar( ('hello') ) and
np.isscalar( ('hello'), )...
NB you mean np.isscalar( ('hello',) ), which creates a single-element
tuple. A trailing comma attached to a value in Python normally creates
a
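A quick sketch of the distinction discussed above:

```python
import numpy as np

r1 = np.isscalar('hello')      # a plain string counts as a scalar type
r2 = np.isscalar(('hello',))   # a one-element tuple does not
r3 = np.isscalar(3.14)         # numbers are scalars, of course
```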
On 06.05.2012, at 8:16AM, Paul Anton Letnes wrote:
All tests for 1.6.2rc1 pass on
Mac OS X 10.7.3
python 2.7.2
gcc 4.2 (Apple)
Passing as well on 10.6 x86_64 and on 10.5.8 ppc with
python 2.5.6/2.6.6/2.7.2 Apple gcc 4.0.1,
but I am getting one failure on Lion (same with Python
On 29 Mar 2012, at 13:54, Chao YUE wrote:
how can I check type of array in if condition expression?
In [75]: type(a)
Out[75]: type 'numpy.ndarray'
In [76]: a.dtype
Out[76]: dtype('int32')
a.dtype=='int32'?
this and
a.dtype=='i4'
a.dtype==np.int32
all work. For a more general check
On 29 Mar 2012, at 14:49, Robert Kern wrote:
all work. For a more general check (e.g. if it is any type of integer), you
can do
np.issubclass_(a.dtype.type, np.integer)
I don't recommend using that. Use np.issubdtype(a.dtype, np.integer) instead.
Sorry, you're right, this works the same
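A minimal sketch comparing the specific checks with the recommended general one:

```python
import numpy as np

a = np.zeros(3, dtype=np.int32)

# the specific checks all work:
ok_specific = (a.dtype == 'int32') and (a.dtype == 'i4') and (a.dtype == np.int32)
# recommended general check for any integer kind:
ok_general = np.issubdtype(a.dtype, np.integer)
not_float = not np.issubdtype(a.dtype, np.floating)
```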
On 27.03.2012, at 1:26AM, Olivier Delalleau wrote:
len(M) will give you the number of rows of M.
For columns I just use M.shape[1] myself, I don't know if there exists a
shortcut.
You can use tuple unpacking, if that helps keep your code more concise…
nrow, ncol = M.shape
Cheers,
On 27.03.2012, at 2:07AM, Stephanie Cooke wrote:
I am new to numpy. When I try to use the command array.shape, I get
the following error:
AttributeError: 'list' object has no attribute 'shape'
Is anyone familiar with this type of error?
It means 'array' actually is not one, more
On 20 Mar 2012, at 14:40, Chao YUE wrote:
I would be in agree. thanks!
I use gawk to separate the file into many files by year, then it would be
easier to handle.
anyway, it's not a good practice to produce such huge line txt files
Indeed it's not, but it's also not good practice to
Dear Chao,
Do we have a function in numpy that can automatically shrink a ndarray with
redundant dimension?
like I have a ndarray with shape of (13,1,1,160,1), now I have written a
small function to change the array to dimension of (13,160) [reduce the extra
dimension with length as 1].
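np.squeeze appears to do exactly this; a sketch with the shape from the question:

```python
import numpy as np

a = np.zeros((13, 1, 1, 160, 1))
b = np.squeeze(a)   # drops every length-1 axis
```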
On 23 Jan 2012, at 21:15, Emmanuel Mayssat wrote:
Is there a way to save a structured array in a text file?
My problem is not so much in the saving procedure, but rather in the
'reloading' procedure.
See below
In [3]: import numpy as np
In [4]: r = np.ones(3,dtype=[('name', '|S5'),
On 23 Jan 2012, at 22:07, Derek Homeier wrote:
In [4]: r = np.ones(3, dtype=[('name', '|S5'), ('foo', 'i8'), ('bar', 'f8')])
In [5]: r.tofile('toto.txt',sep='\n')
bash-4.2$ cat toto.txt
('1', 1, 1.0)
('1', 1, 1.0)
('1', 1, 1.0)
cnv = {0: lambda s: s.lstrip('('), -1: lambda s
On 24 Jan 2012, at 01:45, Olivier Delalleau wrote:
Note sure if there's a better way, but you can do it with some custom load
and save functions:
with open('f.txt', 'w') as f:
... f.write(str(x.dtype) + '\n')
... numpy.savetxt(f, x)
with open('f.txt') as f:
... dtype =
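The custom load/save above is truncated in the archive; one possible roundtrip sketch in the same spirit (field values and formats assumed for illustration):

```python
import numpy as np
from io import StringIO

dt = np.dtype([('name', 'U5'), ('foo', 'i8'), ('bar', 'f8')])
r = np.array([('a', 1, 1.0), ('b', 2, 0.5), ('c', 3, 0.25)], dtype=dt)

# save: one record per line, plain whitespace-separated fields
buf = StringIO()
for row in r:
    buf.write('%s %d %r\n' % (row['name'], row['foo'], row['bar']))

# reload with an explicit dtype so the structure is recovered
buf.seek(0)
r2 = np.genfromtxt(buf, dtype=dt, encoding='utf-8')
```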
On 04.01.2012, at 5:10AM, questions anon wrote:
Thanks for your responses but I am still having difficulties with this
problem. Using argmax gives me one very large value and I am not sure what it
is.
There shouldn't be any issues with the shape. The latitude and longitude are
the same
On 26.12.2011, at 7:37PM, Fabian Dill wrote:
I have a problem with a structured numpy array.
I create is like this:
tiles = numpy.zeros((header['width'], header['height'], 3),
dtype=numpy.uint8)
and later on, assignments such as this:
tiles[x, y,0] = 3
Now uint8 is not sufficient anymore,
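When uint8 overflows, one hedged fix is to copy the existing data into a wider dtype with astype (shape reduced here for illustration):

```python
import numpy as np

tiles = np.zeros((4, 4, 3), dtype=np.uint8)
tiles[0, 0, 0] = 3
# existing values are preserved; the range is now 0..65535
wide = tiles.astype(np.uint16)
wide[0, 0, 0] = 1000   # would have wrapped in uint8
```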
Hi Jack,
In order to install scipy, I am trying to install numpy 1.6.1. on GNU/linux
redhat 2.6.18.
But, I got error about fortran compiler.
I have gfortran. I do not have f77/f90/g77/g90.
that's good!
I run :
python setup.py build --fcompiler=gfortran
It works well and tells
On 20.12.2011, at 9:01PM, Jack Bryan wrote:
customize Gnu95FCompiler using config
C compiler: gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3
-Wall -Wstrict-prototypes -fPIC
compile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core
-Inumpy/core/src/npymath
On 06.12.2011, at 11:13PM, Wes McKinney wrote:
This isn't the place for this discussion but we should start talking
about building a *high performance* flat file loading solution with
good column type inference and sensible defaults, etc. It's clear that
loadtable is aiming for highest
On 07.12.2011, at 5:07AM, Olivier Delalleau wrote:
I *think* it may work better if you replace the last 3 lines in your loop by:
a=all_TSFC[0]
if len(all_TSFC) > 1:
N.maximum(a, TSFC, out=a)
Not 100% sure that would work though, as I'm not entirely
On 07.12.2011, at 5:54AM, questions anon wrote:
sorry the 'all_TSFC' is for my other check of maximum using concatenate and
N.max, I know that works so I am comparing it to this method. The only reason
I need another method is for memory error issues.
I like the code I have written so far
On 03.12.2011, at 6:22PM, Robin Kraft wrote:
That does repeat the elements, but doesn't get them into the desired order.
In [4]: print a
[[1 2]
[3 4]]
In [7]: np.tile(a, 4)
Out[7]:
array([[1, 2, 1, 2, 1, 2, 1, 2],
[3, 4, 3, 4, 3, 4, 3, 4]])
In [8]: np.tile(a,
On 03.12.2011, at 6:47PM, Olivier Delalleau wrote:
Ah sorry, I hadn't read carefully enough what you were trying to achieve. I
think the double repeat solution looks like your best option then.
Considering that it is a lot shorter than fixing the tile() result, you
are probably right (I've
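The "double repeat" mentioned above might look like this (assuming the desired order was each element blown up into a 2x2 block, which tile alone cannot give):

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
# repeat along rows, then along columns
b = a.repeat(2, axis=0).repeat(2, axis=1)
```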
On 1 Dec 2011, at 17:39, Charles R Harris wrote:
On Thu, Dec 1, 2011 at 6:52 AM, Thouis (Ray) Jones tho...@gmail.com wrote:
Is this expected behavior?
np.array([-345,4,2,'ABC'])
array(['-34', '4', '2', 'ABC'], dtype='|S3')
Given that strings should be the result, this looks like a
On 1 Dec 2011, at 21:35, Chris Barker wrote:
On 12/1/2011 9:15 AM, Derek Homeier wrote:
np.array((2, 12,0.001+2j), dtype='|S8')
array(['2', '12', '(0.001+2'], dtype='|S8')
- notice the last value is only truncated because it had first been
converted into
a standard complex
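A hedged illustration of the workaround: giving an explicit, wide-enough string dtype sidesteps the truncation seen above:

```python
import numpy as np

# a wide explicit itemsize keeps every converted value intact
b = np.array([-345, 4, 2, 'ABC'], dtype='U8')
```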
Hi,
On 25 Oct 2011, at 21:14, Pauli Virtanen wrote:
25.10.2011 20:29, Matthew Brett wrote:
[clip]
In [7]: (res-1) / 2**32
Out[7]: 8589934591.98
In [8]: np.float((res-1) / 2**32)
Out[8]: 4294967296.0
Looks like a bug in the C library installed on the machine, then.
It's
On 15.10.2011, at 9:21PM, Hugo Gagnon wrote:
I need to print individual elements of a float64 array to a text file.
However in the file I only get 12 significant digits, the same as with:
a = np.zeros(3)
a.fill(1./3)
print a[0]
0.
len(str(a[0])) - 2
12
whereas
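A sketch of getting the full precision out: an explicit 17-significant-digit format round-trips a float64 exactly, whereas str may shorten it:

```python
import numpy as np

a = np.zeros(3)
a.fill(1. / 3)
s_default = str(a[0])        # possibly shortened representation
s_full = '%.17g' % a[0]      # 17 significant digits round-trip float64
```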
On 15.10.2011, at 9:42PM, Aronne Merrelli wrote:
On Sat, Oct 15, 2011 at 1:12 PM, Matthew Brett matthew.br...@gmail.com
wrote:
Hi,
Continuing the exploration of float128 - can anyone explain this behavior?
np.float64(9223372036854775808.0) == 9223372036854775808L
True
Hi Nils,
On 11 Oct 2011, at 16:34, Nils Wagner wrote:
How do I use genfromtxt to read a file with the following
lines
11 2.2592365264892578D+01
22 2.2592365264892578D+01
13 2.669845581055D+00
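A possible converter sketch for those lines: rewrite the Fortran 'D' exponent to 'E' before parsing (the isinstance guard covers numpy versions that hand converters bytes rather than str):

```python
import numpy as np
from io import StringIO

def dconv(s):
    # Fortran double-precision literals use D where Python expects E
    if isinstance(s, bytes):
        return float(s.replace(b'D', b'E'))
    return float(s.replace('D', 'E'))

text = StringIO("11 2.2592365264892578D+01\n"
                "22 2.2592365264892578D+01\n"
                "13 2.669845581055D+00\n")
data = np.genfromtxt(text, converters={1: dconv})
```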
On 11 Oct 2011, at 20:06, Matthew Brett wrote:
Have I missed a fast way of doing nice float to integer conversion?
By nice I mean, rounding to the nearest integer, converting NaN to 0,
inf, -inf to the max and min of the integer range? The astype method
and cast functions don't do what I
On 11.10.2011, at 9:18PM, josef.p...@gmail.com wrote:
In [42]: c = np.zeros(4, np.int16)
In [43]: d = np.zeros(4, np.int32)
In [44]: np.around([1.6,np.nan,np.inf,-np.inf], out=c)
Out[44]: array([2, 0, 0, 0], dtype=int16)
In [45]: np.around([1.6,np.nan,np.inf,-np.inf], out=d)
Out[45]:
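A sketch of the "nice" conversion asked for above: NaN becomes 0, infinities are clipped to the integer range, everything else rounds to nearest (the helper name is made up; for very large integer dtypes the float representation of the bounds is inexact):

```python
import numpy as np

def nice_round(x, dtype=np.int64):
    info = np.iinfo(dtype)
    x = np.asarray(x, dtype=np.float64)
    out = np.where(np.isnan(x), 0.0, np.rint(x))   # NaN -> 0, else round to nearest
    out = np.clip(out, info.min, info.max)          # +/-inf -> dtype max/min
    return out.astype(dtype)

r = nice_round([1.6, np.nan, np.inf, -np.inf], np.int16)
```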
script.
Cheers,
Derek
--
Derek Homeier Centre de Recherche Astrophysique de Lyon
ENS Lyon 46, Allée d'Italie
69364 Lyon Cedex 07, France
-- so you
don't need to de-compress twice.
Absolutely; on compressed data the time for the extra pass jumps up to +30-50%.
Cheers,
Derek
--
Derek Homeier Centre de
--
Derek Homeier Centre de Recherche Astrophysique de Lyon
ENS Lyon 46, Allée d'Italie
69364 Lyon Cedex 07, France +33 1133 47272-8894
On 25.08.2011, at 8:42PM, Chris.Barker wrote:
On 8/24/11 9:22 AM, Anthony Scopatz wrote:
You can use Python pickling, if you do *not* have a requirement for:
I can't recall why, but it seem pickling of numpy arrays has been
fragile and not very performant.
Hmm, the pure Python version
Hi,
as the subject says, the array_* comparison functions currently do not operate
on structured/record arrays. Pull request
https://github.com/numpy/numpy/pull/146
implements these comparisons.
There are two commits, differing in their interpretation whether two
arrays with different field
Hi Hongchun,
On 16 Aug 2011, at 23:19, Hongchun Jin wrote:
I have a question regarding how to trim a string array in numpy.
import numpy as np
x = np.array(['aaa.hdf', 'bbb.hdf', 'ccc.hdf', 'ddd.hdf'])
I expect to trim a certain part of each element in the array, for example
'.hdf',
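np.char.replace offers one way to trim a fixed part from every element (a sketch; slicing via the np.char routines would also work for a fixed-length suffix):

```python
import numpy as np

x = np.array(['aaa.hdf', 'bbb.hdf', 'ccc.hdf', 'ddd.hdf'])
trimmed = np.char.replace(x, '.hdf', '')
```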
On 16 Aug 2011, at 23:51, Hongchun Jin wrote:
Thanks Derek for the quick reply. But I am sorry, I did not make it clear in
my last email. Assume I have an array like
['CAL_LID_L2_05kmCLay-Prov-V3-01.2008-01-01T00-37-48ZD.hdf'
'CAL_LID_L2_05kmCLay-Prov-V3-01.2008-01-01T00-37-48ZD.hdf'
On 10 Aug 2011, at 19:22, Russell E. Owen wrote:
A coworker is trying to load a 1Gb text data file into a numpy array
using numpy.loadtxt, but he says it is using up all of his machine's 6Gb
of RAM. Is there a more efficient way to read such text data files?
The npyio routines (loadtxt as
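One memory-leaner sketch: np.fromiter fills the array incrementally instead of building big intermediate lists the way loadtxt does (the rectangular layout and column count are assumed known):

```python
import numpy as np
from io import StringIO

def values(stream):
    # yield one float at a time so no large temporary lists are built
    for line in stream:
        for tok in line.split():
            yield float(tok)

stream = StringIO("1 2 3\n4 5 6\n7 8 9\n")   # stand-in for the 1 Gb file
ncols = 3
data = np.fromiter(values(stream), dtype=np.float64).reshape(-1, ncols)
```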
On 10 Aug 2011, at 22:03, Gael Varoquaux wrote:
On Wed, Aug 10, 2011 at 04:01:37PM -0400, Anne Archibald wrote:
A 1 Gb text file is a miserable object anyway, so it might be desirable
to convert to (say) HDF5 and then throw away the text file.
+1
There might be concerns about ensuring data
On 7 Aug 2011, at 04:09, Sturla Molden wrote:
On 06.08.2011 11:18, Dag Sverre Seljebotn wrote:
We are excited to announce the release of Cython 0.15, which is a huge
step forward in achieving full Python language coverage as well as
many new features, optimizations, and bugfixes.
This
On 7 Aug 2011, at 22:31, Paul Anton Letnes wrote:
Looks like you have done some great work! I've been using f2py in the past,
but I always liked the idea of cython - gradually wrapping more and more code
as the need arises. I read somewhere that fortran wrapping with cython was
coming -
On 7 Aug 2011, at 23:27, Dag Sverre Seljebotn wrote:
numpy_test.c: In function ‘PyInit_numpy_test’:
numpy_test.c:11611: warning: ‘return’ with no value, in function returning
non-void
numpy_test.cpp: In function ‘PyObject* PyInit_numpy_test()’:
numpy_test.cpp:11611: error:
Hi,
commits c15a807e and c135371e (thus most immediately addressed to Mark, but I
am sending this to the list hoping for more insight on the issue) introduce a
test failure with Python 2.5+2.6 on Mac:
FAIL: test_timedelta_scalar_construction (test_datetime.TestDateTime)
On 2 Aug 2011, at 18:57, Thomas Markovich wrote:
It appears that uninstalling python 2.7 and installing the scipy
superpack with the apple standard python removes the
Did the superpack installer automatically install numpy to the
python2.7 directory when present? Even if so, I reckon you
On 2 Aug 2011, at 19:15, Christopher Barker wrote:
In [32]: s = numpy.array(a, dtype=tfc_dtype)
---
TypeError Traceback (most recent
call last)
/Users/cbarker/ipython console in
On 29.07.2011, at 1:19AM, Stéfan van der Walt wrote:
On Thu, Jul 28, 2011 at 4:10 PM, Anne Archibald
aarch...@physics.mcgill.ca wrote:
Don't forget the everything-looks-like-a-nail approach: make all your
arrays one bigger than you need and ignore element zero.
Hehe, why didn't I think of
On 29.07.2011, at 1:38AM, Anne Archibald wrote:
The can is open and the worms are everywhere, so:
The big problem with one-based indexing for numpy is interpretation.
In python indexing, -1 is the last element of the array, and ranges
have a specific meaning. In a hypothetical one-based
On 07.07.2011, at 7:16PM, Robert Pyle wrote:
.../Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/numeric.py:1922:
RuntimeWarning: invalid value encountered in absolute
return all(less_equal(absolute(x-y), atol + rtol * absolute(y)))
On 30.06.2011, at 7:32PM, Thomas K Gamble wrote:
I'm trying to convert some IDL code to python/numpy and i'm having some
trouble understanding the rules for broadcasting during some operations.
example:
given the following arrays:
a = array((2048,3577), dtype=float)
b =
On 30.06.2011, at 11:57PM, Thomas K Gamble wrote:
np.add(b.reshape(2048,3136) * c, d, out=a[:,:3136])
But to say whether this is really the equivalent result to what IDL does,
one would have to study the IDL manual in detail or directly compare the
output (e.g. check what happens to the
On 27.06.2011, at 6:36PM, Robert Kern wrote:
Some late comments on the note (I was a bit surprised that HDF5 installation
seems to be a serious hurdle to many - maybe I've just been profiting from
the fink build system for OS X here - but I also was not aware that the
current netCDF is
On 21.06.2011, at 7:58PM, Neal Becker wrote:
I think, in addition, that hdf5 is the only one that easily interoperates
with
matlab?
speaking of hdf5, I see:
pyhdf5io 0.7 - Python module containing high-level hdf5 load and save
functions.
h5py 2.0.0 - Read and write HDF5 files from
Moin Denis,
On 20 Jun 2011, at 19:04, denis wrote:
a separate question, have you run genfromtxt( xx.csv.gz ) lately ?
I haven't, and I was not particularly involved with it before this
patch, so this would possibly be better addressed to the list.
On on
On 31 May 2011, at 21:28, Pierre GM wrote:
On May 31, 2011, at 6:37 PM, Derek Homeier wrote:
On 31 May 2011, at 18:25, Pierre GM wrote:
On May 31, 2011, at 5:52 PM, Derek Homeier wrote:
I think stuff like multiple delimiters should have been dealt with
before, as the right place to insert
Hi Ralf,
On 19 Jun 2011, at 12:28, Ralf Gommers wrote:
numpy.test('full')
Running unit tests for numpy
NumPy version 1.6.1rc1
NumPy is installed in /sw/lib/python3.2/site-packages/numpy
Python version 3.2 (r32:88445, Mar 1 2011, 18:28:16) [GCC 4.0.1
(Apple Inc. build 5493)]
nose
On 18 Jun 2011, at 04:48, gary ruben wrote:
Thanks guys - I'm happy with the solution for now. FYI, Derek's
suggestion doesn't work in numpy 1.5.1 either.
For any developers following this thread, I think this might be a nice
use case for genfromtxt to handle in future.
Numpy 1.6.0 and above
Hi,
in my experience numbers loaded from text files very often require
identical conversion for all data (e.g. parsing Fortran-style double
precision, or German vs. English decimals...).
Yet loadtxt and genfromtxt in such cases require the user to construct
a dictionary of converters for
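A hedged workaround in the meantime: build the converters dict programmatically so one function covers every column:

```python
import numpy as np
from io import StringIO

def dconv(s):
    # handle numpy versions that pass bytes as well as those passing str
    s = s if isinstance(s, str) else s.decode()
    return float(s.replace('D', 'E'))   # Fortran-style exponent

ncols = 2
text = StringIO("1.0D+00 2.5D-01\n3.0D+00 7.5D-01\n")
data = np.loadtxt(text, converters={i: dconv for i in range(ncols)})
```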
On 17.06.2011, at 8:05PM, Mark Wiebe wrote:
On Thu, Jun 16, 2011 at 8:18 PM, Derek Homeier
de...@astro.physik.uni-goettingen.de wrote:
On 17.06.2011, at 2:02AM, Mark Wiebe wrote:
ok, that was a lengthy hunt, but it's in printing the string in
make_iso_8601_date:
tmplen
Hi Gary,
On 17.06.2011, at 5:39PM, gary ruben wrote:
Thanks for the hints Olivier and Bruce. Based on them, the following
is a working solution, although I still have that itchy sense that genfromtxt
should be able to do it directly.
import numpy as np
from StringIO import StringIO
a =
On 17.06.2011, at 11:01PM, Olivier Delalleau wrote:
You were just overdoing it by already creating an array with the converter,
this apparently caused genfromtxt to create a structured array from the
input (which could be converted back to an ndarray, but that can prove
tricky as well) -
Hi Mark,
On 16.06.2011, at 5:40PM, Mark Wiebe wrote:
np.datetime64('2011-06-16 02:03:04Z', 'D')
np.datetime64('-06-16','D')
I've tried to track this down in datetime.c, but unsuccessfully so (i.e. I
could not connect it
to any of the dts->year assignments therein).
This is
On 17.06.2011, at 2:02AM, Mark Wiebe wrote:
ok, that was a lengthy hunt, but it's in printing the string in
make_iso_8601_date:
tmplen = snprintf(substr, sublen, "%04" NPY_INT64_FMT, dts->year);
fprintf(stderr, "printed %d[%d]: dts->year=%lld: %s\n", tmplen, sublen,
dts->year, substr);