In another thread, there is a discussion of a workshop on "Taking
NumPy In Stride" for PyData Barcelona.
I think it would be great to have something like that at SciPy in
Austin this year.
Jaime can't make it, and I don't think strides are going to fill a
four hour tutorial, so it would be good
> It seems a generalized ufunc "all_equal" with signature (i),(i)->() and short
> circuit logic once the first non equal element is encountered would be an
> important performance improvement.
How does array_equal() perform?
-CHB
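For scale, a rough sketch of why a short-circuiting gufunc would help; np.array_equal compares every element even when the first pair already differs. The array names and sizes here are made up for illustration, and the bytes trick is only valid for same-dtype, C-contiguous, NaN-free arrays:

```python
import numpy as np

a = np.arange(1_000_000)
b = a.copy()
b[0] = -1  # the arrays differ in the very first element

# np.array_equal still compares every element pair before reducing with all():
full = np.array_equal(a, b)

# For C-contiguous arrays of the same dtype, the C-level bytes comparison can
# stop at the first differing byte (though tobytes() itself still copies each
# buffer; a true short-circuit needs the proposed gufunc):
fast = a.shape == b.shape and a.dtype == b.dtype and a.tobytes() == b.tobytes()
```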
___
NumPy-Discussion
> Now the user is writing back to say, "my test code is fast now, but
> numpy.test() is still about three times slower than <...>". When I watch
> htop as numpy.test() executes, sure enough, it's using one core
>
> * if numpy.test() is supposed to be using multiple cores, why isn't it,
>
> The DyND team would be happy to answer any questions people have about DyND,
> like "what is working and what is not" or "what do we still need to do to hit
> DyND 1.0".
OK, how about:
How does the performance of DyND compare to Numpy for the core
functionality they both support?
- CHB
> An extra ~2 hours of tests / 6-way parallelism is not that big a deal
> in the grand scheme of things (and I guess it's probably less than
> that if we can take advantage of existing binary builds)
If we set up a numpy-testing conda channel, it could be used to cache
binary builds for all he
> Last month, numpy had ~740,000 downloads from PyPI,
Hm, given that Windows and Linux wheels have not been available, then
that's mostly source installs anyway. Or failed installs -- no
shortage of folks trying to pip install numpy on Windows and then
having questions about why it doesn't work.
What does the standard lib do for randrange? I see that randint is closed
on both ends, so order doesn't matter, though if it raises for b
Of
>> Also, you have the problem that there is one PyPi -- so where do you put
>> your nifty wheels that depend on other binary wheels? you may need to fork
>> every package you want to build :-(
>
> Is this a real problem or a theoretical one? Do you know of some
> situation where this wheel to
> If you have some spare cycles, maybe you can open a pull request to add
> np.isclose to the "See Also" section?
That would be great.
Remember that equality for floats is bit-for-bit equality (barring NaN
and inf...).
But you hardly ever actually want to do that with floats.
But probably
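A minimal illustration of the point about bit-for-bit float equality (values chosen for illustration):

```python
import numpy as np

a = 0.1 + 0.2   # nearest-double arithmetic: not exactly 0.3
b = 0.3

exact = (a == b)                 # False: bit-for-bit comparison
close = bool(np.isclose(a, b))   # True: comparison within a tolerance
```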
> There has also been some talk of adding a user type for ieee 128 bit doubles.
> I've looked once for relevant code for the latter and, IIRC, the available
> packages were GPL :(.
This looks like it's BSD-ish:
http://www.jhauser.us/arithmetic/SoftFloat.html
Don't know if it's any good
>> I'm not talking about in place installs, I'm talking about e.g. building a
>> wheel and then tweaking one file and rebuilding -- traditionally build
>> systems go to some effort to keep track of intermediate artifacts and reuse
>> them across builds when possible, but if you always copy the
er would have helped you with that anyway :-)
>
> -CHB
>
>
>
>> for a very large dataset last year, I found that np.genfromtxt() was
>> faster than pandas' read_fwf(). IIRC, pandas' text reading code fell back
>> to pure python for fixed width scenarios.
>>
>
On Thu, Oct 22, 2015 at 5:47 PM, Chris Barker - NOAA Federal <chris.bar...@noaa.gov> wrote:
> I think it would be good to keep the usage to read binary data at least.
Agreed -- it's only the text file reading I'm proposing to deprecate. It
was kind of weird to cram it in there in the first place.
Oh, fromfile() has the same issues.
Chris
Or is there a good alternative to
> This is fine. Just be aware that *naive* datetimes will also have the PEP
> 495 "fold" attribute in Python 3.6. You are free to ignore it, but you will
> lose the ability to round-trip between naive stdlib datetimes and
> numpy.datetime64.
Sigh. I can see why it's there (primarily to
Looks good to me.
This is pretty exciting, actually :-)
-CHB
Sent from my iPhone
> On Oct 7, 2015, at 10:57 PM, Nathaniel Smith wrote:
>
> Hi all,
>
> Now that the governance document is in place, we need to get our legal
> ducks in a row by signing a fiscal sponsorship agreement
One of the use cases that has sneaked in during the last few numpy versions
is object arrays containing numerical arrays where the shapes don't add
up to a rectangular array.
I think that's the wrong way to solve that problem -- we really should have a
"proper" ragged array implementation. But is
This sounds pretty cool -- and I've had a use case. So it would be
nice to get into Numpy.
But: I doubt we'd want OpenMP dependence in Numpy itself.
But maybe a pure Cython non-MP version?
Are we putting Cython in Numpy now? I lost track.
-CHB
Sent from my iPhone
> On Sep 29, 2015, at 7:35
> But discussing who is great community leader, etc. is frankly not obvious to
> me related to numpy governance.
Thank you Sebastian.
Could we please try to get back to the governance issues, without
naming names? There are some specific questions on the table that need
to get hashed out.
Turns out I was passing in numpy arrays that I had typed as np.int.
It worked OK two years ago when I was testing only on 32 bit pythons,
but today I got a bunch of failed tests on 64 bit OS-X -- a np.int is
now a C long!
It has always been C long. It is the C long that varies between
So one more bit of anecdotal evidence:
I just today revived some Cython code I wrote a couple years ago and
haven't tested since.
It wraps a C library that uses a lot of int typed values.
Turns out I was passing in numpy arrays that I had typed as np.int.
It worked OK two years ago when I was
Sent from my iPhone
The disadvantage I see is that some weirder calculations would possibly
work most of the time, but not always;
not sure if you can define a reasonable tolerance here unless it is exact.
You could use a relative tolerance, but you'd still have to set that.
Better to
I _may_ be able to join -- but don't go setting up an alternative
conferencing system just for me.
But I'm planning on being in Austin Tues in any case.
-Chris
Sent from my iPhone
On Jun 29, 2015, at 9:59 PM, Nathaniel Smith n...@pobox.com wrote:
On Fri, Jun 26, 2015 at 2:32 AM, Nathaniel
Thanks for the update Matthew, it's great to see so much activity on this issue.
Looks like we are headed in the right direction --and getting close.
Thanks to all that are putting time into this.
-Chris
On May 15, 2015, at 1:37 PM, Matthew Brett matthew.br...@gmail.com wrote:
Hi,
On Fri,
Are there plans to write a vectorized version for NumPy? :)
np.isclose isn't identical, but IIRC the only difference is the defaults.
There are subtle differences in the algorithm as well. But not enough
that it makes sense to change the numpy one.
The results will be similar in most cases,
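A sketch of where the two diverge, assuming current stdlib and NumPy defaults: math.isclose is symmetric with abs_tol defaulting to 0, while np.isclose is asymmetric in b and adds atol=1e-8 by default, which matters for tiny values:

```python
import math
import numpy as np

a, b = 1e-9, 2e-9  # tiny values that differ by a factor of two

# math.isclose: |a-b| <= max(rel_tol*max(|a|,|b|), abs_tol), abs_tol = 0.0
close_math = math.isclose(a, b)    # False: a 2x relative difference

# np.isclose: |a-b| <= atol + rtol*|b|, atol = 1e-8 by default
close_np = bool(np.isclose(a, b))  # True: the default atol swamps tiny values
```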
But HDF5
additionally has a fixed-storage-width UTF8 type, so we could map to a
NumPy fixed-storage-width type trivially.
Sure -- this is why *nix uses utf-8 for filenames -- it can just be a
char*. But that just punts the problem to client code.
I think a UTF-8 string type does not match the
If you are going to introduce this functionality, please don't call it
np.arr.
I agree, but...
I would suggest calling it something like np.array_simple or
np.array_from_string, but the best choice IMO, would be
np.ndarray.from_string (a static constructor method).
Except the entire point of
On Jul 7, 2014, at 7:28 AM, Sebastian Berg sebast...@sipsolutions.net wrote:
not sure that many use np.r_ or np.c_
I actually really like those ;-)
-Chris
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
On Jul 4, 2014, at 7:02 AM, Phil Elson pelson@gmail.com wrote:
Nice idea. Just a repository of courses would be a great first step.
Yup -- or really even a curated page of links and references.
Maybe we can get first draft of such a thing put together during the BoF.
Feel free to add this
On Jun 27, 2014, at 8:44 PM, Charles R Harris charlesr.har...@gmail.com wrote:
Hi Kyle,
On Tue, Jun 17, 2014 at 2:40 I'd like to propose adopting something like the
Blaze standard for datetime,
+1 for some focused discussion of datetime. This has been lingering
far too long.
-Chris
On Apr 23, 2014, at 8:23 PM, Sankarshan Mudkavi smudk...@uwaterloo.ca
wrote:
I've been quite busy for the past few weeks but I should be much freer
after next week and can pick up on this (fixing the code and actually
implement things).
wonderful! Thanks.
Chris
Cheers,
Sankarshan
On Apr 23,
On Apr 1, 2014, at 4:36 PM, Nathaniel Smith n...@pobox.com wrote:
We could just ship all numpy's extension modules in the same directory
if we wanted. It would be pretty easy to stick some code at the top of
numpy/__init__.py to load them from numpy/all_dlls/ and then slot them
into the
On Mar 13, 2014, at 9:39 AM, Nicolas Rougier nicolas.roug...@inria.fr wrote:
Seems to be related to the masked values:
Good hint -- a masked array keeps the junk values in the main array.
What abs are you using -- it may not be mask-aware. (You want a
numpy abs anyway.)
Also -- I'm not sure
On Feb 28, 2014, at 1:04 AM, Sebastian Berg sebast...@sipsolutions.net wrote:
because the sequence check like that seems standard in python 3.
Whatever happened to duck typing?
Sigh.
-Chris
c = a + b: 3N
c = a + 2*b: 4N
Does python garbage collect mid-expression? I.e. :
C = (a + 2*b) + b
4 or 5 N?
Also note that when memory gets tight, fragmentation can be a problem. I.e.
if two size-n arrays where just freed, you still may not be able to
allocate a size-2n array. This seems to
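On the mid-expression question: in CPython, refcounting frees each temporary as soon as the interpreter is done with it, so C = (a + 2*b) + b peaks at about 4N, not 5N. A sketch of trimming the temporaries with explicit out= arguments (array sizes arbitrary):

```python
import numpy as np

a = np.random.rand(1_000)
b = np.random.rand(1_000)

# c = a + 2*b builds a temporary for 2*b and another for the sum (~4N peak):
c = a + 2 * b

# The same result with a single preallocated buffer and no extra temporaries:
c2 = np.empty_like(a)
np.multiply(b, 2, out=c2)  # c2 = 2*b, written in place
np.add(a, c2, out=c2)      # c2 = a + 2*b, reusing the same buffer
```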
On Jan 22, 2014, at 1:13 PM, Oscar Benjamin oscar.j.benja...@gmail.com wrote:
It's not safe to stop removing the null bytes. This is how numpy determines
the length of the strings in a dtype='S' array. The strings are not
fixed-width but rather have a maximum width.
Exactly--but folks have
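A small demonstration of the null-stripping behaviour described above:

```python
import numpy as np

a = np.array([b'abc', b'x'])  # dtype 'S3': fixed storage width, variable length
s3 = a.dtype                  # |S3

# Trailing NUL padding is stripped on read -- that is how the length of each
# string is recovered from the fixed-width storage:
short = a[1]                  # b'x', stored internally as b'x\x00\x00'

# The flip side: a genuine trailing NUL is indistinguishable from padding.
b_arr = np.array([b'x\x00\x00'], dtype='S3')
roundtrip = b_arr[0]          # b'x', not b'x\x00\x00'
```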
On Jan 21, 2014, at 4:58 PM, David Goldsmith d.l.goldsm...@gmail.com wrote:
OK, well that's definitely beyond my level of expertise.
Well, it's in github--now's as good a time as any to learn github
collaboration...
-Fork the numpy source.
-Create a new file in:
numpy/doc/neps
Point folks
Oops!
Wrong list--darn auto complete!
Sorry about that,
Chris
On Oct 25, 2013, at 5:08 PM, Chris Barker chris.bar...@noaa.gov wrote:
Ned,
I think this fell off the list (I really don't like when reply is not set
to the list...)
On Fri, Oct 25, 2013 at 4:33 PM, Ned Deily n...@acm.org wrote:
Ralf Gommers ralf.gomm...@gmail.com wrote:
but the layout of that page is on
purpose. scipy.org is split into two parts: (a) a SciPy Stack part, and
(b)
a numpy scipy library part. You're looking at the stack part, and the
preferred method to install that stack is a Python distribution.
On Fri, Sep 6, 2013 at 9:19 AM, James Bergstra bergs...@iro.umontreal.ca wrote:
def test_is(self):
a = np.empty(1)
b = np.empty(1)
if a.data is not b.data:
assert id(a.data) != id(b.data) # -- fail
I'm not familiar with the internals, but:
In [27]: a = np.empty(1)
In
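A sketch of what is going on: .data builds a new wrapper object on every attribute access, and the id() of a freed temporary can be reused, which is why the original assert failed. Comparing the buffer pointers from the array interface is the reliable check (variable names follow the test above):

```python
import numpy as np

a = np.empty(1)
b = np.empty(1)

# Every access to .data constructs a fresh memoryview wrapper, so identity
# checks compare the wrappers, not the underlying buffers:
same_wrapper = a.data is a.data          # False: two distinct wrapper objects

# id(a.data) vs id(b.data) is even less reliable: the first wrapper is freed
# immediately, so the second may land at the same address.
# To compare the actual buffers, use the array interface pointer:
ptr_a = a.__array_interface__['data'][0]
ptr_b = b.__array_interface__['data'][0]
distinct_buffers = ptr_a != ptr_b        # True: separate allocations
```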
This is good stuff, but I can't help thinking that if I needed to do an
any/all test on a number of arrays with common and/or combos --
I'd probably write a Cython function to do it.
It could be a bit tricky to make it really general, but not bad for a
couple specific dtypes / use cases.
-just a
FWIW,
You all may know this already, but a long is 64 bit on most 64 bit
platforms, but 32 bit on Windows.
Can we start using stdint.h and int32_t and friends?
-CHB
On Sep 3, 2013, at 5:18 PM, Charles R Harris charlesr.har...@gmail.com
wrote:
On Tue, Sep 3, 2013 at 6:09 PM, Christoph
On Sun, Sep 1, 2013 at 3:55 PM, José Luis Mietta
joseluismie...@yahoo.com.ar wrote:
Given two arbitrary sticks, I need a simple and effective algorithm that
determines whether those two sticks are connected by an 'intersected-sticks' path.
do you mean a test to see if two line segments intersect?
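If a segment-intersection test is indeed what's wanted, a minimal sketch using the standard orientation (cross-product sign) trick; this handles general position only, and collinear overlaps would need extra cases. The function names are mine:

```python
import numpy as np

def orient(p, q, r):
    """Sign of the cross product (q-p) x (r-p): >0 left turn, <0 right, 0 collinear."""
    return np.sign((q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0]))

def segments_intersect(p1, p2, p3, p4):
    """True if segment p1-p2 properly intersects segment p3-p4 (general position)."""
    return (orient(p1, p2, p3) != orient(p1, p2, p4) and
            orient(p3, p4, p1) != orient(p3, p4, p2))

print(segments_intersect((0, 0), (2, 2), (0, 2), (2, 0)))  # True: an X crossing
print(segments_intersect((0, 0), (1, 1), (2, 2), (3, 3)))  # False: collinear, disjoint
```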
On Aug 22, 2013, at 11:57 PM, David Cournapeau courn...@gmail.com wrote:
npy_long is indeed just an alias to C long,
Which means it's likely broken on 32 bit platforms and 64 bit MSVC.
np.long is an alias to python's long:
But python's long is an unlimited type--it can't be mapped to a c type
On Fri, Aug 23, 2013 at 8:11 AM, Sebastian Berg
sebast...@sipsolutions.net wrote:
So this is giving us a 64 bit int--not a bad compromise, but not a
python long--I've got to wonder why the alias is there at all.
It is there because you can't remove it :).
sure we could -- not that we'd want
On Fri, Aug 23, 2013 at 8:15 AM, Charles R Harris
charlesr.har...@gmail.com wrote:
I use 'bBhHiIlLqQ' for the C types. Long varies between 32 and 64 bits,
depending on the platform and the 64 bit convention chosen. The C int is always
32 bits as far as I know.
Well, not in the spec, but in practice,
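A quick way to inspect the widths on a given platform; of these codes, only 'l'/'L' (C long) varies in practice -- 8 bytes on most 64-bit Unix platforms, 4 on Windows and 32-bit platforms:

```python
import numpy as np

# Byte widths of the struct-style type codes on the current platform:
sizes = {code: np.dtype(code).itemsize for code in 'bBhHiIlLqQ'}
print(sizes)
```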
On Thu, Aug 22, 2013 at 12:14 PM, Russell E. Owen ro...@uw.edu wrote:
I'll be interested to learn how you make binary installers for python
3.x because the standard version of bdist_mpkg will not do it. I have
heard of two other projects (forks or variants of bdist_mpkg) that will,
but I have
Hi folks,
I had thought that maybe a numpy.long dtype was a system
(compiler)-native C long.
But on both 32 and 64 bit python on OS-X, it seems to be 64 bit. I'm
pretty sure that on OS-X 32 bit, a C long is 32 bits. (gcc, of course)
I don't have other machines to test on, but as the MS
Ralf,
Thanks for doing all this!
Building binaries for releases is currently quite complex and
time-consuming.
It sure would be nice to clean that up.
For OS X we need two different machines, because we still
provide binaries for OS X 10.5 and PPC machines. I propose to not do this
On Fri, Aug 16, 2013 at 8:20 AM, Alan G Isaac alan.is...@gmail.com wrote:
http://www.python.org/dev/peps/pep-0450/
https://groups.google.com/forum/#!topic/comp.lang.python/IV-3mobU7L0
as numpy is the right way to do this sort of stuff, I think this is
a better argument for a numpy-lite in the
On Thu, Aug 15, 2013 at 12:31 PM, Matthew Brett matthew.br...@gmail.com wrote:
I'm afraid I don't understand the discussion on timezones in
datetime64, but I have the impression that those who do think it needs
an urgent decision and some action for the short term. Is that right,
datetimers?
On Thu, Aug 15, 2013 at 3:16 PM, Matthew Brett matthew.br...@gmail.com wrote:
Chris B - are you the point man on this one? What do you think?
Only the point man in the sense that I'm poking at people to try to
get what I want ;-)
But see my other note.
-Chris
--
Christopher Barker, Ph.D.
On Tue, Aug 13, 2013 at 5:54 PM, Charles R Harris
charlesr.har...@gmail.com wrote:
I wish it were. It seems unreasonably difficult to get constructive
feedback. Chris is pretty much the only one making the attempt and that
discussion petered out.
well, Nathaniel Smith chimed in, and Mark Weibe
On Wed, Aug 14, 2013 at 10:35 AM, Mark Wiebe mwwi...@gmail.com wrote:
Hi Mark, great to have you thinking (and coding) about this!
- Marc also commented that making datetime64 time-zone naive would be
the easy way
I've experimented a little bit with one possibility in this direction within
On Wed, Aug 14, 2013 at 2:12 PM, Ralf Gommers ralf.gomm...@gmail.com wrote:
with the Intel compilers, I have to supply --compiler and --fcompiler. Is
there any way to just do this in site config?
Maybe pip has a way to supply that info but I've never
bothered to look for it - python
On Tue, Aug 13, 2013 at 6:01 AM, Daπid davidmen...@gmail.com wrote:
Alternatively, you could also use seek to put the pointer a certain
distance from the end of the file and start from there,
That's what I'd do if the file(s) may be too large to simply dump into memory.
but this could cause
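A sketch of the seek-from-the-end approach, assuming fixed-width binary records (the file name and sizes are made up); the caveat above applies to anything without a fixed record width, such as text:

```python
import os
import tempfile
import numpy as np

# Hypothetical setup: a flat binary file of fixed-width float64 records.
path = os.path.join(tempfile.gettempdir(), 'tail_demo.bin')
np.arange(100, dtype=np.float64).tofile(path)

n_tail = 10
itemsize = np.dtype(np.float64).itemsize

with open(path, 'rb') as f:
    f.seek(-n_tail * itemsize, os.SEEK_END)  # position relative to end of file
    tail = np.fromfile(f, dtype=np.float64)  # reads only the last 10 records

os.remove(path)
```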
On Mon, Aug 12, 2013 at 2:41 PM, Charles R Harris
charlesr.har...@gmail.com wrote:
Datetime64 will not be modified in this release.
I know there is neither the time nor the will for all that it needs,
but please, please, please, can we yank out the broken timezone
handling at least?
On Jul 23, 2013, at 4:57 PM, Alan G Isaac alan.is...@gmail.com wrote:
Finally, I think (?) everyone (proponents and opponents)
would be happy if .H could provide access to an iterative
view of the conjugate transpose.
Except those of us that don't think numpy needs it at all.
But I'll call
On Jul 23, 2013, at 11:54 PM, Stéfan van der Walt ste...@sun.ac.za wrote:
The .H property has been implemented in Numpy matrices and Scipy's
sparse matrices for many years.
Then we're done. Numpy is an array package, NOT a matrix package, and
while you can implement matrix math with arrays
plan
would be to remove the Matrix class from numpy over two or three
releases, and publish it as a separate package on PyPi.
Anyone willing to take ownership of it? Maybe we should still do it of
not-- at least it will make it clear that it is orphaned.
Though one plus to having matrix in
On Tue, Jul 23, 2013 at 6:09 AM, Pauli Virtanen p...@iki.fi wrote:
The .H property has been implemented in Numpy matrices and Scipy's
sparse matrices for many years.
Then we're done. Numpy is an array package, NOT a matrix package, and
while you can implement matrix math with arrays (and we
On Jul 12, 2013, at 8:51 PM, Brady McCary brady.mcc...@gmail.com wrote:
something to do with an alpha channel being present.
I'd check and see how PIL is storing the alpha channel. If it's RGBA,
then I'd expect it to work.
But if PIL is storing the alpha channel as a separate band, then I'm
I have some code, more or less 500 lines, but it is very messy. All the
functions are in one module, and there is no documentation or testing.
Could anyone give me some advice on organizing my messy code in a clean
style, including test functions as well?
This is a
On Wed, Jun 12, 2013 at 5:10 AM, Nathaniel Smith n...@pobox.com wrote:
Personally I think that overloading np.empty is horribly ugly, will
continue confusing newbies and everyone else indefinitely, and I'm
100% convinced that we'll regret implementing such a warty interface
for something that
On Wed, Jun 12, 2013 at 11:49 AM, Eric Firing efir...@hawaii.edu wrote:
On 2013/06/12 4:18 AM, Nathaniel Smith wrote:
Now imagine a different new version of this page, if we overload
'empty' to add a fill= option. I don't even know how we document that
on this page. The list will remain:
On Wed, Jun 12, 2013 at 12:00 PM, Phil Hodge ho...@stsci.edu wrote:
On 06/12/2013 02:55 PM, Eric Firing wrote:
I would interpret np.filled as a test, asking whether the array is
filled. If the function is supposed to do something related to
assigning values, the name should be a verb.
or a
On Tue, Jun 11, 2013 at 10:22 AM, Nils Becker n.bec...@amolf.nl wrote:
fwiw, homebrew is not macports. it's a more recent replacement that
seems to be taking over gradually.
And then there is (or was) fink
Anyway, it would be really nice if numpy could work well out-of-the
box with the
On Thu, May 23, 2013 at 1:44 PM, Charles R Harris
charlesr.har...@gmail.com wrote:
Just seeking some info here. The file stdint.h was part of the C99 standard
and has types for integers of specified width and thus could be used to
simplify some of the numpy configuration. I'm curious as to
On Wed, May 22, 2013 at 10:07 AM, Nicolas Rougier
U = np.zeros(1, dtype=[('x', np.float32, (4,4))])
U[0] = np.eye(4)
print U[0]
# output: ([[0.0, 1.875, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0,
1.875], [0.0, 0.0, 0.0, 0.0]],)
I get the same thing. Note:
In [86]: U[0].shape
On Wed, May 22, 2013 at 11:15 AM, eat e.antero.ta...@gmail.com wrote:
FWIW, apparently bug related to dtype of np.eye(.)
sort of -- the issue shows up when assigning a float64 array (default
for eye()) to a rank-0 array with a custom dtype that has a single
object field that is an
On Mon, May 20, 2013 at 8:54 AM, Bakhtiyor Zokhidov
bakhtiyor_zokhi...@mail.ru wrote:
what about the following example:
new_ceil(-0.24, 0.25)
-0.0
ceil rounds toward +inf (and floor towards -inf) -- this is exactly
what you want if you're doing what I think you are...(note that
round() rounds
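A sketch of what I take new_ceil to be doing (the helper name and semantics are my guess from the example): round up to the nearest multiple of a step, where "up" means toward +inf:

```python
import numpy as np

def ceil_to(x, step):
    # Hypothetical helper along the lines of the thread's new_ceil:
    # smallest integer multiple of `step` that is >= x.
    return np.ceil(x / step) * step

neg = ceil_to(-0.24, 0.25)  # -0.0: ceil rounds toward +inf, so -0.24 -> 0 steps
pos = ceil_to(0.26, 0.25)   # 0.5: one step past 0.25
```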
On May 18, 2013, at 1:53 AM, Daπid davidmen...@gmail.com wrote:
On 18 May 2013 07:11, Joe
You probably have a different version installed. Grab Python 2.7 from
python.org and install it;
Make sure you match 32/64 bit. The message is a bit out of date,
you'll get the same error if you try to
On Thu, May 9, 2013 at 1:06 AM, Sudheer Joseph sudheer.jos...@yahoo.com wrote:
Thank you Gomersall,
However, writing formatted output looks to be a bit tricky with
python relative to other programming languages.
this is just plain wrong -- working with text in python is as easy, or
On Wed, May 1, 2013 at 6:52 AM, Benjamin Root ben.r...@ou.edu wrote:
How about a tuple: (min, max)?
I am not familiar enough with numpy internals to know which is the better
approach to implement. I kind of feel that the 2xN array approach would be
more flexible in case a user wants all
of this inconsistency
We've hit this with Iris (a met/ocean analysis package - see github), and
have had to add several workarounds.
On 19 April 2013 16:55, Chris Barker - NOAA Federal chris.bar...@noaa.gov
wrote:
Hi folks,
In [264]: np.__version__
Out[264]: '1.7.0'
I just noticed
On Apr 30, 2013, at 6:37 PM, Benjamin Root ben.r...@ou.edu wrote:
I can not think of any reason not to include these functions in v1.8.
+1
Of course, the documentation for np.minmax() was discussed before. My thinking
is that it would return a 2xN array
How about a tuple: (min, max)?
-Chris
On Thu, Apr 25, 2013 at 8:19 AM, Dave Hirschfeld
dave.hirschf...@gmail.comwrote:
Hi All, I think it is time to start the runup to the 1.8 release. I don't
know of any outstanding blockers but if anyone has a PR/issue that they
feel
needs to be in the next Numpy release now is the time to make
On Mon, Apr 29, 2013 at 12:07 PM, Charles R Harris
charlesr.har...@gmail.com wrote:
It would be good to get the utc-everywhere fix for datetime64 in there if
someone has time to look into it.
+1
I've been on vacation, so haven't written up the various notes and
comments as a NEP yet --
On Mon, Apr 29, 2013 at 12:12 PM, Chris Barker - NOAA Federal
chris.bar...@noaa.gov wrote:
It would be good to get the utc-everywhere fix for datetime64 in there if
someone has time to look into it.
I'll see if I can open an issue for the easy fix.
DONE: Issue #3290
--
Christopher
On Apr 18, 2013, at 11:33 PM, Nathaniel Smith n...@pobox.com wrote:
On 18 Apr 2013 01:29, Chris Barker - NOAA Federal chris.bar...@noaa.gov
wrote:
This has been annoying, particular as rank-zero scalars are kind of a
pain.
BTW, while we're on the topic, can you elaborate on this? I tend
On Thu, Apr 18, 2013 at 10:04 PM, K.-Michael Aye kmichael@gmail.com wrote:
On 2013-04-19 01:02:59 +, Benjamin Root said:
So why is there an error in the 2nd case, but no error in the first
case? Is there a logic to it?
When you change a dtype like that in the first one, you aren't
change? I'm trying to decide if this bugs me enough to work on
that.
-Chris
On Fri, Apr 19, 2013 at 8:03 AM, Chris Barker - NOAA Federal
chris.bar...@noaa.gov wrote:
On Apr 18, 2013, at 11:33 PM, Nathaniel Smith n...@pobox.com wrote:
On 18 Apr 2013 01:29, Chris Barker - NOAA Federal chris.bar
On Fri, Apr 19, 2013 at 8:12 AM, Ondřej Čertík ondrej.cer...@gmail.com wrote:
I'm pleased to announce the availability of the final NumPy 1.7.1 release.
Nice work -- but darn! I was hoping a change/fix to the datetime64
timezone handling could get into the next release -- oh well.
When do we
Hi folks,
In [264]: np.__version__
Out[264]: '1.7.0'
I just noticed that deep copying a rank-zero array yields a scalar --
probably not what we want.
In [242]: a1 = np.array(3)
In [243]: type(a1), a1
Out[243]: (numpy.ndarray, array(3))
In [244]: a2 = copy.deepcopy(a1)
In [245]: type(a2), a2
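For reference, a sketch of the rank-zero vs. scalar distinction; note that in modern NumPy deepcopy preserves the ndarray, so the 1.7 behaviour shown above no longer reproduces:

```python
import copy
import numpy as np

a1 = np.array(3)        # rank-zero array: shape (), ndim 0
a2 = copy.deepcopy(a1)  # modern NumPy preserves the ndarray type here
                        # (the 1.7 behaviour quoted above returned a scalar)

# The rank-zero array and the scalar compare equal but are distinct types:
scalar = a1.item()      # plain Python int 3
```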
On Fri, Apr 19, 2013 at 8:46 AM, Nathaniel Smith n...@pobox.com wrote:
Nice work -- but darn! I was hoping a change/fix to the datetime64
timezone handling could get into the next release -- oh well.
That's probably too big a behavioural change to go into a point
release in any case...
On Fri, Apr 19, 2013 at 10:21 AM, Robert Kern robert.k...@gmail.com wrote:
On Fri, Apr 19, 2013 at 8:45 PM, Chris Barker - NOAA Federal
chris.bar...@noaa.gov wrote:
Given that numpy scalars do exist, and have their uses -- I found this
wiki page to remind me:
http://projects.scipy.org/numpy
On Fri, Apr 19, 2013 at 11:31 AM, Nathaniel Smith n...@pobox.com wrote:
On 19 Apr 2013 19:22, Chris Barker - NOAA Federal chris.bar...@noaa.gov
wrote:
Anyway -- going to HDF, or netcdf, or roll-your-own really seems like
overkill for this. I just need something fast and simple and it
doesn't
On Thu, Apr 18, 2013 at 4:04 AM, Robert Kern robert.k...@gmail.com wrote:
np.save() and company (and the NPY format itself) are for arrays, not
for scalars. np.save() uses an np.asanyarray() to coerce its input
which is why your scalar gets converted to a rank-zero array.
Fair enough -- so a
On Wed, Apr 17, 2013 at 6:05 PM, Benjamin Root ben.r...@ou.edu wrote:
Aren't we on standard time at Jan 1st? So, at that date, you would have
been -8.
yes, of course, pardon me for being an idiot.
-Chris
--
Christopher Barker, Ph.D.
Oceanographer
Emergency Response Division
On Wed, Apr 17, 2013 at 11:27 PM, Joris Van den Bossche
jorisvandenboss...@gmail.com wrote:
Anyone tested this on Windows?
On Windows 7, numpy 1.7.0 (Anaconda 1.4.0 64 bit), I don't even get a wrong
answer, but an error:
In [3]: np.datetime64('1969-12-31 00')
Out[3]:
On Thu, Apr 18, 2013 at 8:31 AM, Chris Barker - NOAA Federal
chris.bar...@noaa.gov wrote:
Fair enough -- so a missing feature, not bug -- I'll need to look at
the docs and see if that can be clarified -
All I've found is the docstring docs (which also show up in the Sphinx
docs). I suggest
On Tue, Apr 16, 2013 at 9:32 PM, Charles R Harris
charlesr.har...@gmail.com wrote:
Dude, it was the 60's, no one remembers.
I can't say I remember much from then -- but probably because I was 4
years old, not because of too much partying
-Chris
--
Christopher Barker, Ph.D.
On Tue, Apr 16, 2013 at 8:23 PM, Zachary Ploskey zplos...@gmail.com wrote:
The problem does not appear to exist on Linux with numpy version 1.6.2.
datetime64 was revamped a fair bit between 1.6 and 1.7
something is up here for sure with 1.7
We can be more dramatic about it:
In [5]:
On Wed, Apr 17, 2013 at 9:04 AM, Chris Barker - NOAA Federal
chris.bar...@noaa.gov wrote:
On Tue, Apr 16, 2013 at 8:23 PM, Zachary Ploskey zplos...@gmail.com wrote:
I'd say we need some more unit-tests!
speaking of which, where are the tests? I just did a quick poke at
github, and found:
https
On Tue, Apr 16, 2013 at 3:55 PM, Bob Nnamtrop bob.nnamt...@gmail.com wrote:
pss It would be most handy if datetime64 had a constructor of the form
np.datetime64(year,month,day,hour,min,sec) where these inputs were numpy
arrays and the output would have the same shape as the input arrays (but
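There is no such constructor, but the requested vectorized behaviour can be sketched with datetime64/timedelta64 unit arithmetic (the component values here are made up):

```python
import numpy as np

# Build a datetime64 array from separate year/month/day component arrays:
years  = np.array([1999, 2000, 2000])
months = np.array([12, 1, 2])
days   = np.array([31, 1, 29])

dates = ((years - 1970).astype('datetime64[Y]')    # years since the epoch
         + (months - 1).astype('timedelta64[M]')   # months past January
         + (days - 1).astype('timedelta64[D]'))    # days past the 1st

print(dates)  # ['1999-12-31' '2000-01-01' '2000-02-29']
```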
On Wed, Apr 17, 2013 at 1:09 PM, Bob Nnamtrop bob.nnamt...@gmail.com wrote:
It would seem that before 1970 the dates do not include the time zone
adjustment while after 1970 they do. This is the source of the extra 7
hours.
In [21]: np.datetime64('1970-01-01 00')
Out[21]:
Folks,
I've discovered something interesting (bug?) with numpy scalars and
savez. If I save a numpy scalar, then reload it, it comes back as a
rank-0 array -- similar, but not the same thing:
In [144]: single_value, type(single_value)
Out[144]: (2.0, numpy.float32)
In [145]: np.savez('test.npz',
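A sketch of the round-trip described above, using a temp file; savez coerces its inputs through asanyarray, so the scalar comes back as a rank-0 array, and indexing with an empty tuple recovers the scalar:

```python
import os
import tempfile
import numpy as np

single_value = np.float32(2.0)          # a NumPy scalar
path = os.path.join(tempfile.gettempdir(), 'test_scalar.npz')

np.savez(path, val=single_value)        # scalar is coerced to a rank-0 array
loaded = np.load(path)['val']

print(type(loaded), loaded.shape)       # ndarray with shape () -- not a scalar
print(loaded[()])                       # empty-tuple index gives the scalar back
os.remove(path)
```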
a bit more? I've never tried to use timezone
support with datetime, so I have no idea what goes wrong -- but it
looks reasonable to me. though it really punts on the hard stuff, so
maybe no point.
-Chris
Be Well
Anthony
On Fri, Apr 12, 2013 at 2:57 PM, Chris Barker - NOAA Federal
On Fri, Apr 12, 2013 at 9:52 AM, Riccardo De Maria
riccardodema...@gmail.com wrote:
Not related to leap seconds and physically accurate time deltas, I have just
noticed that SQLite has a nice API:
http://www.sqlite.org/lang_datefunc.html
that one can be inspired from. The source contains a