Hi all,
Just a heads-up that we're planning to upload Linux wheels for numpy
to PyPI soon. Unless there's some objection, these will be using
ATLAS, just like the current Windows wheels, for the same reasons --
moving to something faster like OpenBLAS would be good, but given the
concerns about
On Fri, Mar 4, 2016 at 7:30 PM, wrote:
[...]
> AFAIK, numpy doesn't provide access to BLAS/LAPACK. scipy does. statsmodels
> is linking to the installed BLAS/LAPACK in cython code through scipy. So far
> we haven't seen problems with different versions. I think scipy
On Tue, Feb 23, 2016 at 3:20 PM, Joseph Fox-Rabinovitz
wrote:
> P.S. I would like to turn `sinc` into a `ufunc` at some point if the
> community approves. It would make the computation much cleaner (e.g.,
> in-place `where`) and faster. It would also complement the
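For context, a quick sketch of what ufunc status buys, using np.sin as a stand-in for a hypothetical sinc ufunc: real ufuncs accept `out=` and `where=`, which a plain Python wrapper function like the current np.sinc cannot offer.

```python
import numpy as np

x = np.array([-1.0, 0.5, 2.0])
out = np.full_like(x, -99.0)

# Entries where the mask is False are left untouched in `out`;
# only the masked-in entries are computed, in place.
np.sin(x, out=out, where=x > 0)
```
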
On Tue, Feb 23, 2016 at 12:23 PM, Benjamin Root <ben.v.r...@gmail.com> wrote:
>
> On Tue, Feb 23, 2016 at 3:14 PM, Nathaniel Smith <n...@pobox.com> wrote:
>>
>> Sure, it's totally ambiguous. These are all legal:
>
>
>
> I would argue that except
On Tue, Feb 23, 2016 at 8:45 AM, Benjamin Root wrote:
> but, it isn't really ambiguous, is it? The -1 can only refer to a single
> dimension, and if you ignore the zeros in the original and new shape, the -1
> is easily solvable, right?
Sure, it's totally ambiguous. These
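The ambiguity is easiest to see with a zero-sized array; a small illustration (the error-raising behavior in the zero-size case is what current numpy does, as I understand it):

```python
import numpy as np

# With a nonzero total size, -1 is uniquely determined: 6 / 2 = 3.
a = np.arange(6).reshape(-1, 2)

# With a zero-sized array, -1 next to a 0 dimension is genuinely
# ambiguous (any length for the -1 axis gives 0 total elements),
# so numpy refuses to guess.
try:
    np.zeros(0).reshape(-1, 0)
    resolved = True
except ValueError:
    resolved = False
```
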
Some questions it'd be good to get feedback on:
- any better ideas for naming it than "geomspace"? It's really too bad
that the 'logspace' name is already taken.
- I guess the alternative interface might be something like
np.linspace(start, stop, steps, spacing="log")
what do people think?
-n
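For reference, a sketch of the proposed behavior under the `geomspace` name (the name that was eventually adopted): a constant *ratio* between successive samples, by analogy with linspace's constant difference.

```python
import numpy as np

# Four samples from 1 to 1000 with a constant ratio of 10.
g = np.geomspace(1.0, 1000.0, num=4)
# g -> array([   1.,   10.,  100., 1000.])
```
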
In principle this should work (offer may be void on windows which has its
own special weirdnesses, but I assume you're not on windows). icc and gcc
should both support the same calling conventions and so forth. It sounds
like you're just running into an annoying build system configuration issue
On Sun, Feb 14, 2016 at 11:41 PM, Antony Lee wrote:
> I wonder whether numpy is using the "old" iteration protocol (repeatedly
> calling x[i] for increasing i until StopIteration is reached?) A quick
> timing shows that it is indeed slower.
Yeah, I'm pretty sure that
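One detail worth noting: the legacy protocol actually terminates on IndexError rather than StopIteration. A minimal object that is iterable only through that old path:

```python
class LegacySeq:
    """Iterable only via the old __getitem__ protocol: Python calls
    obj[0], obj[1], ... until IndexError is raised."""
    def __getitem__(self, i):
        if i >= 4:
            raise IndexError(i)
        return i * i

values = list(LegacySeq())  # [0, 1, 4, 9]
```
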
I believe this is basically a groupby, which is one of pandas's core
competencies... even if numpy were to add some utilities for this kind of
thing, then I doubt we'd do as well as them, so you might check whether
pandas works for you first :-)
On Feb 12, 2016 6:40 AM, "Sérgio"
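That said, a small group-by-sum can be written in plain numpy today with np.unique plus the unbuffered np.add.at; a sketch:

```python
import numpy as np

keys = np.array(["a", "b", "a", "c", "b"])
vals = np.array([10.0, 20.0, 30.0, 40.0, 50.0])

# Map each key to a group index, then accumulate with the
# unbuffered ufunc np.add.at (safe with repeated indices).
uniq, inv = np.unique(keys, return_inverse=True)
sums = np.zeros(len(uniq))
np.add.at(sums, inv, vals)
# uniq -> ['a', 'b', 'c'], sums -> [40., 70., 40.]
```
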
Oh wow, yeah, there are tons of uses:
https://github.com/search?q=%22np.iterable%22=simplesearch=Code=%E2%9C%93
Meh, I dunno, maybe we're stuck with it. It's not a major maintenance
burden at least.
-n
On Thu, Feb 11, 2016 at 10:25 AM, Stephan Hoyer wrote:
> We certainly
On Tue, Feb 9, 2016 at 11:37 AM, Julian Taylor
<jtaylor.deb...@googlemail.com> wrote:
> On 09.02.2016 04:59, Nathaniel Smith wrote:
>> On Mon, Feb 8, 2016 at 6:07 PM, Nathaniel Smith <n...@pobox.com> wrote:
>>> On Mon, Feb 8, 2016 at 6:04 PM, Matthew Brett <ma
On Feb 7, 2016 11:49 PM, "Matthew Brett" <matthew.br...@gmail.com> wrote:
>
> Hi Nadav,
>
> On Sun, Feb 7, 2016 at 11:13 PM, Nathaniel Smith <n...@pobox.com> wrote:
> > (This is not relevant to the main topic of the thread, but FYI I think
the
On Mon, Feb 8, 2016 at 6:04 PM, Matthew Brett <matthew.br...@gmail.com> wrote:
> On Mon, Feb 8, 2016 at 5:26 PM, Nathaniel Smith <n...@pobox.com> wrote:
>> On Mon, Feb 8, 2016 at 4:37 PM, Matthew Brett <matthew.br...@gmail.com>
>> wrote:
>> [...]
>>
On Mon, Feb 8, 2016 at 6:07 PM, Nathaniel Smith <n...@pobox.com> wrote:
> On Mon, Feb 8, 2016 at 6:04 PM, Matthew Brett <matthew.br...@gmail.com> wrote:
>> On Mon, Feb 8, 2016 at 5:26 PM, Nathaniel Smith <n...@pobox.com> wrote:
>>> On Mon, Feb 8, 2016 a
On Feb 8, 2016 3:04 AM, "Nils Becker" wrote:
>
[...]
> Very superficial benchmarks (see below) seem devastating for gnu libm. It
seems that openlibm (compiled with gcc -mtune=native -O3) performs really
well and Intel's libm implementation is the best (on my Intel CPU). I
On Mon, Feb 8, 2016 at 4:37 PM, Matthew Brett wrote:
[...]
> I can't replicate the segfault with manylinux wheels and scipy. On
> the other hand, I get a new test error for numpy from manylinux, scipy
> from manylinux, like this:
>
> $ python -c 'import scipy.linalg;
On Feb 6, 2016 12:27 PM, "Matthew Brett" wrote:
>
> Hi,
>
> As some of you may have seen, Robert McGibbon and Nathaniel have just
> guided a PEP for multi-distribution Linux wheels past the approval
> process over on distutils-sig:
>
>
On Feb 7, 2016 15:27, "Charles R Harris" <charlesr.har...@gmail.com> wrote:
>
>
>
> On Sun, Feb 7, 2016 at 2:16 PM, Nathaniel Smith <n...@pobox.com> wrote:
>>
>> On Sun, Feb 7, 2016 at 9:49 AM, Charles R Harris
>> <charlesr.har...@gmail.co
On Sun, Feb 7, 2016 at 4:39 PM, Nils Becker wrote:
> Hi all,
>
> I wanted to know if there is any sane way to build numpy while linking to a
> different implementation of libm?
> A drop-in replacement for libm (e.g. openlibm) should in principle work, I
> guess, but I did
On Sat, Feb 6, 2016 at 9:28 PM, Nadav Horesh wrote:
> Test platform: python 3.4.1 on archlinux x86_64
>
> scipy test: OK
>
> OK (KNOWNFAIL=97, SKIP=1626)
>
>
> numpy tests: Failed on long double and int128 tests, and got one error:
Could you post the complete output from
(This is not relevant to the main topic of the thread, but FYI I think the
recarray issues are fixed in 1.10.4.)
On Feb 7, 2016 11:10 PM, "Nadav Horesh" wrote:
> I have atlas-lapack-base installed via pacman (required by sagemath).
> Since the numpy installation insisted
On Feb 5, 2016 8:28 AM, "Chris Barker - NOAA Federal"
wrote:
>
> > An extra ~2 hours of tests / 6-way parallelism is not that big a deal
> > in the grand scheme of things (and I guess it's probably less than
> > that if we can take advantage of existing binary builds)
>
>
On Fri, Feb 5, 2016 at 1:16 PM, Chris Barker <chris.bar...@noaa.gov> wrote:
> On Fri, Feb 5, 2016 at 9:55 AM, Nathaniel Smith <n...@pobox.com> wrote:
>>
>> > If we set up a numpy-testing conda channel, it could be used to cache
>> > binary builds for all he
On Tue, Feb 2, 2016 at 8:45 AM, Pauli Virtanen wrote:
> 01.02.2016, 23:25, Ralf Gommers wrote:
> [clip]
>> So: it would really help if someone could pick up the automation part of
>> this and improve the stack testing, so the numpy release manager doesn't
>> have to do this.
>
>
On Wed, Feb 3, 2016 at 9:18 PM, Nathaniel Smith <n...@pobox.com> wrote:
>
> On Tue, Feb 2, 2016 at 8:45 AM, Pauli Virtanen <p...@iki.fi> wrote:
> > 01.02.2016, 23:25, Ralf Gommers wrote:
> > [clip]
> >> So: it would really help if so
On Jan 30, 2016 9:27 AM, "Ralf Gommers" <ralf.gomm...@gmail.com> wrote:
>
>
>
> On Fri, Jan 29, 2016 at 11:39 PM, Nathaniel Smith <n...@pobox.com> wrote:
>>
>> It occurs to me that the best solution might be to put together a
.travis.ym
On Jan 29, 2016 9:46 AM, "Andreas Mueller" wrote:
>
> Is this the point when scikit-learn should build against it?
Yes please!
> Or do we wait for an RC?
This is still all in flux, but I think we might actually want a rule that
says it can't become an RC until after we've
AFAIK beta releases act just like regular releases, except that the pip ui
and the pypi ui continue to emphasize the older "real" release.
On Jan 28, 2016 2:03 PM, "Charles R Harris" <charlesr.har...@gmail.com>
wrote:
>
>
> On Thu, Jan 28, 2016 at 2:36 PM, Nat
What does
ldd
/usr/local/python2/2.7.8/x86_64/gcc46/New_build/lib/python2.7/site-packages/numpy/core/multiarray.so
say?
(I'm not a numpy build expert but that should at least give a hint at which
kind of brokenness you're running into... I'm also somewhat curious why
you're using such an
On Thu, Jan 28, 2016 at 2:23 PM, Ralf Gommers <ralf.gomm...@gmail.com> wrote:
>
>
> On Thu, Jan 28, 2016 at 11:03 PM, Charles R Harris
> <charlesr.har...@gmail.com> wrote:
>>
>>
>>
>> On Thu, Jan 28, 2016 at 2:36 PM, Nathaniel Smith <n...@pobox
Maybe we should upload to pypi? This allows us to upload binaries for osx
at least, and in general will make the beta available to anyone who does
'pip install --pre numpy'. (But not regular 'pip install numpy', because
pip is clever enough to recognize that this is a prerelease and should not
be
On Jan 28, 2016 3:25 PM, "Ralf Gommers" <ralf.gomm...@gmail.com> wrote:
>
>
>
> On Thu, Jan 28, 2016 at 11:57 PM, Nathaniel Smith <n...@pobox.com> wrote:
>>
>> On Thu, Jan 28, 2016 at 2:23 PM, Ralf Gommers <ralf.gomm...@gmail.com> wrote:
>
+1
On Wed, Jan 27, 2016 at 10:26 AM, Sebastian Berg wrote:
> Hi all,
>
> in my PR about warnings suppression, I currently also have a commit
> which bumps the warning stacklevel to two (or three), i.e. use:
>
> warnings.warn(..., stacklevel=2)
>
> (almost)
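A quick illustration of what stacklevel=2 does, namely attributing the warning to the caller's line rather than to the warnings.warn() call itself:

```python
import warnings

def deprecated_api():
    # stacklevel=2 makes the reported location point at the line
    # that *called* deprecated_api(), not at this warn() line.
    warnings.warn("use new_api instead", DeprecationWarning, stacklevel=2)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    deprecated_api()
```
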
On Mon, Jan 25, 2016 at 2:37 PM, G Young wrote:
> Hello all,
>
> I currently have a branch on my fork (not PR) where I am experimenting with
> running Appveyor CI via Virtualenv instead of Conda. I have build running
> here. What do people think of using Virtualenv (as we
On Sun, Jan 24, 2016 at 4:46 PM, Juan Nunez-Iglesias wrote:
>> Yeah, that is a real use case. I am not planning to remove the option,
>> but it would be as a `readonly` keyword argument, which means you would
>> need to make the code depend on the numpy version if you require
On Sat, Jan 23, 2016 at 1:25 PM, Sebastian Berg
wrote:
>
> Hi all,
>
> I have just opened a PR, to make as_strided writeonly (as default). The
I think you meant readonly :-)
> reasoning for this change is that an `as_strided` array often has self
> overlapping
On Thu, Jan 21, 2016 at 10:24 PM, Jonathan J. Helmus <jjhel...@gmail.com> wrote:
> On 1/21/2016 8:32 PM, Nathaniel Smith wrote:
>>
>>
>> > Does this apply in any way to the .data attribute in scipy.sparse
>> > matrices?
>>
>> Nope!
>>
>
On Thu, Jan 21, 2016 at 4:05 PM, Sebastian Berg
wrote:
> Hi all,
>
> should we try to set FutureWarnings to errors in dev tests? I am
> seriously annoyed by FutureWarnings getting lost all over for two
> reasons. First, it is hard to impossible to find even our own
So it turns out that ndarray.data supports assignment at the Python
level, and what it does is just assign to the ->data field of the
ndarray object:
https://github.com/numpy/numpy/blob/master/numpy/core/src/multiarray/getset.c#L325
This kind of assignment has been deprecated at the C level since
Warnings filters can be given a regex matching the warning text, I think?
On Jan 21, 2016 5:00 PM, "Sebastian Berg" <sebast...@sipsolutions.net>
wrote:
> On Thu, 2016-01-21 at 16:51 -0800, Nathaniel Smith wrote:
> > On Thu, Jan 21, 2016 at 4:44 PM, Sebastian Berg
>
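A sketch of that regex-based filtering: the `message` argument of warnings.filterwarnings is a regular expression matched against the start of the warning text, so specific warnings can be escalated to errors while others pass through.

```python
import warnings

with warnings.catch_warnings():
    # Escalate only warnings whose text starts with a match
    # for this pattern; everything else is unaffected.
    warnings.filterwarnings("error", message=r"numpy.*deprecated")
    try:
        warnings.warn("numpy ndarray.data assignment is deprecated",
                      FutureWarning)
        escalated = False
    except FutureWarning:
        escalated = True
```
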
On Thu, Jan 21, 2016 at 4:44 PM, Sebastian Berg
<sebast...@sipsolutions.net> wrote:
>> On Thu, 2016-01-21 at 16:15 -0800, Nathaniel Smith wrote:
>> On Thu, Jan 21, 2016 at 4:05 PM, Sebastian Berg
>> <sebast...@sipsolutions.net> wrote:
>> > Hi all,
>>
On Jan 21, 2016 6:17 PM, "Juan Nunez-Iglesias" wrote:
>
> Does this apply in any way to the .data attribute in scipy.sparse
matrices?
Nope!
-n
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
On Jan 15, 2016 8:36 AM, "Li Jiajia" wrote:
>
> Hi all,
> I’m a PhD student in Georgia Tech. Recently, we’re working on a survey
paper about tensor algorithms: basic tensor operations, tensor
decomposition and some tensor applications. We are making a table to
compare the
On Jan 15, 2016 10:23 AM, "Chris Barker" wrote:
>
[...]
> So here is the problem I want to solve:
>
> > pip install numpy scipy matplotlib scikit-learn scikit-image pandas h5py
>
> last I checked, each of those is self-contained, except for python-level
dependencies, most
On Fri, Jan 15, 2016 at 12:09 PM, Chris Barker <chris.bar...@noaa.gov> wrote:
> On Fri, Jan 15, 2016 at 11:21 AM, Nathaniel Smith <n...@pobox.com> wrote:
>>
>> Sure. Someone's already packaged those for conda, and no one has packaged
>> them for pypi, so
On Fri, Jan 15, 2016 at 11:56 AM, Travis Oliphant wrote:
>
[...]
> I still submit that this is not the best use of time. Conda *already*
> solves the problem. My sadness is that people keep working to create an
> ultimately inferior solution rather than just help make a
I'd try storing the data in hdf5 (probably via h5py, which is a more
basic interface without all the bells-and-whistles that pytables
adds), though any method you use is going to be limited by the need to
do a seek before each read. Storing the data on SSD will probably help
a lot if you can
Yeah, that does look like a bug.
On Thu, Jan 14, 2016 at 4:48 PM, Andrew Nelson wrote:
> Hi all,
> I think there is an inconsistency with np.isclose when I compare two
> numbers:
>
> >>> np.isclose(0, np.inf)
> array([False], dtype=bool)
>
> >>> np.isclose(0, 1)
> False
>
> The
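To restate the report: the two calls agree on the value (neither pair is close) but, at the time, disagreed on the return type (0-d array vs. scalar) depending on whether an input was non-finite; later releases fixed the type inconsistency. A value-level check:

```python
import numpy as np

# Both results are falsy regardless of whether they come back
# as a scalar bool or a 0-d/1-element array.
r1 = np.isclose(0, np.inf)
r2 = np.isclose(0, 1)
```
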
On Thu, Jan 14, 2016 at 2:13 PM, Stephan Hoyer wrote:
> On Thu, Jan 14, 2016 at 8:26 AM, Travis Oliphant
> wrote:
>>
>> I don't know enough about xray to know whether it supports this kind of
>> general labeling to be able to build your entire
On Tue, Jan 12, 2016 at 9:18 PM, Charles R Harris
wrote:
>
> Hi All,
>
> I've opened issue #7002, reproduced below, for discussion.
>>
>> Numpy umath has a file scalarmath.c.src that implements scalar arithmetic
>> using special functions that are about 10x faster than
On Jan 11, 2016 3:54 PM, "Chris Barker" wrote:
>
> On Mon, Jan 11, 2016 at 11:02 AM, David Cournapeau
wrote:
>>>
>>> If we get all that worked out, we still haven't made any progress
toward the non-standard libs that aren't python. This is the big
On Mon, Jan 11, 2016 at 6:06 AM, Robert McGibbon wrote:
> I started working on a tool for checking linux wheels for "manylinux"
> compatibility, and fixing them up if possible, based on the same ideas as
> Matthew Brett's delocate for OS X. Current WIP code, if anyone wants to
On Jan 9, 2016 04:12, "Julian Taylor" <jtaylor.deb...@googlemail.com> wrote:
>
> On 09.01.2016 04:38, Nathaniel Smith wrote:
> > On Fri, Jan 8, 2016 at 7:17 PM, Nathan Goldbaum <nathan12...@gmail.com>
wrote:
> >> Doesn't building on CentOS 5 also mean us
On Sat, Jan 9, 2016 at 3:52 AM, Robert McGibbon wrote:
> Hi all,
>
> I went ahead and tried to collect a list of all of the libraries that could
> be considered to constitute the "base" system for linux-64. The strategy I
> used was to leverage off the work done by the folks
On Jan 9, 2016 10:09, "Matthew Brett" wrote:
>
> Hi Sandro,
>
> On Sat, Jan 9, 2016 at 4:44 AM, Sandro Tosi wrote:
> >> I wrote a page on using pip with Debian / Ubuntu here :
> >> https://matthew-brett.github.io/pydagogue/installing_on_debian.html
> >
On Sat, Jan 9, 2016 at 8:58 PM, Matthew Brett <matthew.br...@gmail.com> wrote:
> On Sat, Jan 9, 2016 at 8:49 PM, Nathaniel Smith <n...@pobox.com> wrote:
>> On Jan 9, 2016 10:09, "Matthew Brett" <matthew.br...@gmail.com> wrote:
>>>
>>> Hi Sand
On Fri, Jan 8, 2016 at 4:31 PM, Robert McGibbon wrote:
>> Both Anaconda and Canopy build on a base default Linux system so that
>> the built binaries will work on many Linux systems.
>
> I think the base linux system is CentOS 5, and from my experience, it seems
> like this
On Fri, Jan 8, 2016 at 7:17 PM, Nathan Goldbaum wrote:
> Doesn't building on CentOS 5 also mean using a quite old version of gcc?
Yes. IIRC CentOS 5 ships with gcc 4.4, and you can bump that up to gcc
4.8 by using the Redhat Developer Toolset release (which is gcc +
On Fri, Jan 8, 2016 at 7:41 PM, Robert McGibbon wrote:
>> Doesn't building on CentOS 5 also mean using a quite old version of gcc?
>
> I have had pretty good luck using the (awesomely named) Holy Build Box,
> which is a CentOS 5 docker image with a newer gcc version installed
On Jan 8, 2016 20:08, "Robert McGibbon" wrote:
>
> > Continuum and Enthought both have a whole list of packages beyond
> glibc that are safe enough to link to, including a bunch of ones that
> would be big pains to statically link everywhere (libX11, etc.).
> That's the useful
On Thu, Jan 7, 2016 at 6:18 PM, Yuxiang Wang wrote:
> Dear all,
>
> I know that in Windows, we should use either Christoph's package or
> Anaconda for MKL-optimized numpy. In Linux, the fortran compiler issue
> is solved, so should I directly used pip install numpy to get
On Tue, Jan 5, 2016 at 4:55 PM, Marten van Kerkwijk
wrote:
> Hi Chuck, others,
>
> A propos __numpy_ufunc__, what is the current status? Is it still the
> undetermined result of the monster-thread
> (https://github.com/numpy/numpy/issues/5844 -- just found it again by
>
On Thu, Dec 31, 2015 at 12:10 PM, Benjamin Root wrote:
> TBH, I wouldn't have expected it to work, but now that I see it, it does
> make some sense. I would have thought that it would error out as being
> ambiguous (prepend? append?). I have always used ellipses to make it
>
On Dec 20, 2015 12:45 PM, "Ralf Gommers" <ralf.gomm...@gmail.com> wrote:
>
>
>
> On Sun, Dec 20, 2015 at 9:34 PM, Nathaniel Smith <n...@pobox.com> wrote:
>
>> On Dec 20, 2015 12:23 PM, "Ralf Gommers" <ralf.gomm...@gmail.com> wrote:
>
On Sun, Dec 20, 2015 at 7:34 PM, Charles R Harris
<charlesr.har...@gmail.com> wrote:
>
>
> On Sun, Dec 20, 2015 at 8:11 PM, Nathaniel Smith <n...@pobox.com> wrote:
>>
>> On Sun, Dec 20, 2015 at 6:43 PM, Charles R Harris
>> <charlesr.har...@gmail.com
On Sun, Dec 20, 2015 at 6:43 PM, Charles R Harris
<charlesr.har...@gmail.com> wrote:
>
>
> On Sun, Dec 20, 2015 at 5:48 PM, Nathaniel Smith <n...@pobox.com> wrote:
>>
[...]
>> Also, am I correct that these are win64 builds only? Anyone know if it
>> would be
On Sun, Dec 20, 2015 at 4:11 PM, Charles R Harris
wrote:
>
>
>
> On Sun, Dec 20, 2015 at 4:27 PM, Charles R Harris
> wrote:
>>
>>
>>
>> On Sun, Dec 20, 2015 at 4:22 PM, Charles R Harris
>> wrote:
>>>
>>>
>>>
>>>
On Dec 20, 2015 12:23 PM, "Ralf Gommers" wrote:
>
>
>
> On Sun, Dec 20, 2015 at 8:23 PM, Charles R Harris <
charlesr.har...@gmail.com> wrote:
>>
>>
>>
>> On Sun, Dec 20, 2015 at 11:08 AM, Charles R Harris <
charlesr.har...@gmail.com> wrote:
>>>
>>>
>>>
>>> On Sun, Dec 20,
Hi all,
I'm wondering what people think of the idea of us (= numpy) stopping
providing our "official" win32 builds (the "superpack installers"
distributed on sourceforge) starting with the next release.
These builds are:
- low quality: they're linked to an old & untuned build of ATLAS, so
On Fri, Dec 18, 2015 at 1:25 PM, Ryan R. Rosario wrote:
> Hi,
>
> I have a matrix whose entries I must raise to a certain power and then
> normalize by row. After I do that, when I pass some rows to
> numpy.random.choice, I get a ValueError: probabilities do not sum to 1.
>
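The usual workaround is to re-normalize the row in float64 immediately before the call, since round-off from the power step can leave the sum slightly off 1.0 and trip numpy.random.choice's tolerance check. A sketch (the array size and exponent here are made up for illustration):

```python
import numpy as np

rng = np.random.RandomState(0)
row = rng.rand(5) ** 10          # raise entries to a power
p = row / row.sum()              # normalize the row

# Re-normalize in float64 right before sampling so that
# p.sum() is as close to 1.0 as the dtype allows.
p = np.asarray(p, dtype=np.float64)
p = p / p.sum()
picks = rng.choice(5, size=3, p=p)
```
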
On Dec 18, 2015 2:22 PM, "Ian Henriksen" <
insertinterestingnameh...@gmail.com> wrote:
>
> An appveyor setup is a great idea. An appveyor build matrix with the
> various supported MSVC versions would do a lot more to prevent
> compatibility issues than periodically building installers with old
What operating system are you on and how did you install numpy? From a
package manager, from source, by downloading from somewhere...?
On Dec 16, 2015 9:34 AM, "Edward Richards"
wrote:
> I recently did a conceptual experiment to estimate the computational time
>
On Fri, Dec 11, 2015 at 11:32 PM, Juan Nunez-Iglesias
wrote:
> Nathaniel,
>
>> IMO this is better than making np.array(iter) internally call list(iter)
>> or equivalent
>
> Yeah but that's not the only option:
>
> from itertools import chain
> def
On Dec 12, 2015 10:53 AM, "Mathieu Dubois"
wrote:
>
> On 11/12/2015 11:22, Sturla Molden wrote:
>>
>> Mathieu Dubois wrote:
>>
>>> The point is precisely that, you can't do memory mapping with Npz files
>>> (while it works
Constructing an array from an iterator is fundamentally different from
constructing an array from an in-memory data structure like a list,
because in the iterator case it's necessary to either use a
single-pass algorithm or else create extra temporary buffers that
cause much higher memory
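For the single-pass case, np.fromiter is the relevant tool; a small sketch (the `count` argument lets it preallocate exactly instead of repeatedly growing a buffer):

```python
import numpy as np

squares = (i * i for i in range(5))
# Consumes the iterator in one pass, directly into the array buffer.
a = np.fromiter(squares, dtype=np.int64, count=5)
# a -> array([ 0,  1,  4,  9, 16])
```
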
On Dec 11, 2015 7:46 AM, "Charles R Harris"
wrote:
>
>
>
> On Fri, Dec 11, 2015 at 6:25 AM, Thomas Baruchel wrote:
>>
>> From time to time it is asked on forums how to extend precision of
computation on Numpy array. The most common answer
>> given to
On Dec 10, 2015 4:11 AM, "Andy Ray Terrel" wrote:
>
> That's essentially what datashape did over in the blaze ecosystem. It
gets a bit fancier to support ragged arrays and optional types.
>
> http://datashape.readthedocs.org/en/latest/
IIUC this is a much more modest
On Thu, Dec 10, 2015 at 4:05 PM, Jacopo Sabbatini
wrote:
> Hi,
>
> I'm experiencing random segmentation faults from numpy. I have generated a
> core dumped and extracted a stack trace, the following:
>
> #0 0x7f3a8d921d5d in getenv () from /lib64/libc.so.6
>
On Dec 9, 2015 7:23 AM, "Nadav Horesh" wrote:
>
> Is it possible that recarrays are slow again?
Anything is possible, but we haven't had any reports of this yet, so if
you're seeing something weird then please elaborate?
-n
On Mon, Dec 7, 2015 at 12:42 PM, Peter Creasey
wrote:
>>> >
>>> > Is the interp fix in the google pipeline or do we need a workaround?
>>> >
>>>
>>> Oooh, if someone is looking at changing interp, is there any chance
>>> that fp could be extended to take complex128
On Dec 7, 2015 3:41 AM, "Sydney Shall" wrote:
> In fact, biological evolution does just the opposite. [...]
Hi all,
Can I suggest that any further follow-ups to this no-doubt fascinating
discussion be taken off-list? No need to acknowledge or apologize or
anything, just
On Dec 6, 2015 6:03 PM, "Peter Creasey"
wrote:
>
> >
> > Is the interp fix in the google pipeline or do we need a workaround?
> >
>
> Oooh, if someone is looking at changing interp, is there any chance
> that fp could be extended to take complex128 rather than just
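Until then, the standard workaround is to interpolate the real and imaginary parts separately, which is equivalent for piecewise-linear interpolation; a sketch:

```python
import numpy as np

xp = np.array([0.0, 1.0, 2.0])
fp = np.array([0 + 0j, 1 + 2j, 2 + 4j])   # complex sample values
x = np.array([0.5, 1.5])

# np.interp only needs to handle real fp; recombine afterwards.
result = np.interp(x, xp, fp.real) + 1j * np.interp(x, xp, fp.imag)
# result -> array([0.5+1.j, 1.5+3.j])
```
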
On Fri, Dec 4, 2015 at 1:27 AM, David Cournapeau wrote:
> I would be in favour of dropping 3.3, but not 2.6 until it becomes too
> cumbersome to support.
>
> As a data point, as of april, 2.6 was more downloaded than all python 3.X
> versions together when looking at pypi
On Nov 30, 2015 12:19 PM, "Sebastian Berg"
wrote:
>
> > On Mon, 2015-11-30 at 18:42 +0100, Lluís Vilanova wrote:
> > Hi,
> >
> > TL;DR: There's a pending pull request deprecating some behaviour I find
> >unexpected. Does anyone object?
> >
> > Some time ago I
On Nov 24, 2015 11:57 AM, "John Kirkham" wrote:
>
> Takes an array and tacks on arbitrary dimensions on either side, which is
returned as a view always. Here are the relevant features:
>
> * Creates a view of the array that has the dimensions before and after
tacked on to it.
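As a point of comparison, prepending and appending length-1 axes via indexing with np.newaxis already always yields a view; a sketch:

```python
import numpy as np

a = np.zeros((2, 3))
# Indexing with np.newaxis tacks length-1 axes on either side
# and never copies: the result is a view of `a`.
b = a[np.newaxis, ..., np.newaxis]
```
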
On Nov 13, 2015 10:06 AM, "Charles R Harris"
wrote:
>
> Hi All,
>
> I think 1.10.0 and 1.10.1 are sufficiently buggy that they should be
removed from circulation as soon as 1.10.2 comes out. The inner product bug
for non contiguous arrays is particularly egregious. It
On Mon, Nov 9, 2015 at 4:43 PM, Charles R Harris
<charlesr.har...@gmail.com> wrote:
>
>
> On Sun, Nov 8, 2015 at 8:43 PM, Nathaniel Smith <n...@pobox.com> wrote:
>>
>> On Nov 8, 2015 6:00 PM, "Eric Firing" <efir...@hawaii.edu> wrote:
>>
On Nov 8, 2015 6:00 PM, "Eric Firing" wrote:
>
> I also prefer that there be a single convention: either the "out" kwarg
is the end of the every signature, or it is the first kwarg in every
signature. It's a very special and unusual kwarg, so it should have a
standard
On Sat, Nov 7, 2015 at 1:18 PM, aerojockey wrote:
> Hello,
>
> Recently I made some changes to a program I'm working on, and found that the
> changes made it four times slower than before. After some digging, I found
> out that one of the new costs was that I added
On Mon, Nov 2, 2015 at 5:57 PM, Nathaniel Smith <n...@pobox.com> wrote:
> On Sun, Nov 1, 2015 at 3:16 PM, Ralf Gommers <ralf.gomm...@gmail.com> wrote:
>> 2. ``pip install .`` silences build output, which may make sense for some
>> usecases, but for numpy it
On Wed, Nov 4, 2015 at 4:40 PM, Stefan Seefeld wrote:
> Hello,
>
> is there a way to query Numpy for information about backends (BLAS,
> LAPACK, etc.) that it was compiled against, including compiler / linker
> flags that were used ?
> Consider the use-case where instead of
[Adding distutils-sig to the CC as a heads-up. The context is that
numpy is looking at deprecating the use of 'python setup.py install'
and enforcing the use of 'pip install .' instead, and running into
some issues that will probably need to be addressed if 'pip install .'
is going to become the
On Nov 2, 2015 6:51 PM, "Robert Collins" <robe...@robertcollins.net> wrote:
>
> On 3 November 2015 at 14:57, Nathaniel Smith <n...@pobox.com> wrote:
> > [Adding distutils-sig to the CC as a heads-up. The context is that
> > numpy is looking at depreca
On Oct 31, 2015 4:01 PM, "Ralf Gommers" wrote:
>
> Hi all,
>
> On Wed, Oct 28, 2015 at 11:48 PM, Ralf Gommers wrote:
>>
>>
>> Hi all, there wasn't much feedback on this FSA, but I want to point out that
>> it's actually quite important for the
On Sat, Oct 31, 2015 at 4:57 PM, Ralf Gommers <ralf.gomm...@gmail.com> wrote:
>
> On Sun, Nov 1, 2015 at 12:53 AM, Nathaniel Smith <n...@pobox.com> wrote:
>>
>> On Oct 31, 2015 4:01 PM, "Ralf Gommers" <ralf.gomm...@gmail.com> wrote:
>> >
Hi all,
Jonathan J. Helmus (@jjhelmus) has been given commit rights -- let's all
welcome him aboard.
-n
On Mon, Oct 26, 2015 at 11:53 PM, Juan Nunez-Iglesias
wrote:
> Is there a pip equivalent of "python setup.py develop"?
Kinda answered this already when replying to Chuck, but: yes, it's
`pip install -e ` (the -e is short for
--editable), not that you would need it necessarily
On Mon, Oct 26, 2015 at 11:33 PM, Charles R Harris
<charlesr.har...@gmail.com> wrote:
>
>
> On Tue, Oct 27, 2015 at 12:08 AM, Nathaniel Smith <n...@pobox.com> wrote:
>>
>> On Mon, Oct 26, 2015 at 11:03 PM, Charles R Harris
>> <charlesr.har...@gmail.com&g
On Tue, Oct 27, 2015 at 12:19 AM, Ralf Gommers <ralf.gomm...@gmail.com> wrote:
>
>
> On Tue, Oct 27, 2015 at 6:44 AM, Nathaniel Smith <n...@pobox.com> wrote:
>>
>> On Mon, Oct 26, 2015 at 9:31 PM, Nathaniel Smith <n...@pobox.com> wrote:
>> [...]
>
On Mon, Oct 26, 2015 at 11:03 PM, Charles R Harris
wrote:
>
[...]
> I gave it a shot the other day. Pip keeps a record of the path to the repo
> and in order to cleanup I needed to search out the file and delete the repo
> path. There is probably a better way to do