- numpy/numarray/_capi.c
- numpy/lib/tests/test_io.py
- numpy/core/include/numpy/npy_math.h
--
Pauli Virtanen
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
'd', 'Zf', 'Zd', etc.)
> without calling Python code... Of course, complaints without patches
> should not be taken too seriously ;-)
We'll optimize once someone complains this makes their code slow ;)
--
Pauli Virtanen
Fortran code, or uses stuff such as strings
for which Lua's default implementation may be efficient.
At least in the mandelbrot example some things differ. I wonder if Lua
there takes advantage of SIMD instructions, because the author of the code
has manually changed the innermost loop
> loop), a[...,0] itself is changed during the loop, while in the former
> case, numpy makes a copy of a[...,0] ?
Correct.
> Is this intended?
Not really. It's a "feature" we're planning to get rid of eventually,
once a way to do it without sacrificing performance in
0.99.1.2, on Ubuntu.
So it seems your problem is localised on the older EPD-6.1-1.
The question is: do you need to support this older version of EPD at all?
The problem does not appear to be that matplotlib has stopped plotting
masked arrays properly, or that something crucial has
recommended by our PR department @ EuroScipy :)
--
Pauli Virtanen
Tue, 2010-07-13 at 10:06 -0700, Keith Goodman wrote:
> No need to use where. You can just do a[np.isnan(a)] = 0. But you do
> have to watch out for 0d arrays, can't index into those.
You can, but the index must be appropriate:
>>> x = np.array(4)
>>> x[()] = 3
>>> x
array(3)
&& isnan((x)-(x)))
I'll replace it by the obvious
((x) == NPY_INFINITY || (x) == -NPY_INFINITY)
which is true only for +-inf, and cannot raise any FPU exceptions.
--
Pauli Virtanen
Testing with arithmetic can raise overflows and underflows.
I think the correct isinf is to compare to NPY_INFINITY and -NPY_INFINITY.
Patch is attached to #1500
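As a sketch of the comparison-based approach (illustrative Python, not the actual C patch on #1500):

```python
import numpy as np

def isinf_by_comparison(x):
    # True only for +inf/-inf; plain comparisons raise no FPU exceptions,
    # unlike arithmetic-based tests such as x + x == x
    return (x == np.inf) or (x == -np.inf)

print(isinf_by_comparison(np.inf))                          # True
print(isinf_by_comparison(0.6 * np.finfo(np.float64).max))  # False
```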
- Original message -
> On Thu, Jul 15, 2010 at 6:42 PM, John Hunter wrote:
>
> > On Thu, Jul 15, 2010 at 7:27 PM, Charles
/* must be finite (normal or subnormal), or NaN */
> return (0);
> }
This function can generate overflows, for example for
x = 0.6 * np.finfo(np.float64).max
--
Pauli Virtanen
'1.00e+00'
>>> float(np.complex64(1+2j))
1.0
This should raise an exception. I guess the culprit is at
scalarmathmodule.c.src:1023.
It should at least be changed to raise a ComplexWarning, which we now
anyway do in casting.
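For comparison, the array-casting path can be exercised like this (a sketch; the exact warning class and its namespace location vary across NumPy versions):

```python
import warnings
import numpy as np

a = np.array([1 + 2j], dtype=np.complex64)
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    b = a.astype(np.float64)  # discards the imaginary part and warns

print(b[0])  # 1.0 -- only the real part survives the cast
```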
--
Pauli V
release.
So I think we should just stick with PyCObject on 2.x, as we have done so
far. I'll just bump the version checks so that PyCapsule is used only on
3.x.
I'll commit this to Numpy SVN soon.
--
Pauli Virtanen
b) Support Python 3.1 and 2.7.
I expect it will be easy to track changes in the trunk, even if preparing
the release will still take some time.
--
Pauli Virtanen
uired change is essentially replacing
svn checkout ...
svn update
by
git clone ...
git pull ...
in a shell script, and clearing up the checkout directories. I can take
care of this on the server machine, once the move to git has been made.
--
Sat, 17 Jul 2010 16:06:40 -0600, Charles R Harris wrote:
> At the moment... Chuck
Worksforme at the moment.
Pauli
Sun, 18 Jul 2010 21:13:43 +0800, Ralf Gommers wrote:
[clip]
> Builds fine on OS X 10.6 with both 2.7 and 3.1, and all tests pass. With
> one exception: in-place build for 3.1 is broken. Does anyone know if
> this is a distutils or numpy issue? The problem is that on import
> numpy.__config__ can no
[shameless plug]
For those of you tired of waiting for 2to3 after "rm -rf build":
http://github.com/pv/lib2to3cache
and
cd numpy/
USE_2TO3CACHE=1 python3 setup.py build
--
Pauli Virtanen
Preferences)
--
Pauli Virtanen
now works (final tweaks in r8508, 8509).
Comments are welcome.
--
Pauli Virtanen
], dtype=complex256)
>
> i ask for Complex128 and get complex256?
What platform? On Win-64 longdouble == double.
Of course, the byte-width names should still reflect the reality. Please
file a bug ticket...
--
Pauli Virtanen
Sun, 18 Jul 2010 15:57:47 -0600, Charles R Harris wrote:
> On Sun, Jul 18, 2010 at 3:36 PM, Pauli Virtanen wrote:
[clip]
>> I suggest the following, aping the way the real nan works:
>>
>> - (z, nan), (nan, z), (nan, nan), where z is any fp value, are all
>> equivale
> What is the difference between these two versions? I usually check out
> the svn version (now 2.0) and it compiles well with python 2.6, 2.7 and
> 3.1.
Binary compatibility with previous versions.
Moreover, 2.0 will likely contain a refactored core.
> However, nans have been propagated by maximum and minimum since 1.4.0.
> There was a question, discussed on the list, as to what 'nan' complex to
> return in the propagation, but it was still a nan complex in your
> definition of such objects. The final choice was driven by using the
> first of
import numpy as np
np.array([0,0], dtype=np.complex256)
--
Pauli Virtanen
Mon, 2010-07-19 at 15:10 +0300, Nadav Horesh wrote:
> Till now I see that numpy2 plays well with PIL, Matplotlib, scipy and
> maybe some other packages. Should I expect that it might break?
If the other packages are compiled against Numpy 1.4.1 or earlier, then
yes, they are expected to bre
Numeric.Complex32
'F'
There, the width of complex number was given by specifying the size of
one element, not the total size.
Numpy should probably raise a DeprecationWarning when someone tries to
use these old Numeric type codes.
--
Pauli Virtanen
NIT_FUNC
> initexample(void)
> {
> (void) Py_InitModule("example", wrappers);
> }
You need to call import_array(); in the init function. Otherwise,
PyArray_Type probably won't be initialized. This can cause a crash.
--
Pauli Virtanen
Wed, 2010-07-21 at 10:46 +0200, David Cournapeau wrote:
[clip: assert nulps]
> I think we should go toward using those almost everywhere we need
> floating point comparison in testing,
I think we also need an assertion function that behaves like np.allclose
-- the ULPs are somewhat unintuiti
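Something like `numpy.testing.assert_allclose` (added around NumPy 1.5) fills that role:

```python
import numpy as np

# passes: the difference is within the relative tolerance
np.testing.assert_allclose(1.0 + 1e-9, 1.0, rtol=1e-7)

# fails: the mismatch exceeds rtol, and the message reports it
try:
    np.testing.assert_allclose(1.0 + 1e-5, 1.0, rtol=1e-7)
    failed = False
except AssertionError:
    failed = True
print(failed)  # True
```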
You should try using Numpy functions (these don't re-box the data) to do
this. http://docs.scipy.org/doc/numpy/reference/routines.set.html
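For example, the set routines operate on whole arrays without re-boxing each element:

```python
import numpy as np

a = np.array([1, 2, 3, 4, 5])
b = np.array([3, 4, 5, 6])

print(np.intersect1d(a, b))  # [3 4 5]
print(np.setdiff1d(a, b))    # [1 2]
print(np.union1d(a, b))      # [1 2 3 4 5 6]
```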
--
Pauli Virtanen
he overhead involved
in making Python function calls (PyArg_*) and interpreting the
bytecode.
So as the usual optimization mantra applies here: measure first :)
Of course, if you measure and show that the expectations 1-4) are
actually wrong, that's fine.
--
Pauli Virtanen
he above requires no copying of data, and should be relatively fast. If
you need overlapping windows, those can be emulated with strides:
http://mentat.za.net/numpy/scipy2009/stefanv_numpy_advanced.pdf
http://conference.scipy.org/scipy201
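A minimal sketch of the stride trick for overlapping windows (uses `as_strided`, so the usual caveat applies: treat the result as read-only):

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

x = np.arange(10)
win = 4
# each row is the previous window shifted by one element; no data is copied
windows = as_strided(x,
                     shape=(len(x) - win + 1, win),
                     strides=(x.strides[0], x.strides[0]))
print(windows[0])   # [0 1 2 3]
print(windows[-1])  # [6 7 8 9]
```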
nce it maps to either NPY_LONG or
whatever the C "int" type happens to be.
Maybe someone can correct me here, but AFAIK it goes like so.
--
Pauli Virtanen
Thu, 22 Jul 2010 15:00:50 -0500, Johann Hibschman wrote:
[clip]
> Now, I'm on an older version (1.3.0), which might be the problem, but
> which is "correct" here, the code or the docs?
The documentation is incorrect.
--
Pauli Virtanen
p even
> more, but I couldn't figure out how I'd handle the different attributes
> (or specifically, how to keep them together during a sort).
>
> What're my options?
One option could be to use structured arrays to store the data, instead
of Python objects.
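A short sketch of the structured-array approach (the field names here are made up for illustration):

```python
import numpy as np

# one record per item; the fields travel together through a sort
dt = np.dtype([('name', 'U10'), ('score', 'f8')])
data = np.array([('carol', 7.5), ('alice', 9.1), ('bob', 3.2)], dtype=dt)

data.sort(order='score')
print(data['name'])  # ['bob' 'carol' 'alice']
```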
--
Pauli
be extended to other reduction
operations. Note that changing reduce behavior would require us to
special-case the above two operations.
--
Pauli Virtanen
for an operation that is otherwise a *left*
> fold is very odd, no matter how you slice it. That is what looks like
> special casing...
I think I see your point now.
--
Pauli Virtanen
wig/numpy.i)
for users in their own projects
I'm not sure what's the best place to put these in.
--
Pauli Virtanen
, line 1, in
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe4' in
position 0: ordinal not in range(128)
You probably meant to use byte strings, though:
string = b"".join(chr(i).encode('latin1') for i in range(256))
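As a sanity check, the latin-1 round trip yields every byte value exactly once, which is also what `bytes(range(256))` spells more directly:

```python
# build all 256 byte values via the latin-1 round trip
string = b"".join(chr(i).encode('latin1') for i in range(256))

print(len(string))                  # 256
print(string == bytes(range(256)))  # True
```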
--
Pauli Virtanen
Mon, 26 Jul 2010 13:57:36 +0900, David Cournapeau wrote:
> I have finally prepared and uploaded a test repository containing numpy
> code:
>
> http://github.com/numpy/numpy_svn
Some observations based on a quick look:
1)
$ git branch -r
origin/maintenance/1.1.x_5227
origin/maintenance/1.5.x
Tue, 27 Jul 2010 00:26:51 +0800, Ralf Gommers wrote:
[clip]
> > > Also, Ralf has the Git ID in format "rgommers<...email...>", but I
> > > guess that's correct?
>
> My checkout failed so can't check this, but that looks a little odd. My
> .gitconfig is normal:
> [user]
> name = rgommers
> em
> Aug 15: rc 1
> Aug 22: rc 2
> Aug 29: release
Seems OK. I don't remember any big changes that would still be needed.
One low-hanging fruit to fix could be np.fromfile raising MemoryError
when it encounters EOF, and other bugs in that part of
Tue, 27 Jul 2010 08:37:56 -0600, Jed Ludlow wrote:
> On Tue, Jul 27, 2010 at 2:38 AM, Pauli Virtanen wrote:
>>
>> One low-hanging fruit to fix could be np.fromfile raising MemoryError
>> when it encounters EOF, and other bugs in that part of the code.
>
> The EOF bug
Tue, 27 Jul 2010 10:50:51 -0700, Christopher Barker wrote:
> Pauli Virtanen wrote:
>> Tue, 27 Jul 2010 08:37:56 -0600, Jed Ludlow wrote:
>>> On Tue, Jul 27, 2010 at 2:38 AM, Pauli Virtanen wrote:
>>>> One low-hanging fruit to fix could be np.fromfile raising Memor
Wed, 28 Jul 2010 12:17:27 +0900, David Cournapeau wrote:
[clip]
http://github.com/numpy/numpy_svn
>
> I put a new repository (same location)
Some more notes:
- 1.1.x branch is missing.
This is maybe because in SVN something ugly was done with this branch?
- Something is still funny with
Wed, 28 Jul 2010 12:17:27 +0900, David Cournapeau wrote:
[clip]
http://github.com/numpy/numpy_svn
>
> I put a new repository (same location)
Compared this against git-svn produced repository. There are a number of
commits missing from the early history, apparently because numpy trunk
was mo
since adding a new field changes the size
of the array item.
> (or are recarrays pointers-of-pointers as opposed
> to contiguous memory?)
No.
--
Pauli Virtanen
e arbitrary-size integers are
first-class citizens in the Python world. This sits less well with
Numpy: (i) Numpy tries to sit close to the hardware, and (ii) strictly
speaking, arbitrary-size integers cannot be a Numpy scalar type since
they by definition are not fixed
evious
one? IIRC, the problem was that the ziggurat broke reproducibility of
random numbers with a given seed.
So, was the ziggurat algorithm pulled out, or is it still there?
--
Pauli Virtanen
Thu, 29 Jul 2010 01:16:14 +0200, Sturla Molden wrote:
[clip]
>> Makes sense. But couldn't a ``dtype`` argument still be useful?
>
> np.ceil(some_array).astype(int)
That's one more temporary. The dtype= argument for all ufuncs probably
wouldn't hurt to
Wed, 28 Jul 2010 18:43:30 -0400, Pierre GM wrote:
[clip]
> Mmh. I did create a PyMappingMethod structure called MyArray_as_mapping,
> and MyArray_as_mapping.mp_subscript points to the function that I want
> to use. However, I'd like the MyArray_as_mapping.length and
> MyArray.mp_ass_subscript to po
Thu, 29 Jul 2010 02:40:00 +0200, Sturla Molden wrote:
>> Mon, 26 Jul 2010 23:58:11 +0800, Ralf Gommers wrote:
>> Is the current algorithm in the trunk the ziggurat one, or the previous
>> one? IIRC, the problem was that the ziggurat broke reproducibility of
>> random numbers with a given seed.
>
>
Thu, 29 Jul 2010 23:39:19 +0800, Ralf Gommers wrote:
> The execfile builtin has disappeared in python 3.x, so I'm trying to
> find another solution for the use of it in setupegg.py. So far I've
> tried
I'd do something like this in "setup.py":
...
+ if os.environ.get('USE_SETUPTOOLS'):
+ i
ould I use? I already use the newest release
The above indeed shows that you are probably using 1.4.1, but the scipy
you import was compiled against either the SVN version of Numpy or 1.4.0.
--
Pauli Virtanen
ting-point-gui.de/
--
Pauli Virtanen
Fri, 30 Jul 2010 14:21:23 +0200, Guillaume Chérel wrote:
[clip]
> As for the details about my problem, I'm trying to compute the total
> surface of overlapping disks. I approximate the surface with a grid and
> count how many points of the grid fall into at least one disk.
HTH,
import numpy as np
mask[box] |= (grid_x[box] - xx)**2 + (grid_y[box] - yy)**2 < rr**2
# same as: mask[i0:j0,i1:j1] |= (grid_x[i0:j0,i1:j1] ...
--
Pauli Virtanen
ce distribution?
It's supposed to be included in the source distribution --
tools/py3tool.py seems to be missing from the tarball.
--
Pauli Virtanen
module should be used -- no need
to go mucking with in the code objects to just find the number of
arguments for a function.
Nobody has tested this part of the distutils code on Python 3, and indeed
it does not have any tests, so it's not a surprise that stuff like this
is left over :)
> the commonly-used pieces of inspect (and hacks like this) into an
> internal utility module that is fast to import.
We actually have `numpy.compat._inspect` and
from numpy.compat import getargspec
that could be used here.
--
Pauli Virtanen
; 719 res = do_warn(message, category, stack_level);
> (gdb)
That was probably fixed in r8394 in trunk.
But to be sure, can you supply the whole stack trace (type "bt" in the
gdb prompt).
--
Pauli Virtanen
Mon, 02 Aug 2010 23:48:52 +0800, Ralf Gommers wrote:
> I'm trying to get building to work with Python 3.1 under Wine on OS X.
> The first thing you run into is a python distutils problem, which is
> fixed by replacing line 379 of cygwinccompiler.py with
> result = RE_VERSION.search(str(out_str
Wed, 04 Aug 2010 23:34:15 +0800, Ralf Gommers wrote:
[clip]
> I haven't started using py3k yet so I'm still a bit fuzzy about bytes
> vs string. But it's easy to try in the interpreter:
>
import re
RE_VERSION = re.compile('(\d+\.\d+(\.\d+)*)')
In the Python 3.1 version I have, this lin
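The bytes-vs-str mismatch is easy to reproduce (a sketch; the compiler output string here is made up):

```python
import re

out_str = b"gcc (GCC) 4.4.0"  # hypothetical bytes output from the compiler

# in Python 3, a str pattern cannot search bytes
str_pat = re.compile(r'(\d+\.\d+(\.\d+)*)')
try:
    str_pat.search(out_str)
    mismatch = False
except TypeError:
    mismatch = True
print(mismatch)  # True

# a bytes pattern works
bytes_pat = re.compile(rb'(\d+\.\d+(\.\d+)*)')
print(bytes_pat.search(out_str).group(1))  # b'4.4.0'
```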
ch)\n x: array([ inf nanj])\n y: array((inf+infj))')
[clip]
> Pauli or Charles, can you please have a look at these? Looks like
> that's related to your recent work on nans/infs.
These errors probably come from the fact that the platform's C library
does not handle special n
ersion of Numpy. On Python versions >= 2.6 Numpy arrays expose the buffer
interface, and array(), asarray() and other functions accept new-style buffers
as input.
--
Pauli Virtanen
Mon, 23 Aug 2010 15:30:19 -0500, Travis Oliphant wrote:
> I'm curious as to the status of the Github migration and if there is
> anything I can do to help. I have a couple of weeks right now and I
> would love to see us make the transition of both NumPy and SciPy to GIT.
I think the more or less
Mon, 23 Aug 2010 23:31:14 +0200, Stéfan van der Walt wrote:
[clip]
> Erk. What's the quickest route to go: compare the actual patches, or
> bring a tree up to date for each revision and compute some sort of
> working-copy checksum?
Working-copy checksumming is probably the easiest and most robust
Mon, 23 Aug 2010 21:15:55 +, Pauli Virtanen wrote:
[clip]
> in the history to have the wrong content -- so to be sure, we have to do
> a brute-force comparison of the tree against SVN for each commit. The
> particular bug here was fixed in the conversion tool, but better safe
>
, you cannot do
>>> np.int16('0xff', 16)
either -- it's the same issue. It's also a minor issue, IMHO, as I doubt
many people construct array scalars from strings, and even fewer do it in
bases other than 10. The fix is to change array scalar __new__, but this
is
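A quick illustration (catching a broad exception on purpose, since the exact error type may differ between versions):

```python
import numpy as np

# the builtin int accepts an explicit base...
print(int('0xff', 16))  # 255

# ...but array scalar constructors do not
try:
    np.int16('0xff', 16)
    raised = False
except Exception:
    raised = True
print(raised)  # True
```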
s 10^-r, r > 1 exactly
in base-2 floating point.
So if you write "float96(0.0001)", the result is not the float96 number
closest to 0.0001, but the 96-bit representation of the 64-bit number
closest to 0.0001. Indeed,
>>> float96(0.0001), float9
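The rounding order can be demonstrated with `np.longdouble`, the portable spelling of `float96`/`float128`:

```python
import numpy as np

# the Python literal 0.0001 is rounded to 64 bits *before* widening,
# so the long double equals the widened double exactly
a = np.longdouble(0.0001)
print(a == np.float64(0.0001))  # True
```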
://docs.scipy.org/doc/numpy/user/basics.indexing.html#indexing-multi-dimensional-arrays
http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#advanced-indexing
--
Pauli Virtanen
rn
values -- if you need C-contiguity, you'll need to ensure it by using
`ascontiguousarray`.
Especially when we later on implement memory access pattern optimizations
for ufuncs, such assumptions will break down even more often.
--
Pauli Virtanen
'll just need to add long double versions of NumPyOS_ascii_strtod and
NumPyOS_ftolf that call sscanf with the correct format string in the end.
--
Pauli Virtanen
Thu, 02 Sep 2010 08:27:18 -0600, Charles R Harris wrote:
[clip]
> Hi Pauli, I gave it a quick spin and it looks good so far. The cloning
> was really fast, I like that ;) Is there any way to test out commiting?
> I didn't have permissions to push to the repository.
You should have push permissions
topic branches, such
as what branches/fix_float_format in SVN that got later on merged to
trunk. Github seems to show these in different colors, but in reality
they are just a part of the master branch history.
So all in all, it looks correct t
n the way in the mail, but was
> hoping to be able to fix this with mingw.
Mixing different runtimes could also be a problem. I'm not an
expert on this, however.
--
Pauli Virtanen
um(A*B, axis=1)
It does create an intermediate (n, d) matrix, however. If this is a
problem because of memory issues, you can try numexpr [1],
>>> numexpr.evaluate("sum(A*B, axis=1)")
.. [1] http://code.google.com/p/numexpr/
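Another intermediate-free spelling uses `np.einsum` (added in NumPy 1.6, slightly after this thread):

```python
import numpy as np

rng = np.random.RandomState(0)
A = rng.rand(1000, 3)
B = rng.rand(1000, 3)

r1 = np.sum(A * B, axis=1)        # allocates an (n, d) intermediate
r2 = np.einsum('ij,ij->i', A, B)  # row-wise dot products, no intermediate

print(np.allclose(r1, r2))  # True
```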
--
Pauli Virtanen
Mon, 06 Sep 2010 10:41:38 +0200, Sebastian Haase wrote:
>
> is there an URL of the weekly built CHM documentation file ?
It's the one linked from http://docs.scipy.org/doc/
--
Pauli Virtanen
setup.py build
or
BLAS=/path/to/libblas.so LAPACK=/path/to/liblapack.so ...
may work. This'll probably even make it to the documentation one day...
--
Pauli Virtanen
easons why not to include such a wrapper, so little
persuasion is needed. The only thing is that someone should spend some
time implementing this suggestion (and I probably won't -- I don't really
need that feature myself, and there are many other things that need to be
do
Fri, 10 Sep 2010 11:46:47 +0200, Radek Machulka wrote:
> I have array (numpy.ndarray object) with non-zero elements cumulated
> 'somewhere' (like a array([[0,0,0,0],[0,1,1,0],[0,0,1,0],[0,0,0,0]]))
> and I need sub-array with just non-zero elements (array([[1,1],[0,1]])).
> I can do this with itera
Fri, 10 Sep 2010 14:35:46 +0200, Radek Machulka wrote:
> Thanks, but...
>
>>>> x = array([[0,0,0,0],[0,1,0,0],[0,0,1,1],[0,0,0,0]])
>>>> x
> array([[0, 0, 0, 0],
>        [0, 1, 0, 0],
>        [0, 0, 1, 1],
>        [0, 0, 0, 0]])
i, j = x.any(0).nonzero()[0], x.any(1).nonzero()[0]
Should be
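Pauli's correction is cut off above; for reference, a hedged sketch of the whole bounding-box extraction:

```python
import numpy as np

x = np.array([[0, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 0]])

rows = x.any(axis=1).nonzero()[0]  # indices of rows with a non-zero
cols = x.any(axis=0).nonzero()[0]  # indices of columns with a non-zero
sub = x[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
print(sub)
# [[1 0 0]
#  [0 1 1]]
```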
d the best way to get good advice is to ask
the authors of the particular product.
--
Pauli Virtanen
t;>> import numpy as np
>>>> np.array(42).astype(">i4").dtype
> dtype('>i4')
>>>> np.array(42, dtype=">i4").astype('>i4').dtype
> dtype('int32')
Doesn't seem correct -- please file a bug ticket.
--
Mon, 13 Sep 2010 15:33:11 -0500, Travis Oliphant wrote:
> Are we ready to do this yet?
>
> I know there were some outstanding questions. Are there major concerns
> remaining?
As far as the conversion is concerned, things should be OK.
The bugs in svn-all-fast-export have been fixed in the meantim
Mon, 13 Sep 2010 21:41:18 +, Pauli Virtanen wrote:
[clip]
> I can upload a "final" repository today/tomorrow. If it seems OK, we can
> freeze SVN trunk a few days after that.
>
> Or we can freeze the trunk sooner than that after the "final" repo is
> u
Mon, 13 Sep 2010 18:08:39 -0600, Charles R Harris wrote:
[clip]
> What is the suggested work flow for the new repositories? Is the best
> way to use a github fork and push and pull from that?
Yes, I'd personally work like that. Easier to keep private stuff separate.
Pauli
Mon, 13 Sep 2010 18:15:01 -0600, Charles R Harris wrote:
[clip]
> I think we should freeze the svn repo as soon as possible. Pierre is
> still making commits there and unless there is an easy way to update the
> git repo from svn those sort of commits might be a small hassle.
It needs re-generatio
> Also look at CPython's objimpl.h, union _gc_head, you will see an
> unprotected usage of 'long double', so it seems that CPython requires
> that the C compiler to support 'long double'.
Long double is IIRC in C89, so compiler support is probably not a prob
Wed, 15 Sep 2010 10:58:57 +0200, Gael Varoquaux wrote:
[clip]
> Now I have a problem: at step 1 I should have created a branch. I did
> not. I need to go back and create a branch. This was happening at a
> sprint, and people that know git better than me helped me out. But the
> only way we found to
-- it shows the reflog
of HEAD, which can in some cases be confusing. The point is that HEAD is
a meta-branch that corresponds to the current checkout, and so changes
every time you use "git checkout".
--
Pauli Virtanen
re/__svn_version__.py
/doc/numpy.scipy.org/_build
The .\#* is for emacs. But let's do this directly in Git, so that I don't
have to bother regenerating the Numpy repo.
--
Pauli Virtanen
Dear all,
Numpy SVN repository is now frozen, and does not accept new commits.
Future development should end up in the Git repository:
http://github.com/numpy/numpy
The next things on the TODO list:
- Update any links that point to http://svn.scipy.org/svn/numpy
or talk about SVN.
Thu, 16 Sep 2010 08:58:46 +, Pauli Virtanen wrote:
> The next things on the TODO list:
>
> - Update any links that point to http://svn.scipy.org/svn/numpy
> or talk about SVN.
>
> E.g. numpy.org needs updating.
>
> - Put up documentation on how to con
Mon, 20 Sep 2010 23:34:58 +0200, Hagen Fürstenau wrote:
> I don't know if I'm overlooking something obvious, but is there a
> compact way of computing the 3-array
>
> X_{ijk} = \sum_{l} A_{il}*B_{jl}*C_{kl}
>
> out of the 2-arrays A, B, and C?
(A[:,newaxis,newaxis]*B[newaxis,:,newaxis]*C[newaxis
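Spelled out, with an equivalent `np.einsum` form (einsum itself landed in NumPy 1.6):

```python
import numpy as np

rng = np.random.RandomState(0)
A, B, C = rng.rand(2, 4), rng.rand(3, 4), rng.rand(5, 4)

# broadcast the three arrays against each other, then sum over l
X1 = (A[:, None, None, :] * B[None, :, None, :]
      * C[None, None, :, :]).sum(axis=-1)

# the same contraction as an einsum
X2 = np.einsum('il,jl,kl->ijk', A, B, C)

print(X1.shape)             # (2, 3, 5)
print(np.allclose(X1, X2))  # True
```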
; 0
>
> Shouldn't real and imag return an error in such a situation?
It probably shouldn't do *that*, at least.
http://projects.scipy.org/numpy/ticket/1618
--
Pauli Virtanen
Tue, 21 Sep 2010 21:50:08 +, Pauli Virtanen wrote:
> Tue, 21 Sep 2010 17:31:55 -0400, Michael Gilbert wrote:
>> The following example demonstrates a rather unexpected result:
>>
>>>>> import numpy
>>>>> x = numpy.array( complex( 1.0 , 1.
ementwise .real and .imag
I don't clearly see the reason for
>>> x.real is x
True
>>> x.imag
array([0], dtype=object)
But it is a minor corner case, and there may be backward compatibility
issues in changing it.
--
Pauli Virtanen
_
7;t know about this trick.
--
Pauli Virtanen
Thu, 30 Sep 2010 15:13:04 -0400, Michael Droettboom wrote:
> Is the solution as simple as the attached diff? It works for me, but I
> don't understand all the implications.
More or less so, applied.
Pauli
Hi,
Should we set a date for a bugfix 1.5.1 release? There are some bugs that
would be nice to sort out in the 1.5.x series:
Any Python versions:
- #1605 (Cython vs. PEP-3118 issue: raising exceptions with active
cython buffers caused undefined behavior. Breaks Sage.)
- #1617 (Ensure c