's taking a long
time... Perhaps there's a way to speed things up?
Thanks a lot for your insight,
Andreas.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
solutions.
>
> On Tue, Jun 8, 2010 at 2:08 PM, Andreas Hilboll <mailto:li...@hilboll.de>> wrote:
>
> Hi,
>
> > newtimes = [times[idx[x][y]] for x in range(2) for y in range(2)]
> > np.array(newtimes).reshape(2,2)
> > array([[104, 10
) pair. This should be efficient, and the
complexity is only in the encoding/decoding steps.
Best regards,
Andreas
Dr. Andreas Eisele, Senior Researcher
DFKI GmbH, Language Technology Lab [EMAIL PROTECTED]
Stuhlsatzenhaus
1.6.1 also?
Or, would it be possible to send me the fixes to include it in 1.6.1 (or 1.4.1)
on my own?!
Thanks,
Andreas
This email and any attachments are intended solely for the use of the
individual or entity to whom it is addressed and may be confidential and/or
privileged.
If you are not
doas/build/numpy> find
/usr/include/ | grep Python.h
/usr/include/python2.6/Python.h
I also tried without the --distribute --no-site-packages flags, with the
same result.
Any hints are very welcome :)
Cheers,
Andreas.
> On Mon, Mar 19, 2012 at 6:45 PM, Andreas H. wrote:
>
>> Hi all,
>>
>> I have trouble installing numpy in a virtual environment on a SuSE
>> Enterprise 11 server (ppc64).
>>
>> Here is what I did:
>>
>>curl -O https://raw.github
Have you guys actually thought about JIRA? Atlassian offers free licences
for open source projects ...
Cheers,
Andreas.
ently / soon.
Plus, numpy is a lot of C code, and to me (again, as a user) it seems
more complicated to contribute because things are not as isolated.
Just my 2 ct.
Andreas.
1.6.2RC1 builds fine under the following configurations (all of them
x86_64):
* Ubuntu Lucid 10.04 / Python 2.6.5 / GCC 4.4.3: OK (KNOWNFAIL=3, SKIP=5)
* Archlinux (as of today) / Python 3.2.3 / GCC 4.7.0: OK (KNOWNFAIL=5,
SKIP=5)
* Archlinux (as of today) / Python 2.7.3 / GCC: OK (KNOWNFAIL=3, SKIP=5)
Great work!
Andreas.
ions exist.
Puzzled greetings,
Andreas.
> On Sun, May 20, 2012 at 9:37 AM, Charles R Harris
> mailto:charlesr.har...@gmail.com>> wrote:
>
>
>
> On Sun, May 20, 2012 at 9:09 AM, Andreas Hilboll <mailto:li...@hilboll.de>> wrote:
>
> Hi,
>
> I just notic
ects from the "scipy universe"? Is there help needed?
Many questions, but possibly quite easy to answer ...
Cheers,
Andreas.
know of its status? Is it
>> 'official'? Are there any plans in revitalizing it, possibly with adding
>> other projects from the "scipy universe"? Is there help needed?
>>
>> Many questions, but possibly quite easy to answer ...
>>
>> Cheers,
>>
> Hi Travis,
>
> On Thu, Jun 28, 2012 at 1:25 PM, Travis Oliphant
> wrote:
>> Hey all,
>>
>> I'd like to propose dropping support for Python 2.4 in NumPy 1.8 (not
>> the 1.7 release). What does everyone think of that?
>
> I think it would depend on 1.7 state. I am unwilling to drop support
>
> Hi numpy.
>
> Does anyone know if f2py supports allocatable arrays, allocated inside
> fortran subroutines? The old f2py docs seem to indicate that the
> allocatable array must be created with numpy, and dropped in the module.
> Here's more background to explain...
>
> I have a fortran subroutine
> Hi,
>
> I am pleased to announce the availability of the first release candidate
> of
> SciPy 0.11.0. For this release many new features have been added, and over
> 120 tickets and pull requests have been closed. Also noteworthy is that
> the
> number of contributors for this release has risen to
, bins_y))
AttributeError: The dimension of bins must be equal to the dimension of
the sample x.
I would expect histogram2d to return a 2d array of shape (360,180), which
is full of 256s. What am I missing here?
Cheers,
Andreas.
> Hi,
>
> would like to identify unique pairs of numbers in two arrays or in one
> two-dimensional array, and count the observations
>
> a_clean=array([4,4,5,4,4,4])
> b_clean=array([3,5,4,4,3,4])
>
> and obtain
> (4,3,2)
> (4,5,1)
> (5,4,1)
> (4,4,2)
>
> I solved it with two loops but of course there w
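With a more recent NumPy (the `axis` keyword to `np.unique` arrived in 1.13, well after this thread), the pair counting needs no loops at all; a sketch using the arrays from the message:

```python
import numpy as np

a_clean = np.array([4, 4, 5, 4, 4, 4])
b_clean = np.array([3, 5, 4, 4, 3, 4])

# Stack the two arrays into rows of (a, b) pairs, then count unique rows.
pairs = np.column_stack((a_clean, b_clean))
uniq, counts = np.unique(pairs, axis=0, return_counts=True)
for (x, y), c in zip(uniq, counts):
    print(x, y, c)
```

The pairs come back in lexicographic order, i.e. (4,3,2), (4,4,2), (4,5,1), (5,4,1).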
Hi Everybody.
The bug is that no error is raised, right?
The docs say
where(condition, [x, y])
x, y : array_like, optional
Values from which to choose. `x` and `y` need to have the same
shape as `condition`
In the example you gave, x was a scalar.
Cheers,
Andy
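For what it's worth, the docstring wording is stricter than the actual behavior: `x` and `y` only need to broadcast against `condition`, so a scalar is accepted without error. A small sketch:

```python
import numpy as np

cond = np.array([True, False, True])
# A scalar x broadcasts against condition; no error is raised.
out = np.where(cond, 1, np.array([10, 20, 30]))
print(out)  # [ 1 20  1]
```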
On 07/27/2012 09:10 PM, Benjamin Root wrote:
On Fri, Jul 27, 2012 at 3:58 PM, Andreas Mueller
mailto:amuel...@ais.uni-bonn.de>> wrote:
Hi Everybody.
The bug is that no error is raised, right?
The docs say
where(condition, [x, y])
x, y : array_like, op
> As for packages getting updated unintentionally, easy_install and pip
> both require an argument to upgrade any existing packages (I think
> -U), so I am not sure how you are running into such a situation.
Quite easily, actually. I ran into pip wanting to upgrade numpy when I
was installing/upgrading a package depending on numpy. Problem is, -U
upgrades both the package you explicitly select *and* its dependencies.
I know there's some way around this, but it's not obvious -- at least
not for users.
Cheers, Andreas.
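The non-obvious way around it (an assumption about the pip of the day, but the flag still exists in current pip) is `--no-deps`, which upgrades only the named package and leaves its dependencies alone:

```shell
# Upgrade the package itself but leave numpy and other dependencies untouched
pip install --upgrade --no-deps some-package
```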
resulting array.
Is there any way around this?
Thanks for your insight,
Andreas.
> On Tue, Sep 25, 2012 at 2:31 AM, Andreas Hilboll wrote:
>> I commonly have to deal with legacy ASCII files, which don't have a
>> constant number of columns. The standard is 10 values per row, but
>> sometimes, there are less columns. loadtxt doesn't support t
Andre
Gosselin, however the email bounces, so I guess he's gone.
Can anyone point me to how to proceed from here?
Cheers, Andreas.
Hi all.
I am very happy to announce the release of scikit-learn 0.13.
New features in this release include feature hashing for text processing,
passive-aggressive classifiers, faster random forests and many more.
There have also been countless improvements in stability, consistency and
usability.
ut where to fix this? For someone
without deeper knowledge of how numpy sources are organized it's hard to
find the place where to fix things. How about adding the "source" link
to the docstrings via sphinx, like in scipy?
Cheers,
Andreas.
hought the docs are auto-generated, and that the "array..." result of
the docstring would be calculated by numpy while building the docs? Or
am I misunderstanding something here?
Cheers, Andreas.
Hi everybody.
When I did some normalization using numpy, I noticed that numpy.std uses
more ram than I was expecting.
A quick google search gave me this:
http://luispedro.org/software/ncreduce
The site claims that std and other reduce operations are implemented
naively with many temporaries.
Is tha
On 11/14/2011 04:23 PM, David Cournapeau wrote:
> On Mon, Nov 14, 2011 at 12:46 PM, Andreas Müller
> wrote:
>> Hi everybody.
>> When I did some normalization using numpy, I noticed that numpy.std uses
>> more ram than I was expecting.
>> A quick googl
On 11/15/2011 04:28 PM, Bruce Southey wrote:
On 11/14/2011 10:05 AM, Andreas Müller wrote:
On 11/14/2011 04:23 PM, David Cournapeau wrote:
On Mon, Nov 14, 2011 at 12:46 PM, Andreas Müller
wrote:
Hi everybody.
When I did some normalization using numpy, I noticed that numpy.std uses
more ram
On 11/15/2011 05:46 PM, Andreas Müller wrote:
On 11/15/2011 04:28 PM, Bruce Southey wrote:
On 11/14/2011 10:05 AM, Andreas Müller wrote:
On 11/14/2011 04:23 PM, David Cournapeau wrote:
On Mon, Nov 14, 2011 at 12:46 PM, Andreas Müller
wrote:
Hi everybody.
When I did some normalization
On 11/15/2011 06:02 PM, Warren Weckesser wrote:
On Tue, Nov 15, 2011 at 10:48 AM, Andreas Müller
mailto:amuel...@ais.uni-bonn.de>> wrote:
On 11/15/2011 05:46 PM, Andreas Müller wrote:
On 11/15/2011 04:28 PM, Bruce Southey wrote:
On 11/14/2011 10:05 AM, Andreas Müller
On 11/15/2011 07:03 PM, Gael Varoquaux wrote:
> On Tue, Nov 15, 2011 at 05:57:14PM +, Robert Kern wrote:
>> Actually, last time I suggested it, it was brought up that the online
>> algorithms can be worse numerically. I'll try to find the thread.
> Indeed, especially for smallish datasets where
to throw an error. If not, we found a bug in the hash
implementation.)
Thanks!
Andreas
Hi Robert,
On Tue, 27 Dec 2011 10:17:41 +, Robert Kern wrote:
> On Tue, Dec 27, 2011 at 01:22, Andreas Kloeckner
> wrote:
> > Hi all,
> >
> > Two questions:
> >
> > - Are dtypes supposed to be comparable (i.e. implement '==', '!='
Hi Robert,
On Fri, 30 Dec 2011 20:05:14 +, Robert Kern wrote:
> On Fri, Dec 30, 2011 at 18:57, Andreas Kloeckner
> wrote:
> > Hi Robert,
> >
> > On Tue, 27 Dec 2011 10:17:41 +, Robert Kern
> > wrote:
> >> On Tue, Dec 27, 2011 at 01:22, Andre
64(1j) + A()
---
In my world, this should print .
It does print .
Who is casting my sized complex to a built-in complex, and why?
It can't be Python's type coercion, because the behavior is the same in
Python 3.2. (And the docs say Python 3 doesn't support coercion.)
(Please cc m
that for all j=0..19,
r[j] = d[i[j]-3:i[j]+5]
In my case, the arrays are quite large (~20 instead of 100 and 20),
so something quick would be useful.
Cheers, Andreas.
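The loop can be replaced by a single fancy-indexing expression: broadcasting the window offsets against the index array builds all the 8-wide slices at once (a sketch with the small sizes from the message, assuming every window stays inside the array):

```python
import numpy as np

d = np.random.randn(100)
i = np.random.randint(3, 95, size=20)   # window centers, kept in bounds

# i[:, None] has shape (20, 1); adding the (8,) offset row broadcasts to
# (20, 8), so row j of r equals d[i[j]-3 : i[j]+5].
r = d[i[:, None] + np.arange(-3, 5)]
```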
> Hey,
>
> On Wed, 2013-03-20 at 16:31 +0100, Andreas Hilboll wrote:
>> Cross-posting a question I asked on SO
>> (http://stackoverflow.com/q/15527666/152439):
>>
>>
>> Given an array
>>
>> d = np.random.randn(100)
>>
>> and an
>> As I poke at this a bit, I'm noticing that maybe time zones aren't
>> handled at all internally -- rather, the conversion is done to UTC
>> when creating a datetime64, and conversion is then done to the locale
>> when creating a string representation -- maybe nothing inside at all.
>>
>> Does t
> Hi,
> I have encountered some problem while I was drawing a direction of
> arrow. I have point (x,y) coordinates and angle of them. What I want to
> do is that to draw arrow according to the given angle (just to show the
> point direction as an arrow in each point coordinate). Here, we should
> a
l a HTTP redirect **or** a HTTPD
rewrite on that IP. So we need to find a server to do that. Probably
easiest to ask numfocus, right?
Cheers, Andreas.
ing www.scipy.org Apache
> server to host the redirects.
Good. I have no clue about who operates which servers, and just assumed
numfocus is doing that.
BTW, is there help needed in server administration (for numpy, scipy, or
whatever)? I could happily volunteer to help out.
Cheers, Andreas.
__
On 10.05.2013 19:32, Arnaldo Russo wrote:
> Hi Andreas,
> This packaging would be much useful!
> How can I help with this?
> pyhdf is very important because HDF4-EOS does not open with another
> packages, only with pyhdf and gdal.
Hi Arnaldo,
I actually went ahead and put the pa
Any ideas?
Cheers, Andreas.
On 10.07.2013 17:06, Matthew Brett wrote:
> Hi,
>
> On Wed, Jul 10, 2013 at 11:02 AM, Andreas Hilboll wrote:
>> Hi,
>>
>> there are np.flipud and np.fliplr methods to flip 2d arrays on the first
>> and second dimension, respectively. What can I do to flip an
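For the record, flipping along an arbitrary axis only needs a reversed slice on that axis; later NumPy releases (>= 1.12) wrap exactly this as `np.flip`. A sketch:

```python
import numpy as np

a = np.arange(24).reshape(2, 3, 4)
# A reversed slice flips axis 2; flipud/fliplr are the axis-0/axis-1 special cases.
flipped = a[:, :, ::-1]
assert np.array_equal(flipped, np.flip(a, axis=2))  # np.flip generalizes this
```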
On 24.07.2013 17:33, Pauli Virtanen wrote:
> Hi,
>
> How about splitting doc/sphinxext out from the main Numpy repository to
> a separate `numpydoc` repo under Numpy project?
+1
-- Andreas
ted/scipy.stats.binned_statistic_2d.html
At first glance it can do what you're trying to do.
Andreas.
")
np.memmap(fd, dtype="float32", mode="r")
---8<---
Any help is greatly appreciated :)
-- Andreas.
osed an update to the memmap docstring to
better reflect this:
https://github.com/numpy/numpy/pull/3890
-- Andreas.
np.memmap(fd, dtype="float32", mode="r", offset=offset)
>
> Also, there's no need to do things like "offset =
> int(fd.readlines()[0].split()[-2])"
>
> Just do "offset = int(next(fd).split()[-2])" instead. Readlines reads
> the e
Hi,
in using np.polyfit (in version 1.7.1), I ran accross
TypeError: expected a 1-d array for weights
when trying to fit k polynomials at once (x.shape = (4, ), y.shape = (4,
136), w.shape = (4, 136)). Is there any specific reason why this is not
supported?
-- Andreas
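Until 2-d weights are supported, the k fits have to be run column by column; a workaround sketch using the shapes from the message:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 4)
y = np.random.rand(4, 136)          # 136 curves sampled at the same 4 points
w = np.random.rand(4, 136) + 0.5    # per-point, per-curve weights

deg = 2
# polyfit accepts 2-d y, but w must be 1-d, so loop over the weight columns.
coeffs = np.column_stack([np.polyfit(x, y[:, k], deg, w=w[:, k])
                          for k in range(y.shape[1])])
# coeffs has shape (deg + 1, 136): one coefficient column per curve.
```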
ly the time overlapping segments of the data.
>
> Does numpy or scipy offer something that may help in this?
>
> I can imagine strategies about how to approach the problem, but none
> that would be efficient. Ideas?
Take a look at pandas. It has built-
On 11.02.2014 14:22, Daniele Nicolodi wrote:
> On 11/02/2014 14:10, Andreas Hilboll wrote:
>> On 11.02.2014 14:08, Daniele Nicolodi wrote:
>>> Hello,
>>>
>>> I have two time series (2xN dimensional arrays) recorded on the same
>>> time basis, but eac
On 11.02.2014 14:47, Daniele Nicolodi wrote:
> On 11/02/2014 14:41, Andreas Hilboll wrote:
>> On 11.02.2014 14:22, Daniele Nicolodi wrote:
>>> On 11/02/2014 14:10, Andreas Hilboll wrote:
>>>> On 11.02.2014 14:08, Daniele Nicolodi wrote:
>>>>> H
G, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
lly something works!
>
>
> now to get them into proper unit tests
As one further suggestion, I think it would be nice if doing arithmetic
using np.datetime64 and datetime.timedelta objects would work:
np.datetime64(2011,1,1) + datetime.timedelta(1) ==
np.datetime64(2011,1,2)
And of course, but this is probably in the loop anyways,
np.asarray([list_of_datetime.datetime_objects]) should work as expected.
-- Andreas.
On 19.04.2014 09:03, Andreas Hilboll wrote:
> On 14.04.2014 20:59, Chris Barker wrote:
>> On Fri, Apr 11, 2014 at 4:58 PM, Stephan Hoyer > <mailto:sho...@gmail.com>> wrote:
>>
>> On Fri, Apr 11, 2014 at 3:56 PM, Charles R Harris
>> m
object comparison in the future
from the "c is None" comparison. I'm wondering what would be the best
way to do this check in a future-proof way?
Best,
-- Andreas.
On 08.04.2015 20:30, Nathaniel Smith wrote:
> On Apr 8, 2015 2:16 PM, "Andreas Hilboll" <mailto:li...@hilboll.de>> wrote:
>>
>> Hi all,
>>
>> I'm commonly using function signatures like
>>
>>def myfunc(a, b
Is this the point when scikit-learn should build against it?
Or do we wait for an RC?
Also, we need a scipy build against it. Who does that?
Our continuous integration doesn't usually build scipy or numpy, so it
will be a bit tricky to add to our config.
Would you run our master tests? [did we e
test()'"
>> This might not be viable right now, but will be made more viable
if pypi starts allowing official Linux wheels, which looks likely to
happen before 1.12... (see PEP 513)
>>
>> On Jan 29, 2016 9:46 AM, "Andreas Mueller" <mailto:t3k...@gmail.co
On 02/01/2016 04:25 PM, Ralf Gommers wrote:
It would be nice but its not realistic, I doubt most upstreams
that are
not themselves major downstreams are even subscribed to this list.
I'm pretty sure that some core devs from all major scipy stack
packages are subscribed to this l
Hi.
Where can I find the changelog?
It would be good for us to know which changes are done on purpose
without hunting through the issue tracker.
Thanks,
Andy
On 02/09/2016 09:09 PM, Charles R Harris wrote:
Hi All,
I'm pleased to announce the release of NumPy 1.11.0b3. This beta
contains add
On 02/12/2016 04:19 PM, Nathan Goldbaum wrote:
https://github.com/numpy/numpy/blob/master/doc/release/1.11.0-notes.rst
Thanks.
That doesn't cover the backward incompatible change to
assert_almost_equal and assert_array_almost_equal,
right?
Hi there,
is there an easy way to do something like trim_zeros() does, but for a
n-dimensional array? I have a 2d array with only zeros in the first and
last rows and columns, and would like to trim this array to only the
non-zero part ...
Thanks,
Andreas
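There is no n-dimensional `trim_zeros`, but the bounding box of the nonzero entries is easy to compute by hand; a 2-d sketch:

```python
import numpy as np

a = np.zeros((6, 7))
a[2:4, 1:5] = np.arange(1, 9).reshape(2, 4)

# One slice per axis, spanning the min..max nonzero coordinate.
rows, cols = np.nonzero(a)
trimmed = a[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
# trimmed is the (2, 4) nonzero block, with the zero border stripped.
```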
le is ~13 seconds, and to write ~5 seconds.
Which is not too bad, but also still too much ...
Thanks,
Andreas
Hi there,
I'm interested in the solution to a special case of the parallel thread
'2D binning', which is going on at the moment. My data is on a fine global
grid, say .125x.125 degrees. I'm looking for a way to do calculations on
coarser grids, e.g.
* calculate means()
* calculate std()
* ...
on
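One common answer (a sketch, assuming the fine-grid shape divides evenly by the coarsening factor) is the reshape trick: split each spatial axis into a block axis, then reduce over the block axes. Tuple-of-axes reductions need NumPy >= 1.7.

```python
import numpy as np

fine = np.random.rand(1440, 2880)   # e.g. a 0.125 x 0.125 degree global grid
f = 8                               # 8 x 8 fine cells -> one 1-degree cell

# Each (f, f) block of the fine grid becomes one cell of the coarse grid.
blocks = fine.reshape(fine.shape[0] // f, f, fine.shape[1] // f, f)
coarse_mean = blocks.mean(axis=(1, 3))
coarse_std = blocks.std(axis=(1, 3))
# coarse_mean.shape == (180, 360)
```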
= [[4,2],[3,1]]
times = [100,101,102,103,104]
From these two I want to create an array
result = [[104,102],[103,101]]
How can this be done?
Thanks a lot for your insight!
Andreas
Hi,
> newtimes = [times[idx[x][y]] for x in range(2) for y in range(2)]
> np.array(newtimes).reshape(2,2)
> array([[104, 102],
>[103, 101]])
Great, thanks a lot!
Cheers,
Andreas.
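The list comprehension works; for larger arrays, plain fancy indexing gives the same result in one vectorized step (a sketch with the arrays from this thread):

```python
import numpy as np

idx = np.array([[4, 2], [3, 1]])
times = np.array([100, 101, 102, 103, 104])

# Indexing with an integer array picks times[idx[x, y]] for every (x, y) at once.
result = times[idx]
print(result)  # [[104 102]
               #  [103 101]]
```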
encounter this error:
Python 3.1.2 (release31-maint, Jul 8 2010, 09:18:08)
[GCC 4.4.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
Traceback (most recent call last):
File "", lin
idea how to do it fast using numpy?
Thanks,
Andreas
Hi,
Thanks for the two solutions.
Indeed, they are much faster than the brute-force method using `in`.
setmember1d is deprecated in newer NumPy releases, therefore I already
switched to in1d().
(Thanks Josef.)
Andreas
numpy.array([55, (33,)], dtype=object)
>>> x
array([55, (33,)], dtype=object)
>>> buffer(x)
>>> str(buffer(x))
'\xb0\x1c\x17\x08l\x89\xd7\xb7'
>>> numpy.__version__
'1.1.0'
Opinions?
Andreas
On Montag 29 Dezember 2008, Robert Kern wrote:
> On Sun, Dec 28, 2008 at 19:23, Andreas Klöckner
wrote:
> > Hi all,
> >
> > I don't think PyObject pointers should be accessible via the buffer
> > interface. I'd throw an error, but maybe a (silenceable) warn
On Montag 29 Dezember 2008, Robert Kern wrote:
> On Sun, Dec 28, 2008 at 20:38, Andreas Klöckner
wrote:
> > On Montag 29 Dezember 2008, Robert Kern wrote:
> >> On Sun, Dec 28, 2008 at 19:23, Andreas Klöckner
> >>
> >
> > wrote:
> >> > Hi all,
&
fer protocol. I'm inclined not to make object a
> special case. When you ask for the raw bytes, you should get the raw
> bytes.
Ok, fair enough.
Andreas
ce. :)
License
---
PyOpenCL is open-source under the MIT/X11 license and free for commercial,
academic, and private use.
Andreas
[1] http://mathema.tician.de/software/pycuda
Hi all,
I just got tripped up by this behavior in Numpy 1.0.4:
>>> u = numpy.array([1,3])
>>> v = numpy.array([0.2,0.1])
>>> u+=v
>>> u
array([1, 3])
>>>
I think this is highly undesirable and should be fixed, or at least warned
about. Opinions?
A
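(Modern NumPy did eventually adopt this view: the default 'same_kind' casting rule makes the in-place downcast an error, and an explicit `astype` is the opt-in. A sketch against current NumPy:)

```python
import numpy as np

u = np.array([1, 3])
v = np.array([0.2, 0.1])

# Since NumPy 1.10 this raises (the casting error is a TypeError subclass)
# instead of silently truncating the float result back to int.
try:
    u += v
except TypeError:
    pass

# Explicit opt-in truncation:
u += v.astype(u.dtype)   # adds the truncated [0, 0]; u stays [1, 3]
```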
- Raise an error, but add a lightweight wrapper, such as
int_array += downcast_ok(float_array)
to allow the operation anyway.
- Raise an error unconditionally, forcing the user to make a typecast copy.
- Silently upcast the target. This is no good because it breaks existing code
non-obviousl
I'm arguing that
int_array += downcast_ok(float_array)
should be the syntax for it. downcast_ok could be a view of float_array's data
with an extra flag set, or a subclass.
Andreas
to make the
syntax beginner-safe. Complete loss of precision without warning is not a
meaning that I, as a toolkit designer, would assign to an innocent-looking
inplace operation. My hunch is that many people who start with Numpy will
spend an hour of their lives hunting a spurious bug
-----8<----- Example -----
>>> from numpy import *
>>> x = array([1,2,3,5])
>>> N=3
>>> vander(x,N) # Vandermonde matrix of the vector x
array([[ 1,  1,  1],
       [ 4,  2,  1],
       [ 9,  3,  1],
       [25,  5,  1]])
-----8<-----
Andrea
Hi all,
is there a particular reason why dot() and tensordot() don't have output
arguments?
Andreas
'], 'language': 'c', 'include_dirs': ['/users/kloeckner/mach/x86_64/pool/include']}
lapack_opt_info={'libraries':
['lapack', 'ptf77blas', 'ptcblas', 'atlas'], 'lib
I can answer my own question now:
1) Option --fcompiler=gnu95
2) Add the following to site.cfg
[atlas]
library_dirs = /users/kloeckner/mach/x86_64/pool/lib,/usr/lib
atlas_libs = lapack, f77blas, cblas, atlas
Andreas
On Sonntag 06 April 2008, Andreas Klöckner wrote:
> Hi all,
>
> I
PyKits.
>
> Really? It gives me the shivers, frankly.
Couldn't agree more.
Andreas
Hi Nadav,
On Montag 07 April 2008, Nadav Horesh wrote:
> [snip]
Try something like this:
[atlas]
library_dirs = /users/kloeckner/mach/x86_64/pool/lib,/usr/lib
atlas_libs = lapack, f77blas, cblas, atlas
Andreas
ed to numpy.* are the
polynomial functions and the convolution windows, conceptually. But in my
book that's not big enough to even think of breaking people's code for.
Andreas
Proud Member of the Flat Earth Society
t
> is.
Patch attached.
Andreas
Index: numpy/lib/twodim_base.py
===
--- numpy/lib/twodim_base.py (Revision 5001)
+++ numpy/lib/twodim_base.py (Arbeitskopie)
@@ -148,7 +148,7 @@
X = vander(x,N=None)
The Vandermonde matrix of ve
olynomial functions and
move this stuff to numpy.poly for 1.1, and use this opportunity to fix the
ordering in the moved functions.
?
Andreas
On Mittwoch 09 April 2008, Charles R Harris wrote:
> import numpy.linalg as la ?
Yes! :)
Andreas
lication.
Agree. Let's just live with Matlab's definition.
Andreas
k another copy of cblas with their (separate) extension. Can we be certain
that this does not lead to crashes on any platform supported by numpy?
Andreas
and in python
>
> You might also be interested in:
>
> http://mathema.tician.de/software/pyublas
Argh-- you scooped me! :)
I'm preparing version 0.92.1 right now, due out later today. It should iron
out most of the wrinkles that are still present in the current version, 0.92.
--An
x)
> SWIG
> ctypes
IMO, all of these deal better with C than they do with C++. There is also a
number of more C++-affine solutions:
- Boost Python [1]. Especially if you want usable C++ integration. (ie. more
than basic templates, etc.)
- sip [2]. Used for PyQt.
Andreas
[1] http://www.b
as on the C++ side
and Numpy on the Python side. It is somewhat like what Hoyt describes, except
for a different environment. Here's a table:
                   | Hoyt    | Andreas
-------------------+---------+-------------
C++ Matrix Library | Blitz++ | Boost.Ublas
Wrapper Ge
Q: No such
file or directory
Also, numpy doesn't use setuptools and therefore doesn't install an egg-info
on Python 2.4, so that often setuptools will wrongly conclude that it's not
installed, even if it is.
Ideas? Is this something worth fixing for 1.1.0? If so, I'll open a ti