[Numpy-discussion] ANN: BayesPy 0.3 released

2015-03-05 Thread Jaakko Luttinen
Dear all,

I am pleased to announce that BayesPy 0.3 has been released.

BayesPy provides tools for variational Bayesian inference. The user can
easily construct conjugate exponential family models from nodes and run
approximate posterior inference. BayesPy aims to be efficient and
flexible enough for experts, yet accessible for casual users.

---

This release adds several state-of-the-art VB features; the most
significant are:

* Gradient-based optimization of the nodes by using either the Euclidean
or Riemannian/natural gradient. This enables, for instance, the
Riemannian conjugate gradient method.

* Collapsed variational inference to improve the speed of learning.

* Stochastic variational inference to improve scalability.

* Pattern search to improve the speed of learning.

* Deterministic annealing to improve robustness against initializations.

* Gaussian Markov chains can use input signals.

More details about the new features can be found here:
http://www.bayespy.org/user_guide/advanced.html

--

PyPI: https://pypi.python.org/pypi/bayespy/0.3

Git repository: https://github.com/bayespy/bayespy

Documentation: http://www.bayespy.org/

Best regards,
Jaakko

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] ANN: BayesPy 0.2

2014-08-21 Thread Jaakko Luttinen
Dear all,

I am pleased to announce the release of BayesPy version 0.2.

BayesPy provides tools for Bayesian inference in Python. In particular,
it implements a variational message passing framework, which enables a
modular and efficient way to construct models and perform approximate
posterior inference.

Download: https://pypi.python.org/pypi/bayespy/

Documentation: http://www.bayespy.org

Repository: https://github.com/bayespy/bayespy

Comments, feedback and contributions are welcome.

Best regards,
Jaakko


[Numpy-discussion] Installation bug on Python 3.3?

2013-08-05 Thread Jaakko Luttinen
Hi,

I'm trying to install NumPy 1.7.1 for Python 3.3 using

pip install numpy

However, I get the following error after a while:

error: numpy.egg-info/dependency_links.txt: Operation not supported

Is this a bug or am I doing something wrong? If it matters, I'm using
virtualenv as I do not have root permission on this computer.

Thanks for any help!
Jaakko

PS. The end of the error log:


Command /home/jluttine/.virtualenvs/bayespy-3.3/bin/python3.3 -c import
setuptools;__file__='/home/jluttine/.virtualenvs/bayespy-3.3/build/numpy/setup.py';exec(compile(open(__file__).read().replace('\r\n',
'\n'), __file__, 'exec')) install --record
/tmp/pip-309wd8-record/install-record.txt
--single-version-externally-managed --install-headers
/home/jluttine/.virtualenvs/bayespy-3.3/include/site/python3.3 failed
with error code 1 in /home/jluttine/.virtualenvs/bayespy-3.3/build/numpy

Exception information:
Traceback (most recent call last):
  File
/home/jluttine/.virtualenvs/bayespy-3.3/lib/python3.3/site-packages/pip-1.2.1-py3.3.egg/pip/basecommand.py,
line 107, in main
status = self.run(options, args)
  File
/home/jluttine/.virtualenvs/bayespy-3.3/lib/python3.3/site-packages/pip-1.2.1-py3.3.egg/pip/commands/install.py,
line 261, in run
requirement_set.install(install_options, global_options)
  File
/home/jluttine/.virtualenvs/bayespy-3.3/lib/python3.3/site-packages/pip-1.2.1-py3.3.egg/pip/req.py,
line 1166, in install
requirement.install(install_options, global_options)
  File
/home/jluttine/.virtualenvs/bayespy-3.3/lib/python3.3/site-packages/pip-1.2.1-py3.3.egg/pip/req.py,
line 589, in install
cwd=self.source_dir, filter_stdout=self._filter_install,
show_stdout=False)
  File
/home/jluttine/.virtualenvs/bayespy-3.3/lib/python3.3/site-packages/pip-1.2.1-py3.3.egg/pip/util.py,
line 612, in call_subprocess
% (command_desc, proc.returncode, cwd))
pip.exceptions.InstallationError: Command
/home/jluttine/.virtualenvs/bayespy-3.3/bin/python3.3 -c import
setuptools;__file__='/home/jluttine/.virtualenvs/bayespy-3.3/build/numpy/setup.py';exec(compile(open(__file__).read().replace('\r\n',
'\n'), __file__, 'exec')) install --record
/tmp/pip-309wd8-record/install-record.txt
--single-version-externally-managed --install-headers
/home/jluttine/.virtualenvs/bayespy-3.3/include/site/python3.3 failed
with error code 1 in /home/jluttine/.virtualenvs/bayespy-3.3/build/numpy


Re: [Numpy-discussion] Installation bug on Python 3.3?

2013-08-05 Thread Jaakko Luttinen
I was able to install by downloading the package for version 1.7.1 from
github and then running

python3.3 setup.py install

No errors given. So, the problem might be related to pip and the fact
that python3.3 is installed locally in my personal home folder which is
in a different filesystem than the other python versions. So it might be
because of some hardlinks over different partitions or something similar.

Anyway, it seems that I have the same problem installing other Python
packages too, so it is not related to NumPy but rather to Python 3.3,
pip, virtualenv or my system setup.

But still, any help is appreciated.

-Jaakko




[Numpy-discussion] einsum and broadcasting

2013-04-04 Thread Jaakko Luttinen
I don't quite understand how einsum handles broadcasting. I get the
following error, but I don't understand why:

In [8]: import numpy as np
In [9]: A = np.arange(12).reshape((4,3))
In [10]: B = np.arange(6).reshape((3,2))
In [11]: np.einsum('ik,k...->i...', A, B)
---
ValueError: operand 0 did not have enough dimensions to match the
broadcasting, and couldn't be extended because einstein sum subscripts
were specified at both the start and end

However, if I use explicit indexing, it works:

In [12]: np.einsum('ik,kj->ij', A, B)
Out[12]:
array([[10, 13],
   [28, 40],
   [46, 67],
   [64, 94]])

It seems that it also works if I add '...' to the first operand:

In [12]: np.einsum('ik...,k...->i...', A, B)
Out[12]:
array([[10, 13],
   [28, 40],
   [46, 67],
   [64, 94]])
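For reference, the ellipsis form above can be checked directly against np.dot; a small sketch using the same A and B:

```python
import numpy as np

A = np.arange(12).reshape((4, 3))
B = np.arange(6).reshape((3, 2))

# The form with an explicit ellipsis on both operands works and
# agrees with a plain matrix product:
out = np.einsum('ik...,k...->i...', A, B)
assert np.allclose(out, np.dot(A, B))
```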

However, as far as I understand, the syntax
np.einsum('ik,k...->i...', A, B)
should work. Have I misunderstood something or is there a bug?

Thanks for your help!
Jaakko


Re: [Numpy-discussion] Dot/inner products with broadcasting?

2013-03-20 Thread Jaakko Luttinen
I tried using this inner1d as an alternative to dot because it uses
broadcasting. However, I found something surprising: Not only is inner1d
much much slower than dot, it is also slower than einsum which is much
more general:

In [68]: import numpy as np

In [69]: import numpy.core.gufuncs_linalg as gula

In [70]: K = np.random.randn(1000,1000)

In [71]: %timeit gula.inner1d(K[:,np.newaxis,:],
np.swapaxes(K,-1,-2)[np.newaxis,:,:])
1 loops, best of 3: 6.05 s per loop

In [72]: %timeit np.dot(K,K)
1 loops, best of 3: 392 ms per loop

In [73]: %timeit np.einsum('ik,kj->ij', K, K)
1 loops, best of 3: 1.24 s per loop

Why is it so? I thought that the performance of inner1d would be
somewhere in between dot and einsum, probably closer to dot. Now I don't
see any reason to use inner1d instead of einsum..
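For the record, the same broadcasting inner product can be written without inner1d at all; a sketch using einsum on the same kind of broadcast views (this says nothing about relative speed):

```python
import numpy as np

K = np.random.randn(50, 50)

# inner1d(K[:,None,:], K.T[None,:,:]) sums over the last axis of the
# broadcast operands, i.e. result[i, j] = sum_k K[i,k] * K[k,j],
# which is just the ordinary matrix product K.dot(K).
result = np.einsum('...k,...k->...',
                   K[:, np.newaxis, :],
                   np.swapaxes(K, -1, -2)[np.newaxis, :, :])
assert np.allclose(result, np.dot(K, K))
```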

-Jaakko

On 03/15/2013 04:22 PM, Oscar Villellas wrote:
 In fact, there is already an inner1d implemented in
 numpy.core.umath_tests.inner1d
 
 from numpy.core.umath_tests import inner1d
 
 It should do the trick :)
 



Re: [Numpy-discussion] Dot/inner products with broadcasting?

2013-03-20 Thread Jaakko Luttinen
Well, thanks to seberg, I finally noticed that there is a dot product
function in this new module numpy.core.gufuncs_linalg, it was just named
differently (matrix_multiply instead of dot).

However, I may have found a bug in it:

import numpy as np
import numpy.core.gufuncs_linalg as gula
A = np.arange(2*2).reshape((2,2))
B = np.arange(2*1).reshape((2,1))
gula.matrix_multiply(A, B)

ValueError: On entry to DGEMM parameter number 10 had an illegal value
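For comparison, the same broadcasted matrix product can be written with einsum, which handles this (2,2) x (2,1) case without complaint; a sketch, not a fix for matrix_multiply itself:

```python
import numpy as np

A = np.arange(2 * 2).reshape((2, 2))
B = np.arange(2 * 1).reshape((2, 1))

# einsum expresses the same (2,2) x (2,1) -> (2,1) product directly
C = np.einsum('...ik,...kj->...ij', A, B)
assert C.shape == (2, 1)
assert np.allclose(C, np.dot(A, B))
```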

-Jaakko




Re: [Numpy-discussion] Dot/inner products with broadcasting?

2013-03-14 Thread Jaakko Luttinen
Answering to myself, this pull request seems to implement an inner
product with broadcasting (inner1d) and many other useful functions:
https://github.com/numpy/numpy/pull/2954/
-J




[Numpy-discussion] Bug in einsum?

2013-03-13 Thread Jaakko Luttinen
Hi,

I have encountered a very weird behaviour with einsum. I try to compute
something like R*A*R', where * denotes a kind of matrix
multiplication. However, for particular shapes of R and A, the results
are extremely bad.

I compare two einsum results:
First, I compute in two einsum calls as (R*A)*R'.
Second, I compute the whole result in one einsum call.
However, the results are significantly different for some shapes.

My test:
import numpy as np
for D in range(30):
    A = np.random.randn(100,D,D)
    R = np.random.randn(D,D)
    Y1 = np.einsum('...ik,...kj->...ij', R, A)
    Y1 = np.einsum('...ik,...kj->...ij', Y1, R.T)
    Y2 = np.einsum('...ik,...kl,...lj->...ij', R, A, R.T)
    print("D=%d" % D, np.allclose(Y1,Y2), np.linalg.norm(Y1-Y2))

Output:
D=0 True 0.0
D=1 True 0.0
D=2 True 8.40339658678e-15
D=3 True 8.09995399928e-15
D=4 True 3.59428803435e-14
D=5 False 34.755610184
D=6 False 28.3576558351
D=7 False 41.5402690906
D=8 True 2.31709582841e-13
D=9 False 36.0161112799
D=10 True 4.76237746912e-13
D=11 True 4.5790782e-13
D=12 True 4.90302218301e-13
D=13 True 6.96175851271e-13
D=14 True 1.10067181384e-12
D=15 True 1.29095933163e-12
D=16 True 1.3466837332e-12
D=17 True 1.52265065763e-12
D=18 True 2.05407923852e-12
D=19 True 2.33327630748e-12
D=20 True 2.96849358082e-12
D=21 True 3.31063706175e-12
D=22 True 4.28163620455e-12
D=23 True 3.58951880681e-12
D=24 True 4.69973694769e-12
D=25 True 5.47385264567e-12
D=26 True 5.49643316347e-12
D=27 True 6.75132988402e-12
D=28 True 7.86435437892e-12
D=29 True 7.85453681029e-12

So, for D={5,6,7,9}, allclose returns False and the error norm is HUGE.
It doesn't seem like just some small numerical inaccuracy because the
error norm is so large. I don't know which one is correct (Y1 or Y2) but
at least either one is wrong in my opinion.

I ran the same test several times, and each time same values of D fail.
If I change the shapes somehow, the failing values of D might change
too, but I usually have several failing values.
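One way to decide which of Y1 and Y2 is wrong is to compare both against plain per-slice dot products as ground truth; a sketch for a single value of D:

```python
import numpy as np

D = 7
A = np.random.randn(100, D, D)
R = np.random.randn(D, D)

# Reference: R * A[n] * R' computed slice by slice with np.dot
Y_ref = np.array([R.dot(A[n]).dot(R.T) for n in range(100)])

Y1 = np.einsum('...ik,...kj->...ij', R, A)
Y1 = np.einsum('...ik,...kj->...ij', Y1, R.T)
Y2 = np.einsum('...ik,...kl,...lj->...ij', R, A, R.T)

print(np.allclose(Y1, Y_ref), np.allclose(Y2, Y_ref))
```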

I'm running the latest version from github (commit bd7104cef4) under
Python 3.2.3. With NumPy 1.6.1 under Python 2.7.3 the test crashes and
Python exits printing "Floating point exception".

This seems so weird to me that I wonder if I'm just doing something stupid..

Thanks a lot for any help!
Jaakko


[Numpy-discussion] Dot/inner products with broadcasting?

2013-03-13 Thread Jaakko Luttinen
Hi!

How can I compute a dot product (or similar multiply-and-sum operations)
efficiently so that broadcasting is utilized?
For multi-dimensional arrays, NumPy's inner and dot functions do not
match the leading axes and use broadcasting, but instead the result has
first the leading axes of the first input array and then the leading
axes of the second input array.

For instance, I would like to compute the following inner-product:
np.sum(A*B, axis=-1)

But numpy.inner gives:
A = np.random.randn(2,3,4)
B = np.random.randn(3,4)
np.inner(A,B).shape
# -> (2, 3, 3) instead of (2, 3)

Similarly for dot product, I would like to compute for instance:
np.sum(A[...,:,:,np.newaxis]*B[...,np.newaxis,:,:], axis=-2)

But numpy.dot gives:
In [12]: A = np.random.randn(2,3,4); B = np.random.randn(2,4,5)
In [13]: np.dot(A,B).shape
# -> (2, 3, 2, 5) instead of (2, 3, 5)

I could use einsum for these operations, but I'm not sure whether that's
as efficient as using some BLAS-supported(?) dot products.
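For what it's worth, einsum can express both of the desired broadcasting products with the shapes shown above; a sketch (it says nothing about BLAS-level speed):

```python
import numpy as np

A = np.random.randn(2, 3, 4)
B = np.random.randn(3, 4)
# Broadcasting inner product over the last axis: shape (2, 3)
r1 = np.einsum('...i,...i->...', A, B)
assert np.allclose(r1, np.sum(A * B, axis=-1))

A2 = np.random.randn(2, 3, 4)
B2 = np.random.randn(2, 4, 5)
# Broadcasting matrix product matching the leading axes: shape (2, 3, 5)
r2 = np.einsum('...ij,...jk->...ik', A2, B2)
assert np.allclose(r2, np.array([np.dot(A2[n], B2[n]) for n in range(2)]))
```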

I couldn't find any function which could perform this kind of
operations. NumPy's functions seem to either flatten the input arrays
(vdot, outer) or just use the axes of the input arrays separately (dot,
inner, tensordot).

Any help?

Best regards,
Jaakko


[Numpy-discussion] Performance of einsum?

2013-03-13 Thread Jaakko Luttinen
Hi,

I was wondering if someone could provide some intuition on the
performance of einsum?

I have found that sometimes it is extremely efficient but sometimes it
is several orders of magnitudes slower compared to some other
approaches, for instance, using multiple dot-calls.

My intuition is that the computation time of einsum is linear with
respect to the size of the index space, that is, the product of the
maximums of the indices.

So, for instance, computing the dot product of three matrices A*B*C would
not be efficient as einsum('ij,jk,kl->il', A, B, C) because there are
four indices i=1,...,I, j=1,...,J, k=1,...,K and l=1,...,L, so the total
computation time is O(I*J*K*L), which is much worse than with two dot
products, O(I*J*K + I*K*L), or with two einsum-calls for Y=A*B and Y*C.

On the other hand, computing einsum('ij,ij,ij->i', A, B, C) would be
efficient because the computation time is only O(I*J).

Is this intuition roughly correct or how could I intuitively understand
when the usage of einsum is bad?
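The intuition can at least be checked for correctness numerically: the single three-operand call and the chained dots agree, while their operation counts scale differently. A small sketch:

```python
import numpy as np

rng = np.random.RandomState(0)
I, J, K, L = 30, 40, 50, 60
A = rng.randn(I, J)
B = rng.randn(J, K)
C = rng.randn(K, L)

# One einsum call: loops over all four indices, O(I*J*K*L) operations
Y1 = np.einsum('ij,jk,kl->il', A, B, C)

# Two dot products: O(I*J*K) for A*B, then O(I*K*L) for (A*B)*C
Y2 = A.dot(B).dot(C)

assert np.allclose(Y1, Y2)
```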

Best regards,
Jaakko


[Numpy-discussion] Leaking memory problem

2013-02-25 Thread Jaakko Luttinen
Hi!

I was wondering if anyone could help me in finding a memory leak problem
with NumPy. My project is quite massive and I haven't been able to
construct a simple example which would reproduce the problem..

I have an iterative algorithm which should not increase the memory usage
as the iteration progresses. However, after the first iteration, 1GB of
memory is used and it steadily increases until at about 100-200
iterations 8GB is used and the program exits with MemoryError.

I have a collection of objects which contain large arrays. In each
iteration, the objects are updated in turns by re-computing the arrays
they contain. The number of arrays and their sizes are constant (do not
change during the iteration). So the memory usage should not increase,
and I'm a bit confused, how can the program run out of memory if it can
easily compute at least a few iterations..

I've tried to use Pympler, but I've understood that it doesn't show the
memory usage of NumPy arrays.. ?

I also tried gc.set_debug(gc.DEBUG_UNCOLLECTABLE) and then printing
gc.garbage at each iteration, but that doesn't show anything.

Does anyone have any ideas how to debug this kind of memory leak bug?
And how to find out whether the bug is in my code, NumPy or elsewhere?
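One low-tech way to localize such a leak is to snapshot allocations between iterations; a sketch using the standard library's tracemalloc (Python 3.4+; recent NumPy versions also report their array buffers to tracemalloc). The `iterate` callable and the deliberate leak below are made up for illustration:

```python
import tracemalloc


def report_growth(iterate, n_iter=5):
    """Print the allocation sites that grow across iterations.

    `iterate` is a hypothetical callable running one iteration of
    the algorithm under suspicion.
    """
    tracemalloc.start()
    iterate()  # warm-up, so one-time allocations are excluded
    before = tracemalloc.take_snapshot()
    for _ in range(n_iter):
        iterate()
    after = tracemalloc.take_snapshot()
    # The top entries point at the source lines whose allocations grew
    for stat in after.compare_to(before, 'lineno')[:10]:
        print(stat)
    tracemalloc.stop()


# Example with a deliberate leak: a list that keeps growing
leak = []
report_growth(lambda: leak.append(bytearray(10**5)))
```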

Thanks for any help!
Jaakko


[Numpy-discussion] numpy.einsum bug?

2013-02-22 Thread Jaakko Luttinen
Hi,

Is this a bug in numpy.einsum?

>>> np.einsum(3, [], 2, [], [])
ValueError: If 'op_axes' or 'itershape' is not NULL in the iterator
constructor, 'oa_ndim' must be greater than zero

I think it should return 6 (i.e., 3*2).
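For context, the same interleaved (operand, subscript-list) calling convention does work when the operands have at least one axis; a sketch:

```python
import numpy as np

A = np.arange(6).reshape(2, 3)
B = np.arange(12).reshape(3, 4)

# Interleaved form of 'ij,jk->ik': axis labels given as integer lists
C = np.einsum(A, [0, 1], B, [1, 2], [0, 2])
assert np.allclose(C, np.dot(A, B))
```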

Regards,
Jaakko


Re: [Numpy-discussion] numpydoc for python 3?

2013-01-16 Thread Jaakko Luttinen
On 01/14/2013 02:44 PM, Matthew Brett wrote:
 On Mon, Jan 14, 2013 at 10:35 AM, Jaakko Luttinen
 jaakko.lutti...@aalto.fi wrote:
 On 01/14/2013 12:53 AM, Matthew Brett wrote:
 You might be able to get away without 2to3, using the kind of stuff
 that Pauli has used for scipy recently:

 https://github.com/scipy/scipy/pull/397

 Ok, thanks, maybe I'll try to make the tests valid in all Python
 versions. It seems there's only one line which I'm not able to transform.

 In doc/sphinxext/tests/test_docscrape.py, on line 559:
 assert doc['Summary'][0] == u'öäöäöäöäö'.encode('utf-8')

 This is invalid in Python 3.0-3.2. How could I write this in such a way
 that it is valid in all Python versions? I'm a bit lost with these
 unicode encodings in Python (and in general).. And I didn't want to add
 dependency on 'six' package.
 
 Pierre's suggestion is good; you can also do something like this:
 
 # -*- coding: utf8 -*-
 import sys
 
 if sys.version_info[0] >= 3:
 a = 'öäöäöäöäö'
 else:
 a = unicode('öäöäöäöäö', 'utf8')
 
 The 'coding' line has to be the first or second line in the file.
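Another version-independent spelling, which avoids both the u'' literal (a syntax error on Python 3.0-3.2) and the runtime version check, is to decode an explicit byte string:

```python
# b'...'.decode('utf-8') is accepted by Python 2.6+ and every Python 3,
# including 3.0-3.2 where the u'' literal is a syntax error.
a = b'\xc3\xb6\xc3\xa4'.decode('utf-8')
assert a == '\u00f6\u00e4'  # 'öä' (checked here under Python 3)
```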

Thanks for all the comments!

I reported an issue and made a pull request:
https://github.com/numpy/numpy/pull/2919

However, I haven't been able to make nosetests work. I get error:
ValueError: Attempted relative import in non-package
Don't know how to fix it properly..

-Jaakko



Re: [Numpy-discussion] numpydoc for python 3?

2013-01-13 Thread Jaakko Luttinen

I'm a bit stuck trying to make numpydoc Python 3 compatible. I made 
setup.py try to use distutils.command.build_py.build_py_2to3 in order to 
transform installed code automatically to Python 3. However, the tests 
(in tests folder) are not part of the package but rather package_data, 
so they won't get transformed. How can I automatically transform the 
tests too? Probably there is some easy and right solution to this, but 
I haven't been able to figure out a nice and simple solution.. Any 
ideas? Thanks.

-Jaakko


Re: [Numpy-discussion] numpydoc for python 3?

2013-01-10 Thread Jaakko Luttinen
The files in numpy/doc/sphinxext/ and numpydoc/ (from PyPI) are a bit
different. Which ones should be modified?
-Jaakko

On 01/10/2013 02:04 PM, Pauli Virtanen wrote:
 Hi,
 
 Jaakko Luttinen jaakko.luttinen at aalto.fi writes:
 I'm trying to use numpydoc (Sphinx extension) for my project written in
 Python 3.2. However, installing numpydoc gives errors shown at
 http://pastebin.com/MPED6v9G and although it says "Successfully
 installed numpydoc", trying to import numpydoc raises errors..

 Could this be fixed or am I doing something wrong?
 
 Numpydoc hasn't been ported to Python 3 so far. This probably
 wouldn't be a very large amount of work --- patches are accepted!
 



Re: [Numpy-discussion] numpydoc for python 3?

2013-01-10 Thread Jaakko Luttinen
On 01/10/2013 05:04 PM, Pauli Virtanen wrote:
 Jaakko Luttinen jaakko.luttinen at aalto.fi writes:
 The files in numpy/doc/sphinxext/ and numpydoc/ (from PyPI) are a bit
 different. Which ones should be modified?
 
 The stuff in sphinxext/ is the development version of the package on
 PyPi, so the changes should be made in sphinxext/
 

Thanks!

I'm trying to run the tests with Python 2 using nosetests, but I get
some errors http://pastebin.com/Mp9i8T2f . Am I doing something wrong?
How should I run the tests?
If I run nosetests on the numpydoc folder from PyPI, all the tests are
successful.

-Jaakko


[Numpy-discussion] numpydoc for python 3?

2013-01-09 Thread Jaakko Luttinen
Hi!

I'm trying to use numpydoc (Sphinx extension) for my project written in
Python 3.2. However, installing numpydoc gives errors shown at
http://pastebin.com/MPED6v9G and although it says "Successfully
installed numpydoc", trying to import numpydoc raises errors..

Could this be fixed or am I doing something wrong?

Thanks!
Jaakko


[Numpy-discussion] Special matrices with structure?

2012-02-23 Thread Jaakko Luttinen
Hi!

I was wondering whether it would be easy/possible/reasonable to have
classes for arrays that have special structure in order to use less
memory and speed up some computations?

For instance:
- symmetric matrix could be stored in almost half the memory required by
a non-symmetric matrix
- diagonal matrix only needs to store the diagonal vector
- Toeplitz matrix only needs to store one or two vectors
- sparse matrix only needs to store non-zero elements (some
implementations in scipy.sparse)
- and so on

If such classes were implemented, it would be nice if they worked with
numpy functions (dot, diag, ...) and operations (+, *, +=, ...) easily.
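As a toy illustration of the idea, a hypothetical diagonal-matrix class (names made up here) that stores only the diagonal yet supports a dot product:

```python
import numpy as np


class DiagonalMatrix:
    """Hypothetical sketch: an n x n diagonal matrix stored as n numbers."""

    def __init__(self, diag):
        self.diag = np.asarray(diag)

    def dot(self, x):
        # diag(d) @ x just scales the rows of x: O(n*m) work and O(n)
        # storage, versus O(n^2*m) work and O(n^2) storage when dense.
        x = np.asarray(x)
        if x.ndim == 1:
            return self.diag * x
        return self.diag[:, np.newaxis] * x

    def todense(self):
        return np.diag(self.diag)


d = DiagonalMatrix([1.0, 2.0, 3.0])
x = np.ones((3, 2))
assert np.allclose(d.dot(x), d.todense().dot(x))
```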

I believe this has been discussed before but google didn't help a lot..

Regards,
Jaakko